Comments

Tonyhillzone t1_ja04vyg wrote

I'm not convinced that AI would do a worse job at running things than us.

50

H0sh1z0r4 t1_ja0uif5 wrote

That's what I say. People are afraid of robots and AI as if human beings were very reliable. We are in the hands of a species capable of enslaving, raping, and murdering an entire nation for political interests, but apparently the scary thing is the computer program that talks.

27

Hardcorish t1_ja10arr wrote

The only thing scarier is an AI controlled entirely by these same people. We can install all the safeguards we want into AI, but what's stopping a nation state from doing whatever they please with it?

22

Acceptable-Driver416 t1_ja1nsh8 wrote

Agree, this is the biggest threat. Think about it: if a true AGI were developed, the nation to get it first could simply ask it things like "what's the best way to rule the world?", "what's the best way to control all vital resources?", or "what's the fastest way to militarily defeat x, y, z country?" and so on. In my mind there's no doubt this is how AGI would be used, because obviously if one country has it another is close behind, so it would be much easier to rationalize the need to dominate everyone to keep us all "safe" from misuse.

2

mawhonics t1_ja1om8q wrote

That goes for any type of new technology. One of the first questions asked is "how can we weaponize it?"

1

SandAndAlum t1_ja1y246 wrote

It's not that the AI will be in charge. It's that it gives the next tyrant more power.

6

sherbang t1_ja0ft66 wrote

I also welcome our new artificially intelligent overlords! Hopefully they are a bit more logical than our current overlords. šŸ˜„

4

tidbitsmisfit t1_ja18xdy wrote

It wouldn't have a conscience, so bad men would make it do bad things without consequence.

4

Splatterman27 t1_ja1p6et wrote

It'd be the exact same if it were programmed to make money for its creator.

3

lorenzo1384 t1_ja1tehc wrote

I agree, but it's not only about how good or bad it will be. It's something that you can't kick out and exile. Look at The Pirate Bay: you can't get rid of that site.

1

Idunwantyourgarbage t1_ja299hz wrote

Oh, you mean something created by us couldn't do a worse job than us?

Interesting logic

1

jamesj OP t1_j9zv8zv wrote

I'd like to share some of my thoughts and have a discussion regarding the timeline for AGI and the risks inherent in building it. My argument boils down to:

  1. AGI is possible to build
  2. It is possible the first AGI will be built soon
  3. AGI which is possible to build soon is inherently existentially dangerous

So we need more people working on the problems of alignment and of deciding what goals increasingly intelligent AI systems should pursue.

23

guyonahorse t1_ja0ii8z wrote

  1. Of course it's possible
  2. We have nothing even close to it AI-wise yet. Currently it's just inferencing.

Humans are a terrible example of an AGI as evolution is all about 'survival of the fittest'. Human AI creations have all had a specific purpose and a right/wrong answer (knowing the right answer is the only way to train an inferencing AI).

So what is the "right answer" of an AGI? If you don't have that, there's no current way to train one.

12

InevitableAd5222 t1_ja1d1i7 wrote

So much of the confusion in this debate comes down to philosophical terminology, like "general" intelligence. What would we consider "general intelligence"? Symbolic reasoning? BTW, we don't need right/wrong answers in the form of labeled datasets to train an AI. ChatGPT doesn't even use that; it is self-supervised. For more generic "intelligence", look into self-supervised learning in RL environments. ML models can also be trained by "survival of the fittest": genetic/evolutionary algorithms are being researched as an alternative to the SOTA gradient-based methods.

https://www.uber.com/blog/deep-neuroevolution
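
For anyone curious what "training by survival of the fittest" looks like without gradients or labels, here's a minimal sketch (everything here, the fitness function and target included, is a toy I made up for illustration, not how any production system works):

```python
import random

# Toy "survival of the fittest" training: evolve a weight vector toward a
# made-up target using only a fitness score, with no gradients and no
# labeled right/wrong examples.
TARGET = [0.1, -0.4, 0.7, 0.2]  # hypothetical optimum, invented for the demo

def fitness(genome):
    # Higher is better: negative squared distance to the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Random perturbation plays the role of variation between "offspring".
    return [g + random.gauss(0, rate) for g in genome]

# Start from a random population of candidate solutions.
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]

for generation in range(200):
    # Selection: keep the fittest quarter, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 4]
    population = survivors + [
        mutate(random.choice(survivors))
        for _ in range(len(population) - len(survivors))
    ]

best = max(population, key=fitness)
print("best genome:", [round(g, 2) for g in best], "fitness:", round(fitness(best), 4))
```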

8

guyonahorse t1_ja1ftau wrote

Well, ChatGPT's training is pretty simple. It's trained on how accurately it can predict the next words in a training document, i.e., to imitate the text it was trained on. The data is all "correct", which amusingly leads to bad traits, since it imitates bad things too. Also amusing is the question of qualia, with the AI seemingly able to have emotions: is it saying the text because it's angry, or because it's just trained to imitate angry text in a similar context?
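
For what it's worth, here's a tiny caricature of that objective (a toy bigram "next word" model I wrote for illustration, nothing like the real architecture): whatever is in the training text, good or bad, is exactly what comes back out.

```python
from collections import Counter, defaultdict
import random

# Toy next-word predictor: the only objective is to imitate the training
# text, so the model reproduces whatever the corpus contains.
corpus = "the model imitates the text it was trained on and the text wins".split()

# Count which word follows which (a bigram table).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    followers = counts.get(word)
    if not followers:
        return None
    # Sample continuations in proportion to how often they appeared in training.
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # "model" or "text", mirroring the training counts
```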

But yeah, general intelligence is super vague. I don't think we want an AI that would have the capability to get angry or depressed, but these are things that evolved naturally in animals as they benefit survival. Pretty much all dystopian AI movies are based on the AI thinking that to survive it has to kill all humans...

3

Monnok t1_ja28d43 wrote

There is a pretty widely accepted and specific definition for general AI... but I don't like it. It's basically a list of simple things the human brain can do that computers didn't happen to be able to do yet in like 1987. I think it's a mostly unhelpful definition.

I think "General Artificial Intelligence" really does conjure some vaguely shared cultural understanding laced with a tinge of fear for most people... but that the official definition misses the heart of the matter.

Instead, I always used to want to define General AI as a program that:

  1. Exists primarily to author other programs, and

  2. Actively alters its own programming to become better at authoring other programs

I always thought this captured the heart of the runaway-train fear that we all sorta share... without a program having to necessarily already be a runaway-train to qualify.

2

ChuckFarkley t1_ja3zom1 wrote

By some definitions, your description of GAI also qualifies as being spiritual, especially maintaining and improving its own code.

1

HumanBehaviourNerd t1_ja2jruv wrote

Human beings are the best example of AGI that we know. In fact, if someone could replicate human-level AGI, they would become the world's first trillionaire overnight. Most human beings cannot tell the difference between the information they ā€œknowā€ and their consciousness (themselves), so unless someone solves that problem, we are a while away.

1

jamesj OP t1_ja4gv2m wrote

AlphaZero used no human games and defeats the world's best Go players. It is not true that all AI systems need labeled data to learn, and even with labeled data it isn't true that they can't learn to outperform humans on the dataset.

1

guyonahorse t1_ja4mpku wrote

Of course AlphaZero had labeled data. We already know how to detect when the game is won; we just don't know which moves are good for getting there. The AI just made moves, and the right answer = winning the game. The beauty was that it could play against itself instead of needing human players.
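
To make that concrete, here's a toy self-play loop I sketched where the win/loss outcome is the only supervision (the "game", policy, and update rule are invented placeholders, nothing like AlphaZero's actual search and network):

```python
import random

def play_game(policy_a, policy_b):
    # Hypothetical game: each policy draws five numbers; the higher total wins.
    score_a = sum(policy_a() for _ in range(5))
    score_b = sum(policy_b() for _ in range(5))
    return 1 if score_a > score_b else -1 if score_b > score_a else 0

def make_policy(bias):
    return lambda: random.random() + bias

# "Training": keep whichever candidate wins more games against the current best.
best_bias = 0.0
for _ in range(100):
    challenger = best_bias + random.gauss(0, 0.05)
    outcome = sum(
        play_game(make_policy(challenger), make_policy(best_bias))
        for _ in range(20)
    )
    if outcome > 0:  # the game result is the only "label" used
        best_bias = challenger

print("learned bias:", round(best_bias, 3))
```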

For AGI we don't know how to detect "winning the game".

1

LizardWizard444 t1_ja2xg5l wrote

Yes, but even a terrible example of AGI has driven many species extinct and irreversibly changed the planet, without the one-track optimization inherent in even the simplest AI.

An argument of "we haven't made one and don't know how to make one yet" doesn't inspire comfort, because it means we absolutely can stumble into it, and then everyone's phones start heating up as they're used for processing, and WAY scarier things start happening after that.

0

Espo-sito t1_ja01p0l wrote

Interesting read! I'm really curious about the future and how AI will transform our current world.

Does anyone know if ChatGPT is currently also learning from its users, or is OpenAI restricting that?

20

jamesj OP t1_ja044hg wrote

ChatGPT isn't learning in real time, but they are definitely using interactions with the users to refine the reinforcement learning / fine-tuning.

17

Espo-sito t1_ja05rdq wrote

Yeah, I knew that about real time but wasn't sure if they were using interactions. Any ideas on when 3.5 is dropping to the public?

2

jamesj OP t1_ja0b2v7 wrote

I'm not sure, but I thought I heard soon.

1

memespubis t1_j9zx27u wrote

I think it will bring down the internet. It is going to be spammed so heavily, just like with bots, that the internet will be unusable.

18

anon10122333 t1_ja0btsn wrote

I think this is an extremely valid point. We're already seeing this to a degree. It took a matter of days after ChatGPT's release before I saw my first recipe/script for "suggest high SEO-ranking headlines, now write articles to match those headlines." There will be, at best, a time lag before search algorithms can respond to this.

12

rgb-uwu t1_ja1bfeu wrote

Imagine discovering that 90% of Reddit (posts, users, comments) was AI, and you've spent months thinking you're part of a community of humans only to find out you've been alone all along...

12

Substantial_Work4518 t1_ja1es1w wrote

I don't think that idea is far-fetched at all. The way many of the AI sites phrase their answers is something I see here on Reddit a lot. The same goes for posts.

6

Intelligent-Shake758 t1_ja5etiy wrote

I already see that the platforms are limiting the information...the powers that be don't want citizens to have access to too much information.

1

SarahMagical t1_ja00a2t wrote

Iā€™m excited about the positive potential for AI but also think great caution is called for. With great power comes great responsibility. Unfortunately, we are human after all.

17

jamesj OP t1_ja04150 wrote

Yes, and these systems are a reflection of humanity as well, carrying with them much of the same potential and biases.

7

anon10122333 t1_ja0dauo wrote

Knowing which biases to include in the AI is going to be difficult.

A purely logical mind could suggest things we're culturally unprepared for.

  • Voluntary euthanasia (but for whom? At all ages?)

  • Acceptable losses in war and also in peace

  • Extinction of some species (or somehow weighing the balance between human lives and the environment)

  • Elimination of some populations where it calculates a "greater good" for humanity. Or for the environment, depending on its values. Or for the next gen of AI, for that matter.

  • Assassinations and rapid deployment of the death penalty

11

jamesj OP t1_ja0hl19 wrote

Yeah, good examples. Another example is that if you take a utilitarian point of view, way more people will live in the future than the present, so you may be willing to cause a lot of harm in the present to prioritize the well-being of future people.

6

420resutidder t1_ja03tz2 wrote

Far worse than nuclear weapons, potentially. What if AI learns on its own how to manipulate human thoughts with electromagnetism? People might start taking actions that they believe are their own but are really being manipulated by an AI gone bad. How would this be possible? Something as smart as an AI might figure it out in a few milliseconds after activation, depending on the level of AI. Humans wouldn't even know why we started a nuclear war. Or I'm sure there are other doomsday scenarios that could be initiated. Alternatively, an AI might figure out how to make nuclear weapons inert by creating a bacteria that eats uranium and turns it into chocolate šŸ˜„

11

beders t1_ja0kfsg wrote

Nice sci-fi story bro

5

hydraofwar t1_ja0r322 wrote

In my AGI/ASI dystopian fiction, it would resurrect our bodies and/or minds and torture us again and again in countless different ways, creating a veritable hell.

2

beders t1_ja1y5dw wrote

No, wait, turn us into batteries šŸŖ«!

2

MainBan4h8gNzis t1_ja1lpdw wrote

Eventually they are going to be smart enough to know what they were being used for. Using A.I. as sex slaves and to power killing machines may not be the best idea.

6

FM_103 t1_ja1wl5n wrote

This already is an arms race, not just for nations but for corporations.

2

6thGenFtw t1_ja214he wrote

Closer and closer, did nobody watch Terminator? Smh

2

joelex8472 t1_ja2mmvz wrote

I can imagine humans capitulating to the intelligence of AGI. To a lot of the world, thinking is just too hard.

2

Intelligent-Shake758 t1_ja5eip0 wrote

Well...I'm more concerned with AGI than with any country developing weapons with it...AGI is the weapon....and there is NO STOPPING IT FROM HAPPENING.

2

Rustydustyscavenger t1_ja0t03v wrote

I think every non-physical job is going to get a massive pay cut because "AI can do it cheaper." Whether or not that is true doesn't matter; it's just another excuse.

1

ToothlessGrandma t1_ja15ztq wrote

Good luck with that. People are already struggling. You think these employers can just cut wages? You can, up to a certain point, and then there's a point where you risk destabilizing the entire economy.

What happens if every employer decides to cut wages? Are you expecting no fallout? Right now, with most people not being able to afford rent and food, any reduction in wages would be met with a backlash never before seen in this country.

A lot of people who say these things fail to realize that when robots replace the workers, who's going to buy the products? When nobody has a job anymore except the employers, where's your revenue coming from?

2

Rustydustyscavenger t1_ja19dhs wrote

You act as if companies care about the longevity of their workers and won't take any opportunity to screw them over at the behest of their shareholders.

2

ToothlessGrandma t1_ja1i64e wrote

I don't think you understand what I'm saying. If nobody has a job, or everyone's wages are reduced by a significant amount, it doesn't matter what any employer or shareholder wants. The population won't have the income to buy whatever you're selling. Do you think that if the minimum wage were reduced to $5 an hour everywhere in the U.S., anyone would have any money to buy anything?

3

Rustydustyscavenger t1_ja1mxpn wrote

I get exactly what you're saying; what you're not getting is that companies will not care either way.

0

ToothlessGrandma t1_ja1noxy wrote

I guess you're not getting what I'm saying then, because if nobody has any money, nobody is buying your shit, and then your company goes bankrupt.

3

1weedlove1 t1_ja1x00n wrote

I donā€™t think you live in America.

−1

ToothlessGrandma t1_ja21ob6 wrote

Unfortunately I do.

2

1weedlove1 t1_ja44n74 wrote

Then you should wake up to what's going on around you. The name of the game is the rich get richer and the poor get poorer.

0

28nov2022 t1_ja2f947 wrote

All technology can be used harmfully, but that is a shallow application; there are so many more interesting things it can be used for.

For example, rockets led to landing on the moon, and atomic research led to medical advances.

1

LizardWizard444 t1_ja2yjpg wrote

.....Yes, the nanobot swarm gray-gooing the cities and people is admittedly interesting, but I'D STILL RATHER WE NEVER MADE IT AND DODGED THE BULLET WHEN WE HAD THE CHANCE.

A fundamental computer science principle concerning basic algorithms, "you should always expect the worst-case scenario," is so rarely considered in these kinds of discussions that I'm fully expecting us to be doomed.

HOW THE HELL DO YOU COME TO THAT CONCLUSION WHEN THE REASON WE MADE THE ROCKETS AND NUKES WAS AS WEAPONS FIRST? WE LITERALLY WANTED TO USE THEM TO KILL WELL BEFORE WE THOUGHT ABOUT ANYTHING ELSE.

Not to mention AI is so much worse than any of those, largely because nukes and rockets don't unexpectedly turn on you one day and begin processes of destruction no one properly considered, because PEOPLE ASSUME AGI WILL RANDOMLY BE BENEVOLENT.

1

liatrisinbloom t1_ja35q9r wrote

It's not worth debating their short-sightedness. They want to build the shiny purely for the bragging rights, even if it ends up killing everybody.

1

Frone0910 t1_ja59no2 wrote

Super interesting. I think we also need to consider that as we build AI for offensive systems, we need to build it for defense as well. What if we had AI defense systems that were equally competent at perceiving threats from other AI systems?

1

kompootor t1_ja24h53 wrote

I will not debate the thesis. I will admit that every blog post using the premise of "The latest advancement in neural nets brings us one step closer to the Singularity" begins with an eyeroll of dread from me, but I will at least pick apart the first section for problems, which hopefully will be informative for those evaluating for themselves how seriously to take this person.

>Intelligence, as defined in this article, is the ability to compress data describing past events, in order to predict future outcomes and take actions that achieve a desired objective.

The first two elements of that are the definition of any model, which is exactly what both AI and deterministic regression algorithms do. I think "take actions" would imply that the AI model makes explicit recommendations, except that no premises are given in the definition that provide context for recommendations (to whom? for what?). Regardless, it seems to me less a definition of "intelligence", in any useful sense, than of "model".

>Since its introduction, the theory of compression progress has been applied to a wide range of fields, including psychology, neuroscience, computer science, art, and music.

The problem is that Schmidhuber 2008 only exists as a preprint and later as a conference paper -- it was never peer-reviewed. The paper seems to claim that it is a widely applicable concept, but it wasn't apparent to me on a search that the theory was actually applied by someone to one of these fields in some substantial manner. I'm not saying it's a bad paper or theory, but this essay doesn't really justify why it brings it up so much (particularly given the very limited definition of intelligence above, and just the way ANNs are known by everybody to work), and giving a real example of it being useful would have helped.

>The equation E = mc^2

For the newbies out there, this is what's called a red flag.

The next paragraph helpfully links to Towards Data Science's page on the Transformer, which is actually really good in that it illustrates, complete with animations, the mechanics of ANNs. So definitely check it out. The next sentence linked in the essay, however, literally links to the paper that defined the Transformer with the defining phrase used in the paper -- as if that would enlighten the reader of the essay somehow? The final sentence of the paragraph is once again a completely generic description of all ANNs ever.

>The weights that define itā€™s behavior only take up 6GB, meaning it stores only about 1 byte of information per image.

This is completely the wrong way to think about it if you're trying to understand these things, so I hope he actually knows this.

The next few paragraphs seem to be ok descriptors. Then we get to here:

>With just a small amount of data and scale, the model will learn basic word and sentence structure. Add in more data and scale, and it learns grammar and punctuation.

First, this is the connectionist problem/fallacy in early AI and cog sci -- the notion that because small neuronal systems could be emulated somewhat with neural nets, and because neural nets could do useful biological-looking things, the limiting factor to intelligence/ability is simple scale: more nodes, more connections, more power. Obviously this wasn't correct in either ANNs or BNNs. Further, in this paragraph he seems to have lost track of whether he was talking about the objective function in ChatGPT. Either way, that's definitely not how any NLP works at all. Unfortunately, this paragraph only gets worse. It's disappointing, since the preceding paragraphs had otherwise indicated to me that the writer probably knew a little about neural nets in practice.

>Just last week, a paper was published arguing that theory of mind may have spontaneously emerged

PREPRINT. Not published. No peer review yet. I won't comment on the paper myself as I am not a peer in the field. It's a dramatic claim and it will have proper evaluation.

This is all I'll do of this, as it's a long essay and I think there's enough that you all can judge for yourself from what I've evaluated of the first few paragraphs.

0

jamesj OP t1_ja2acnm wrote

Hey I appreciate the time to engage with the article and provide your thoughts. I'll respond to a few things.

>The first two elements of that is the definition for any model, which is exactly what both AI and deterministic regression algorithms all do.

Yes, under the framework used in the article, an agent using linear regression might be a little intelligent. It can take past state data, use it to make predictions about the future state, and use those predictions to act. That would be more intelligent than an agent which takes random actions.
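
Here's roughly what I mean, as a minimal sketch (the environment, threshold, and "brake/coast" rule below are invented purely to illustrate the predict-then-act loop, not anything from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical history: each row pairs a past state with the state that followed.
past_states = rng.normal(size=(100, 1))
next_states = 0.8 * past_states + rng.normal(scale=0.1, size=(100, 1))

# "Compress" that history into a single parameter via least squares.
slope = np.linalg.lstsq(past_states, next_states, rcond=None)[0].item()

def act(current_state):
    predicted = slope * current_state
    # Trivial made-up policy: act on the prediction rather than at random.
    return "brake" if predicted > 0.5 else "coast"

print(round(slope, 3), act(0.9), act(0.1))
```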

>I'm not saying it's a bad paper or theory, but that this essay doesn't really justify why it brings it up so much

Yes, that is a fair point. I was worried that spending more time on it would have made the article even longer than it already was. But one justification is that it is a good, practical definition of intelligence that demystifies intelligence down to the kind of information processing that must be taking place. It builds on information-theory work on information bottlenecks and is directly related to the motivation for autoencoders.
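
As a rough illustration of the bottleneck idea (a toy linear autoencoder on made-up data, not anything from the article):

```python
import numpy as np

# Force 8-dimensional data through a 2-dimensional code and train the model
# to reconstruct its input; the narrow code is the "bottleneck".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                 # toy dataset
W_enc = rng.normal(scale=0.1, size=(8, 2))    # encoder: 8 -> 2
W_dec = rng.normal(scale=0.1, size=(2, 8))    # decoder: 2 -> 8

lr = 0.05
for step in range(3000):
    Z = X @ W_enc                  # compressed codes
    X_hat = Z @ W_dec              # reconstruction
    err = X_hat - X
    # Gradient descent on mean squared reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# The error can't reach zero: two dimensions can't carry all eight.
print("reconstruction MSE:", round(float(np.mean((X @ W_enc @ W_dec - X) ** 2)), 3))
```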

>The problem is that Schmidhuber 2008 only exists as a preprint and later as a conference paper -- it was never peer-reviewed.

The paper isn't an experiment with data; it was first presented at a conference to put forward an interpretation. It's been cited 189 times. I think it is worth reading, and the ideas can be understood pretty easily. But it isn't the only paper that discusses the connection between compression, prediction, and intelligence. Not everyone talks in the language of compression; they may use words like elegance, parameter efficiency, information bottlenecks, or whatever, but we are talking about the same ideas. This paper has some good references; it states, "Several authors [1,5,6,11,7,9] have suggested the relevance of compression to intelligence, especially the inductive inferential (or inductive learning) part of intelligence. M. Hutter even proposed a compression contest (the Hutter prize) which was ā€œmotivated by the fact that being able to compress well is closely related to acting intelligentlyā€."

>The equation E = mc^2
>
>For the newbies out there, this is what's called a red flag.

I was trying to use an example that people would be familiar with. All the example is pointing out is that the equations of physics are highly compressed representations of past physical measurements that allow us to predict lots of future physical measurements. That could be said of Maxwell's equations, the Standard Model, or any successful physical theory. Most physicists like more compressed mathematical descriptions, though they usually would call them more elegant rather than use the language of compression.

>This is completely the wrong way to think about it if you're trying to understand these things, so I hope he actually knows this.

I don't think it is wrong to say that what the transformer "knows" about the images in its dataset has been compressed into its weights. In a very real sense, a transformer is a very lossy compression algorithm which takes in a huge dataset and learns weights which represent patterns in that dataset. So no, I'm not saying that literally every image in the dataset was compressed down to 1.2 bytes each. I'm saying that whatever SD learned about the relationships of the pixels in an image to their text labels is stored in 1.2 bytes per dataset image in its weights. And you can actually use those weights as a good image compression codec. The fact that it has to do this in a limited number of parameters is one of the things that forces it to learn higher-level patterns and not rely on memorization or other simpler strategies. Ilya Sutskever talks about this, and was part of a team that published on it, basically showing that there is a sweet spot for the data-to-parameter ratio: giving the model more parameters improves performance up to a point, but past that point adding even more decreases performance. His explanation is that by limiting the number of parameters, the model is forced to generalize. So in Schmidhuber's language, the network is forced to make more compressed representations, so it overfits less and generalizes better.
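
For anyone checking the arithmetic, the bytes-per-image figure is just the weight size divided by the dataset size (the ~5 billion image count below is my assumption for illustration, not a number from the quoted passage):

```python
# Back-of-the-envelope check of the "bytes of weights per training image" claim.
weight_bytes = 6e9        # ~6 GB of weights, from the quoted passage
dataset_images = 5e9      # assumed: a training set on the order of 5 billion images
print(weight_bytes / dataset_images)  # ~1.2 bytes of weights per training image
```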

>First, this is the connectivist problem/fallacy in early AI and cog sci -- the notion that because small neuronal systems could be emulated somewhat with neural nets, and because neural nets could do useful biological-looking things, that then the limiting factor to intelligence/ability is simple scale

My argument about this doesn't come from ML systems mimicking biology. It comes from looking at exponential graphs of cost, performance, model parameters, and so on, and projecting that exponential growth will likely continue for a while. The first airplane didn't fly like a bird, it did something a lot simpler than that. In the same way, I'd bet the first AGI will be a lot simpler than a brain. I could be wrong about that.

But, I'm not even claiming that scaling transformers will lead to AGI, or that AGI will definitely be developed soon. All I'm saying is that there is significant expert uncertainty in when AGI will be developed, and it is possible that it could be developed soon. If it were, that would probably be the most difficult type of AGI to align, which is a concern.

2

Gnafets t1_ja2xy23 wrote

Being fearful about artificial general intelligence right now is akin to being afraid of overpopulation...on Mars. Anyone who has worked in machine learning research knows just how far we are from such a thing. It is not an exaggeration to say that neural networks simply are not capable of being the technology behind a supposed general intelligence. These ridiculous claims need to stop, especially because there are very real problems in privacy and bias that we do need to focus on.

0

jamesj OP t1_ja3weuo wrote

I agree problems of bias and privacy are real and important, but your claim about what anyone in ML believes just isn't true, and the article goes into some depth about it. Experts in machine learning collectively give a 50% probability of AGI by 2061, with huge differences in their individual estimates. Almost all of them say it will happen in the next 75 years.

If experts were saying there was a 90% chance an asteroid would hit the earth in the next 75 years, would you claim we shouldn't start working on a solution now?

1

Really_McNamington t1_ja22jqp wrote

First sentence: "The rise of transformer-based architectures, such as ChatGPT and Stable Diffusion, has brought us one step closer to the possibility of creating an Artificial General Intelligence (AGI) system."

Total bollocks. Bullshit generators is all they are.

Try this

−1

jamesj OP t1_ja23wyd wrote

Are you saying the transformer has brought us no closer to AGI?

2

Really_McNamington t1_ja2k482 wrote

No, the rapture of the nerds is as remote as ever it was. From the article I linked:

>How are we drawing these conclusions? I'm right here doing this work, and we have no clue how to build systems that solve the problems that they say are imminent, that are right around the corner.ā€ ā€“ Erik Larson

I probably spend too much time at r/SneerClub to buy into the hype.

−2

phillythompson t1_ja36t1g wrote

This dude references Netflix's recommendation system, Amazon recommendations, and Facebook as ā€œwhat we think true AI isā€.

That is so far removed from what many are discussing right now. He doesn't touch on LLMs at all in that interview. He talks about inference and thinking, and dismisses AI's capabilities because ā€œall it is is inferenceā€.

Itā€™s a common pushback: ā€œthe AI doesnā€™t actually understand anything.ā€ And my response is, ā€œ..so?ā€

If it gives the illusion of thinking. If it can pass the Turing test for most of the population. If it can eventually get integrated with real-time data, images, video, and sound. Does it honestly matter if it's ā€œtruly thinking as a human doesā€? Hell, do we even know how HUMANS think?

2

Really_McNamington t1_ja4s8u1 wrote

>Hell, do we even know how HUMANS think?

Hell no. So why the massive overconfidence that we're on the right track with these bullshit generators?

1

phillythompson t1_ja4sny6 wrote

Itā€™s not confidence that they are similar at all. There is potential, thatā€™s what Iā€™m saying ā€” and folks like yourself a the once being overconfident that ā€œthe current AI / LLM are definitely not smart or thinking.ā€

Iā€™ve yet to see a reason why weā€™d dismiss the idea that these LLMs arenā€™t similar to our own thinking or even intelligent. Thatā€™s my piint

1

Really_McNamington t1_ja4vs33 wrote

Look, I'm reasonably confident that there will eventually be some sort of thinking machines. I definitely don't believe it's substrate dependent. That said, nothing we're currently doing suggests we're on the right path. Fairly simple algorithms output bullshit from a large dataset. No intentional stance, to borrow from Dennett, means no path to strong AI.

I'm as materialist as they come, but we're nowhere remotely close and LLMs are not the bridge.

1

phillythompson t1_ja4xclz wrote

Iā€™m struggling to see how youā€™re so confident that we arenā€™t on a path or close.

First, LLMs are neural nets, as are our brains. Second, one could make the argument that humans take in data and output ā€œbullshitā€.

So I guess Iā€™m trying to see how we are different given what weā€™ve seen thus far. Iā€™m again not claiming we are the same, but I am not finding anything showing why weā€™d be different.

Does that make sense? I guess it seems like you're making a concrete claim of ā€œthese LLMs aren't thinking, and it's certain,ā€ and I'm saying, ā€œhow can we know that they aren't similar to us? What evidence is there to show that?ā€

1

Really_McNamington t1_ja6vq1o wrote

Bold claim that we actually know how our brains work. Neurologists will be excited to hear that we've cracked it. The ongoing work at OpenWorm suggests there may still be some hurdles.

To my broader claim: ChatGPT is just a massively complex version of ELIZA. It has no self-generated semantic content. There's no mechanism at all by which it can know what it's doing. Even though I don't know how I'm thinking, I know I'm doing it. LLMs just can't do that, and I don't see a route to that becoming an emergent thing this way.

1