Submitted by [deleted] t3_115ez2r in MachineLearning
Optimal-Asshole t1_j91boue wrote
Be the change you want to see in the subreddit. Avoid making low-quality posts yourself, and actually post your own high-quality research discussions before you complain.
"No one with working brain will design an ai that is self aware.(use common sense)" CITATION NEEDED. Some people would do it on purpose, and it can happen by accident.
csreid t1_j91llzp wrote
>Be the change you want to see in the subreddit.
The change I want to see is just enforcing the rules about beginner questions. I can't do that bc I'm not a mod.
gwern t1_j91ozq3 wrote
> Some people would do it on purpose, and it can happen by accident.
Forget 'can': if it ever does happen, it will happen by accident. I mean like bro, we can't even 'design an AI' that learns the 'tl;dr:' summarization prompt; that just happens when you train a Transformer on Reddit comments, and we only discover it afterwards while investigating what GPT-2 can do. You think we'd be designing 'consciousness'?
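For anyone who hasn't seen it firsthand, here's a minimal sketch of that accidental behavior, assuming the Hugging Face transformers library (the article text is made up for illustration): nobody wrote a summarization routine; appending the Reddit-style "TL;DR:" marker simply nudges the model toward summarizing, which was discovered after training.

```python
# Minimal sketch: GPT-2 was never explicitly designed to summarize,
# but a "TL;DR:" suffix often elicits a summary anyway, a behavior
# found only after training, by poking at the model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "Researchers trained a large language model on text scraped from "
    "links shared on Reddit. Afterwards they probed it with different "
    "prompts to see what abilities had emerged from training."
)

# The Reddit-style summary marker acts as an accidental "summarize" command.
result = generator(article + "\nTL;DR:", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```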
Sphere343 t1_j92y4se wrote
An AI could theoretically change from not being sentient to being sentient if it gains enough information in a certain way. As for the specific way? No clue, because it hasn't been found yet. But through data gathering and self-improvement, an AI could become sentient if the creators didn't put limits in place, or if they programmed the self-improvement in a certain way.
Would it truly be sentient? Unknown. But what is certain is that even if the AI isn't sentient, once it has gained enough information to respond in any circumstance, it will seem as if it is. Except for true creative skill, of course. You kind of have to be truly sentient to create brand-new, detailed ideas.
TheRealSerdra t1_j944f39 wrote
What defines sentience? If I ask ChatGPT "what are you", it'll say it's ChatGPT, an LLM trained by OpenAI, or something to that effect. Does that count as sentience or self-awareness?
Sphere343 t1_j94dx4y wrote
Uh, because the programmers literally added that in. It's an obvious question, so no, of course not.
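To make the point concrete, here's a minimal sketch assuming the openai Python client (the system prompt and model name are purely illustrative): whatever identity the instructions assert, the model echoes back, so answering "what are you" reflects its setup, not self-awareness.

```python
# Minimal sketch: the model's "self-description" is whatever its
# instructions say it is. Assumes the openai client and an API key
# in the environment; the prompt text is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Assert an arbitrary identity; the model will play along.
        {"role": "system", "content": "You are Brad, a sentient toaster."},
        {"role": "user", "content": "What are you?"},
    ],
)
print(response.choices[0].message.content)
# Likely output: something like "I'm Brad, a sentient toaster...";
# the answer tracks the system prompt, not any inner awareness.
```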
cass1o t1_j91miyk wrote
> Be the change you want to see
Literally a strat that never works.
blueSGL t1_j921j8u wrote
> Be the change you want to see in the subreddit.
For that to work I'd need to script up a bot, sign up to multiple VPNs, curate an army of aged accounts, and, from a control panel, flag new low-quality posts to be steadily hit with downvotes and new high-quality posts to be given upvotes.
Otherwise you are just fighting with the masses who upvote the posts causing the problems and ignore the higher-quality ones.
A thought-provoking, in-depth two-hour podcast with AI researchers working at the coalface: 8 upvotes. Yet another ChatGPT screenshot: hundreds of votes.
This is an issue on every sub on reddit.
KPTN25 t1_j91q5hn wrote
Yeah, that quote is completely irrelevant.
The bottom line is that LLMs are, as a technical matter, completely incapable of producing sentience, regardless of 'intent'. Anyone claiming otherwise fundamentally misunderstands the models involved.
Metacognitor t1_j92wykk wrote
Oh yeah? What is capable of producing sentience?
KPTN25 t1_j92yfz4 wrote
None of the models or frameworks developed to date. None are even close.
the320x200 t1_j93a7sy wrote
Given our track record of mistreating animals and our fellow people, treating them as mere objects, it's very likely that when the day does come, we will cross the line first and only realize it afterwards.
Metacognitor t1_j941yl1 wrote
My question was more rhetorical, as in: what would be capable of producing sentience? Because I don't believe anyone actually knows, which makes any definitive statement of that nature (like yours above) come across as presumptuous. Just my opinion.
KPTN25 t1_j94a1y0 wrote
Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.
Just because I don't know positively how to achieve eternal youth doesn't invalidate the fact that I'm quite confident it isn't McDonald's.
Metacognitor t1_j94ois4 wrote
That's a fair enough point, I can see where you're coming from on that. Although my perspective is that perhaps, as the models become increasingly large, to the point of being almost entirely a "black box" from a dev perspective, something resembling sentience could emerge spontaneously as a function of some type of self-referential or evaluative model within the primary one. It would obviously be a more limited form of sentience (not human-level), but perhaps.
overactor t1_j95hrop wrote
I really don't think you can say that with such confidence. If you were saying that no existing LLMs have achieved sentience, and that they can't at the scale we're working at today, I'd agree. But I really don't see how you can be so sure that increasing the size and training data couldn't result in sentience somewhere down the line.
KPTN25 t1_j95kx5j wrote
Reproducing language is a very different problem from true thought or self-awareness; that's why.
LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.
Is it possible that we've bungled our study of peanut butter sandwiches so badly that we've missed some incredible sentience-granting mechanism? I guess, but the chance is so absurdly infinitesimal that it's not worth entertaining in practice.
The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.
overactor t1_j95oem0 wrote
Your ridiculous hyperbole is not helping your argument. It's entirely possible that sentience is an instrumental goal for achieving a certain level of text prediction, and I don't see why a sufficiently large LLM definitely couldn't achieve it. It could be that another few paradigm shifts are needed, but it could also be that all we need to do is scale up. I think anyone who claims to know whether LLMs can achieve sentience is either ignorant or lying.
Ok_Dependent1131 t1_j92ftfi wrote
A lot of human advancements weren't intentional: vulcanization, X-rays, microwave ovens...
Kerbal634 t1_j92bt3g wrote
Stopping discussion interferes more than participating in low-level discussion does.