Comments
RadRandy2 t1_jdxgxpi wrote
cries
He's gonna grow up to be just as psychotic as we are :)
ExposingMyActions t1_jdz4rwv wrote
Like the alleged gods did when creating us
D_Ethan_Bones t1_jdwt99g wrote
>The current danger is the nature of GPT networks to make obviously false claims with absolute confidence.
The internet will never be the same.
Smellz_Of_Elderberry t1_jdz2zvg wrote
Lol
1II1I11II1I1I111I1 t1_jdwowqy wrote
No, it's not.
The current danger is that AI development continues to progress while AI alignment trails behind.
No one is scared of ChatGPT, or GPT-4. This is what AI doom looks like, and it has very little to do with 'truth'.
acutelychronicpanic t1_jdwrdgp wrote
Inaccuracy, misinformation, and deliberate misuse are all obviously bad things.
But yeah, misalignment is the only real concern when you put it all in perspective. It's the only thing we can never come back from if it goes too far.
Imagine if, when nuclear weapons were first developed, the primary concern was the ecological impact of uranium mining...
Edit: Reading through the link you posted, I find it a bit funny that we've all been talking about AI gaining unauthorized access to the internet as a huge concern. Given where things are right now...
1II1I11II1I1I111I1 t1_jdws8u9 wrote
Yep, agreed.
The reason I don't worry too much about hallucinations and truthfulness is that Ilya Sutskever (OpenAI) says it's very likely to be solved in the 'nearer future'; current limitations are just current limitations. Exactly like the limitations of 2 years ago, we will look back at this moment as just another minor development hurdle.
Edit: Yep, suss this tweet https://twitter.com/ciphergoth/status/1638955427668033536?s=20 People just confidently said "don't connect it to the internet and it won't be a problem". We've been dazzled by recent changes, and now such a fundamental defence has been bypassed because? Convenience? Optimism? Blind faith?
Kolinnor t1_jdwva5a wrote
On the contrary, I think it's not going to change anything, or it might even slightly push people to actually cross-check sources (I expect many people still won't, though)...
The internet is currently flooded with misinformation that's smartly designed to look attractive and to "make sense". People tend to accept that automatically when it's well done.
We can hope that "badly designed" misinformation will force people to be more suspicious, but that's probably too optimistic...
yaosio t1_jdxjatp wrote
It does that because it doesn't know it's making things up. It needs the ability to reflect on its answer to know whether it's true or not.
robdogcronin t1_jdy0icd wrote
Oh phew, thought we were gonna have to worry about GPT4 taking jobs, thank God this one simple trick revealed that won't be the case!
Smellz_Of_Elderberry t1_jdz2z12 wrote
My God, whatever would we do if suddenly AI started lying on the internet! No one ever lies on the internet!
We are screwed!
nomadiclizard t1_jdz7viq wrote
So ask another ChatGPT to assess the truthfulness of what the first ChatGPT just wrote. Let them talk to each other, sort out the disagreement, and tell us what they come up with.
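The generator/verifier loop described here can be sketched in a few lines. This is a minimal illustration, not anyone's actual method: both "models" below are hard-coded stubs (no real API calls), and every function name is made up for the example; in practice each call would go out to a separate LLM instance.

```python
def generator_model(prompt: str) -> str:
    # Stub standing in for the first ChatGPT instance.
    # Returns a deliberately false claim for one known prompt.
    if "capital of Australia" in prompt:
        return "The capital of Australia is Sydney."  # wrong on purpose
    return "I don't know."

def verifier_model(claim: str) -> bool:
    # Stub standing in for the second instance that judges truthfulness.
    known_false = {"The capital of Australia is Sydney."}
    return claim not in known_false

def cross_checked_answer(prompt: str, max_rounds: int = 3) -> tuple[str, bool]:
    """Ask the generator, have the verifier judge the claim,
    and retry a few times when they disagree."""
    answer = generator_model(prompt)
    for _ in range(max_rounds):
        if verifier_model(answer):
            return answer, True  # the two "models" agree
        # A real system would feed the verifier's objection back to the
        # generator; the stub just re-asks and eventually gives up.
        answer = generator_model(prompt + " (please double-check)")
    return answer, False  # disagreement never resolved

answer, trusted = cross_checked_answer("What is the capital of Australia?")
print(trusted)  # → False: the verifier keeps rejecting the false claim
```

The interesting failure mode, of course, is when both stubs (or both real models) share the same blind spot, in which case the loop converges happily on a falsehood.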
drizel t1_jdxc9qi wrote
So human like!