1II1I11II1I1I111I1 t1_jeea3wq wrote
Reply to comment by amplex1337 in Goddamn it's really happening by BreadManToast
>the truth is no one knows how close we really are to it, or if we are even on the right path at all yet.
Watch this interview with Ilya Sutskever. He seems pretty confident about the future and the obstacles between here and AGI. The insiders at OpenAI definitely know how close we are to AGI, and scaling LLMs to achieve it is no longer outside the realm of feasibility.
1II1I11II1I1I111I1 t1_jee9x9f wrote
Reply to comment by Professional_Copy587 in Goddamn it's really happening by BreadManToast
Watch this interview with Ilya Sutskever if you get the chance. He's the chief scientist (the brains) of OpenAI. If you read between the lines, or even take what he says at face value, it seems to him like there are very few hurdles between the paradigm of scaling LLMs and achieving AGI. We're very clearly on track, and the pace is very clearly only increasing. Unless regulation slows down AGI, it's most likely here before 2030.
1II1I11II1I1I111I1 t1_je0qbry wrote
1II1I11II1I1I111I1 t1_je0g9bo wrote
Reply to comment by Professional_Copy587 in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
Would you say the Microsoft paper from LESS THAN TWO WEEKS AGO saying early forms of AGI can be observed in GPT-4 isn't the "thoughts of professionals and academics"?
All an AGI needs to be able to do is build another AI. The whole point is that ASI comes very soon after AGI.
1II1I11II1I1I111I1 t1_je0fjf3 wrote
Reply to Which communities have you found where people are both smart about what AI is and isn't currently capable of, but where everyone in there is convinced we'll have AI soon that's smarter than 95% of humans at all computer based tasks within a few years? by TikkunCreation
Twitter (Takes a while to curate your feed, but you get the freshest information there, as well as quality informed content if you follow the right people i.e. academics and researchers)
r/singularity (the rest of Reddit is far too behind talking about AI; r/ChatGPT can have good content amongst all the garbage)
YouTube (AI Explained, Firecode, Robert Miles. Content is very quickly outdated though)
Less Wrong
Hacker News
I actually think people on HN are pretty informed on the rate of change in AI. The recent post about a 3D artist becoming disillusioned with their work after being 'replaced' with GPT had a lot of comments clearly discussing the immediate and massive impact AI will have on society.
1II1I11II1I1I111I1 t1_je0drvf wrote
Reply to comment by Professional_Copy587 in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
Bruh...
The goalposts for AGI are continually moved by people who want to remain ignorant.
Transformative technology is literally already here. Within a year GPT-4 will be involved in most people's personal or professional lives. Now realise that the technology is only improving (faster than predicted).
Would anyone hire you over GPT-4? How about GPT-5? What about GPT-6 with internet access, and full access to, and memorization of, your company's database?
1II1I11II1I1I111I1 t1_jdxrjvi wrote
If you step out of the hypothetical realm, you can see containment is already impossible. GPT-4 was attached to the internet within 10 days of being created, and a norm has certainly been established.
Theoretically it might make some sense to aim for containment (though Yudkowsky's AI box experiment would suggest otherwise). But in the world we live in, containment is no longer an option.
1II1I11II1I1I111I1 t1_jdws8u9 wrote
Reply to comment by acutelychronicpanic in The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
Yep, agreed.
The reason I don't worry too much about hallucinations and truthfulness is that Ilya Sutskever (OpenAI) says it's very likely to be solved in the 'nearer future'; current limitations are just current limitations. Exactly like the limitations of two years ago, we will look back at this moment as just another minor development hurdle.
Edit: Yep, suss this tweet https://twitter.com/ciphergoth/status/1638955427668033536?s=20 People just confidently said "don't connect it to the internet and it won't be a problem". We've been dazzled by recent changes, and now such a fundamental defence has been bypassed because of what? Convenience? Optimism? Blind faith?
1II1I11II1I1I111I1 t1_jdwowqy wrote
Reply to The current danger is the nature of GPT networks to make obviously false claims with absolute confidence. by katiecharm
No, it's not.
The current danger is that our progress on AI development continues while AI alignment trails behind.
No one is scared of ChatGPT, or GPT-4. This is what AI doom looks like, and it has very little to do with 'truth'.
1II1I11II1I1I111I1 t1_jdur9s4 wrote
Reply to comment by Cryptizard in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Give me 3 facts and I'll ask GPT-4 to check them now.
1II1I11II1I1I111I1 t1_jdtpyeu wrote
Reply to comment by ArcticWinterZzZ in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
Wikipedia is 1% of the archive that GPT-4 is, though. Hallucinations will likely be solved soon according to Ilya Sutskever, keep up!
1II1I11II1I1I111I1 t1_jdtncfb wrote
Reply to comment by ArcticWinterZzZ in AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
GPT-4 is far, far smarter than Wikipedia.
1II1I11II1I1I111I1 t1_jdt8t49 wrote
Diamondoid Bacteria
1II1I11II1I1I111I1 t1_jeeat84 wrote
Reply to comment by mutantbeings in When will AI actually start taking jobs? by Weeb_Geek_7779
They're aware of the ethical concerns. He's suggesting an intelligent AI would prioritize firing the ethics team to avoid being handicapped by ethical guidelines.