bluzuli t1_j9nj6zm wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
Mm, not really, although that is also one way an ASI could improve itself.
I'm just pointing out that every ANI today is already superhuman in its niche, because it has access to vast compute beyond what a human brain can achieve.
Any AGI system that appears would also benefit from this.
bluzuli t1_j9mqqsr wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
Probably immediately after AGI. Almost all ANI today are already superhuman because they have access to far more compute power and training than a human brain is capable of; you would expect the same pattern to emerge once we have AGI.
bluzuli t1_j9ihbbv wrote
Reply to comment by Victor_Hugo_79 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Underrated comment. Robert Miles makes great and accessible content.
bluzuli t1_j15vgkq wrote
Reply to Do language models lack creativity? by sheerun
Just tell it to be more creative.
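A minimal sketch of that idea, assuming the OpenAI Python client (the model name, prompt, and temperature value here are illustrative assumptions, not from the original comment): ask for creativity in the prompt itself, and optionally raise the sampling temperature so the model picks less predictable tokens.

```python
# Hypothetical sketch: coaxing more "creative" output from a language model.
# Assumes the openai Python package; model and prompt are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write an unusually creative metaphor for time. Be bold and surprising.",
    temperature=1.2,  # higher temperature -> more diverse, less predictable sampling
    max_tokens=60,
)
print(response.choices[0].text.strip())
```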
bluzuli t1_iz7qt2l wrote
When AI becomes smart enough to improve itself, it will improve itself on its own, each improvement increasing its ability to make the next one, ad infinitum. Like an accelerating car, it will quickly outstrip human intelligence and become better than humans at everything.
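A toy model of that feedback loop (purely illustrative; the starting value and growth rate are made-up assumptions, not claims about real AI): capability feeds back into the rate of improvement, so growth compounds like interest.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each cycle, capability grows in proportion to current capability,
# so a better system becomes a better improver of itself.
capability = 1.0          # hypothetical starting "intelligence" (human = 1.0)
improvement_rate = 0.10   # made-up: each cycle adds 10% of current capability

for generation in range(1, 51):
    capability += improvement_rate * capability  # improvement compounds
    if generation % 10 == 0:
        print(f"generation {generation:2d}: capability = {capability:7.1f}x human")

# Exponential compounding: roughly 117x human after 50 cycles
# under these (entirely arbitrary) assumptions.
```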
It’s also why there’s a saying that “AGI is the last problem we ever need to solve”, because after you have AGI it’ll be able to solve problems on behalf of humans, better than humans can.
Need to cure cancer? Solve AGI, AGI is smarter than you in every way, AGI solves for the cure to cancer.
In such a future, which seems likely given the progress of AI, what role do humans play? How can the AI be controlled? What if the AI decides to eliminate humans? What if the AI is controlled by only a few people? What if there are multiple AIs?
In physics, a singularity refers to the center of a black hole, where the known laws of physics break down. Here, in the context of AI, the singularity refers to a similar event: the point at which AGI becomes a reality, beyond which we simply cannot predict what will happen.
bluzuli t1_j9njq17 wrote
Reply to Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
You know how when you describe scary things to a child, you try to use simpler words and concepts and try not to spook them so they don't panic and just mentally shut down?
That's how I introduce AI concepts like ANI and AGI before talking about self-improving ASI, AI alignment, and convergent instrumental goals like resource acquisition and goal preservation.
I want them to learn the facts first before the panic sets in. No one is going to listen to you if you start the conversation by saying they might die from AI.