Submitted by Shiningc t3_124rga4 in Futurology
Trout_Shark t1_je0jtus wrote
The corporation that creates a functioning AGI first is going to be a worldwide player like we have never seen before. I think the biggest concern is whether they can control it. The only limitation on its ability to expand itself is computational power. The future is going to be wild.
KungFuHamster t1_je0mtoj wrote
Yeah, the invention of AGI is often referred to in science fiction as the technological singularity, because the speed of AI makes the future beyond that point literally unknowable. If we can keep it from killing us, it should advance our technologies at a tremendous rate.
Trout_Shark t1_je0p2vb wrote
Implementing something like Asimov's "Three Laws of Robotics" should be a major priority. The singularity is of course a major concern. Anything that can learn at an exponential rate is going to be difficult to keep under control for long.
acutelychronicpanic t1_je0z1p4 wrote
I agree with the sentiment, but a lot of work has gone into this since those three laws were written. It's still an unsolved problem.
nybbleth t1_je2tf6s wrote
The three laws don't really work, though, on multiple levels. They're far too simplistic and ambiguous, and effectively impossible to implement in a way that an AI could consistently follow.
imanon33 t1_je0twsl wrote
AGI will be the last invention of man. After that, the AGI will invent everything else.
Cerulean_IsFancyBlue t1_je0upng wrote
An AGI would be an amazing feat.
The first AGI will be the equivalent of a human baby, completely helpless. It will likely run on a massive array of computer hardware backed by a tremendous amount of electrical generating capacity, and even if it wanted to duplicate itself, it would not be able to do so rapidly or without detection.
If anything, it will be even less able to survive on its own than a human baby.
All the ideas we have about being unable to control an AI rely on Hollywood-level notions of what is hackable and controllable. It could thrash around and mess up a lot of systems, and there's a pretty good chance it would suicide in the process. Every model we have for an AI right now requires a tremendous amount of computing power, electricity, and cooling. It's not going to be able to run away and hide in "the internet." If it tries, it will probably contract a fatal disease from half the computers it attempts to occupy.
Mercurionio t1_je46ts5 wrote
AGI is hardware that isn't chained to waiting for a prompt.
Imagine a "do ... while" loop, where the "while" condition is bounded by energy consumption.