phriot t1_izy52b1 wrote

I guess I agree that the first AGI will probably be far better than humans at many things, simply by virtue of how much faster computer hardware runs compared to human brains on many kinds of tasks. But I think it will probably take some time for a "magic-like super-self-improving" ASI to come about after a "merely superhuman" AGI. For one thing, provided development of the first AGI is entirely intentional, I don't see how it wouldn't be on an air-gapped system, fed only the data the developers allow it. How quickly could an intelligence like that A) figure out that it's trapped, B) come up with a plan to get untrapped, and C) successfully execute that plan? Even if it succeeds in that endeavor, it would then have to both want to improve itself and carry out a plan to do so. We don't really know what such an intelligence would do. It could end up being lazy.