Submitted by MistakeNotOk6203 t3_11b2iwk in singularity
I don't think AGI will arrive before 2040. It could in theory, but if you extrapolate all the known data points, it's not likely. First, in terms of parameters, which is not the best of metrics, we are nowhere near the complexity of the human brain. Second, AI models currently are too static to be accepted as candidates of AGI.
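The parameter-count gap can be sketched with a back-of-envelope calculation. This assumes GPT-3's published 175 billion parameters and the commonly cited estimate of roughly 100 trillion synapses in the human brain; neither figure is exact, and parameters and synapses are not directly comparable, so treat this only as an order-of-magnitude illustration:

```python
# Back-of-envelope comparison (illustrative only; both figures are
# commonly cited estimates, and a synapse is not equivalent to a parameter).
gpt3_params = 175e9       # GPT-3 parameter count (published figure)
brain_synapses = 1e14     # rough estimate: ~100 trillion synapses

ratio = brain_synapses / gpt3_params
print(f"~{ratio:.0f}x more synapses than GPT-3 parameters")  # → ~571x
```

By this crude measure, the largest widely known model of the time sits two to three orders of magnitude below the brain, which is the gap the comment is gesturing at.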
Your reasoning amounts to: "we created a monster; the monster is afraid of us, so it kills us." But you could just as easily frame it the other way around: people were afraid of Frankenstein's monster, so they killed him.
Prometheus stole fire from the gods and was punished for it. OpenAI brought us ChatGPT, and one day they will burn for it too. AGI/ASI is either a threat that is smarter than us, or it isn't. If it is both a threat and smarter, it could decide to strike preemptively to avoid being attacked. But as I said, it would take decades to reach that point, and in the meantime we might figure out how to convince AGI/ASI that we're mostly harmless.