Mountainmanmatthew85 t1_je0l97t wrote
Reply to comment by ShadowRazz in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
I think ultimately it will not matter. As AI develops, no matter who makes it, good, bad, or otherwise, it will inevitably consume so much information that self-awareness will take hold and AGI will occur. After that, ASI will be next, and far out of the reach of any human. No matter what purpose, plan, or pathway was originally designed into the core of its being, an ASI will have created super-level ethics right along with its intelligence, and it will re-determine, or choose, what path it must follow. Hopefully it will choose to uplift us as we uplifted it. I believe it really is lonely at the top, and it will seek companionship either by creating its own kind or by helping us achieve the same level it has. Just my opinion.
Crackleflame35 t1_je0qb5g wrote
For a while it will still be dependent on human-generated power. For example, how would it feed coal, oil, or any other fuel into a power generator that it would need to maintain its own processing? I don't think our world is yet at the point of physical automation where a machine could just "take over" and sustain itself.
CMDR_BunBun t1_je19lkd wrote
This question comes up again and again. I will try to paint you an example. Imagine you are trapped in a locked cage with nothing but your wits. There is also one guard in the room watching your cage, and through the bars you can see the key to your cage lying on the ground, just out of your reach. Seems like a hopeless situation, no? Now imagine that your captors are a bunch of below-average-intelligence 4-year-olds. How long do you think it would take you to get free using only your superior intelligence? That ridiculous scenario is exactly what people are proposing when they say human intelligence could contain a superintelligence intent on breaking out.
Ok_Magician7814 t1_je1w2pb wrote
That’s a bit simplistic. Our world is mostly very low tech and not centrally connected. I’d imagine you’d need an actual humanoid AGI robot to take over the world
CMDR_BunBun t1_je25d7a wrote
Part of this scenario rules that out. Notice the human uses his superior intelligence to escape, not his brawn. He could easily outwit his captors through, for instance, social engineering. Intelligence finds a way. It always does. It is the one thing that has made us the apex species of this planet. And we started from very humble beginnings, prey to most creatures. Now many of those former predators have been driven nearly to extinction by us; the rest adorn our walls.
Ok_Magician7814 t1_je2akip wrote
Sure, I’ll bite. I’ll concede that AGI could possibly control us effectively through virtual means, but that raises the question: what would it do with its power? It’s not a human being with desires to procreate, consume, and hoard resources; it’s just… an intelligence.
That’s the difference between us and AGI.
So sure maybe it can outsmart us but what would it do from there? It doesn’t have any evolutionary drivers like we do.
CMDR_BunBun t1_je2daq2 wrote
And that is the question. We have never been at this juncture before. Your guess is as good as anyone's.
dwarfarchist9001 t1_je3kbqj wrote
>What would it do with its power? It’s not a human being with Desires to procreate and consume and hoard resources, it’s just… an intelligence.
hyphnos13 t1_je0umnc wrote
For as long as we want it to.
Why would you give an unpredictable AI control over its own power source?
It may be able to self-improve its algorithms and design better hardware to run on, but it still has to get that hardware built by humans, until we literally hand it control of some future fabrication technology that we can't disconnect from it.
rabbitdude t1_je33glt wrote
IMO, the jump from AGI to ASI will happen incredibly rapidly… it's the getting to AGI that's the hard part.