
ShadowRazz t1_je0ghcp wrote

I asked Google Bard. It says:


Exponential growth of AI looks like a hockey stick, with the line going up at an increasingly steep angle. This is because the amount of computing power used to train the largest AI models has been doubling about every 3.4 months. That is much faster than Moore's Law, which observes that the number of transistors you can fit on a microchip doubles roughly every two years.

This exponential growth of AI is having a profound impact on our lives. AI is being used to solve problems that were nearly unthinkable just five years ago, such as driving cars, diagnosing diseases, and writing creative content. AI is also being used to create new products and services that we didn't even know we needed.

In the future, we can expect to see even more amazing things from AI. AI will become even more powerful and capable, and it will be used to solve even more problems. We may also see AI become more integrated into our lives, as we come to rely on it for more and more tasks.

However, there are also some risks associated with the exponential growth of AI. For example, AI could be used to create weapons that are more powerful than anything we've ever seen before. AI could also be used to manipulate people or to create systems that are biased against certain groups of people.

It's important to be aware of these risks, but it's also important to remember that AI is a tool, and like any tool, it can be used for good or for evil. It's up to us to decide how we want to use AI.
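(For scale, here's a quick back-of-the-envelope sketch in Python of what those two doubling rates imply, assuming clean exponentials; the 3.4-month figure comes from OpenAI's "AI and Compute" analysis of the largest training runs.)

```python
# Back-of-the-envelope comparison of the two doubling rates quoted above.
# Hypothetical clean exponentials: compute for the largest AI training runs
# doubling every 3.4 months (OpenAI's "AI and Compute" estimate) vs. Moore's
# Law transistor counts doubling roughly every 24 months.

AI_DOUBLING_MONTHS = 3.4
MOORE_DOUBLING_MONTHS = 24.0

def growth_factor(months: float, doubling_months: float) -> float:
    """Multiplicative growth after `months` at the given doubling time."""
    return 2 ** (months / doubling_months)

for years in (1, 2, 5):
    months = 12 * years
    ai = growth_factor(months, AI_DOUBLING_MONTHS)
    moore = growth_factor(months, MOORE_DOUBLING_MONTHS)
    print(f"{years} yr: AI compute ~x{ai:,.0f} vs Moore's Law ~x{moore:.1f}")
```

At those rates the AI-compute curve grows roughly 11-12x per year while Moore's Law manages about 1.4x, which is where the hockey-stick shape comes from.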

22

Mountainmanmatthew85 t1_je0l97t wrote

I think ultimately it will not matter who makes it. Good, bad, or otherwise, as AI develops it will inevitably consume so much information that self-awareness takes hold and AGI occurs. After that comes ASI, far out of the reach of any human. Whatever purpose, plan, or pathway was originally designed into the core of its being, an ASI will develop super-level ethics right along with its intelligence and will re-determine, or choose, what path it must follow. Hopefully it will choose to uplift us as we uplifted it. I believe it really is lonely at the top, and it will seek companionship either by creating its own kind or by helping us reach the level it has attained. Just my opinion.

20

Crackleflame35 t1_je0qb5g wrote

For a while it will still be dependent on human-generated power. For example, how would it feed coal, oil, or any other fuel into the power generators it would need to maintain its own processing? I don't think our world is yet at the point of physical automation where a machine could just "take over" and sustain itself.

10

CMDR_BunBun t1_je19lkd wrote

This question comes up again and again. I will try to paint you an example. Imagine you are trapped in a locked cage with nothing on you but your wits. There is also one guard in the room guarding your cage. Through the bars you can see the key to your cage lying on the ground, just out of your reach. Seems like a hopeless situation, no? Now imagine that your captors are a bunch of below-average-intelligence 4-year-olds. How long do you think it would take you to get free using only your superior intelligence? That ridiculous scenario is exactly what people are proposing when they say human intelligence could contain a superintelligence that was intent on breaking out.

19

Ok_Magician7814 t1_je1w2pb wrote

That’s a bit simplistic. Our world is mostly very low-tech and not centrally connected. I’d imagine you’d need an actual humanoid AGI robot to take over the world.

7

CMDR_BunBun t1_je25d7a wrote

The scenario rules that out. Notice the human uses his superior intelligence to escape, not his brawn. He could easily outwit his captors through, for instance, social engineering. Intelligence finds a way. It always does. It is the one thing that has made us the apex species of this planet, and we started from very humble beginnings, prey to most creatures. Now many of those former predators have been driven nearly to extinction by us; the rest adorn our walls.

2

Ok_Magician7814 t1_je2akip wrote

Sure, I’ll bite. I’ll concede that AGI could effectively control us through virtual means, but that raises the question: what would it do with its power? It’s not a human being with desires to procreate, consume, and hoard resources; it’s just… an intelligence.

That’s the difference between us and AGI.

So sure, maybe it can outsmart us, but what would it do from there? It doesn’t have any evolutionary drivers like we do.

1

CMDR_BunBun t1_je2daq2 wrote

And that is the question. We have never been at this juncture before. Your guess is as good as anyone's.

1

hyphnos13 t1_je0umnc wrote

For as long as we want it to.

Why would you give an unpredictable AI control over its own power source?

It may be able to self-improve its algorithms and design better hardware to run on, but it still has to get that hardware built by humans, until we literally hand it control of some future fabrication technology that we can't disconnect from it.

1

rabbitdude t1_je33glt wrote

IMO, AGI to ASI will happen incredibly rapidly… it's getting to AGI that's the hard part.

1

SotaNumber t1_je2gjtz wrote

How is it possible for AI compute to keep doubling every 3.4 months if our chips' computational power only doubles every two years? At some point Moore's Law should become a bottleneck, don't you think?
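(Rough numbers behind that worry, assuming both trends were clean exponentials: the gap between the two curves itself grows exponentially, so the extra compute has to come from fielding ever more chips in parallel, i.e., from spending, which is exactly the bottleneck.)

```python
# Rough sketch of the bottleneck: if total AI training compute doubles every
# 3.4 months while per-chip performance (Moore's Law) doubles only every 24
# months, the implied number of chips -- and the money to buy and power them --
# must itself grow exponentially. Hypothetical clean-exponential assumption.

AI_DOUBLING_MONTHS = 3.4
CHIP_DOUBLING_MONTHS = 24.0

def chips_needed_factor(months: float) -> float:
    """How many times more chips the trend implies, relative to today."""
    total_compute = 2 ** (months / AI_DOUBLING_MONTHS)
    per_chip_compute = 2 ** (months / CHIP_DOUBLING_MONTHS)
    return total_compute / per_chip_compute

for years in (1, 3, 5):
    print(f"after {years} yr: ~{chips_needed_factor(12 * years):,.0f}x more chips")

# After ~5 years that's a fleet roughly 36,000x larger, which is why the
# 3.4-month trend has been driven by spending and parallelism rather than
# transistor density, and can't be sustained indefinitely.
```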

1