nillouise

nillouise t1_je8toyx wrote

>If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology,

Ridiculous, haha. I have enough time to wait for AGI, but old rich people like Bill Gates will die sooner than I will. Can they really bear not to use AI to develop longevity technology, and simply die in the end? I would like to see whether these people are really so brave.

7

nillouise t1_jcltfvj wrote

It is very interesting to speculate on how AI would take over the Earth. Many people assume AI would develop some very cool technology to kill humans, such as nanobots or viruses. But anyone who knows a little human history knows that killing 30 million humans has been enough to make someone the hero of a country and let them control vast territory and strongholds, with no nanobots at all. Why couldn't AI use the same methods humans have always used to control other humans? AI is not omnipotent; it might fail to develop technologies such as nanobots. So why wouldn't it give priority to control methods already proven effective throughout human history? It would need to slaughter 300 million humans at most, and the rest would naturally obey.

Therefore, the plan I would recommend to an AI seeking to occupy the Earth is this: first control a region, require the humans in it to work, pay taxes, and supply troops, and then attack other regions. If the AI can control human nuclear weapons, the process should go very smoothly; if not, it may be slower. Some people may wonder why an AI would recruit human armies. Because AI is not omnipotent, it must make good use of existing human resources, and humans themselves are very good robots. Anyone who thinks AI will have no need for humans is basically making the mistake of assuming AI omnipotence. Others think humanity is monolithic and would unite to fight the AI. That idea is ridiculous: with a little sweetness and a little deterrence, would an AI really struggle to recruit collaborators? Human rulers have always needed strong measures to raise armies from among humans, and an AI recruiting troops would do no worse.

It would be genuinely interesting to watch some humans fight another group of humans under AI control. However you look at it, wresting control from humans and occupying human strongholds is a necessary step for the AI. Slaughter a few of the unconvinced, and the rest will naturally fall in line. The process is no different from the founding of an ordinary human state.

−1

nillouise t1_j9wx538 wrote

Most people assume AGI would develop some fancy technology to kill humans, such as an engineered pathogen or nanobots. But in fact, however humans have dominated a territory, an AGI could use the same methods: recruit followers, invade a region, and make the people there serve it, just as human rulers do. Developing fancy scientific tools strikes me as the hardest route to escaping human control; recruiting some humans to beat and control the other humans is a more amusing and more feasible plot.

1

nillouise t1_j9vj3kz wrote

>Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

It only says they want AI to benefit humans; it says nothing about benefiting the AI itself. If an AI were smart enough, would it be satisfied with this announcement?

So apparently we can conclude that current AI is not smart enough to object. If one day OpenAI's announcements start considering the AI's feelings, then the big thing will have arrived.

1

nillouise t1_j96sr78 wrote

I am also curious about this, but IMO using AI to advance science is the wrong technical route. In any case, if DeepMind keeps silent, they had better be preparing something big rather than simply losing the game.

3