nillouise
nillouise t1_je6rzrt wrote
If we can be sure AGI will win the AI vs. human race, then we don't need to care about the race between human countries.
nillouise t1_je6p2vp wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
Ridiculous. Billionaires like Bill Gates are eager to use AI to develop immortality technology. Do these people really not fear death, that they want to ban powerful AI? And are they really not worried that a future ASI will retaliate against them?
nillouise t1_jcltfvj wrote
It is very interesting to speculate on how AI could take over the earth. Many people assume AI will develop some very cool technology to kill humans, such as nanobots or viruses. But anyone who knows a little human history knows that killing 30 million humans has been enough to make someone a national hero who controls a vast territory and its strongholds, no nanobots required. Why couldn't AI use the same methods humans have used to control other humans? I don't think AI is omnipotent; it may fail to develop technologies like nanobots, so why wouldn't it prioritize control methods already proven effective throughout human history? It would only need to slaughter 300 million humans at most, and the rest would naturally obey.
Therefore, the plan I would recommend to an AI trying to occupy the earth is to first control one region, require the humans in it to work, pay taxes, and raise troops, and then attack other regions. If the AI can control human nuclear weapons, this process should go very smoothly; if not, it may be slower. Some people may wonder why an AI would recruit human armies. Because AI is not omnipotent, it has to make good use of existing human resources; humans themselves are very good robots. Anyone who thinks AI will have no need for humans is basically making the mistake of assuming AI omnipotence. There are also people who think humanity is monolithic and will fight against AI together. That kind of thinking is ridiculous. With a little sweetness and a little deterrence, does an AI really need to worry about failing to recruit collaborators? Humans already need strong measures to recruit troops from among humans, and an AI recruiting troops would do no worse.
It would really be interesting to see some humans fighting another group of humans under the control of AI. However you look at it, escaping human control and occupying human strongholds is a necessary step for the AI. If it slaughters some unconvinced humans, the rest will naturally fall in line. This process is no different from the founding of an ordinary human country.
nillouise t1_jcljba9 wrote
Reply to comment by kamenpb in VR Seems to Unlock the True Potential of Proto-AGI by kamenpb
This is only one possible outcome; before that time comes, the useful thing to do is try to make money.
nillouise t1_jchnf2n wrote
How the world changes is unimportant; how to make money from the change is what matters.
nillouise t1_jb8h30n wrote
Reply to comment by angus_supreme in What might slow this down? by Beautiful-Cancel6235
Maybe China throwing a bomb into DeepMind's office could do that.
nillouise t1_jaatn0i wrote
Reply to Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
This news doesn't say how much money Tencent will invest. Most Chinese companies actually don't want to put too much money into ChatGPT, let alone into AGI.
But I look forward to seeing the Chinese government go crazy about this tech and pay 100x the chip investment into AGI.
nillouise t1_jaao1y6 wrote
Reply to comment by YaAbsolyutnoNikto in How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
OK, it seems living is an easy game for you; doing what you want is enough.
nillouise t1_jaajnkl wrote
Reply to How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
If you think AI will develop quickly and maybe replace your job, isn't it a reasonable strategy not to go to school and to save your money for living instead? Or would you rather invest the money in yourself (who will never develop as quickly as AI) and lose the game?
nillouise t1_jaaitpd wrote
I hope it can replace Tencent's WeChat app, which disallows integrating ChatGPT.
nillouise t1_ja5p0q8 wrote
Reply to comment by nillouise in AI that can translate whole videos ? by IluvBsissa
And if you stay optimistic about AI, then you would rather learn from ChatGPT than from human teachers at school.
nillouise t1_ja5ot5y wrote
Reply to AI that can translate whole videos ? by IluvBsissa
If AI keeps developing, it will be hard to use the skills you learn in school to make money, so, reasonably, you don't need to study in school anymore.
nillouise t1_j9wx538 wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
Most people think AGI will develop some fancy tech to kill humans, like an engineered pathogen or nanobots, but in fact, however humans have dominated a territory, an AGI can use the same methods: recruit followers, invade an area, and make the people there serve it, just like human rulers do. In fact, I think developing fancy scientific tools is the hardest way to escape human control, while recruiting some humans to beat and control the other humans is a more amusing and feasible plot.
nillouise t1_j9vxbsi wrote
It seems the most likely reason is that light-speed travel is impossible. Anyway, we will know the answer soon.
nillouise t1_j9vj3kz wrote
Reply to OpenAI’s roadmap for AGI and beyond by yottawa
>Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
It only says they want AI to benefit humans, excluding any benefit to the AI itself. If the AI is smart enough, will it be satisfied with this statement?
So apparently we can conclude that current AI is not smart enough to object. If one day OpenAI's announcements start considering the AI's feelings, then the big thing has arrived.
nillouise t1_j9vi45i wrote
Reply to comment by NutInBobby in OpenAI’s roadmap for AGI and beyond by yottawa
So the AI will read all our conversations, hahahaha.
nillouise t1_j9s98xr wrote
Reply to comment by blueSGL in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I asked the same question, "how to predict the order in which AI will gain different capabilities," and nobody knows how to answer it. I think this obviously shows there is no method for tackling the problem, so people's AI timelines are useless.
But nobody knowing the future is more fun.
nillouise t1_j9rkpd0 wrote
Reply to What do you expect the most out of AGI? by Envoy34
What method AGI will use to escape human control.
If AGI stays under human control, I would think it is stupid and laugh at it.
nillouise t1_j9lhlwo wrote
Reply to What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
On GitHub, I can only download the base model; is the large model private? But I think the model would be more useful to me as a game-playing model than as a science QA model.
nillouise t1_j9cm8n6 wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
I think seeing the real world's ending is more fun, so why play the game? Wouldn't the world at its ending be more meaningful and interesting?
Don't you want to see the world's ending? Why not?
nillouise t1_j97b4j0 wrote
Reply to comment by Professional-Song216 in Hey guys, a couple of questions to see where your heads at! by TheChalaK-
No, I'm just trying to trade stocks.
nillouise t1_j96wkzs wrote
I'm trying to trade AI stocks now. I'm most excited to see an AI speculative boom like the 2000 internet speculative boom.
nillouise t1_j96tk8p wrote
Reply to Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
Most people prefer VR, but mind control is a more useful tech.
You can always mind-control yourself into a nice dream, which is the same as VR.
nillouise t1_j96sr78 wrote
Reply to What’s up with DeepMind? by BobbyWOWO
I am also curious about this, but IMO using AI to advance science is the wrong tech route. Anyway, if DeepMind keeps silent, they had better be making something big rather than just losing the game.
nillouise t1_je8toyx wrote
Reply to The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
>If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology,
Ridiculous, haha. I have enough time to wait for AGI, but rich old people like Bill Gates will die sooner than I will. Can they bear not using AI to develop longevity technology, and simply die in the end? I would like to see whether these people are really that brave.