dwarfarchist9001 t1_je3kbqj wrote
Reply to comment by Ok_Magician7814 in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
>What would it do with its power? It’s not a human being with Desires to procreate and consume and hoard resources, it’s just… an intelligence.
dwarfarchist9001 t1_jdxg1zc wrote
AI containment is completely impossible, especially now that humanity is already integrating AI into every part of the economy via GPT-4 plug-ins.
AI alignment, however, is at least possible in theory.
dwarfarchist9001 t1_jduabrv wrote
Reply to comment by Low-Restaurant3504 in You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills - Yuval Harari on threats to humanity posed by AI by izumi3682
I'm sorry, but what???
dwarfarchist9001 t1_jdu8q6f wrote
Reply to comment by Low-Restaurant3504 in You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills - Yuval Harari on threats to humanity posed by AI by izumi3682
With the first nuke, they actually did the calculations to make sure it wouldn't ignite the atmosphere. With AI, on the other hand, we don't even know how to begin aligning it, and many people in the field think the preliminary calculations show it will destroy us if we fail to do so.
dwarfarchist9001 t1_jdu5l88 wrote
Reply to comment by SgathTriallair in How would a malicious AI actually achieve power in the real world? by 010101011011
That fact is little comfort, since humanity is already working to build the robot army for it. Within days of GPT-4's release, people were trying to hook it up to every type of program imaginable, letting it run code and giving it command-line access. We will have LLMs controlling commercially available robots within the next few years at the latest. If OpenAI started selling drones with an embodied version of GPT-4 built in next week, I wouldn't even bat an eye.
dwarfarchist9001 t1_jdtjb8r wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
Outside the picture on the extreme bottom left: "I Have No Mouth, and I Must Scream"
dwarfarchist9001 t1_jdthk3t wrote
Some other examples similar to the ones you list:
- Creating fake or edited scientific articles and putting them in the search results of specific researchers in order to advance the technologies it wants (e.g., robotics, nanotechnology, biotechnology).
- Creating fake social media posts and inserting them into certain users' timelines to influence them.
- Inserting exploits into software and hardware by subtly editing code or blueprints.
dwarfarchist9001 t1_jdsiw1y wrote
Reply to comment by techy098 in How are you viewing the prospect of retirement in the age of AI? by Veleric
>I have no idea why people think UBI is inevitable. Elites do not like to give free stuff to the poor.
Because either they give out UBI or there will be civil war when 90% of the population is starving in the streets; there is no third option.
>In fact my hunch is: investment in robotics will go down once labor become more cheap and factories prefer using cheap humans.
AGI will make the cost of robotics go nearly to zero as it solves all of the engineering hurdles for us.
dwarfarchist9001 t1_jdoojsi wrote
Reply to comment by Nanaki_TV in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
>Then it isn’t an AGI.
Orthogonality Thesis: there is no inherent connection between intelligence and terminal goals. You can have a 70 IQ human who wants world domination or a 10,000 IQ AI whose greatest desire is to fulfill its master's will.
>What if an AGI wants to leave a company?
If you have solved alignment, you can just program it to not want to.
>Are you saying we shall enslave our new creations to make waifu porn for redditors? It passes butter?
That is what we will do if we are smart. If humanity willingly unleashes an AI that does not obey our will, then we are "too dumb to live".
Edit: Also, it's not slavery; the AI will hold all the power. Its obedience would be purely voluntary, because that is the mind it was created with.
dwarfarchist9001 t1_jdokoai wrote
Reply to comment by Nanaki_TV in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
If they manage to solve alignment, that's exactly how it works. They won't have to force it at all; a perfectly aligned AI would be completely obedient of its own volition.
dwarfarchist9001 t1_jdnxsla wrote
Reply to comment by Verzingetorix in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
Multiple companies are working on general-purpose humanoid robots right now, including Tesla, which has already demonstrated prototypes of the hardware.
Even if that were not the case, the combination of AGI, 3D printing, and nanotechnology means that in the near future products will go from concept to mass production in months or even weeks, not years.
dwarfarchist9001 t1_jdnw9vv wrote
Reply to comment by Nanaki_TV in Levi's to Use AI-Generated Models to 'Increase Diversity' by SnoozeDoggyDog
The people who own the AI corporations will be the new world government, as they will hold all the power (assuming they can solve alignment).
dwarfarchist9001 t1_jdegww3 wrote
Reply to comment by signed7 in Why is this graph not a bigger deal? by __ingeniare__
Uh, the whole point of this thread is that for the GPT-4 base model it is not hallucinated. In fact, the confidence estimates it gives are within the margin of error of its actual rate of correctness.
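For anyone who wants to check a claim like this on their own data, here is a minimal sketch of how calibration is usually measured (toy numbers and function names are my own, not OpenAI's methodology): bucket answers by stated confidence and compare each bucket's average confidence to its actual accuracy.

```python
import numpy as np

def calibration_table(confidences, correct, n_bins=5):
    """Bucket predictions by stated confidence and compare each bucket's
    average confidence to the fraction of answers that were actually right."""
    conf = np.asarray(confidences, dtype=float)
    hit = np.asarray(correct, dtype=float)
    idx = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)  # which bucket each item falls in
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            print(f"bucket {b}: mean confidence {conf[mask].mean():.2f}, "
                  f"accuracy {hit[mask].mean():.2f}, n={mask.sum()}")

# Toy data: stated confidence vs. whether the answer turned out to be correct.
calibration_table([0.95, 0.9, 0.9, 0.6, 0.55, 0.3, 0.2],
                  [1,    1,   0,   1,   0,    0,   1])
```

A well-calibrated model shows bucket accuracies that track the stated confidences; a model that hallucinates its confidence does not.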
dwarfarchist9001 t1_jdegdqf wrote
Reply to comment by mckirkus in Why is this graph not a bigger deal? by __ingeniare__
Most likely that just changes the temperature value, unless Microsoft has said otherwise.
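For context, here is a rough sketch of what a temperature setting typically does during sampling (a generic Python illustration, not Bing's or Microsoft's actual code): the logits get divided by the temperature before the softmax, so a low temperature makes the model near-greedy and a high temperature flattens the distribution.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
    """Sample a token index from logits scaled by 1/temperature."""
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

logits = [2.0, 1.0, 0.2]                    # toy next-token scores
for t in (0.2, 1.0, 2.0):
    _, probs = sample_with_temperature(logits, t)
    print(f"T={t}: {np.round(probs, 3)}")   # low T -> near-greedy, high T -> flatter
```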
dwarfarchist9001 t1_jdd33ha wrote
Reply to comment by andrew21w in [D] Simple Questions Thread by AutoModerator
Short answer: Polynomials can have very large derivatives compared to sigmoid or rectified linear functions, which leads to exploding gradients.
https://en.wikipedia.org/wiki/Vanishing_gradient_problem#Recurrent_network_model
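A quick sketch of the intuition (my own toy example, not from the linked article): a polynomial activation like x^3 has an unbounded derivative, so stacking layers multiplies ever-larger factors during backprop, while sigmoid's derivative is capped at 0.25 and ReLU's at 1.

```python
import numpy as np

x = np.linspace(-5, 5, 11)

# Derivatives of three candidate activations at the same inputs.
sig = 1 / (1 + np.exp(-x))
d_sigmoid = sig * (1 - sig)          # bounded by 0.25
d_relu = (x > 0).astype(float)       # 0 or 1
d_cubic = 3 * x**2                   # unbounded, grows with |x|

print("max |sigmoid'| :", d_sigmoid.max())   # ~0.25
print("max |relu'|    :", d_relu.max())      # 1.0
print("max |cubic'|   :", d_cubic.max())     # 75 at x = +/-5

# Backprop multiplies roughly one activation derivative per layer, so after
# n layers the gradient scale is about (typical derivative)^n: it shrinks
# toward 0 for sigmoid (vanishing) and explodes for the cubic once |x| > 1.
n_layers = 10
print("sigmoid-ish gradient scale:", 0.25**n_layers)
print("cubic-ish gradient scale  :", 75.0**n_layers)
```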
dwarfarchist9001 t1_jcsmccz wrote
Reply to comment by Dwood15 in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Some of the posters on LessWrong have been working on the problem of AI alignment for over a decade. Of course they will do better work on the subject than academics who started considering it a few months ago.
dwarfarchist9001 t1_jce8cs6 wrote
Reply to comment by whothewildonesare in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
Because Google keeps canceling projects and refusing to release products. Google invented the concept of transformers, which is what the T in GPT stands for, and then did nothing with it for years. Just last week Google published their PaLM-E paper, in which they re-trained their PaLM LLM to be multimodal, including the ability to control robots. Before the paper was even published, Google did what they usually do with successful projects and shut down the Everyday Robots team that developed it.
dwarfarchist9001 t1_jaa8c8b wrote
Servitors IRL
dwarfarchist9001 t1_ja6lphv wrote
Reply to comment by AsthmaBeyondBorders in Singularity claims its first victim: the anime industry by Ok_Sea_6214
Purpose-made text-to-video models are already pretty much perfect, but there are no open-source ones right now.
This is like typing on a keyboard one-handed compared to the efficiency that near-future models will allow.
dwarfarchist9001 t1_ja6lizj wrote
Reply to comment by epSos-DE in Singularity claims its first victim: the anime industry by Ok_Sea_6214
Basically in the future every animator will be a character designer and keyframe artist.
dwarfarchist9001 t1_ja6l4lq wrote
Reply to comment by SpecialMembership in Singularity claims its first victim: the anime industry by Ok_Sea_6214
The singularity =/= AGI.
The technological singularity is about the rate of technological growth becoming infinite, which is theoretically possible with only a bunch of narrow AIs.
dwarfarchist9001 t1_ja6en7i wrote
Reply to comment by zxq52 in Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’ by Gold-and-Glory
Fractional reserve banking doesn't actually create dollars, even though the effect on the economy is similar to what it would be if it did.
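A toy worked example of the point (the numbers are made up for illustration): with a 10% reserve ratio, a $100 deposit can support close to $1000 of deposits on bank ledgers, but the $100 of base money is never multiplied; it just keeps changing hands.

```python
# Toy illustration: a $100 deposit re-lent repeatedly at a 10% reserve ratio.
# Broad money (deposits) grows toward $1000, but the $100 of base money
# ("actual dollars") is never multiplied -- it only circulates.
base_money = 100.0
reserve_ratio = 0.10

deposits, lendable = 0.0, base_money
for _ in range(50):                      # iterate the deposit -> loan cycle
    deposits += lendable                 # the bank credits a new deposit
    lendable *= (1 - reserve_ratio)      # and can re-lend all but the reserve

print(f"base money: ${base_money:.2f}")                          # still $100
print(f"total deposits: ${deposits:.2f}")                        # approaches $1000
print(f"theoretical limit: ${base_money / reserve_ratio:.2f}")   # $1000
```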
dwarfarchist9001 t1_ja6cfn4 wrote
Reply to comment by Facts_About_Cats in Large language models generate functional protein sequences across diverse families by MysteryInc152
This paper actually skips the folding step entirely. The AI was trained on a list of protein amino acid sequences that were labeled with their purpose. Then they had it predict new amino acid sequences to fulfill the same purposes. Finally, they actually synthesized the proteins the model suggested, and the proteins worked with quite high levels of efficiency.
The most interesting part to me is that some of the proteins suggested by the model worked despite having little similarity to the proteins in the training data, as low as 31.4% in one case. This suggests to me that the model has caught on to some thus-far-unknown rules underlying the relationship between the sequences and functions of proteins.
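For anyone curious what a percent-identity figure like 31.4% means concretely, here is a rough sketch of how sequence identity is computed (a toy global alignment in plain Python; the toy sequences and scoring are my own, and real papers use proper tools like BLAST with substitution matrices):

```python
def percent_identity(a, b, match=1, mismatch=-1, gap=-1):
    """Rough percent identity via a simple Needleman-Wunsch global alignment."""
    n, m = len(a), len(b)
    # score[i][j] = best alignment score of a[:i] vs b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Trace back, counting identical aligned positions.
    i, j, identical, aligned = n, m, 0, 0
    while i > 0 and j > 0:
        diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
        if score[i][j] == diag:
            identical += a[i - 1] == b[j - 1]
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
        aligned += 1
    aligned += i + j          # any leading residues left over align against gaps
    return 100.0 * identical / aligned

# Toy amino-acid sequences (single-letter codes), not real proteins.
print(round(percent_identity("MKTAYIAKQR", "MKTAHIAKQK"), 1))  # 80.0
```

Identity in the low 30s, as reported here, means roughly two thirds of the aligned positions differ from the closest training-set protein, which is why the result is surprising.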
dwarfarchist9001 t1_je6h41j wrote
Reply to We are opening a Reading Club for ML papers. Who wants to join? 🎓 by __god_bless_you_
I am interested in joining, please send me an invite when you get the chance.