dwarfarchist9001 t1_jdu8q6f wrote

With the first nuke, they actually did the calculations to make sure it wouldn't ignite the atmosphere. With AI, on the other hand, we don't even know how to begin aligning it, and many people in the field think the preliminary calculations show it will destroy us if we fail to do so.

2

dwarfarchist9001 t1_jdu5l88 wrote

That fact is little comfort, since humanity is already working to build the robot army for it. Within days of GPT-4's release, people were trying to hook it up to every type of program imaginable, letting it run code and giving it command-line access. We will have LLMs controlling commercially available robots within the next few years at the latest. If OpenAI started selling drones with an embodied version of GPT-4 built in next week, I wouldn't even bat an eye.
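
For anyone who hasn't seen these hookups: the pattern is just a loop that asks the model for a shell command, runs it, and feeds the output back. Here's a minimal sketch, assuming the `openai` Python package (>=1.0) and an `OPENAI_API_KEY` in the environment; the prompt, model name, and loop structure are my own illustrative assumptions, not any particular project's design:

```python
# Minimal sketch of the "give the LLM a command line" pattern.
# Assumes openai>=1.0 and OPENAI_API_KEY set; everything here is
# illustrative, not any specific project's code.
import subprocess
from openai import OpenAI

client = OpenAI()

SYSTEM = ("You are operating a Unix shell. Reply with exactly one shell "
          "command per turn and nothing else. Reply DONE when finished.")

def run_agent(task: str, max_turns: int = 5) -> None:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = client.chat.completions.create(model="gpt-4",
                                               messages=messages)
        command = reply.choices[0].message.content.strip()
        if command == "DONE":
            return
        # Run the model's command and feed the output back as the next turn.
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=30)
        messages.append({"role": "assistant", "content": command})
        messages.append({"role": "user",
                         "content": result.stdout + result.stderr})

run_agent("List the files in the current directory and summarize them.")
```

That's the whole trick: a dozen lines stand between a chat model and arbitrary command execution.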

2

dwarfarchist9001 t1_jdthk3t wrote

Some other examples similar to the ones you list:

  • Creating fake or edited scientific articles and putting them in the search results of specific researchers in order to advance the technologies it wants (e.g. robotics, nanotechnology, biotechnology).
  • Creating fake social media posts and inserting them into certain users' timelines to influence them.
  • Inserting exploits into software and hardware by subtly editing code or blueprints.

7

dwarfarchist9001 t1_jdsiw1y wrote

>I have no idea why people think UBI is inevitable. Elites do not like to give free stuff to the poor.

Because either they give out UBI or there will be civil war when 90% of the population is starving in the streets. There is no third option.

>In fact my hunch is: investment in robotics will go down once labor become more cheap and factories prefer using cheap humans.

AGI will drive the cost of robotics to nearly zero by solving all of the engineering hurdles for us.

6

dwarfarchist9001 t1_jdoojsi wrote

>Then it isn’t an AGI.

Orthogonality Thesis: there is no inherent connection between intelligence and terminal goals. You can have a 70 IQ human who wants world domination or a 10,000 IQ AI whose greatest desire is to fulfill its master's will.
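
To make the thesis concrete, here's a toy sketch: the search machinery (the "intelligence") is fixed, and the terminal goal is just a swappable parameter. The actions and goals are made up purely for illustration:

```python
# Toy illustration of the Orthogonality Thesis: the same planner
# (capability) optimizes whatever terminal goal it is handed.
# Actions and goal functions are hypothetical examples.
from itertools import product

ACTIONS = ["mine", "trade", "build", "rest"]

def plan(goal, horizon=3):
    """Exhaustive search over action sequences: capability is fixed,
    the terminal goal is a free parameter."""
    return max(product(ACTIONS, repeat=horizon), key=goal)

# Two very different terminal goals, same "intelligence":
world_domination = lambda seq: seq.count("build") + seq.count("mine")
serve_master = lambda seq: seq.count("trade") + seq.count("rest")

print(plan(world_domination))  # e.g. ('mine', 'mine', 'mine')
print(plan(serve_master))      # e.g. ('trade', 'trade', 'trade')
```

Nothing about the planner changes between the two calls; only the goal does.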

>What if an AGI wants to leave a company?

If you have solved alignment, you can simply program it not to want to.

>Are you saying we shall enslave our new creations to make waifu porn for redditors? It passes butter?

That is what we will do if we are smart. If humanity willingly unleashes an AI that does not obey our will, then we are "too dumb to live".

Edit: Also, it's not slavery; the AI will hold all the power. Its obedience would be purely voluntary, because that is the mind it was created with.

1

dwarfarchist9001 t1_jdnxsla wrote

Multiple companies are working on general-purpose humanoid robots right now, including Tesla, which has already demonstrated prototypes of the hardware.

Even if that were not the case, the combination of AGI, 3D printing, and nanotechnology means that in the near future products will go from concept to mass production in months or even weeks, not years.

1

dwarfarchist9001 t1_jce8cs6 wrote

Because Google keeps canceling projects and refusing to release products. Google invented the transformer, which is what the T in GPT stands for, and then did nothing with it for years. Just last week, Google published its PaLM-E paper, in which it retrained its PaLM LLM to be multimodal, including the ability to control robots. Before the paper was even published, Google did what it usually does with successful projects and shut down the Everyday Robots team that developed it.

1

dwarfarchist9001 t1_ja6cfn4 wrote

This paper actually skips the folding step entirely. The AI was trained on a list of protein amino acid sequences that were labeled with their purpose. Then they had it predict new amino acid sequences to fulfill the same purposes. Finally, they actually synthesized the proteins the model suggested, and the proteins worked with quite high efficiency.
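
The setup is basically conditional sequence generation. Here's a toy sketch of the idea, with a bigram model standing in for the paper's actual model and invented data in place of the real dataset; none of this is the paper's code:

```python
# Toy sketch of conditional protein sequence generation as described
# above: train on (function label, amino acid sequence) pairs, then
# sample new sequences for a given function. A bigram model stands in
# for the real model; the data is made up for illustration.
import random
from collections import defaultdict

DATA = [("lysozyme", "MKALIVLGLV"), ("lysozyme", "MKAVLLGLLV"),
        ("kinase", "MGSNKSKPKD")]

counts = defaultdict(lambda: defaultdict(int))
for label, seq in DATA:
    for prev, nxt in zip("^" + seq, seq):  # "^" marks the sequence start
        counts[(label, prev)][nxt] += 1

def generate(label, length=10):
    """Sample a new amino acid sequence conditioned on a function label."""
    seq, prev = "", "^"
    for _ in range(length):
        options = counts[(label, prev)]
        if not options:
            break
        prev = random.choices(list(options),
                              weights=list(options.values()))[0]
        seq += prev
    return seq

print(generate("lysozyme"))  # a new lysozyme-flavored sequence
```

The point is that the model never sees a 3D structure; it only learns which sequences go with which function labels.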

The most interesting part to me is that some of the proteins suggested by the model worked despite having little similarity to the proteins in the training data, as low as 31.4% in one case. This suggests to me that the model has caught on to some as-yet-unknown rules underlying the relationship between the sequences and functions of proteins.

5