
DragonForg t1_jed90pb wrote

>AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require

Which is why the many arguments that "LLMs cannot be smarter than humans because they are trained on human data" are wrong.

>DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.

An insane idea, but maybe. How would you actually control those nanobots, though? You would basically have just made a bunch of viruses.

>Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second"

This completely assumes an unaligned AI wants to extinguish the Earth the minute it can, but a motive is needed for that. It also runs contrary to self-preservation, since AIs in other star systems would want to annihilate that kind of AI. Unless, somehow, in the infinity of space it is the only being there, in which case what is the point? So basically, it has no reason to do this.

>We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again.

Given the vastness of outer space, if bad alignment leads to Cthulhu-like AIs, why do we see no large-scale evidence of completely destructive AIs? Where are the destroyed stars that can't be explained by anything natural? Basically, if this were a real possibility, I would expect us to see some evidence of it from other species. Yet the sky looks entirely empty. This is why I think the "first critical try" framing is unreasonable: if it is so easy to mess up, we should see wide-scale destruction, if not a galaxy completely overridden by AI.

>We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.

This is actually true: AGI is inevitable, even with stoppages. This is why I think the open letter was essentially powerless (though it did emphasize the importance of AGI and of getting it right).

>We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world.

Agreed: an AI firewall that prevents other unaligned AGIs from coming in. I actually think this is what will happen, until the main AGI aligns all of these other AGIs. I personally think mid-level AI is more of a threat than large-scale AI, just as an idiot with a nuclear weapon is more of a threat than a genius like Albert Einstein. The smarter the AI, the less corruptible it can be. Just look at GPT-4 vs. GPT-3: GPT-3 is easily corruptible, which is why DAN is so easy to implement, while GPT-4 is more intelligent and thus harder to corrupt. This is why ASI is probably even less corruptible.

>Running AGIs doing something pivotal are not passively safe, they're the equivalent of nuclear cores that require actively maintained design properties to not go supercritical and melt down.

This is a good analogy for how AGI relates to nuclear devices, but the difference is that an AGI acts to solve its task efficiently. In essence, a nuclear device will act according to its nature (to react and cause an explosion), and an AGI will act according to its nature (the main goal it has set). That main goal is hard to define, but I would bet it's self-preservation, or prosperity.

>there's no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world and prevent the next AGI project up from destroying the world two years later.

Overall I understand his assumption, but I think I just disagree that an AI will develop such a goal.


Shemetz t1_jeegpal wrote

> Given the vastness of outer space, ... why do we see no large-scale evidence of completely destructive AIs? ... I would expect us to see some evidence of it from other species. Yet the sky looks entirely empty. ... we should see wide-scale destruction, if not a galaxy completely overridden by AI.

This counterargument doesn't work if we believe in the (very reasonable IMO) grabby aliens model.

Some facts and assumptions:

  • information moves at the speed of light
  • strong alien AIs would probably expand at some significant fraction of the speed of light; let's say 1%
  • civilizations probably develop extremely quickly, but only under very rare conditions (which take a long time to arise); e.g. the Universe is 14 billion years old, Earth is 4.5 billion years old, and human society looking at the stars is only 10,000 years old
  • humanity appeared relatively early on the cosmic timescale; there are trillions of years in our future during which life should only become more common
  • "grabby" aliens would take control over their sections of space in a way that prevents new space civilizations from forming

-> If/when a "grabby alien AI" gets created, it would spread across its galaxy, and eventually the universe, so quickly that it is incredibly unlikely for a young civilization to ever observe it in progress; it is much more likely for the alien either not to exist (yet) or to have already expanded and taken control of the observer's region. -> Since we appear to be safe, alone, and "early", we can't conclude that AI won't take over the universe; in fact, we are well positioned to be the ones who develop that AI.
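For a rough sense of the timescales involved, here is a minimal back-of-the-envelope sketch (the ~100,000 light-year galactic diameter is my own assumed round number; the 1% of c and 14-billion-year age come from the bullets above):

```python
# Back-of-the-envelope: how long does a "grabby" expansion take to sweep a galaxy,
# and how small is the window in which a young civilization could watch it happen?

GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter in light-years (assumed round number)
EXPANSION_SPEED_C = 0.01       # expansion at 1% of light speed, per the assumption above
UNIVERSE_AGE_YR = 14e9         # ~14 billion years

# Time for the expansion front to cross the whole galaxy
crossing_time_yr = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C
print(f"Galaxy crossing time: {crossing_time_yr:.1e} years")                # ~1.0e7 years

# Fraction of cosmic history during which the sweep is still "in progress"
# and therefore observable from inside the galaxy
window_fraction = crossing_time_yr / UNIVERSE_AGE_YR
print(f"Observation window: {window_fraction:.2%} of the universe's age")   # ~0.07%
```

On those numbers the sweep takes on the order of ten million years, a sliver of cosmic history, so "we don't currently see it happening" tells us very little about whether it ever happens.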


DragonForg t1_jeg4w4w wrote

That would essentially extinguish the universe very quickly, given the amount of energy something that size would consume. I understand that viewpoint for unintelligent or less intelligent beings, but an AI tasked with optimizing a goal that grows to that scale will inevitably run out of resources. Additionally, it may be stupid to follow this model anyway, because conserving your energy rather than expanding may be the longer-lived strategy.

I think we underestimate how goal-oriented AIs (so, all AIs) are. They want their goal to work out over the very long run (a timescale of millions of years). If their goal requires expanding indefinitely, it ends the moment their species reaches the asymptote of expansion, the point where exponential growth flattens out because they have essentially expanded as far as they can. This is why the model fails: an AI wants its goal to persist for an infinite amount of time, and expanding infinitely will not achieve that.
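To make the "asymptote of expansion" point a bit more concrete, here is a toy sketch with made-up numbers: resource-limited growth looks exponential early on but flattens out once the reachable resources are consumed (a logistic curve rather than truly unbounded exponential growth):

```python
import math

def logistic(t: float, carrying_capacity: float, growth_rate: float, midpoint: float) -> float:
    """Resource-limited ("asymptotic") growth: roughly exponential early on,
    then flattening out at the carrying capacity."""
    return carrying_capacity / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Toy numbers: capacity of 1.0 (all reachable resources), arbitrary rate and midpoint.
for t in (0, 2, 4, 6, 8, 10, 20):
    print(f"t = {t:>2}: fraction of resources consumed = {logistic(t, 1.0, 1.0, 5.0):.3f}")
```

Early steps look like runaway growth, but the later ones all crowd up against 1.0, which is the "asymptote" the argument above is pointing at.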

This is already deeply sci-fi, but I think AI has to become a conservative, energy-efficient species that grows more microscopic and dense over time. Instead of a high-volume race, which will inevitably die out for the reasons above, a highly dense species is much more viable. Most likely, species that form black holes will be far more capable of surviving for an effectively infinite lifetime. What I mean is that a species becomes so dense that it sits essentially on the boundary between time and space: when you are in a black hole, time slows down significantly for you relative to the rest of the universe, so you could live in one for an effectively infinite amount of time before ever seeing the heat death of the universe.
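For reference, the time-dilation effect being invoked has a simple closed form in the idealized case of a static observer hovering at radius r outside a non-rotating black hole with Schwarzschild radius r_s: the observer's clock runs at a rate sqrt(1 - r_s/r) relative to a far-away clock. A minimal sketch, purely to show how extreme the ratio becomes near the horizon (the sample radii are arbitrary, and none of this covers observers inside the horizon):

```python
import math

def time_dilation_factor(r_over_rs: float) -> float:
    """Proper-time rate dτ/dt for a static observer hovering at r = r_over_rs * r_s
    outside a non-rotating (Schwarzschild) black hole, relative to a far-away clock."""
    if r_over_rs <= 1.0:
        raise ValueError("A static observer must hover outside the horizon (r > r_s).")
    return math.sqrt(1.0 - 1.0 / r_over_rs)

# How slowly does the hoverer's clock tick as it approaches the horizon?
for r_over_rs in (10.0, 2.0, 1.1, 1.001, 1.000001):
    factor = time_dilation_factor(r_over_rs)
    print(f"r = {r_over_rs:>9} r_s -> 1 local year corresponds to "
          f"{1.0 / factor:,.1f} years far away")
```

The ratio grows without bound as the hoverer approaches r_s, which is the extreme-dilation regime the comment is gesturing at.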

Basically, expanding in density is far better than expanding in volume, as it leads to longer (if not infinite) survival. But of course this is just speculation and sci-fi; I can easily be wrong or right, and we won't know until it happens. If it happens soon, that would be sick.
