SoylentRox t1_j9bqmp7 wrote

As I understand it:

(1) current quantum computers are useless for AI so far (not enough qubits)

(2) they are useful only for a limited class of problems.

AI is useful for everything. So there's a lot more interest in it.

Like a lot of things, the actual tech order is probably:

high-performance computers -> narrow AI -> AGI -> self-replicating robots -> nanotechnology -> quantum computers

That is, we will not have large and useful quantum computers until we have nanotechnology, we can't afford that without self-replicating robots, we can't build those without AGI, and so on.

7

SoylentRox t1_j995j1k wrote

Yes, obviously they would. Anything short of "peak human" is illness if you know what you're doing as a medical provider and have the tools required to manipulate the body (mostly the active genome in each cell).

Even "peak human" isn't really good enough, you have just 1 heart and blood vessels can burst from bad luck. So really good future doctors would fix this.

0

SoylentRox t1_j96rmoj wrote

Failing to pay for top AI talent, to fund large-scale research projects aimed at general AI, or to invest in all the infrastructure it takes to even make good software in the first place. AI research is one part genius researchers, ten parts support staff.

The reason is that the government doesn't realize the danger. They assume AI progress will continue to be linear: it took 70 years to get a machine capable of language, so, on their reasoning, the next step must be equally far off.

1

SoylentRox t1_j94pgdt wrote

Right. And the issue with their position is that while it's possible for the government to keep amazing things secret, in reality most of the few true secrets it did create leaked all over the place. Take the F-117: tons of mentions in the press long before its unveiling.

It's telling that there are no mentions of anything indicating an AGI.

1

SoylentRox t1_j94lra8 wrote

>But we can agree to disagree.

You're wrong. Your whole argument is "they could have somehow kept thousands of people working on this in secret." Sure, and by the same logic they could have secret antigravity research.

Publicly, the DoD says it is far behind and needs more money. And there is zero evidence for your theory.

1

SoylentRox t1_j94iknl wrote

They don't have it. The probability that they do is a flat 0.

Reasons:

AI is a highly advanced field that is also built on open collaboration between AI labs. You are not going to replicate that in secret.

They can't pay enough.

They do not have the budget allocated for GPUs.

Did you know that Google, Meta, and Microsoft have combined annual revenues close to the entire Department of Defense budget? The NSA's annual budget is a mere 65 billion, chump change; Google alone pulls in 280. The entire black budget is only another 50.
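A rough back-of-the-envelope comparison using those figures (the Meta and Microsoft revenues are my own approximate additions, not taken from the numbers above):

```python
# Rough budget comparison, in billions of USD. Google, NSA, and black-budget
# figures are the ones cited above; Meta and Microsoft are assumed approximations.
google = 280       # cited above
meta = 117         # assumed approximate annual revenue
microsoft = 200    # assumed approximate annual revenue
nsa_budget = 65    # cited above
black_budget = 50  # cited above

big_tech = google + meta + microsoft
secret_spending = nsa_budget + black_budget
print(f"Big-tech combined revenue: ~${big_tech}B")
print(f"NSA + black budget:        ~${secret_spending}B "
      f"({100 * secret_spending / big_tech:.0f}% of big-tech revenue)")
```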

They are too poor.

2

SoylentRox t1_j94gr7s wrote

> That would put them really deep into singularity territory.

There is no sign that they have this; it would be impossible to miss. Unfortunately, the claim appears to be completely false.

Judging from the recruiters who have contacted me about AI/defense roles, the reason is obvious: they cannot offer remotely competitive compensation. Any AI coders they do have are terrible.

4

SoylentRox t1_j94gme9 wrote

The problem is that if the military actually had singularity-level technology 10 years ahead of everyone else, they would have de-aged all their veterans on re-enlistment, be building massive networks of bunkers and missile-defense batteries with self-replicating robots, and so on and so forth.

The current reality simply doesn't show any sign that they have this tech. And that is because the defense contractors that employ AI coders offer about 180k annually for someone with 5 years of experience; DeepMind would pay 500k for the same person.

−1

SoylentRox t1_j8znqe0 wrote

Yes. And/or isolated equipment for most life-support steps. So, for example, oxygen processing comes from growth tubes isolated in groups, with their feedstock supply sterilized before it feeds into the machinery.

It's energy- and spare-part intensive, though.

1

SoylentRox t1_j8nzxgj wrote

We're not talking about fully self-contained per se. We are saying "if the Earth is no longer inhabitable" but we still have access to it, so we can send people out in space suits, or robots, to collect water, air, and minerals that have to be decontaminated before they can be used.

Every human not in your hab is now dead.

1

SoylentRox t1_j8jdkwn wrote

? So your argument is to compare actual biotech to late-night infomercials?

Ultimately your argument comes down to energy. Each gram of algae can fix only so much carbon as sugar per unit time given maximum usable sunlight. The question is how many grams of algae you need to fix enough carbon to keep a human alive; a rough sizing sketch is below.
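A minimal back-of-the-envelope version of that calculation; every constant here is an assumed round number, not a measured rate:

```python
# Sizing sketch: how much algae to feed one human on fixed sugar alone?
# All constants are assumptions for illustration, not measured values.
KCAL_PER_DAY = 2000          # rough human caloric requirement
KCAL_PER_G_SUGAR = 3.74      # approximate energy content of glucose
FIX_G_SUGAR_PER_G_HR = 0.05  # assumed: sugar fixed per gram of algae per hour
LIGHT_HOURS = 12             # assumed usable light hours per day

sugar_g_per_day = KCAL_PER_DAY / KCAL_PER_G_SUGAR
algae_g = sugar_g_per_day / (FIX_G_SUGAR_PER_G_HR * LIGHT_HOURS)
print(f"Sugar needed: ~{sugar_g_per_day:.0f} g/day")
print(f"Algae needed: ~{algae_g / 1000:.1f} kg at the assumed fixation rate")
```

Change the assumed fixation rate and the answer moves linearly, which is exactly why the energy budget is the crux.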

The algae hasn't been genetically modified to make more sugar because humans haven't needed to do this yet, so I don't know why you have to resort to comparing it to random scams.

To disprove my claim you would need to find at least 1 billion USD spent annually on this type of biotech. If that money isn't being spent, the approach hasn't been tried, and you cannot claim it won't work.

1

SoylentRox t1_j8j51pr wrote

Right. Plus, if you drill down to individual clusters of neurons, you realize that each cluster is basically "smoke and mirrors" built from some repeating pattern, and the individual signals have no concept of the larger organism they are part of.

It's just one weird trick, repeated a few trillion times.

So we found our own "weird trick", and guess what: stack a few billion parameters' worth of transformer blocks and you start to get intelligent outputs.
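A toy illustration of the "same trick repeated" idea; this is not a real transformer, just a stack of identical attention-plus-residual blocks with made-up dimensions:

```python
import numpy as np

def attention_block(x, Wq, Wk, Wv):
    """One self-attention block with a residual connection."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax
    return x + weights @ v  # residual: same shape in, same shape out

rng = np.random.default_rng(0)
d, depth = 16, 4                 # toy embedding width and layer count
x = rng.normal(size=(8, d))      # 8 token embeddings
for _ in range(depth):           # the same pattern, over and over
    Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
    x = attention_block(x, Wq, Wk, Wv)
print(x.shape)  # (8, 16): because shapes are preserved, depth is just repetition
```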

2

SoylentRox t1_j8j4b4y wrote

Hardly. The bigger the system, the larger your buffers can be. You are talking about trying to keep people alive in a hab the size of the ISS, with, I'd guess, just a few hours' worth of surplus oxygen.

A multi-kilometer-long hab with isolated grow machines (so toxins etc. can't cause them all to fail at once), months' worth of food, water, and oxygen stored in tanks, redundant power, redundant manufacturing, and a few other habs nearby within reasonable travel distance with enough spare population capacity to house refugees would be much more stable.
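The scaling point in one line of arithmetic; the population, consumption rate, and tank size below are all assumed round numbers:

```python
# Buffer time = stored reserves / consumption rate. All values are assumed.
CREW = 1000                   # assumed hab population
O2_KG_PER_PERSON_DAY = 0.84   # rough per-person oxygen consumption
STORED_O2_KG = 100_000        # assumed tank capacity of the large hab

buffer_days = STORED_O2_KG / (CREW * O2_KG_PER_PERSON_DAY)
print(f"Oxygen buffer: ~{buffer_days:.0f} days to detect and repair a failure")
```

An ISS-sized hab with hours of margin has to fix any failure immediately; a hab with months of margin can lose whole subsystems and still recover.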

1

SoylentRox t1_j8j3aql wrote

The argument is that there is no difference from the perspective of that person.

This actually means that if old people have the most power and money (and they do), they will call for the fastest AGI development possible. The risks don't matter to them; otherwise they will die for sure within a few years.

1