SoylentRox
SoylentRox t1_j9bz1xs wrote
Reply to comment by turnip_burrito in Whatever happened to quantum computing? by MultiverseOfSanity
Because the current ones cost a fortune and have almost no qubits, making them useless for most problems. There are nasty scaling laws that make adding more qubits nonlinearly harder.
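For intuition on why the scaling is nonlinear, here's a toy sketch. The ~2·d² surface-code overhead per logical qubit and the specific counts below are illustrative assumptions, not figures from this thread:

```python
# Toy model of error-correction overhead: assume each fault-tolerant
# logical qubit costs ~2*d^2 physical qubits at code distance d, and
# d must grow as you demand lower logical error rates.
def physical_qubits(logical_qubits: int, code_distance: int) -> int:
    return logical_qubits * 2 * code_distance ** 2

print(physical_qubits(10, 11))    # 2420 -- a small demo machine
print(physical_qubits(4000, 27))  # 5832000 -- millions for useful workloads
```

The overhead multiplies: you don't just need more qubits for bigger problems, you need more (and better) qubits per qubit.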
SoylentRox t1_j9bqmp7 wrote
As I understand it:
(1) current quantum computers are useless for AI so far (not enough qubits)
(2) they are useful for limited types of problems.
AI is useful for everything. So there's a lot more interest in it.
Like a lot of things, the actual tech order is probably:
high perf computers -> narrow AI -> AGI -> self replicating robots -> nanotechnology -> quantum computers
That is, we will not have large and useful quantum computers until we have nanotechnology, and we can't afford that without self replicating robots, and we can't make that without AGI, and so on.
SoylentRox t1_j995j1k wrote
Reply to comment by DannyLovesDerby3 in Which medical specialties are future proof? by MeronDC
Yes, obviously they would. Anything but "peak human" is illness if you knew what you were doing as a medical provider, and had the tools required to manipulate their body. (mostly their active genome in each cell)
Even "peak human" isn't really good enough: you have just one heart, and blood vessels can burst from bad luck. So really good future doctors would fix this too.
SoylentRox t1_j97jf9h wrote
Reply to comment by MpVpRb in Which medical specialties are future proof? by MeronDC
Umm that's not remotely "future proof" lol.
SoylentRox t1_j97jdl4 wrote
Reply to comment by knockatize in Which medical specialties are future proof? by MeronDC
Yes, but gerontologists today do not add any value. This would actually be a good case where AI will take over completely, because AGI gerontologists might actually be able to treat aging.
SoylentRox t1_j96rmoj wrote
Reply to comment by p0rty-Boi in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
Failing to pay for top AI talent, fund large-scale research projects aimed at general AI, or invest in all the infrastructure it takes to even make good software in the first place. AI research is 1 part genius researchers, 10 parts support staff.
The reason is the government doesn't realize the danger. They assume AI progress will continue to be linear, reasoning that it took 70 years to get a machine capable of language.
SoylentRox t1_j94v8nd wrote
Reply to comment by p0rty-Boi in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
I believe the government is stupid, yes, and is in fact doing exactly this. It is possible they will lose their sovereignty as a side effect.
SoylentRox t1_j94uk6g wrote
Reply to comment by p0rty-Boi in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
So for the last sentence you need to provide some evidence. If the lizard people are running the government in secret, how do you know?
For the rest, sure. Nothing is magic about LLMs; the government could replicate the effort with a skunkworks.
SoylentRox t1_j94pupb wrote
Reply to comment by [deleted] in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
Dude you can go look at deepmind papers and count names. Or try to write the smallest change to current SoTA AI code. A few geniuses will not cut it.
SoylentRox t1_j94pgdt wrote
Reply to comment by SgathTriallair in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
Right. And the issue with their position is that while it's possible for the government to have amazing things that are a secret, in reality most of the few secrets they did create leaked all over the place. For example the F-117 - tons of mentions in the press long before unveiling.
It's telling there are no mentions of anything indicating an AGI.
SoylentRox t1_j94p4em wrote
Reply to comment by [deleted] in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
To have +10 years of technology would take thousands of people.
SoylentRox t1_j94lra8 wrote
Reply to comment by [deleted] in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
>But we can agree to disagree.
You're wrong. Your whole argument is "they could have somehow kept thousands of people working on this in secret". Sure, and they could have secret antigravity research.
Publicly the DoD says they are far behind and need more money. And there is zero evidence for your theory.
SoylentRox t1_j94iknl wrote
Reply to comment by [deleted] in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
They don't have it. The probability that they do is a flat 0.
Reasons:
Modern AI is very advanced work that also depends on open collaboration between AI labs. You are not going to replicate that in secret.
They can't pay enough.
They do not have the budget allocated for GPUs.
Did you know that Google, Meta, and Microsoft have combined annual revenues close to the entire Department of Defense budget? The NSA's annual budget is a mere 65 billion, chump change. Google alone pulls in 280. The entire black budget is only another 50.
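A quick sanity check on those magnitudes, using rough 2022-era figures in billions of USD (the exact numbers below are my approximations, not from this thread):

```python
# Approximate 2022 annual revenues / budgets, in billions of USD.
google, meta, microsoft = 280, 117, 198
dod_budget = 780

big_tech_revenue = google + meta + microsoft
print(big_tech_revenue)                         # 595 -- same ballpark as the DoD
print(round(big_tech_revenue / dod_budget, 2))  # roughly three quarters
```

Three companies alone operate at roughly the fiscal scale of the entire DoD, which is the point: the military can't simply outspend the labs.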
They are too poor.
SoylentRox t1_j94gr7s wrote
Reply to comment by p0rty-Boi in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
> That would put them really deep into singularity territory.
There is no sign that they have this. It would be impossible to miss. Unfortunately this appears to be completely false.
From the recruiters who have contacted me for AI/defense roles, the reason is obvious. They cannot offer remotely competitive compensation. Any AI coders they have are terrible.
SoylentRox t1_j94gme9 wrote
Reply to comment by [deleted] in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
The problem is that if the military actually had singularity technology 10 years ahead of everyone else, they would have de-aged all their veterans on re-enlistment, be building massive networks of bunkers and missile-defense batteries with self-replicating robots, and so on and so forth.
The current reality simply doesn't show any sign that they have this tech. And this is because the defense contractors that employ AI coders offer about 180k annually for someone with 5 years' experience. DeepMind would pay 500k for that.
SoylentRox t1_j8znqe0 wrote
Reply to comment by DoktoroKiu in Would an arcology be conceivably possible? by peregrinkm
Yes. And/or isolated equipment for most life-support steps. So, for example, oxygen processing comes from growth tubes isolated in groups, and their feedstock supply gets sterilized before feeding into the machinery.
It's intensive in energy and manufactured spare parts, though.
SoylentRox t1_j8nzxgj wrote
Reply to comment by DoktoroKiu in Would an arcology be conceivably possible? by peregrinkm
We're not talking about self-contained per se. We are saying "the earth is no longer inhabitable" but we still have access to it, so we can send people out in space suits, or robots, to get water, air, and minerals that have to be decontaminated before they can be used.
Every human not in your hab is now dead.
SoylentRox t1_j8jgety wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
https://en.m.wikipedia.org/wiki/BIOS-3
The Soviets did it in the 1970s. Not sure what you are talking about. It's not a difficult biology problem.
SoylentRox t1_j8jdry6 wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
You get the energy from surface solar panels.
SoylentRox t1_j8jdkwn wrote
Reply to comment by nohwan27534 in Would an arcology be conceivably possible? by peregrinkm
? So your argument is to compare actual biotech to late-night infomercials?
Ultimately your argument comes down to energy. Each gram of algae can fix only so much carbon as sugar per unit of time given maximum usable sunlight. How many grams of algae do you need to fix enough carbon to keep a human alive?
The algae has not been genetically modified to make more sugar because humans have not needed to do this yet, so I don't know why you resort to comparing it to random scams.
To disprove my claim you would need to find at least 1 billion USD spent annually on this type of biotech. If it's not being spent, this approach has not been tried, and you cannot claim it won't work.
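To make the carbon/energy framing concrete, here's a back-of-envelope sketch. Every constant below is a rough figure I'm supplying for illustration (resting O2 consumption, the photosynthesis mass ratio, and a typical CO2-fixed-per-gram-of-biomass value), not something from this thread:

```python
# Back-of-envelope: how much algae growth it takes to supply one
# human's oxygen. All constants are rough, assumed values.
O2_PER_HUMAN_KG_DAY = 0.84   # typical resting O2 consumption per day
O2_PER_CO2_MASS = 32 / 44    # 6CO2 -> 6O2: ratio of molar masses
CO2_PER_KG_BIOMASS = 1.8     # ~1.8 kg CO2 fixed per kg dry algae grown

co2_needed = O2_PER_HUMAN_KG_DAY / O2_PER_CO2_MASS   # kg CO2 per day
new_biomass = co2_needed / CO2_PER_KG_BIOMASS        # kg dry algae per day
print(f"{co2_needed:.1f} kg CO2 fixed per day")      # roughly 1.2
print(f"{new_biomass:.1f} kg new dry algae per day") # roughly 0.6
```

On the order of half a kilogram of new dry biomass per person per day, which is why the binding constraint is light energy per square meter, not exotic biology.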
SoylentRox t1_j8j51pr wrote
Reply to comment by Representative_Pop_8 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
Right. Plus if you drill down to individual clusters of neurons you realize that each cluster is basically "smoke and mirrors" using some repeating pattern, and the individual signals have no concept of the larger organism they are in.
It's just one weird trick repeated a few trillion times.
So we found a "weird trick" and guess what, a few billion copies of a transformer and you start to get intelligent outputs.
SoylentRox t1_j8j4b4y wrote
Reply to comment by BlueberryTyrant in Would an arcology be conceivably possible? by peregrinkm
Hardly. The bigger the system, the larger your buffers can be. You are talking about trying to keep people alive in a hab the size of the ISS, with, I guess, just a few hours' worth of surplus oxygen.
A multi-kilometer hab with isolated grow machines (so toxins etc. can't cause them all to fail), months' worth of food, water, and oxygen stored in tanks, redundant power, redundant manufacturing, and a few other habs nearby within a reasonable travel distance with enough population capacity to house refugees... would be much more stable.
SoylentRox t1_j8j3aql wrote
Reply to comment by Baturinsky in Altman vs. Yudkowsky outlook by kdun19ham
The argument is there is no difference from the perspective of that person.
This actually means if old people have the most power and money (and they do), they will call for the fastest AGI development that is possible. The risks don't matter to them, they will die for sure in a few years otherwise.
SoylentRox t1_j8j2432 wrote
Reply to comment by throwaway764586893 in Altman vs. Yudkowsky outlook by kdun19ham
Depends on luck but sure. I agree and if it's slowly forgetting everything in a nursing home vs getting to see an AGI takeover start only to be painlessly shot, I would choose the latter.
SoylentRox t1_j9bzbhm wrote
Reply to comment by turnip_burrito in Whatever happened to quantum computing? by MultiverseOfSanity
It is not, and the number of qubits needed to do useful things, like crack encryption, is very far away.
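For a sense of that distance, a rough sketch: the ~2n logical-qubit requirement for Shor's algorithm on an n-bit modulus and the ~1,000× error-correction overhead are commonly cited ballpark figures, assumed here for illustration:

```python
# Rough gap between today's hardware and running Shor's algorithm
# on RSA-2048. Overheads are ballpark assumptions.
n_bits = 2048
logical_needed = 2 * n_bits               # ~4096 logical qubits for Shor
physical_per_logical = 1000               # assumed error-correction overhead
physical_needed = logical_needed * physical_per_logical

current_machine = 1000                    # order of today's largest chips
print(physical_needed)                    # 4096000
print(physical_needed // current_machine) # thousands of times today's counts
```

Millions of physical qubits versus roughly a thousand today, and that's before the scaling laws mentioned above make each additional qubit harder than the last.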