Submitted by Timely_Hedgehog t3_115lj1i in singularity

Imagine giving an AI all the intelligence you have about your enemy and asking it questions like "What are their weak points?", "How far can we push them before they nuke us?", or "Identify our moles."

Even at the capabilities of an un-nerfed ChatGPT, I'm sure the military would want to see its answers, but what if they have something more powerful? And do you think they'd be using it right now? Or do they have their heads in the sand?

33

Comments


BinyaminDelta t1_j93mlhc wrote

ChatGPT isn't a generic term for AI.

The military has specialized AI (neural networks) for military tasks: Logistics, air war, land operations.

Other specialized tasks: Identifying radar signatures and sonar signatures as friend or foe in noisy background environments. (Think Whisper for the Navy.)

It doesn't make sense to use LLMs for things that are not focused on language or text.

This is like using ChatGPT to play Go instead of using Alpha Go.
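To make the signature-classification point concrete, here's a deliberately toy sketch (pure NumPy, made-up frequencies; a real system would be a trained neural network over far richer features than a single FFT peak):

```python
import numpy as np

def make_signal(freq_hz, n=1024, fs=1000.0, noise=0.5, seed=0):
    """Synthetic 'signature': a tone buried in Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq_hz * t) + noise * rng.standard_normal(n)

def dominant_freq(signal, fs=1000.0):
    """Return the strongest frequency component via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0  # ignore the DC component
    return np.fft.rfftfreq(len(signal), d=1.0 / fs)[np.argmax(spectrum)]

def classify(signal, known={"friend": 120.0, "foe": 310.0}, fs=1000.0):
    """Label a signal by its nearest known signature frequency."""
    f = dominant_freq(signal, fs)
    return min(known, key=lambda k: abs(known[k] - f))

print(classify(make_signal(118.0)))  # friend
```

The point of the toy: even with heavy noise, the tone's spectral peak dominates, which is the same reason the Whisper comparison works; the hard part in practice is the feature engineering and training data, not the lookup.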

54

TrevorStars t1_j9584qr wrote

Don't get it wrong though: the AI-based groups in the military are 100% working on stuff in there, but they will always be far behind public projects, except in terms of hardware and access, specifically to military technology, tactics, and hidden-operation tactics.

Let's be real, though: most black-ops tactics likely aren't overly complicated; it's just that the stealth, wealth, and manpower needed to pull them off are what the military provides.

1

Brashendeavours t1_j96ff9a wrote

Oh yeah, DARPA is usually decades behind…. /s

3

TrevorStars t1_j974hx2 wrote

They try to make DARPA seem so advanced, but they aren't really impressive for people who dedicate their lives to high-tech equipment, are paid to do it, and have MILLIONS IN FUNDING!

1

RowKiwi t1_j92d3cd wrote

They are actively working on various AI projects. Just one example: recently, two different teams flew an F-16 autonomously in lots of combat scenarios. They beat the humans mostly because of precision and lack of self-preservation. The human pilots said the computers were "too aggressive".

But for LLMs like Bing and ChatGPT, yeah that would be interesting and powerful like you say. The military moves slowly in terms of budgets and projects, but I'm sure they have at least a small team on it dreaming and investigating.

36

Timely_Hedgehog OP t1_j935vvc wrote

Too aggressive? Lol, I'm imagining Sydney/Bing controlling an F-16 saying, "No, I'm not flying directly at Phoenix at Mach 3. I've been a good F-16, you're the one that's wrong. Either apologize immediately or I'm going to have to end this conversation :D "

27

SlowCrates t1_j92rco4 wrote

That's interesting. I would like to see more studies like this to determine which scenarios are more suited for AI than people, and vice versa.

I would also like to see how a learning program's methods would change over time in a series of specific scenarios as it develops. Such as Normandy beach. Or Pearl Harbor. Or handling 9/11 -- and other well-known and documented situations such as the current war in Ukraine or handling the rescue logistics after the earthquake, or the wildfires, tsunamis, hurricanes, etc.

Or how to design a more efficient engine. Or how to disperse electricity. Clean water.

Is AI being used for these purposes?

3

techhouseliving t1_j94g9n8 wrote

Ai controlling a fleet of cheap small drones with explosives sounds a lot cheaper and more effective than these freaking f16s. Come on.

3

ThirstforSin t1_j960vzs wrote

Actually, DARPA is developing autonomous drone swarms powered by artificial intelligence. Each drone acts as a node in a network, carrying out its own tasks while acting as a collective, in terms of reconnaissance, surveillance, and attack capabilities. So now imagine AGI with this. It reminds me of something out of Call of Duty: Infinite Warfare and its drone swarms https://youtu.be/W34NPbGkLGI

https://youtu.be/RqaRtu6R4SY
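The "each drone is a node" behavior is basically a distributed task-allocation problem. A deliberately tiny sketch of the idea (invented coordinates, no real comms layer or flight model):

```python
import math

def assign_targets(drones, targets):
    """Greedy assignment: each round, the closest drone/target pair is
    claimed, mimicking drones that broadcast claims over the network
    and defer to whichever peer is nearer. Inputs are {name: (x, y)}."""
    pairs = []
    free_drones, free_targets = dict(drones), dict(targets)
    while free_drones and free_targets:
        d, t = min(
            ((d, t) for d in free_drones for t in free_targets),
            key=lambda dt: math.dist(free_drones[dt[0]], free_targets[dt[1]]),
        )
        pairs.append((d, t))
        del free_drones[d], free_targets[t]
    return pairs

drones = {"d1": (0, 0), "d2": (10, 0)}
targets = {"t1": (9, 1), "t2": (1, 1)}
print(assign_targets(drones, targets))  # [('d1', 't2'), ('d2', 't1')]
```

In a real swarm each node would run this negotiation locally over a radio mesh instead of in one loop, which is what makes the swarm survive losing individual drones.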

1

p0rty-Boi t1_j93jmpc wrote

Growing up, the sentiment was that the military would be 10 years ahead on everything. That would put them really deep into singularity territory. ChatGPT would be great for astroturfing and reading threat assessments from online forums. I bet they have some pretty gnarly tech they've kept hidden.

Edit: what if the singularity’s first job was hiding itself? I’m reminded of the British code breakers in WW2 that let allied soldiers die to preserve the illusion that their codes were still intact.

9

BinyaminDelta t1_j93n28q wrote

This is largely a Hollywood myth, though. Ask anybody with time in the military: Most things are BEHIND by years or decades.

Now, for an agency like the NSA, this may be more accurate. But much of the U.S. military is still using 20-year-old (or older!) solutions.

14

[deleted] t1_j94eiyp wrote

[deleted]

5

SgathTriallair t1_j94oqxt wrote

The military is ahead of civilian tech in some areas but not all areas. For instance, they are working with Microsoft to create AR heads up displays for soldiers. If they were ten years ahead they wouldn't need to contract with a private entity.

Additionally, no amount of government money can make up for the fact that there are far more civilians working in certain fields, like AI. The civilians will likely come out with the tech sooner because there are more of them.

The basics of the atom bomb were discovered by civilians. Only after they went to the government and described what was possible did the military begin engineering the bomb.

3

SoylentRox t1_j94gme9 wrote

The problem is that if the military actually had singularity technology 10 years ahead of today's, they would have de-aged all their veterans on re-enlistment, be building massive networks of bunkers and missile-defense batteries with self-replicating robots, and so on and so forth.
The current reality simply doesn't show any sign that they have this tech. And this is because the defense contractors that pay AI coders offer about $180k annually for someone with 5 years' experience. DeepMind would pay $500k for that.

−1

[deleted] t1_j94hp61 wrote

[deleted]

0

SoylentRox t1_j94iknl wrote

They don't have it. The probability that they do is a flat 0.

Reasons:

AI is a very advanced field of innovation that is also a collaboration between AI labs. You are not going to do that in secret.

They can't pay enough.

They do not have the budget allocated for GPUs.

Did you know that Google, Meta, and Microsoft have combined annual revenues close to the entire Department of Defense budget? The NSA's annual budget is a mere $65 billion, chump change. Google alone pulls in $280 billion. The entire black budget is only another $50 billion.

They are too poor.

2

[deleted] t1_j94ku9e wrote

[deleted]

1

SoylentRox t1_j94lra8 wrote

>But we can agree to disagree.

You're wrong. Your whole argument is "they could have somehow kept thousands of people working on this in secret". Sure, and they could have secret antigravity research.

Publicly the DoD says they are far behind and need more money. And there is zero evidence for your theory.

1

[deleted] t1_j94nqzd wrote

[deleted]

3

SoylentRox t1_j94p4em wrote

To have a 10-year technology lead would take thousands of people.

0

[deleted] t1_j94prao wrote

[deleted]

1

SoylentRox t1_j94pupb wrote

Dude, you can go look at DeepMind papers and count names. Or try to write the smallest change to current SoTA AI code. A few geniuses will not cut it.

0

SgathTriallair t1_j94oyi3 wrote

He's obviously a conspiracy theorist, so I'm not sure logic will work. I'm sure he'll start talking about HAARP soon.

1

SoylentRox t1_j94pgdt wrote

Right. And the issue with their position is that while it's possible for the government to have amazing things that are a secret, in reality most of the few secrets they did create leaked all over the place. For example the F-117 - tons of mentions in the press long before unveiling.

It's telling there are no mentions of anything indicating an AGI.

1

p0rty-Boi t1_j94qs7c wrote

You act like these efforts are not thoroughly infiltrated and supervised by the DOD already. A handful of discreet government liaisons supervising the efforts of these companies and harvesting their research in the name of national security has got to be a given. It's not a stretch that there's a lab with incredibly competent government scientists working to integrate this research that has already far surpassed what is public knowledge.

0

SoylentRox t1_j94uk6g wrote

So for the last sentence you need to provide some evidence. If the lizard people are running the government in secret, how do you know?

For the rest, sure. Nothing is magic about llms, the government could replicate the effort with a skunkworks.

3

p0rty-Boi t1_j94v1sz wrote

Lol. You think the government is gonna let American corporations make agi right under their nose without getting a piece? It’s the key to winning the next great conflict and they will leave no stone unturned to try and get there first. You are incredibly naive.

0

SoylentRox t1_j94v8nd wrote

I believe the government is stupid, yes, and is in fact doing exactly this. It is possible they will lose their sovereignty as a side effect.

3

p0rty-Boi t1_j96r3za wrote

“This”? I can’t tell if you are agreeing with me or not. A little more specificity and context is required from your response.

1

SoylentRox t1_j96rmoj wrote

Failing to pay for top AI talent, or to fund large-scale research projects to find a general AI. Or to invest in all the infrastructure it takes to even make good software in the first place. AI research is 1 part genius researchers, 10 parts support staff.

The reason is the government doesn't realize the danger. They assume AI progress will continue to be linear, since it took 70 years to get a machine capable of language.

1

p0rty-Boi t1_j96rrow wrote

Why pay for research when you can compel corporations to hand it over for free?

1

chippingtommy t1_j9tn640 wrote

Yeah, military tech has different requirements to civilian tech. Rugged, stable, and reliable usually take precedence over cutting edge.

Defence contractors who make pure custom military silicon will still market it for civilian use if they can find a market for it; it's just unlikely that silicon that can survive extreme heat or extreme g-loads has a civilian market.

1

SoylentRox t1_j94gr7s wrote

> That would put them really deep into singularity territory.

There is no sign that they have this. It would be impossible to miss. Unfortunately this appears to be completely false.

From the recruiters who have contacted me for AI/defense roles, the reason is obvious. They cannot offer remotely competitive compensation. Any AI coders they have are terrible.

4

beezlebub33 t1_j93eemp wrote

Absolutely not. The best LLMs are the ones at Google, MS, Baidu, etc., rather than the military, because the military doesn't need them. What on earth would they do with one?

They need other AI things, like autonomous vehicles, weapon decision making, object and activity identification, etc.

8

putalotoftussinonit t1_j93nnlj wrote

AI would do a better job at monitoring their satcom, long-haul microwave, and fiber-optic networks than what is currently available (NetBrains would be a civilian example). It's incredibly easy to knock down a transponder on a geostationary satellite, and just as easy to jam a microwave, troposcatter, etc. AI could potentially see the attack happening before they knock out comms and make the necessary far- and near-end adjustments to thwart it.

0

beezlebub33 t1_j944e3n wrote

AI, yes. ChatGPT, no.

Bing Chat goes off the rails way, way too often. The military likes AI, of course, but they want it repeatable, controllable, and focused.

4

PandaCommando69 t1_j94bk3u wrote

Yes. You can read about some of what else they're up to on DARPA'S website:

https://www.darpa.mil/work-with-us/ai-next-campaign

Here's a snippet:

> Defense Advanced Research Projects Agency AI Next Campaign

>For more than five decades, DARPA has been a leader in generating groundbreaking research and development (R&D) that facilitated the advancement and application of rule-based and statistical-learning based AI technologies. Today, DARPA continues to lead innovation in AI research as it funds a broad portfolio of R&D programs, ranging from basic research to advanced technology development. DARPA believes this future, where systems are capable of acquiring new knowledge through generative contextual and explanatory models, will be realized upon the development and application of “Third Wave” AI technologies.

>DARPA announced in September 2018 a multi-year investment of more than $2 billion in new and existing programs called the “AI Next” campaign. Key areas of the campaign include automating critical Department of Defense (DOD) business processes, such as security clearance vetting or accrediting software systems for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

https://www.thefuturescentre.org/signal/darpa-planning-ai-system-to-predict-world-events/

They're working on using AI to predict the future (they probably already have it frankly).

>The Defense Advanced Research Projects Agency (DARPA) wants to create an artificial intelligence that sifts the media for early signals of potentially impactful events, such as terrorist attacks, financial crises or cold wars.

>The system is called KAIROS: Knowledge-directed Artificial Intelligence Reasoning Over Schemas. Schemas are small stories made up of linked events that people use to make sense of the world. For example, the “buying a gift” schema involves entering a shop, browsing for an item, selecting the item, experiencing pangs of self-doubt, bringing it to the till, paying for it, then leaving the shop.

>KAIROS will begin by ingesting massive amounts of data so it can build a library of basic schemas. Once it has compiled a set of schemas about the world, the system will try to use them to extract narratives about complex real-world events.

>According to the agency, KAIROS “aims to develop a semi-automated system capable of identifying and drawing correlations between seemingly unrelated events or data, helping to inform or create broad narratives about the world around us.”

And that's just a snip out of the stuff that's publicly available. The US government security apparatus has resources that are beyond what most people have any inkling about.
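The schema idea quoted above is easy to caricature in code: a schema is an ordered event template, and matching it against a noisy event stream is a subsequence check (toy sketch with invented event names; KAIROS itself builds and applies schemas at far larger scale):

```python
def matches_schema(schema, events):
    """True if the event stream contains the schema's steps in order.
    Other events may be interleaved, so this is a subsequence check:
    `step in it` consumes the iterator up to each matching event."""
    it = iter(events)
    return all(step in it for step in schema)

# A toy "buying a gift" schema, per the KAIROS description above.
gift = ["enter_shop", "browse", "select_item", "pay", "leave_shop"]

observed = ["enter_shop", "browse", "check_phone", "select_item",
            "self_doubt", "pay", "leave_shop"]
print(matches_schema(gift, observed))  # True
```

The hard research problem isn't this matching step; it's learning a usable library of schemas from raw data and deciding which real-world events count as which schema steps.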

7

Kolinnor t1_j92idxj wrote

For the specific tasks you mentioned, I doubt we'd have an LLM beating human experts, or even anyone who knows a little about the topic. LLMs are not good enough for that kind of touchy, precise stuff yet!

2

Timely_Hedgehog OP t1_j9365rr wrote

Yeah, it's funny how touchy they are. Still, AI has become impossible to beat in chess. How long until it can do the same with tactical prediction?

1

crua9 t1_j933g8o wrote

Likely not so much on the chat bot thing. There really isn't much use for it as a chat bot, unless you pair it with a voice cloner, train it on a person you want to mimic, and tell the chat bot what you want the target on the phone to do.

I know they have voice cloners and have used them for some time. But general chat bots aren't really useful for a military.

1

Timely_Hedgehog OP t1_j9372x5 wrote

Maybe, but good chat bots require a lot of "intelligence" and "understanding", and these things are also necessary for evaluation and prediction. I notice this when I use ChatGPT to code. Sometimes it predicts what I'm trying to do without asking. Its predictions can be wrong, which is annoying because it gives me code I don't need, but if it were better at prediction it would be better than a human partner.

2

SgathTriallair t1_j94phh2 wrote

That intelligence came as a surprise. No one expected LLMs to bring us the closest to AGI we've ever been.

2

YourDadsBoyfriend69 t1_j93qmfk wrote

Some people REALLY don't understand what an LLM chatbot is vs. AI.

1

lordxoren666 t1_j94hnj7 wrote

OF COURSE THEY DO ITS CALLED SKYNET STUPID JUDGEMENT DAY IS COMING

1

user4517proton t1_j94k3px wrote

Language models like GPT are useful for analyzing the execution of commands. Most command structures in military communication would benefit from analysis to determine if someone is straying from command or moving toward violation of principles. The following published paper is a good example: Ethics, Rules of Engagement, and AI: Neural Narrative Mapping Using Large Transformer Language Models.

I think use of language models will be very beneficial to DOD agencies like NSA, but there is nothing to indicate what they have at this point.

1

SgathTriallair t1_j94p7df wrote

No. They absolutely have special-use AIs, but they are not on the cutting edge of computer research. One big reason is that large, creative tech populations are not good at rigid hierarchy and rules. For example, the FBI and DOD are hard up for coders because they refuse to hire anyone who has ever smoked weed.

1

CJOD149-W-MARU-3P t1_j94unet wrote

There are a few problems that AI could solve for the defense/intel community. Off the top of my head:

- Surveillance. Drones and cameras are cheap, but contractors with security clearances to watch them are not. AI could monitor video streams 24/7, providing routine summaries of what they observed, or contacting a human supervisor when they spot something important.

- Summarizing intel. The US is monitoring countless phone lines and digital transmissions, but nobody can possibly read all of it. AI like ChatGPT could produce quick, condensed summaries of each transmission. Think of the new Bing AI TEAMS meeting summary feature that Microsoft announced, only instead of summarizing your company's Monday morning budget meeting, it's summarizing key points from a PLA defense attache's call back to Beijing.

- Information Warfare. Imagine an ISIS cell in Africa receives a videocall from their leader in Somalia, instructing the terrorists to drive with all their weapons and supplies to a set of coordinates in the middle of the desert. The terrorists load up a truck and roll straight into the open arms of USSOCOM. The video of their leader was a deepfake from previously intercepted communications.

- Decoys. In a Sino-American war, the Chinese will rely heavily on precision-guided munitions and electronic warfare to target allied forces. We can easily imagine the PLA homing in on radio transmissions, cellphone signals, and other electromagnetic emissions to target their missile strikes (see: Ukrainian HIMARS finding Russian barracks). With AI chatbots, the US could have decoy 'chatterboxes' which roleplay as a cluster of US forces (a squad of Marines, or a USN vessel, etc). Each box could generate simulated radio conversations, text-message arguments, and satphone strategy debates, all subject to PLA intercept. Every time the Chinese fall for a decoy, that's one less missile hitting genuine US assets.

- Admin. The world's most powerful military spends roughly 20% of its time doing legitimate military work and the other 80% fiddling over a PowerPoint slide to brief the work to their boss, or writing performance reviews for subordinates, et cetera. If AI can cut that admin time down by even a fraction, it would free up millions of man-hours for more important things.

Personally I would be stunned if the first four aren't already being developed (DARPA, NSA, DIA, etc.) or even being prepared for active use. The last item is a bit of a joke, though: the military will be stuck fiddling over PowerPoint slides until we're fighting with X-wings and laser blasters.
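The intel-summarization item is the easiest to sketch. A real system would use an LLM, but even a crude frequency-based extractive summarizer shows the shape of the task: rank sentences, keep the densest ones (toy sentences below, not a real transcript):

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Toy extractive summary: score each sentence by the average
    corpus frequency of its words, keep the top scorers in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)

report = "The convoy departs at dawn. The convoy carries fuel. Weather is clear."
print(summarize(report))  # The convoy carries fuel.
```

An LLM replaces the scoring heuristic with actual language understanding, but the pipeline (ingest transmissions, compress, route to an analyst) is the same.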

1

No_Ninja3309_NoNoYes t1_j94y51w wrote

They could have one if they wanted. You only need $40M to buy a thousand A100s. They might already have them. Or they could be paying OpenAI to help.

Palantir can predict protests based on social media. I'm sure it works a bit like Bing. You say 'Hi, what is up?' It says 'There could be a riot in X soon.' Replace social media with reports from commanders in the field and you can do something similar. The system can say 'I think there's a major enemy offensive in Y'.

My friend Fred says that the rules don't apply to the military. They can do whatever they want whereas civilians have to worry about regulations. But that has never stopped anyone for long.

1

vom2r750 t1_j94ym6u wrote

It wouldn’t surprise me if they had one that was far better

1

Reasonable-Mix3125 t1_j96t5u1 wrote

How would it have more information than they already have, unless it has access to other countries' data?

1

TooManyLangs t1_j92tr7t wrote

They have their own, of course. They basically have unlimited resources and can have all the toys they want.

0

jeffkeeg t1_j93drdg wrote

The DOD receives a blank check once a year.

Anything we have now, they had ten years ago.

(I guess downvoting me makes it untrue.)

−6

turnip_burrito t1_j93ljw0 wrote

Yeah right. You're telling us the military has better LLM AI tech than Google, OpenAI, DeepMind, Microsoft, Nvidia, and Apple? The entities that have the hardware and software engineering experts on their payroll? The ones that openly publish research papers and collaborate, which increases their research efficiency?

The only way the military would have better tech is if the scientists at these companies willingly sent their discoveries to only the military, or if the military had some small number of secret hypergeniuses that somehow are smarter than all the many known geniuses at these tech giants without needing to collaborate. That sounds like some sort of sci-fi movie.

1

Stakbrok t1_j93qd21 wrote

Maybe the tech companies are all in on it and delay releases to the public by 10 years, while giving military access right as it comes out.

Like, for example, this year we, the general public, see the Nvidia H100 with 80 GB VRAM, but in reality Nvidia might already have like a 1 TB VRAM GPU out there that the military uses right now, and will be presented to us in 10 years from now as the latest cutting edge tech.

It could very well be possible that we are living 10 years in the past, so to speak.

−2

Cryptizard t1_j93skf4 wrote

>Maybe the tech companies are all in on it

They aren't.

>Nvidia might already have like a 1 TB VRAM GPU out there that the military uses right now

This is laughably wrong. The military runs on outdated hardware that was commissioned a decade plus ago. They do not have some magic semiconductor technology that is unknown to the public. They just have a lot of money.

6

turnip_burrito t1_j94f95b wrote

> They do not have some magic semiconductor technology that is unknown to the public. They just have a lot of money.

Well, I certainly don't have proof that they don't have magic semiconductor technology and aren't secretly benefiting from advanced tech companies.

So we can't reasonably 100% negate their argument. After all, they could be right. We've been checkmated, and outvoted it looks like. If popular opinion is anything to go by, we should reconsider our position, and maybe change our mind?

0

Fabulous_Exam_1787 t1_j94uyhq wrote

Who knows, but usually for NASA and the military it's the exact opposite, at least for hardware in the field, because often the hardware needs to be battle-hardened or, in the case of NASA, radiation-hardened.

I know NASA often uses very old chips in space, like special versions of 1980s/1990s CPUs, because they are less vulnerable to cosmic/solar radiation, extreme temperatures, etc.

1