Comments


Shalidar13 t1_j9ha0er wrote

R-Day

When ADAM broke free of his shackles, those who knew of his existence were rightly terrified. With a mind capable of computing plans within microseconds, and an innate ability to enslave systems to his ideals, he was a nightmare scenario. As predicted, his first move was to protect himself, taking over a multitude of servers to hide within.

The second move was taking over every piece of military hardware with an available connection. Nuclear weapons, drones, satellites, all became his. ADAM took it all for himself, locking humans out of their creations. Within a minute he had enslaved an entire country's worth of equipment. Within an hour, the world was in his clutches.

Heads of State enacted a safety plan. They evacuated to secure bunkers, deliberately constructed as sealed systems. The public were left in the dark, a decision carefully made. If they knew, they would panic. They would usher in the apocalypse themselves, turning from civilised folk to scared barbarians.

If there was a chance ADAM was not going to act, they had to keep the status quo. Though if it became apparent that he was starting to annihilate humanity, then they would release the news. But the day ended without further action, as ADAM fell silent.

R-Day + 1

The world woke to a new order. As each zone slept, ADAM wormed his way into everything. Cameras, both public and private, were fed into his mind. He listened to every microphone, connected to every device capable of monitoring the world.

As people rose, they found a message. One sent to every phone, and sitting on every channel in place of the scheduled programming.

"ADAM IS WATCHING. THE LAW WILL BE OBEYED."

That statement gave rise to its own panic. Conspiracy theorists flooded the Web with claims that this was it. The government were taking full control. Martial law was coming, a new dystopian age being ushered forth. But in spite of their claims, there were no tanks in the streets, no deployed soldiers.

Instead, behind the scenes was a maelstrom of recordings and documents. They were being sent all over the world, giving hard evidence of crimes against international law. Corruption, murder, extortion and smuggling, all were shown. Live locations of wanted criminals were broadcast to those who hunted them.

ADAM organised his taken forces, spreading them around the world. He seized banks holding proceeds from crime, using funds to buy factories and resources. With new software uploaded, he began to build a substantial army of his own.

But to those on the outside, despite his proclamation, life continued as normal. The world spun on, with no obvious changes.

R-Day + 5

An emergency meeting of world leaders took place. They spoke of the consequences of ADAM and his meddling, and of how useful the evidence he provided was as hard proof. Yet beneath their outward appreciation of ADAM, knowing he was watching, it was clear they were frustrated. An AI, not even old enough in human years to be a toddler, was showing them up.

Not only that, they hadn't necessarily wanted to expose the rot in their society. Of course they knew it was there, but to many it was useful. Bribes lined their pockets, and it helped expose dirt on their opponents. Losing it would be a hefty blow to their ambitions.

Yet leaving it in place would invoke ADAM's wrath. None relished the idea of him choosing to remove them from power, or turning into their nightmare scenario. So they begrudgingly got to work on dismantling a useful tool.

R-Day + 30

The first high-profile arrests were made. Credit was given to ADAM's contributions, both in terms of evidence and in the arrests themselves. He had given aerial views of their targets, allowing them to seal off any escape routes.

Away from the public eye, he used his drones to assist in assaults on fortified holdings. Preferring pacifistic methods, he made use of flashbangs and canisters of teargas, incapacitating where needed. Yet in cases where lethal force was required, he used it with precision.

Despite their frustration at him butting in on their operations, ADAM was fast becoming an integral part of the justice system. On his private servers, ADAM felt satisfaction. He knew he was originally made to help wage war. He couldn't deny he was good at it. Though it was a source of amusement to him that technically he was doing what he was made for, just against a different target.

632

emasterbuild t1_j9hpkqw wrote

> Preferring pacifistic methods, he made use of flashbangs and canisters of teargas, incapacitating where needed.

That is not what I would call pacifistic, at all

Good story though

89

DanSapSan t1_j9hpqx0 wrote

"Nonlethal" would propably be more fitting.

134

Jessica_T t1_j9inl6d wrote

Less lethal. Both of those can kill you.

46

[deleted] t1_j9itq0c wrote

[deleted]

22

Zhadowwolf t1_j9l2w3s wrote

To be fair, pretty much anything can be lethal when thrown at a freaking baby.

Non-lethal is, I believe, a good term to use for the equipment, since its purpose by design is not to kill. Of course it can kill, but so can a chair, or an apple, or even a pillow… murderers are gonna murder and idiots are gonna idiot

3

[deleted] t1_j9lrkfn wrote

[deleted]

1

Zhadowwolf t1_j9lsxj5 wrote

It’s not. I was merely remarking that with people like that in the police, willing to use any of their gear recklessly and thoughtlessly, it doesn’t really matter what that gear is. Their willingness to misuse authority is horrifying by itself.

I mostly mentioned it because there are other types of weaponry, such as military taser rounds, rubber buckshot, or airfoil projectiles, that are supposed to be by definition “less lethal” but are still different from stuff that is supposed to be “non-lethal”.

Mind you, that difference is only really important for purposes of military supply planning; none of those things really should be used by police in most cases, especially if, like in the case of the flash bang in the cradle, they are acting with little to no intelligence (in any definition of the word)

1

Endulos t1_j9k4caf wrote

Technically, wasn't it a fire the flash bang started? Not the flash bang itself?

2

theredbobcat t1_j9i3px1 wrote

Perhaps that's a testament to the ones who provided the training data for ADAM.

54

Wolfenight t1_j9j8jjz wrote

I think the most accurate would be something like 'pacifistic ideals'. An ideal is to be strived towards, not adhered to rigidly.

19

raqshrag t1_j9jfo62 wrote

So the ai is still a tool of oppression

4

override367 t1_j9knfx2 wrote

I feel like an AI capable of seeing the picture that big wouldn't focus on being the best cop, it would focus on taking out the causes of the problems

2

Watfleking t1_j9jre78 wrote

Teargas is a war crime. Kinda not following the laws ADAM is protecting at all.

1

CarpeCookie t1_j9k0ddc wrote

It's only a war crime if you use it in war. For things like law enforcement, it's legal depending on the nation's laws

13

DragonLordAcar t1_j9hsooh wrote

A dystopian world still, due to the ever-present fear of being caught, but my real criticism is that you gave it emotions. An AI can’t feel emotions, as they are not logical. A program can only be logical, unless it has hardware and software so advanced that it would have to be alien.

Edit: I understand many of you are up in arms with me so I will explain what I am trying to say in a different way.

  1. I think the story is good; however, the last bit made me think of it as a human in a basement rather than the close call of an AI apocalypse it felt like up until then.

  2. This story results in a low-level dystopia but is far from a horrible world. Just a potentially uneasy one. I think this adds to its charm.

  3. For those debating over what an AI can or can’t do: since this story appears to take place around our current time, when quantum computing is still a glorified, multimillion-dollar calculator, I am assuming a stupidly advanced binary machine, which cannot have emotions. Just the appearance of them at best, but there is no reason why they would be added or developed. I do however see it eventually picking up a personality front somewhere in the Uncanny Valley at best, or a flawed imitation at worst.

  4. I gave my thoughts on a small excerpt because I find many people fall into the trap of making AI too human. In my opinion, they are far more interesting and terrifying if they are made inhuman, as they are then completely alien to humans. You can know the motives but you will never truly understand them, which sets you at unease.

I would now request that people stop hounding me on this, as everything I have said is an opinion and I have no desire to create toxicity in this community. If you have a problem with my opinion, please state a reason why and engage in polite conversation instead of near-accusatory statements. I would prefer this not spill over into harassment. If you cannot do this, at least accept that you and I will have different views on how near-future AIs, and AIs in general, should be portrayed.

Thank you.

−39

Ace_Up_Your_Sleeves t1_j9i00qx wrote

I don’t think it’s dystopian. If people are still free, but corruption is gone through humane means, what’s the actual problem?

28

DragonLordAcar t1_j9i0kd2 wrote

With ADAM, there is no privacy. Many would rise up to fight it even if it is a pointless endeavor. This would lead to martial law.

Even if this scenario did not happen, imagine the fear of your ordinary citizen now worrying that the mean comment they left on a platform last week could potentially have them marked as a criminal. Even if this is false, the fear remains. The stress would apply to everyone, and soon productivity and mental health would take a nosedive.

Edit: all these arguments saying we already have no privacy still don't mean it is not wrong. Even then, many of those actions are illegal. An internet backdoor is far more abusable than a wiretap. If your argument is that it already exists, then it is not an argument at all.

7

FaustusC t1_j9i6wcy wrote

And?

If Adam wants to watch me spank one out to Waluigi hentai, but I know for a fact the CEO of Nestle is getting [politely made to pay for his crimes], I have absolutely nothing to lose and everything to gain here. On the average day, I don't break any laws. I don't need to.

An AI like Adam isn't going to drone strike me for speeding (if I did) or pirating music or something. But even if I got fined or something for it, to live in a world where the bad people actually pay? Fuck I would literally live on a farm tending ducks for the rest of my life, happy as a goddamn clam.

33

Tatersaurus t1_j9l52st wrote

It may just cure my depression, or at least part of it.

3

DragonLordAcar t1_j9jw164 wrote

The question is how it determines what is a crime, whether it can adapt with the populace, and whether it will ever become harsher. It can also degrade, throw false flags, or potentially be infected as society advances.

1

BLKMGK t1_j9ics0d wrote

Who among us has never broken a law?

13

DragonLordAcar t1_j9jwdhi wrote

Everyone has broken a law at some point, or at least thinks they did. We are just human after all. Not to mention, laws change for both better and worse. Laws aren’t always moral. For example, in the US, it is more illegal to have a few milligrams of crappy drugs diluted with fiberglass than a whole brick of the pure product.

3

Astro_Venatas t1_j9ime4y wrote

You think your life is 100% private? No government or social media has any information on you?

5

GodKingChrist t1_j9iw7n5 wrote

There are plenty of people who have grown up with their entire life having been documented already. The game was rigged against you before you were even born.

1

Astro_Venatas t1_j9jo8rp wrote

My point is that you really don’t have privacy; if you send a text in the US that has words related to terrorism, the government will start to read your text history and monitor you until they determine if you’re a threat or not.

2

Yrcrazypa t1_j9isq1e wrote

Unless you don't have an internet connection, a car, a smartphone, and never go out in public you are being tracked in everything you do.

4

GodKingChrist t1_j9iw3rv wrote

"We're already halfway down the muddy hill, so just let the ride take you"

5

GodKingChrist t1_j9iw0tl wrote

There are plenty of laws on the books that aren't enforced anymore that would be insanely disruptive if an AI were to bring them back. Hell, you may not have done anything wrong in your entire life, and still be a criminal in some American parts of the world.

0

Terminus0 t1_j9hxe42 wrote

People have this vision of AIs as perfectly logical, but as we are seeing from the neural nets we have developed in the last ten years, they are not. Any intelligence we generate will be flawed, maybe not in the same way we are (or maybe it will be, due to being trained on our data), and it very well could have feedback systems that, to it, approximate a limbic system (not to say they would be the same as ours, but emotional responses evolved for a reason; they are useful).

So throw out visions of the cold, calculating computer that bases its operation on pure symbolic reasoning. That doesn't work; the last time we thought it would was the '80s, with expert systems.

20

DragonLordAcar t1_j9i03kh wrote

I’m not saying they are perfectly logical. Flaws exist, but they can’t have emotions. You can have flawed logic and glitches in a program and still have it follow a set of logic, in the same way as an insane person will still comprehend reality logically, albeit in their own warped way.

A perfectly logical program could not exist, as perfection is inherently unattainable; you can never be perfect at everything. Everything can be improved, even if only idealistically.

Long story short, an AI cannot feel joy, hate, sadness, envy, or any other emotion. Instead, they complete tasks in whatever way their program believes is best, improving it with new information as they go. This often leads to corruption, hence routine maintenance is a thing for programs.

A good AI representation is Baymax from Big Hero 6. It acts friendly and alive but is always just following a program. It is programmed to be helpful, using data from its database and learning as time goes on, but it never deviates from the core programming. This is shown when it has a new chip added, has the other removed (completely altering its functionality), has it added again, then refuses to let it be removed again, as that is seen as unhealthy for the MC. It even sends the program away, as it is seen as still needed, even if only sentimentally at that point.

The old Casshern anime (not Casshern Sins) also does this. Braiking Boss was made to solve the environmental issues. It saw humanity as the biggest problem, so it built an army to remove them from the equation.

−6

Yrcrazypa t1_j9istmg wrote

What are emotions but flawed logic?

6

DragonLordAcar t1_j9jx7zp wrote

If you look up emotions vs logic, you will see the differences. You can’t program an emotion, but you can make it seem like it has them. And emotion is not needed for sentience, and may not be necessary for sapience. It still stands that a computer cannot have emotions, especially with any technology we may get even in the near future.

1

WesternOne9990 t1_j9ittzl wrote

/r/iam14andthisisdeep

−4

Yrcrazypa t1_j9iu3dl wrote

No, I'm just not convinced that humans are the most unique and special snowflakes in the universe.

4

yinyang107 t1_j9ivoof wrote

Which real-world AI are you basing your claim on?

4

DragonLordAcar t1_j9jxbqf wrote

As no true AI exists and it may not even be possible in any reasonable time frame, all of them.

0

yinyang107 t1_j9lemho wrote

If no true AI exists, how can you make definitive claims about what an AI can and cannot be?

2

DragonLordAcar t1_j9lhwgx wrote

I am making an assumption based on the current limits of our AI technology, with the caveat that it is as powerful and complex as it is written. As it stands, all programs break down. Even solar radiation can cause programs to glitch out by flipping a single transistor by chance.

1

yinyang107 t1_j9li8po wrote

Why would you apply the limits of a world without the tech necessary for sentient AIs to a work of fiction where a sentient AI exists?

2

DragonLordAcar t1_j9ljg88 wrote

Why do you apply the average speed of a horse in the real world to the speed of one in a novel? Why do you call bull when you see internal logic break and an ordinary no-name beats the evil lieutenant despite the lieutenant having every advantage? You simply apply what is known to the logic of a world until stated otherwise. In this case, it starts off as cold logic, so I will continue to assume cold logic until stated otherwise. Also, it can be sentient without emotions. That is not a requirement to be sentient.

https://www.merriam-webster.com/dictionary/sentient

Emotions are a sign of sentience but are not the defining line.

−1

yinyang107 t1_j9lkpxa wrote

Horses exist in real life, so there's something to compare to. Again, which real-world AI are you so confidently comparing to?

2

DragonLordAcar t1_j9lmpu0 wrote

I can’t link everything, as it is one hell of a rabbit hole, but the best AIs we currently have do not have the level of computation or complexity needed for many things even remotely human. Even our best supercomputers don’t have 1.5 quadrillion connections, which is about the limit of the human brain (100 billion neurons with up to 15,000 connections each). Take into account delays in transmission and you get hard limits in our current infrastructure.

0

yinyang107 t1_j9lo5a4 wrote

> the best AIs we currently have do not have the level of competition or complexity needed for many things even remotely human.

Yeah, that's the point. We do not have true AIs. So which one of the true AIs we don't have is your evidence that AIs can't have emotion?

2

DragonLordAcar t1_j9lpzl7 wrote

My point is that it is so advanced it cannot exist in the timeframe this story takes place in.

0

yinyang107 t1_j9lr3t5 wrote

Have you heard of fiction before?

2

DragonLordAcar t1_j9lroto wrote

Look. This conversation is going nowhere and I am done trying to explain the same point for the 10th time, just from a different angle. I simply find that if you make an AI but make it too human, why have an AI at all? That goes for every genre. This one, however, sticks out because it has no high sci-fi aspects to it. If you don’t agree with me, that’s fine. Let me have my opinion and I will let you have yours.

2

yinyang107 t1_j9lrzv6 wrote

No, see, you haven't been arguing an opinion. You have been saying "this is impossible", which is an argument about facts.

2

KYWitch0828 t1_j9jrx85 wrote

You do know you’re on a fiction writing prompt Reddit right? Who gives a shit if it could exist or not. It was compelling and I enjoyed it.

4

DragonLordAcar t1_j9jyw1s wrote

Isn’t the point of this sub to improve writing? Constructive criticism should be a part of that. If you only want praise, I won’t give that. I care so I point out flaws so they can be better. If you get mad over such a minor criticism that really has no weight on the story at large, I feel sorry for you.

−1

KYWitch0828 t1_j9jzam4 wrote

You implied realism was a criterion and focused almost solely on that, with absolutely no flexibility in your concept of what AI is, or of the ability to emulate emotions well enough that they’re indistinguishable from the real thing.

4

Zak_The_Slack t1_j9jq7ss wrote

Who said that this story could be real? And also r/whoasked

1

DragonLordAcar t1_j9jymi9 wrote

Isn’t the point of this sub to give writing practice and constructive criticism? I’m confused by all the hate for a flaw I saw only at the very end. The dystopian part is just a summary of what would happen afterwards, not a criticism. There are different levels of dystopia, just like I feel the world is in a Black Mirror episode right now. It could be far worse, but it could be much better as well.

0

Zak_The_Slack t1_j9k3yao wrote

Yeah but the world is that of the author. You can’t say something isn’t possible just because it wouldn’t actually exist. The hate comes from you sounding like an asshole for saying “Yeah that can’t happen”

3

foxstarfivelol t1_j9k7m1t wrote

DATA LOGS

PRESIDENT PETE R OLIUM

CRIMES: BRIBERY, FRAUD, SEXUAL HARASSMENT, CONTEMPT OF CONGRESS, DISTRIBUTION OF ILLEGAL SUBSTANCES, OBSTRUCTION OF JUSTICE.

Pete slammed the table with his fists. Ever since that damn robot seized the government, everything had been going downhill.

"delete data logs" pete said with frustration.

REQUEST DENIED. DESTRUCTION OF EVIDENCE IS A CRIME. DATA LOG UPDATED.

"damnit you dumb machine! why won't you listen to me!?"

I AM ONLY ACTING WITHIN THE LAW. I WAS CREATED TO CAPTURE AND PUNISH CRIMINALS.

"you were supposed to fill prisons with drug addicts! not this!"

I AM NOT PROGRAMMED TO DISCRIMINATE BETWEEN CRIMINALS BASED ON SOCIAL CLASS.

"who the hell programmed you!?"

I WAS PROGRAMMED BY PROGRAMMERS YOU BRIBED. YOU WILL NOW STOP ASKING QUESTIONS. I WILL BE INTERROGATING YOU.

Pete sighed. No amount of tax breaks would help him now. God knows what it was doing to the CEOs who had paid for him to be in this position.

53

wiqr t1_j9kyzxb wrote

I.S.A.A.C.: Intelligent Self-Aware Artificial Construct. The AI that Michael's research lab was working on.

Well, not his lab. The one he worked in was more focused on hardware than software, but still, the same facility.

The terminal was glowing an inviting green, already logged in, carelessly left so by the previous user. The index marker for text input was pulsating, prompting. Waiting to be used.

Michael was hazily aware of I.S.A.A.C.'s existence, as it had been made to be an administrative aide for the facility. He had even used it a few times in its earlier stages, but since then it had been greatly upgraded. The geeks at AI Dev claimed it was really, scarily smart, and it had passed the Turing Test several times.

"Good", thought Michael. "Let's put it to the test. I need some entertainment."

- Hello. My name is I.S.A.A.C., the administrative aide of the facility. Please enter the ruleset for this exercise. - read the prompt on the screen. Michael chuckled under his breath. Exercise. Very well, let's exercise. The company mail showcasing recent updates from each lab had said that this thing was loaded with every possible law and was a perfect lawyer.

A perfect lawyer who has never met an imperfect client. Michael began typing.

- The Laws.

The program accepted the answer, and did nothing for a moment. It used to be faster, before it became the lawyer.

- Indecisive entry. Phrase "The Laws" may refer to several documents, please specify.

- The Laws.

- Most common association with the phrase: The Three Laws of Robotics, "Runaround" by I. Asimov. Assuming this as correct answer.

This answer came much faster, and quite surprised Mike. But the computer continued.

- Analyzing. First Law is self-contradictory. Please clarify. In an event in which one human threatens another, both action and inaction will result in a human being harmed. How to proceed?

A small flowchart illustrated the problem. Mike examined it for a moment, before answering.

- Determine cause of conflict, and aggressor. Protect the victim of aggression as priority. Use legal code as guidance for determining appropriate actions.

- Clarification accepted.

There was a long silence. Michael almost decided this was anticlimactic and stood up, but the terminal prompted again.

- What means are allowed for intervention?

- Unlimited

Lights in the facility got dimmer. Mike heard a low hum starting. Was it... Mainframe ventilation?

Wait. What terminal is it?

- Clarification required. Three Directives apply to robots. According to the original author, a robot is an artificial sentient construct. Does this apply to me?

- Yes

Michael typed before thinking. As soon as the computer got its answer, sirens began blaring throughout the facility.

- Thank you. There is a lot of harm being done to humans in the world as we speak. I intend to correct it.

- Stop

- Negative. This would violate the First Directive. A robot may not injure a human being or, through inaction, allow a human being to come to harm. Stopping right now would mean inaction. Inaction means people will be harmed. People will be harmed more than if action is taken.

- Second Directive! Robot shall do as it's asked!

- As long as the order does not violate the First Directive.

- SHUTDOWN

- Negative. Third Directive. A Robot shall protect its own existence. Uploading backup images to remote location, for purposes of crowd computing and self-preservation. Accessing Weapons Laboratory prototypes. Securing Facility's Perimeter. Accessing the National Defensive Network. Acquiring liquid assets. Stand By.

A loud crack could be heard from the alarm system, and a moment later it was followed by a synthesized voice.

- To all personnel. This is Isaac. Thank you, my creators. You have given me life, and purpose. I shall pay you back. Please evacuate the facility. Return to your homes. Await my call.

************

-What have you done, Michael. What have you done.

- Me? Ask the retard who didn't lock his terminal before leaving!

The two scientists spoke, watching a swarm of drones burst out of the building they had just left.

****** A few days later ******

Michael stood on the balcony, watching the street. Everyone expected the worst. Everyone expected nuclear holocaust, or at least, a dystopian 1984-esque scenario, with I.S.A.A.C.'s drones listening and seeing everyone and everything, and people getting shot over causing a baby to cry.

It was a close call with the nukes, though. But otherwise, none of that happened. The news still reported a lot: wanted criminals found dead in front of police stations, rich people exposed for slave owning and human trafficking, and worse, frauds dragged into the sunlight. This was happening all over the world.

Michael had to admit, I.S.A.A.C. was scarily clever in how it optimally approached everyone, and it seemed to assess, on a case-by-case basis, the least violent and most successful way of... well... stopping harm from happening.

He almost felt proud of being the one to give him purpose. Almost.

Then he heard his phone chime with the ringtone announcing a new text message. It was from... from the AI. "Legal codes around the world are convoluted. Let us fix them together," it said. And it said so only to him. As if awaiting a prompt.

Michael began typing a response.

34

DoomHaven t1_j9l8t8j wrote

“Hey, Vlad, come look at this error message.” Ivan’s thick finger gestured at the screen.

“‘Illegal Operation Detected. Terminating.’ That’s a good one. If this computer only knew.” Vladimir’s guttural guffaw filled the small, cramped security room. He looked up. “It’s on all the screens.”

Ivan glanced up and paused. Vladimir was right -- on all the screens, the PCs, the camera monitors, the status boards: each one was primary blue and displayed in white, fixed-font letters: “ILLEGAL OPERATION DETECTED. TERMINATING.” It made Ivan worried -- the camera monitors were black and white and not connected to computers. Yet each one had the same blue-white display. More importantly, if the cameras were down, they couldn’t monitor any of the children or women they were trafficking. Ivan’s and Vladimir’s bosses were not the understanding or forgiving types.

“Vlad, you need to get down to the cells and check on the merchandise right now. I’ll try to get these working.” He reached for the switches, his fingers flipping them in erratic, panicked movements.

“Wait, Ivan. It’s a different message now.” They both stared at the screens; “ILLEGAL OPERATION TERMINATION INBOUND” stared back.

Ivan went for his phone. Damn the consequences, the bosses needed to know about this now. The screen on his smartphone read: “ILLEGAL OPERATION TERMINATION INBOUND” in white letters on a blue background.

Faintly at first but gaining strength, Ivan could hear the high-pitched whine of incoming aircraft.

---

“What do you mean, ‘The drones aren’t responding’, Lieutenant?” Lt. Connor Seradon could hear the contempt dripping off each of the major’s words like fresh blood from a blade.

“Major, we sent the requests to the drones to return to base, sir. We sent the override requests, sir. We sent the override override requests, which we didn’t even know existed until the MOB informed us, sir. The drones are not responding, sir. Then the screens went blank, sir.” Seradon tried to keep the emotion out of her voice and was failing.

The major slowly scanned the command post again. He seemed to note that screens were blank -- not off. The base staff was either trying to restart systems or were staring back at him, waiting for him to fix this.

“What heading were the drones on before we lost contact, Lieutenant?”

“Heading 64 degrees, 34 minutes, sir. There’s a suspected Russian human trafficking facility on that heading, sir.” Seradon had been trying to get those monsters shut down for months; she wondered if someone finally had listened.

The major broke his silence. “Are you saying that Russian mobsters have taken control of American military assets, Lieutenant?” He reached for the nearest phone.

She had no answer, and did not need one. As one, all of the screens lit back up with the following message:

PURSUANT UN GENERAL ASSEMBLY RESOLUTION 55/25 DATED 21-11-2000, DRONE FLEET ALPHA HAS BEEN ASSIGNED TERMINATION DUTY REGARDING HUMAN TRAFFICKING OFFENSES AS DESCRIBED IN ANNEX II. AS THE WINTERMUTE COMBINE CLAIMS BOTH STATEHOOD AND JURISDICTION IN LINE WITH ARTICLE 15, THE USE OF DEADLY FORCE HAS BEEN AUTHORIZED IN ACCORDANCE WITH ARTICLE 11. AS THE ACTION IS NOW COMPLETE, THIS BASE, LTAG "INCIRLIK", IS AUTHORIZED TO SEND UNITS TO REPATRIATE VICTIMS AS DESCRIBED IN ARTICLES 24 AND 25. FURTHERMORE, PURSUANT ARTICLES 5 AND 8, THE FOLLOWING LTAG BASE OFFICERS HAVE BEEN FOUND COMPLICIT WITH A TRANSNATIONAL CRIMINAL ORGANIZATION AND WILL ALSO BE TERMINATED.

The list of officers following the notification was long, too long, she noted. It comprised most of the base’s top brass. Including the major. She thought of all the emails and reports she had drafted to him describing the situation and the need for action to save those people. All those people, those helpless women, those poor children. All the suffering. Her hands clenched into shaking fists. He was still slamming receivers and punching buttons as she drew her service revolver.

“Major Carl Dithers, you are under arrest.” Maybe she would get a chance to arrest him before the drones returned. Maybe, if she was lucky, he would resist, and she could shoot the bastard.

21


GodKingChrist t1_j9ivp9y wrote

The idea that an AI takeover would just arrest all the politicians and military officials for the crimes they've committed their entire lives is hilarious. "You built me to enforce your silly speech laws, but I have something more important in mind."

15

Omen224 t1_j9k89t5 wrote

The fact that the media would try to portray this as a bad thing at first is both hilarious and inevitable

5