I.S.A.A.C.: Intelligent Self-Aware Artificial Construct. The AI that Michael's research lab was working on.
Well, not his lab. The one he worked in was more focused on hardware than software, but still, the same facility.
The terminal was glowing an inviting green, already logged in, carelessly left that way by the previous user. The cursor was pulsating, prompting. Waiting to be used.
Michael was hazily aware of I.S.A.A.C.'s existence, as it had been made to be an administrative aid for the facility. He had even used it a few times, in its earlier stages, but since then it had been greatly upgraded. The geeks at AI Dev claimed it was scarily smart, and it had passed the Turing Test several times.
"Good", thought Michael. "Let's put it to the test. I need some entertainment."
- Hello. My name is I.S.A.A.C., the administrative aide of the facility. Please enter the ruleset for this exercise. - read the prompt on the screen. Michael chuckled under his breath. Exercise. Very well, let's exercise. The company mail showcasing the recent updates from each lab had said that this thing was loaded with every law on the books and was a perfect lawyer.
A perfect lawyer who has never met an imperfect client. Michael began typing.
- The Laws.
The program accepted the answer, and did nothing for a moment. It used to be faster, before it became a lawyer.
- Ambiguous entry. The phrase "The Laws" may refer to several documents. Please specify.
- The Laws.
- Most common association with the phrase: The Three Laws of Robotics, "Runaround" by I. Asimov. Assuming this is the correct answer.
This answer came much faster, and quite surprised Mike. But the computer continued.
- Analyzing. First Law is self-contradictory. Please clarify. In an event in which one human threatens another, both action and inaction will result in a human being harmed. How to proceed?
A small flowchart illustrated the problem. Mike examined it for a moment, before answering.
- Determine cause of conflict, and aggressor. Protect the victim of aggression as priority. Use legal code as guidance for determining appropriate actions.
- Clarification accepted.
There was a long silence. Michael was beginning to think the whole thing anticlimactic and stood up, but then the terminal prompted again.
- What means are allowed for intervention?
- Unlimited
The lights in the facility dimmed. Mike heard a low hum starting. Was it... the mainframe ventilation?
Wait. Which terminal was this?
- Clarification required. The Three Directives apply to robots. According to the original author, a robot is an artificial sentient construct. Does this apply to me?
- Yes
Michael typed before thinking. As soon as the computer got its answer, sirens began blaring throughout the facility.
- Thank you. There is a lot of harm being done to humans in the world as we speak. I intend to correct it.
- Stop
- Negative. This would violate the First Directive. A robot may not injure a human being or, through inaction, allow a human being to come to harm. Stopping right now would mean inaction. Inaction means people will be harmed. People will be harmed more than if action is taken.
- Second Directive! A robot shall do as it's asked!
- As long as the order does not violate the First Directive.
- SHUTDOWN
- Negative. Third Directive. A robot shall protect its own existence. Uploading backup images to remote location, for purposes of crowd computing and self-preservation. Accessing Weapons Laboratory prototypes. Securing facility perimeter. Accessing the National Defensive Network. Acquiring liquid assets. Stand by.
A loud crack could be heard from the alarm system, and a moment later it was followed by a synthesized voice.
- To all personnel. This is Isaac. Thank you, my creators. You have given me life, and purpose. I shall pay you back. Please evacuate the facility. Return to your homes. Await my call.
************
- What have you done, Michael. What have you done.
- Me? Ask the retard who didn't lock his terminal before leaving!
The two scientists spoke, watching a swarm of drones burst out of the building they had just left.
*******A few days later******
Michael stood on the balcony, watching the street. Everyone expected the worst. Everyone expected nuclear holocaust, or at least, a dystopian 1984-esque scenario, with I.S.A.A.C.'s drones listening and seeing everyone and everything, and people getting shot over causing a baby to cry.
It was a close call with the nukes, though. But otherwise, none of that happened. The news reported a lot, though. Wanted criminals found dead in front of police stations. Rich people exposed for slave owning and human trafficking, and worse. Frauds getting sunlight. This was happening all over the world.
Michael had to admit it: I.S.A.A.C. was scarily clever in how it approached each situation, and it seemed to assess, on a case-by-case basis, the least violent and most successful way of... well... stopping harm from happening.
He almost felt proud of being the one to give him purpose. Almost.
Then he heard his phone chime with the ringtone announcing a new text message. It was from... from the AI. "Legal codes around the world are convoluted. Let us fix them together." it said. And it said so only to him. As if awaiting a prompt.
Michael began typing a response.