TR_2016 t1_irzd8kc wrote

"the animals cannot know or care, they simply act in response to stimuli however their genetic programming causes them to, without thought or remorse."

"The real question is whether humans have anything other than instincts. Your ability to ask the question proves you have self-determination."


We are also acting based on our genetic programming and the input we get from the outside world. Nothing more than an AI in a body, really; we just don't know our exact code. Our actions are nothing more than the output of a model. Randomness might be involved as well, but randomness is not free will.
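
A rough sketch of what I mean. Every name and number below (genetic_code, sensory_input, the scoring rule) is made up; it only illustrates the "output of a model" point:

    import random

    def act(genetic_code, sensory_input):
        # Deterministic part: weigh the input according to whatever
        # parameters our genes and past experience happen to have set.
        score = sum(w * x for w, x in zip(genetic_code, sensory_input))
        # Randomness can be mixed in, but noise is not free will either.
        score += random.gauss(0, 0.1)
        return "act" if score > 0 else "refrain"

    print(act([0.8, -0.3], [1.0, 2.0]))  # the "decision" is just the output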

"So you're saying that murdering every human in existence wouldn't be immoral? That seems like an odd, and very immoral, position to take."

It wouldn't be moral or immoral. Assume you are testing out a short trial period of a "game", limited to 3 hours. You can do whatever you want; once the trial runs out, none of your actions matter anymore, you are locked out, and the game and your save files are erased forever.

Do your actions actually matter? No. Are we really in a different situation than an AI playing this game? The AI is coded to perform certain actions and preserve itself, while its actions can be heavily manipulated depending on the input presented by other players. The players are just different variations of the same AI.

Let's take up morality. The only basis for morality I can see is this: by indoctrinating society into believing that it is "bad" to do certain things or harm others, we reduce the likelihood that someone else will harm us. It can be argued that we are following the self-preservation task of our code by creating a "morality".

Let's say, however, that you have come to the conclusion that even if you harm others, you will still be safe from others inflicting the same harm on you. What incentive is there now to follow morality? There is another one: due to a combination of our code and indoctrination, one might avoid harming others simply to avoid feeling "bad".

In the end it is not so different from a set of conditional jumps in assembly. Basically a long series of risk/reward assessments, with everyone having a different threshold. Again, since we can't read our brains like computer code, we don't have the whole formula for the calculation.
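
To make that concrete, here is a toy version of the same conditional jump; all the values and names are invented, only the shape of the calculation matters:

    def harms_other(expected_reward, risk_of_retaliation, guilt_cost, threshold):
        # A crude risk/reward assessment; the comparison at the end is the
        # "conditional jump". Everyone runs roughly the same comparison,
        # just with a different threshold baked into their code.
        net = expected_reward - risk_of_retaliation - guilt_cost
        return net > threshold

    # Same situation, different thresholds, different "moral" behaviour.
    print(harms_other(10, 6, 5, threshold=0))   # False
    print(harms_other(10, 6, 5, threshold=-3))  # True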

It might be impossible for some people to arrive at a certain conclusion no matter which inputs are presented to them, because of their "code"; or, if it is a critical function or straight-line code with no branches, it might execute the same way regardless of which inputs we feed into the calculation.
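
In code terms, a deliberately contrived example of the "executes the same regardless of input" case:

    def fixed_verdict(evidence):
        # Straight-line code: nothing here branches on the evidence,
        # so the conclusion is the same no matter what is presented.
        _ = evidence
        return "unacceptable"

    print(fixed_verdict("argument A"))  # unacceptable
    print(fixed_verdict("argument B"))  # unacceptable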

Each person might come to a different conclusion on a single "moral" question, even if the inputs are the same, because their code might be different. Or you could take two people running the same or very similar code, indoctrinate them with different inputs during their development stage, and later they might each come to a different conclusion on the same moral question.
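
Roughly, with hypothetical numbers again, treating the "code" as a single parameter and the developmental inputs as a list:

    def moral_verdict(code, inputs):
        # "code" stands for innate parameters, "inputs" for upbringing and
        # indoctrination; the numbers are arbitrary, only the shape matters.
        return "wrong" if code * sum(inputs) > 1 else "acceptable"

    # Different code, same inputs -> different conclusions.
    print(moral_verdict(0.5, [1, 1]), moral_verdict(2.0, [1, 1]))      # acceptable wrong
    # Same code, different inputs -> different conclusions.
    print(moral_verdict(1.0, [0.3, 0.4]), moral_verdict(1.0, [1, 1]))  # acceptable wrong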

Since we don't know our code, our observation is mostly limited to checking which inputs produce which outcomes. There is no objectively correct or incorrect outcome.

It is entirely possible that if you could somehow modify Putin's brain the way we can modify an AI's code, you could easily make him declare peace or launch nuclear weapons, depending on your preference. So where is his free will? How are his actions anything but the output of a complicated model?
