dragonblade_94 t1_j9h51sq wrote

You're shoving a lot of words in my mouth.

>the burden is on you to provide any evidence whatsoever that the only requirement for being able to feel and express emotions, both tangible and intangible, love or pain or both simultaneously, is a sensor and raw data.

I'm not here to play burden tennis with you, especially not in a heavily philosophical debate based on theoretical technology. Nor did I ever list out the requirements of emotion as being "a sensor and raw data." The basis of my position is the idea that there is nothing inherent and exclusive about the human body that requires natural reproduction to be made manifest. This was the intent behind my homunculus example; I want to know what you consider to be the defining difference, whether it be the existence of a soul, a creator, free will, whatever.

>by your logic, the only thing the world needs to permanently solve all depression everywhere is raw data on happy because we already have senses

From a causal deterministic standpoint, yes, it would be theoretically possible to control a person's emotions using controlled stimuli. This idea falls adjacent to Laplace's Demon, a concept describing a theoretical entity that knows the state of every atom in the universe and can therefore perfectly tell the future. Such an entity could in theory use the same knowledge to make adjustments in a way that changes a person's thought process.

The problem here is twofold: first, we simply don't have the level of technology needed to fully map and understand the structure of the human brain. In order to affect the brain toward a precise outcome, we would need to know exactly how it works down to the smallest scope. Second, we would need a computer or other entity capable of storing and simulating not only a given brain, but every possible interaction and stimulus that could affect it (essentially the universe). Outside of some fantastical technological revelation, this makes perfect prediction and control virtually impossible.

What we can do though is crudely alter the chemicals in the brain & body to simulate different states of mind. Medications such as anti-depressants contain chemicals that, when received by the receptors in your brain, forcibly shift your brain function. Chemicals and electrical impulses would be our equivalent to internal 'data.'

>and your logic says those are exactly equal to sensors

Again, never said they were exactly equal, but rather they can be created to serve the same purpose. I wouldn't even consider this controversial; the existence of bionic eyes or cochlear implants, allowing blind and deaf people to see and hear, grounds this in present reality.

dragonblade_94 t1_j9gbwsi wrote

Your argument is honestly just a half-dozen ways of rephrasing "machines cannot feel" without really positing why any of your points lead to this assumption.

>you can feel pain, a physical sensation. you can feel sad, an emotion. not interchangeable, but they are both things you feel. one is tangible, one is not.

I feel like the 'tangibility' question is vague to the point of being moot in this context. Both being stabbed and grieving over a loss are the brain processing signals caused by stimuli. I would definitely classify one as far more complex a brain activity than the other, but I don't believe there's an inherent difference other than which part of the brain is doing the processing. I can see a philosophical argument being made over their separation, but you would need to go a lot more in depth to explain why exactly the difference is valid.

> sensors and programming/algorithms acting as if they are nerves does not equal nerves

Why? Legitimately, why would an apparatus that does the exact same job as nerves not be equivalent other than material makeup? Given your response to the homunculus example, I have to assume you don't consider the choice of materials to be important to the question, so I'm curious as to your reasoning.

> the ai also can not catch the flu and feel like it is nauseous because it doesn’t have any parts that can be nauseous, nor does it have any living cells for the virus to infect thereby presenting the opportunity to make it nauseous

I'm not sure I see the point here. Commenting on robotics being resistant to disease doesn't really mean much in a discussion about sentience.

>AI chooses based on inputs/outputs, we do not choose to feel what we feel, that decision was made for us, not by us.

Looking through a purely deterministic perspective, this is exactly how humans operate as well; everything we think and feel is caused by chemical reactions and stimuli bound by the laws of physics. But that doesn't prevent us from feeling emotion, it just implies that the feeling was inevitable.

>and if you could build a human cell by cell, yes, that would be a “machine"

If I'm interpreting your argument correctly, is your view that the existence of an intentioned creator is what distinguishes an 'unfeeling' being from one that feels?

dragonblade_94 t1_j9g02dl wrote

I think you need to clarify exactly how you define feeling, as it seems like you keep swapping between physical pain and emotion, or at least treating one as if it requires the presence of the other.

I would also like to ask how exactly you would define a 'machine.' If we could recreate a human cell-by-cell, including full creative liberty over how the brain is built and functions, would it be a machine? Would this homunculus have 'feeling,' despite being functionally programmed by its creators? Is the qualification of a machine that it must be built with inorganic materials?

dragonblade_94 t1_j9ffhu2 wrote

Are we talking physical pain, like getting stabbed, or emotional grief?

If the former, there's no reason to think a machine cannot be designed to detect pain. In organics, it just boils down to nerve endings sending a signal that translates to "whatever I am currently experiencing is bad." We already have capacitive tech; it wouldn't be all that exceptional to throw it on the exo-layer of a robot and have it move away from anything that applies enough pressure/heat/charge/etc.
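To make the idea concrete, here's a minimal sketch of that reflex: sensors report stimulus magnitudes, and the robot "withdraws" from anything past a damage threshold, mirroring what nociceptors do in organics. All names and threshold values here are hypothetical illustrations, not any real robotics API.

```python
from dataclasses import dataclass

# Hypothetical damage thresholds per modality: pressure (kPa),
# temperature (deg C), electrical current (mA).
THRESHOLDS = {"pressure": 300.0, "temperature": 50.0, "current": 10.0}

@dataclass
class SensorReading:
    modality: str  # e.g. "pressure", "temperature", "current"
    value: float   # magnitude of the stimulus

def is_noxious(reading: SensorReading) -> bool:
    """Flag a reading as 'bad' once it crosses its modality's threshold."""
    limit = THRESHOLDS.get(reading.modality)
    return limit is not None and reading.value >= limit

def react(readings: list[SensorReading]) -> str:
    """Withdraw if any stimulus on the exo-layer is noxious."""
    return "withdraw" if any(is_noxious(r) for r in readings) else "continue"
```

The point isn't that this loop *is* pain, but that the functional role of a nociceptor, "detect damaging stimulus, trigger avoidance," is trivially implementable.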

If the latter, that's just ground-level circular reasoning: "Robots can't feel because you need to feel to feel, which robots cannot do."

dragonblade_94 t1_j0xsu8l wrote

I've been seeing a large number of 'tech enthusiasts,' especially around the AI art discussion, who appear to subscribe to the idea that technological advancement will always be a net boon, and should therefore be allowed to advance unchecked. They don't want to be bothered with pesky things like ethics, regulation, or theoreticals about possible harm.

The way things are going, I feel like we are on an inevitable path toward heavy automation. Low-skilled labor will feel it the worst. The question is, once all the blue-collar jobs are sucked up, do we expect all of these people to move to a viable trade? Do we now expect everyone to learn a high-skill subject to earn a living wage? Are those unable to do so unworthy of supporting themselves? If we want this advancement to be beneficial for everyone, real discussion about regulation and/or reform needs to be had.
