Submitted by RamaSchneider t3_121et4t in Futurology
A lot of discussions regarding AI and human-like actions and reactions seem to me to focus on some absolute uniqueness of every human that requires a special definition to explain. The rationale seems to be that humans are not machines and that we have some internal mechanism (soul, spirit, humanity, whatever) that gives us the power to operate as uniquely free agents - free from our biology and the basic physics that make our bodies, including our minds, function.
But what are we to think if we keep finding out that we humans are better described as biological computing machines of a sort? What if all this OpenAI work is really about self-recognition?
jeremy-o t1_jdljmke wrote
We are definitely just biological computing machines! That wasn't really in question, if you consider the science. What is in question is how soon we can replicate that. Some would say we're close. I think we're a lot further off than we assume, purely based on the complexity of the human brain's neural network vs. e.g. our best AI models.
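To put rough numbers on that gap, here's a quick back-of-envelope sketch in Python. The figures are commonly cited order-of-magnitude estimates, and treating one synapse as equivalent to one model parameter is a big simplification, so take the ratio as illustrative only:

```python
# Back-of-envelope scale comparison: human brain vs. a large language model.
# All figures are rough, commonly cited estimates, not precise measurements.

HUMAN_NEURONS = 8.6e10      # ~86 billion neurons
HUMAN_SYNAPSES = 1.0e14     # ~100 trillion synapses (order of magnitude)
GPT3_PARAMETERS = 1.75e11   # GPT-3's published parameter count

# Crudely map one synapse to one learnable parameter.
ratio = HUMAN_SYNAPSES / GPT3_PARAMETERS
print(f"Synapses per GPT-3 parameter: ~{ratio:.0f}x")  # prints ~571x
```

And even that crude count flatters the models: a biological synapse isn't a single scalar weight but has its own chemistry, dynamics, and timing, so the real gap in complexity is plausibly much larger.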
Considering the "soul" or spirit irrelevant isn't really futurism. It's more like existentialism or even nihilism, so we're talking 19th/20th-century philosophy.