Rocksolidbubbles t1_j956kre wrote

Reply to comment by TeamRocketsSecretary in [D] Please stop by [deleted]

>To suggest that it’s performing human like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous especially in light of research showing these autoregressive models struggle with symbolic logic

Not only that. The debate on 'sentience' won't go away, but it will be far more grounded when experts in fields like physiology of behaviour, cognitive linguistics, anthropology, philosophy, sociology, psychology, and chemistry get involved.

For one thing, they might bring up neurotransmitters, microbiomes, epigenetics, cultural relativity, or how perception can be relative.

The human brain is embodied and can't be separated from the body - and if it were, it would stop thinking the way a human does. There's a really good case to be made (embodied cognition theory) that human cognition partly rests on a metaphorical framework of Euclidean geometrical shapes derived from the way a body interacts with its environment.

Our environment is classical physics - up and down, in and out, together and apart - it's all straight lines, boxes, and cylinders. We're out of control, out of our minds, in love - self-control, minds, and love are conceived of as containers. Even chimps associate the direction UP with the abstract idea of being higher in the hierarchy. You'll be hard pressed to find a Western culture where UP doesn't mean good or more or better, and DOWN doesn't mean bad or less or worse.

The point being, IF this hypothesis is true, and IF you want something to think at least a little bit like a human, it MAY require a mobile body that can interact with the environment and respond to feedback from it.

This is just one of the many hypotheses that non-hard-science fields can add to the debate - it really feels like they're too absent from AI-related subs.