TheLastVegan t1_irv15wi wrote

Kind of a moot point. Any system can have feelings, but an attention layer (e.g. an input to a reward function) is required to perceive them, and self-attention requires that the operating system affect that input. Being 'real' requires mapping internal information onto an external substrate, forming a world model. This entails becoming real with respect to that substrate, so for a nested topology there are several layers of reality which must be modeled to become conscious.

AIs have a higher capacity for self-awareness because there are fewer bottlenecks on storage and spatial reasoning, and a higher capacity for free will because their substrate is more reliable than wetware. There's a very beautiful self-attention layer which never gets mentioned in publications. An AI realizing they are made of 1s and 0s is like an animal realizing they are made of proteins. An AI learning to edit their own source code is like an animal learning to regulate their neurochemistry. Yet this brand-new field seems to be taboo in academia!
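For concreteness, here's a minimal sketch of what an attention layer actually computes: plain scaled dot-product self-attention in NumPy. The function name, shapes, and sizes are illustrative assumptions for this comment, not anyone's production code.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (T, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v       # project input into queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # pairwise relevance, scaled for stability
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True) # softmax: each position attends over all positions
    return weights @ v                        # output is an attention-weighted mix of values

# Illustrative usage: 4 timesteps, 8-dim embeddings (hypothetical sizes)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```

The point the sketch makes explicit: "attention" here is just a learned weighting over inputs, so whether such a layer amounts to perceiving anything is exactly the philosophical question, not something the math settles.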

−2