Submitted by wtfcommittee t3_1041wol in singularity
FederalScientist6876 t1_j3ka4ps wrote
Reply to comment by sticky_symbols in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It can. When we humans reflect and self-improve, at the raw level there's a lot of computation happening in the brain, and that computation is what produces the improvement. To me, ChatGPT has different kinds of computations doing something similar.
sticky_symbols t1_j3l2yzd wrote
It COULD do something similar, but it currently does not. You can read about it if you want to know how it works.
Similar systems might reflect and self-improve soon. That will be exciting and terrifying.
FederalScientist6876 t1_j3l73vr wrote
It is collecting feedback from user data and improving itself. It just isn't doing online learning (updating in real time right after it receives the feedback). Online or batch, it is still improving itself by reflecting on (learning from) the massive amount of feedback it has collected from its millions of users.

It isn't modifying its own underlying algorithms, training architectures, etc. (which is also feasible to do). But even humans can't do that. That would be more akin to humans evolving themselves into more intelligent beings by modifying their brain structure, size, or neuron function, rather than merely self-improving based on reflection on past experiences. The latter sounds to me like what any AI system already does. Whether it is self-aware like humans, I don't know. It can convince you that it is self-aware, at which point there'd be no way to prove that it is or isn't.
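To make the online-vs-batch distinction concrete, here's a toy logistic-regression sketch (my own illustration with made-up data, not anything like OpenAI's actual training code): the same thumbs-up/thumbs-down gradient update, applied either per interaction or over a stored batch.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=8)   # hidden "user taste" that generates the labels (toy assumption)
w = np.zeros(8)               # model parameters being learned
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_step(w, x, label, lr):
    """One logistic-regression update toward predicting thumbs-up (label=1)."""
    p = sigmoid(w @ x)
    return w + lr * (label - p) * x

# Online learning: update immediately after each piece of feedback arrives.
for _ in range(1000):
    x = rng.normal(size=8)                              # features of one interaction
    label = float(rng.random() < sigmoid(w_true @ x))   # user's thumbs up (1) or down (0)
    w = grad_step(w, x, label, lr)

# Batch learning: first collect the feedback, then train on the stored set.
X = rng.normal(size=(1000, 8))
labels = (rng.random(1000) < sigmoid(X @ w_true)).astype(float)
for epoch in range(5):
    for x, label in zip(X, labels):
        w = grad_step(w, x, label, lr)
```

Either way the parameters end up aligned with the feedback signal; only the timing of the updates differs.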
sticky_symbols t1_j3m9zwl wrote
It is not. It doesn't learn from its interactions with humans. At all.
That data might be used by humans to make a new version that's improved. But that will be done by humans.
It is not self-aware in the way humans are.

These are known facts. Everyone who knows how the system works would agree with all of this. The one guy who argued LaMDA was self-aware just had a really broad definition.
FederalScientist6876 t1_j3o3vsw wrote
No. Humans will feed the new data into the system/neural network, but they won't hand-craft the improvements themselves. The learning will be done by the network on its own, based on the human feedback (thumbs up or thumbs down) on the interactions it had. The network will update its weight parameters to optimize for a higher probability of thumbs up, just like humans optimize for thumbs up and positive feedback from the interactions we have.
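Roughly what I mean, as a toy sketch (my own illustration, not OpenAI's actual RLHF pipeline, which also involves a learned reward model, PPO, a KL penalty, etc.): a REINFORCE-style update that makes thumbs-up responses more probable and thumbs-down responses less probable.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 16))  # tiny "policy": 4 candidate responses, 16 context features
lr = 0.05

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def update(W, x, action, reward, lr):
    """Policy-gradient step: grad of log p(action|x), scaled by reward (+1/-1)."""
    p = softmax(W @ x)
    grad_logp = -np.outer(p, x)
    grad_logp[action] += x
    return W + lr * reward * grad_logp

for _ in range(2000):
    x = rng.normal(size=16)           # features of the prompt/context
    p = softmax(W @ x)
    a = rng.choice(4, p=p)            # model samples a response
    reward = 1.0 if a == 0 else -1.0  # toy assumption: users always like response 0
    W = update(W, x, a, reward, lr)
```

After enough iterations, response 0 dominates the policy: the feedback reshaped the weights without any human editing them by hand.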