Submitted by xutw21 t3_ybzh5j in singularity
4e_65_6f t1_itqt6hl wrote
Reply to comment by ReadSeparate in Large Language Models Can Self-Improve by xutw21
>Behavioral outputs ARE all that matters. Who cares if a self driving car “really understands driving” if it’s safer and faster than a human driver.
>
>It’s just a question of, how accurate are these models at approximating human behavior? Once it gets past the point of anyone of us being able to tell the difference, then it has earned the badge of intelligence in my mind.
I think the intelligence itself comes from whoever produced the data the AI was trained on, whatever that may be. It doesn't have to be actually intelligent on its own; it only has to learn to mimic the intelligent process behind the data.
In other words, it only has to know the "what", not the "how".
In terms of utility I don't think there's any difference either; people seem to be concerned with the moral implications of it.
For instance I wouldn't be concerned with a robot that is programmed to fake feeling pain. But I would be concerned with a robot that actually does.
The problem is: how the hell could we tell the difference? Especially if it improved on its own and we don't understand exactly how. It would tell you that it does feel pain, and it would seem genuine, but if it were like GPT-3, that would be a lie.
And since we're dealing with billions of parameters now, it becomes a next-to-impossible task to distinguish between the two.
ReadSeparate t1_ittgzjh wrote
I've never really cared too much about the moral issues involved here, to be honest. People always talk about sentience, sapience, consciousness, and the capacity to suffer, and that is all cool stuff for sure, and it does matter. However, what I think is far more pressing is: can this model replace a lot of people's jobs, and can it surpass the entire collective intelligence of the human race?
Like, if we did create a model and it did suffer a lot, that would be a tragedy. But it would be a much bigger tragedy if we built a model that wiped out the human race, or if we built superintelligence and didn't use it to cure cancer or end war or poverty.
I feel like the cognitive capacity of these models is the #1 concern by a factor of 100. The other things matter too, and it might turn out that we'll be seen as monsters in the future for enslaving machines or something; certainly possible. But I just want humanity to evolve to the next level.
I do agree, though, that it's probably going to be extremely difficult, if not impossible, to get an objective view of the subjective experience of a mind like this, unless we can directly inspect it somehow, rather than asking it how it feels.