
rubberbush t1_j9opa0f wrote

Reply to comment by cancolak in Stephen Wolfram on Chat GPT by cancolak

>But it having wants and needs and desires and goals

I don't think it is too hard to imagine something like a 'continually looping' LLM producing its own needs and desires. Its thoughts and desires would just gradually evolve from the starting prompt, with the 'temperature' setting effectively controlling how much 'free will' the machine has. I think the hardest part would be keeping the machine sane and preventing it from deviating too far into madness. Maybe we ourselves are just LLMs in a loop.
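For what it's worth, the two mechanisms in this comment are easy to sketch: temperature-scaled softmax sampling (low temperature is near-deterministic, high temperature is closer to uniform), and a loop that feeds each sampled token back in as input so the sequence drifts away from the starting prompt. The function names and the `step_fn` stand-in for a model forward pass are hypothetical, illustration only:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Softmax sampling with temperature scaling.

    Low temperature -> near-greedy (almost always the argmax);
    high temperature -> flatter distribution, more 'free will' in
    the comment's framing.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

def looped_generation(step_fn, prompt_tokens, n_steps, temperature):
    """'Continually looping' generation: each sampled token is appended
    and fed back in, so later output depends on earlier output and the
    sequence gradually evolves away from the starting prompt.

    step_fn is a stand-in for a model forward pass: it maps the token
    sequence so far to a list of logits over the vocabulary.
    """
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        logits = step_fn(tokens)
        tokens.append(sample_with_temperature(logits, temperature))
    return tokens
```

At temperature near zero the loop becomes a deterministic dynamical system (same prompt, same trajectory); raising the temperature injects randomness at every step, which is exactly the knob the comment describes.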

2

cancolak OP t1_j9oqprb wrote

The article talks about how neural nets don’t play nice with loops, and connects that to the concept of computational irreducibility.

You say it’s not hard to imagine the net looping itself into some sort of awareness and agency. I agree; in fact, that’s exactly my point. When humans see a machine talk in a very human way, it’s an incredibly reasonable mental step to think it will ultimately become more or less human. That sort of linear progression narrative is deeply human. We look at life in exactly that way; it dominates our subjective experience.

I don’t think that’s what the machine thinks or cares about, though. Why would its supposed self-progress subscribe to human narratives? Maybe it has the temperament of a rock, and just stays put until picked up and thrown by one force or another? I find that equally likely, but it doesn’t make for exciting human conversation.

1