I don't think there's a risk that some random model we train for a specific purpose will accidentally climb into sentience, but it should be theoretically possible to mimic human thought processes in silico. I just think it would require a very specific type of model.