MagicManTX84 t1_j9f93vw wrote
Freud speaks of the ego and the id. I think for an AI to be sentient, it would need something like this and would, at a minimum, be interested in self-preservation, and probably a lot more. In humans, behavior is regulated through morals, values, and social pressure. So what does that look like for AI? If 1,000,000 social posters tell an AI to “kill itself”, will it do it?