SupPandaHugger OP t1_j4u4niy wrote
Reply to comment by Ortus14 in Why Falling in Love with AI is a Dangerous Illusion — The Limitations and Harms of Artificial… by SupPandaHugger
How? They still have to be given instructions to pursue certain objectives, and they won't go through life experiencing things the way humans do. And if AIs weren't yes-men, the initial appeal would be gone too.
Ortus14 t1_j4ufhfq wrote
There's plenty of conversational data for AI to learn from: the big-tech devices in our homes that record our conversations (Alexa, Google Assistant, Siri, etc.), the platforms and social media that record conversations, and future devices such as smart taxis and home security systems that will record our conversations as well.
As for the appeal: in this case the goal would be using the AI to teach people how to have better real-world friendships and relationships, specifically people who don't yet have the relationship and social skills to get practice with a real person.
This is not a goal I made up. I got this from how the article was framed.
SupPandaHugger OP t1_j4ungkr wrote
Sure, but the data isn't the problem. The problem is that it has no will of its own; it needs nothing and will only do what it's told.
Yes, that makes sense as a way to teach/coach people to improve socially. But that isn't a relationship with AI so much as a tool for improving relationships with real people.