
El_duderino_33 t1_jdilzpx wrote

Yes, your idea is good, and the problem would not be the AI model. The problem would be the same one we have now: the people.

You're falling for the common misconception that the majority of other people must think in a way similar to you. Unfortunately for society, judging from your post's description, your willingness to entertain other viewpoints already makes you a fairly rare individual.

This line:

"I chose to make an genuine effort to understand the rationale behind their beliefs"

Good on you, that's wise, but it's not common. The part where you had to make an effort to understand is what's gonna trip up a lot of folks.

tl;dr: the "you can lead a horse to water..." cliché sums up my post


s1L3nCe_wb OP t1_jdiousi wrote

But my point is that the agent making the effort to genuinely understand your ideas/values/beliefs would not be human in this case; it would be an AI, which is precisely why I think this could work substantially better than the average human exchange of ideas.

When a debate emerges, most people are accustomed to taking a confrontational approach to the conversation: the other party tries to disprove your point, and you respond by defending your point and/or attacking theirs. But when the other party invests its time in fully understanding the point you are trying to make, the tone of the conversation changes dramatically because the objective is entirely different.

My main point regarding the human aspect of this discussion is that when we show real interest in understanding a point someone is making, the quality of the interaction improves dramatically. And, like I said, in my line of work I've seen this happen very often. Maybe that's why I'm more hopeful than the average person when it comes to this subject.
