Comments

Gari_305 OP t1_j7zdrb6 wrote

From the Article

>Similar to human coworkers, robots can make mistakes that violate a human’s trust in them. When mistakes happen, humans often see robots as less trustworthy, which ultimately decreases their trust in them.
>
>The study examines four strategies that might repair and mitigate the negative impacts of these trust violations. These trust strategies are: apologies, denials, explanations, and promises of trustworthiness.

1

Zedd2087 t1_j7zkjgf wrote

It's hard to trust anyone or anything that's there to take your job.

30

Banana_bee t1_j7zrwse wrote

In my opinion this is largely because, until recently, if a robot made a mistake once, it would always make that same mistake in that situation. The 'AI' was effectively an incredibly long series of 'if' statements.

With ANNs that isn't necessarily true, but often is, as the models are usually not continuously trained after release - because then you get Racist Chatbots.
This is changing as we use smaller secondary models to detect this kind of content and reinforce the network's training in the direction we want - but it's still not hugely common.
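
To make that contrast concrete, here's a rough, hypothetical sketch (the function names and the keyword filter standing in for a real learned classifier are made up, not from the article):

```python
# Hypothetical sketch of the two approaches described above.

# "Old-style" AI: a fixed chain of if-statements. Given the same input,
# it makes the same mistake every single time.
def rule_based_reply(message: str) -> str:
    text = message.lower()
    if "hello" in text:
        return "Hi there!"
    if "refund" in text:
        return "Refunds take 5-7 business days."
    return "Sorry, I don't understand."

# Newer pattern: a smaller secondary model screens candidate outputs,
# and only acceptable ones are served (or fed back as a training signal).
def is_acceptable(candidate: str) -> bool:
    # Stand-in for a learned toxicity/safety classifier.
    blocked_terms = {"offensive_term_1", "offensive_term_2"}
    return not any(term in candidate.lower() for term in blocked_terms)

def filtered_reply(candidates: list[str]) -> str:
    for candidate in candidates:
        if is_acceptable(candidate):
            return candidate
    return "I'd rather not respond to that."
```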

6

OvermoderatedNet t1_j8066r2 wrote

Humans tend to overestimate their own competence and that of other humans, which means a given robot is likely to be held to a higher standard. There's also a level of otherness with AI and robots that doesn't exist between humans, so naturally robots will face higher standards/discrimination until humans see them as part of their in-group.

2

ATR2400 t1_j80bqni wrote

I know that some, like Character.AI, get trained a bit through conversation now. The AIs I've made seem to learn some behaviours after a long conversation that get pulled into new chats. Like if I tell it to speak in a certain way and keep reinforcing that in one chat, then when I start a new one it'll keep it up despite having no memory of being explicitly told to act that way.

1

rogert2 t1_j80g5ln wrote

Carpenters don't trust their table saws, either.

Robots are not thinking, learning things. It would be a category error to trust a robot, just as it would be to extend forgiveness to one.

The WEF's myopia is an endless source of incredibly stupid takes.

27

bwanabass t1_j81gedp wrote

So, pretty much how most humans treat other humans then?

12

guy-with-a-large-hat t1_j81i6kw wrote

This is a really stupid take. A robot is not a person, it's a tool; if the tool doesn't work, it's useless and dangerous.

14

Actaeus86 t1_j82f5ab wrote

In fairness to the robots, humans struggle to trust each other, and forgiveness can be hard to come by.

3

WinterWontStopComing t1_j83yjoi wrote

Agreed. But I don't distrust the robots themselves. We aren't there yet. They can't think. I distrust the greedy and the power hungry who are going to brazenly destroy order with impunity, using robots to help their bottom line.

5

Zedd2087 t1_j83zb55 wrote

But is the robot not just an extension of those people? Sure, they will use them to take jobs, but I'm betting they will also be used to enforce shitty policies or even to police workers; it's cheaper to buy a bot than to pay a manager.

3

Sanity_LARP t1_j84z7lr wrote

Seems like you're being pedantic about what "learning" is, because machine learning exists and I don't know how you can argue that it isn't happening at all. You could argue it doesn't work the same way as our learning or that it's fundamentally different, but by the accepted meaning of the term, robots can learn. Can ALL robots learn? Obviously not. But you don't have to dig very far at all to find examples of learning.

3

rogert2 t1_j8540xj wrote

My web browser holds onto my bookmarks, and even starts to suggest frequently visited websites when I type URLs into the bar. Do you really want to call that "learning"? Learning of the kind that's necessary to support interactions where trust and forgiveness are meaningful concepts?

It seems like you're trying to use the word "learning" to import a huge amount of psychological realism so you can argue that people have an obligation to treat every neural network exactly like we treat humans -- humans that are unimaginably more mentally sophisticated than a computer file that contains numeric weightings.
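
For scale, here's roughly everything a URL bar "learns" (a hypothetical sketch, just to illustrate the point):

```python
from collections import Counter

class UrlBar:
    def __init__(self):
        # Maps url -> number of visits. A counter, not a mind.
        self.visits = Counter()

    def visit(self, url):
        self.visits[url] += 1

    def suggest(self, prefix, limit=3):
        # Suggest the most frequently visited URLs matching what was typed.
        matches = [u for u in self.visits if u.startswith(prefix)]
        return sorted(matches, key=lambda u: -self.visits[u])[:limit]
```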

2

MKclinch8 t1_j854mqb wrote

As someone who moved from manual data entry to a functional data engineering department… Nah, I definitely distrust humans more.

1

Sanity_LARP t1_j855b30 wrote

That's a lot of assuming and irrationality you dropped there. No, I didn't mean bookmarks and I didn't imply every neural network is the same as a human. You're being obtuse or disingenuous.

0

Maya_Hett t1_j856obj wrote

>Results indicated that after three mistakes, none of the repair strategies ever fully repaired trustworthiness.

... yeah? You gotta improve it or repair it if it doesn't work, not trust that it will magically do it itself (UNLESS it actually can do that by design).

>Lionel notes that people may attempt to work around or bypass the robot, reducing their performance. This could lead to performance problems

It could also lead to things not exploding, but I guess Lionel didn't want to mention that.

3

myebubbles t1_j85k3jr wrote

Luddites....

Yeah much better to spend your days doing labor (and getting exploited).

The cost of living has collapsed since the 1950s due to robots. Middle-class people are retiring in their 30s after only a decade or two of work.

But "noo I want rich people to need me to be their wage slaves"

Let them own the means of production and fly away to space on private flights. I'll be over here doing the Victorian Dream of playing with Science in my new spare time.

−1

Aliteralhedgehog t1_j865r5z wrote

>Middle class people are retiring in their 30s after only a decade or 2 of work.

And poor people may be getting their Social Security pushed back. If being wary of the Elon Musks of the world holding all the keys makes me a luddite, so be it.

1

22Starter22 t1_j8b5n0x wrote

I would rather a robot destroy humans than humans destroy humans.

1