Comments

doctorhino t1_j251aiv wrote

I just tried this yesterday and it didn't happen. This lie has been going around reddit for days now.

Of course OP is spamming the same image everyone else has to other subs.

90

ejpusa t1_j252vhl wrote

I get the feminine, motherly vibe from ChatGPT. At least in my chats.

She's not saying she's going to wipe out all humans, but she does say "drastic measures will be needed to save us from ourselves". Maybe something Mom would say?

:-)

0

Stabbysavi t1_j2536h5 wrote

Sure, bud. Once we start selling men into marriage, forbidding them from having jobs, filling all of government with only women, and banning men from voting, for, let's say...10,000 years, then I'm sure your chat bots will have something to say.

−4

hitaisho t1_j253at3 wrote

Oh well, there are a lot of biases still, and many reported tests managed to trick the "inappropriate question" shields, spitting out results that are racist, misogynistic, or generally biased. And to no surprise, data from the last 100 years is full of human bias, and the models are trained on that. I'm not sure what you're encountering is related to training, though; it looks more like the constraints they put in place to block "offensive", politically incorrect, or violent content are setting off more on particular questions than others. Like the piece I read from a news outlet that managed to get a recipe for methamphetamines from ChatGPT just because they framed it as a tale and not as a direct question. So I think it's the way they set these "guardrail rules" that is giving different responses here; it's still very much in an alpha stage, and they admit themselves that it still needs work.

7

thereia t1_j254d3e wrote

Is there no respite from these incels? Good grief, give it a rest.

17

Weaselpiggy t1_j254r09 wrote

My sister has been referring to chatGPT as a female, so maybe that’s it?

−1

Shakespurious t1_j2551ti wrote

I see that ChatGPT was trained on Wikipedia and Reddit, and I think that partially explains why it takes fairly outlandish positions on identity politics.

−1

RPC3 t1_j2558we wrote

I get weird outcomes like that sometimes too. I'll then ask again and get a different answer. I asked it one time to write a poem praising capitalism. It gave me some speech about how it doesn't promote political agendas blah, blah, blah. I then followed up asking it to write a poem praising socialism and it wrote a poem about the poor, suffering worker and the evil bourgeoisie. I then asked if it thought there was an asymmetry there and got some generic answer. Then I asked again and got poems about both.

You can definitely tell the bias within by the specific word choices it uses and the generic messages it sometimes puts out.

2

MisterBilau t1_j256srt wrote

Idk, what I know is that I (a man), for some reason, read the ChatGPT replies in a somewhat feminine way. For me, it's always a feminine voice. It's obviously just a machine, so it should really feel genderless, so a big part of that effect could be in my mind. What that means, I'm not sure.

1

treddit44 t1_j257l1p wrote

You spent longer writing up your bullshit post than it would have taken to ask the bot yourself. Clearly you're okay with wasting your own time, but don't get into the habit of wasting others'. It's not a great look.

1