
indigoHatter t1_j2ysq1c wrote

Okay, again I am grossly oversimplifying the concept, but if it was trained to predict which word should come next in a response like that, then presumably it learned about nootropics at some point by absorbing a few forums and articles on the topic. So.......

Bro: "Hey, make my brain better"

GPT: "K, check out these nootropics"
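To make the "predict the next word" idea concrete: here's a toy bigram sketch. It is nothing like GPT's actual transformer, and the tiny corpus below is made up for illustration, but it shows the core point: a next-word predictor just regurgitates whatever patterns were in its training data, bro-advice included.

```python
from collections import defaultdict, Counter

# Toy "training data" (invented for illustration): the kind of forum text
# the comment imagines the model absorbing.
corpus = (
    "make my brain better with nootropics . "
    "nootropics make my brain better . "
    "check out these nootropics for brain health ."
).split()

# Count which word follows each word in the training text.
follow = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follow[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return follow[word].most_common(1)[0][0]

# The "model" has no medical knowledge; it only echoes its training data.
print(predict_next("these"))  # -> nootropics
```

A real LLM replaces the count table with a neural network and predicts over whole contexts instead of single words, but the failure mode the thread describes, emulating whatever is in the training data, is the same.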

I made edits to my initial post in hopes that it makes better sense now. You're correct that my phrasing wasn't great initially and left room for others to misunderstand what I was trying to say.

1

monsieurpooh t1_j2z3bt5 wrote

Thanks. I find your edited version hard to understand and still a little wrong, but I won't split hairs over it. We 100% agree on the main point though: This algorithm is prone to emulating whatever stuff is in the training data, including bro-medical-advice.

2

indigoHatter t1_j2zeaxf wrote

Yeah, I'm not trying very hard to be precise right now. Glad you think it's better though. ✌️ Have a great day, my dude!

2