Theemuts t1_j6h1alh wrote
Reply to comment by WhuddaWhat in ChatGPT is on its way to becoming a virtual doctor, lawyer, and business analyst. Here's a list of advanced exams the AI bot has passed so far. by rationalworld
Yeah, don't do that.
> ChatGPT (Chat Generative Pre-trained Transformer)[1] is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning)[2] with both supervised and reinforcement learning techniques.
> Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.[51]
WhuddaWhat t1_j6h2pa9 wrote
>the AI advised a simulated patient to commit suicide
holy shit. Can you imagine being absolutely despondently suicidal, reaching out for help, and being told by what FEELS like an all-knowing computer, but is really just the most statistically likely response to the series of things you've said, that on reflection, it really would be best to go ahead and end it.
That would probably be enough to deepen the crisis for anybody who is truly battling to regain a feeling of control over their life.
DasKapitalist t1_j6hwp55 wrote
If you're taking life-altering advice from a probabilistic language algorithm, you're pretty well doomed to begin with.