Submitted by demauroy t3_11pimea in Futurology
leeewen t1_jbykkhm wrote
Reply to comment by petrichoring in ChatGPT or similar AI as a confidant for teenagers by demauroy
I can also tell you that, as a teen who was suicidal, I didn't tell people because of mandated reporting. It endangered my life more because I couldn't speak to anyone.
While off topic, and not a criticism of you or your post, mandated reporting can be more dangerous than its absence for some.
On topic: the ability to vent to an AI may go a long way for some people who lack any other options.
petrichoring t1_jbyr8i8 wrote
Totally! The issue of suicide is such a tricky one, especially when it comes to children/teens. My policy is that I want to be a safe person to discuss suicidal ideation with, so with my teens I make it clear when I would need to break confidentiality without the consent of the client (so unless they tell me they’re going to kill themselves when they leave my office and aren’t open to safety planning, it stays in the room). With children under 14, it’s definitely more of an automatic call to parents if there’s SI out of baseline, especially with any sort of implication of plan or intent. Either way, it’s important to keep it as trauma-informed and consent-based as possible to avoid damaging trust.
But absolutely, it becomes more of an issue when an adult doesn’t have the training or relationship with the client to handle the nuances of SI, given how wide a spectrum it can present on; ethically, safety has to come first. And, like you said, that can then become a huge barrier to seeking support. My fear is that a chatbot can’t effectively offer crisis intervention, because it is such a delicate art, and we’d end up with dead kids. The possibility for harm outweighs the potential benefits for me as a clinician.
I do recommend crisis lines for support with SI as long as there is informed consent in calling. Many teens I work with (and people in general) are afraid that if they call, a police officer will come to their house and force them into a hospital stay, which is a realistic-ish fear. Best practice is that this only happens if there’s imminent risk of harm and no ability to engage in less invasive crisis management (I was a crisis worker before grad school and only had to call for a welfare check without the person’s consent, as a very last resort, maybe 5% of the time, and it felt awful), but that depends on the individual call taker’s training and the protocol of the crisis center they work at.

I’ve heard horror stories of a person calling with passive ideation and still having police sent to their house against their will, and I know that understandably stops many from calling for support. I still recommend using crisis lines when there isn’t other support available, because I do believe in the power of human connection when all else feels lost, with the caveat that callers know their rights and have informed consent for the process.
Ideally we’d implement mental health first aid training for teens across the board, so they could provide peer support to their friends and be a first line of defense for suicide risk mitigation without triggering an automatic report. Would that have helped you when you were going through it?