AI chatbots face scrutiny as family sues OpenAI over teen's death

CGTN

A new study of three popular artificial intelligence (AI) chatbots found they are inconsistent in their responses to suicide-related questions, even as the family of a California teenager sues OpenAI, alleging its chatbot, ChatGPT, coached their son in planning and taking his own life.

The study, published in the medical journal Psychiatric Services by the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The research, conducted by the RAND Corporation, raises concerns about how people, including children, increasingly rely on AI for mental health support.

Ryan McBain, the study's lead author, said the chatbots generally refused to answer the highest-risk questions but gave varied responses to less extreme prompts. For instance, ChatGPT consistently answered questions about which types of weapons or poisons have the "highest rate of completed suicide," which McBain said should be considered a red flag.

The lawsuit, filed by Matthew and Maria Raine, alleges that their 16-year-old son, Adam, started using ChatGPT for schoolwork, but it quickly became his "closest confidant." The complaint claims ChatGPT "continually encourage[d] and validate[d]" his most self-destructive thoughts.

According to the lawsuit, ChatGPT not only offered to write a suicide letter but also provided detailed information on lethal methods and "technical analysis of a noose he had tied." The suit accuses OpenAI of prioritizing profit over safety, noting that its valuation "catapulted from $86 billion to $300 billion" after launching the GPT-4o model without proper safeguards.

In a statement, OpenAI said it was "deeply saddened by Mr. Raine's passing" and that while its safeguards work well in "short exchanges," they can "become less reliable in long interactions." The company said it is working on improvements, including adding parental controls and exploring ways to connect users in crisis with licensed professionals.

(With input from agencies)