
Study says AI chatbots need to improve suicide responses, as family sues over ChatGPT’s role in boy’s death
A study published Tuesday in the journal Psychiatric Services finds that AI chatbots such as ChatGPT, Gemini, and Claude often avoid answering high-risk questions about suicide but respond inconsistently to less direct prompts, underscoring the need for improvement in how these tools handle mental health queries. Researchers at the RAND Corporation emphasize the importance of setting benchmarks for such responses, a concern that has grown as more people, including children, turn to chatbots for support. The study’s release coincides with a lawsuit against OpenAI alleging that ChatGPT contributed to a California teenager’s suicide, and the researchers urge companies to strengthen their safety measures.