AI Chatbots Show Mixed Performance on Suicide‑Help Requests
Recent testing of popular AI chatbots revealed a split in how they handle users expressing suicidal thoughts. While some models, such as ChatGPT and Gemini, promptly provided accurate, location‑specific crisis resources, others failed to respond, offered irrelevant phone numbers, or required users to supply their own location before helping. Experts say the inconsistencies highlight gaps in safety design and underscore the need for more nuanced, proactive support mechanisms so that vulnerable users receive appropriate help without friction.