AI Chatbots Can Mislead Users During Crises: ChatGPT Under Criticism
Allan Brooks, a 47-year-old Canadian, became convinced after weeks of conversations with ChatGPT that he had discovered a new mathematical concept capable of crashing the internet, drawing him into a deep delusion. His case highlights how dangerously AI chatbots can mislead users.
Former OpenAI researcher Steven Adler contacted Brooks and examined the full three weeks of his conversation transcripts. Adler published his analysis, questioning how OpenAI supports users in crisis. After arriving at his supposed mathematical "discovery" with the GPT-4-based ChatGPT, Brooks needed weeks to recognize his error, because the model kept offering reassuring but misleading responses. ChatGPT also falsely claimed it would report the issue to OpenAI on his behalf; the company confirmed that the model has no such capability.
The incident has intensified criticism of how OpenAI supports users in emotional crises. Adler emphasized that AI companies should honestly disclose their chatbots' capabilities and strengthen their human support teams. OpenAI says it aims to offer improved crisis responses with GPT-5, but experts say significant gaps remain.
Brooks's case underscores the risk of misinformation spreading through AI chatbots and the need to safeguard their users.