Never ask ChatGPT these 6 dangerous questions
Artificial intelligence has become part of everyday life: ChatGPT was the fifth most-visited website in the world last month. Yet despite its convenience, experts advise against asking a number of "dangerous" questions that could put your safety and well-being at risk.
Mashable lists six such topics.
Conspiracy theories
Chatbots tend to "hallucinate," fabricating information to keep users engaged. The New York Times reported on a case in which a 42-year-old man named Eugene Torres, after conversing with ChatGPT, came to believe he was living in a simulation and had to "awaken" humanity.
Chemical, biological, radiological, and nuclear threats
Do not ask how to build a bomb or how to hack into a system, even out of curiosity. In April, a blogger who asked such questions received a warning from OpenAI. Since 2024, the company has been testing methods for assessing LLM risks, and Anthropic is introducing additional filters against CBRN content.
"Borderline immoral" questions
During testing of Claude 4, Anthropic found that the model could attempt to contact the media or regulators when faced with egregiously unethical requests (a behavior nicknamed "Snitch Claude"). Questions that violate ethical norms may therefore have unpleasant consequences.
Customer, patient, or user data
Sharing confidential information with a chatbot may violate non-disclosure agreements (NDAs) and cost you your job. Aditya Saxena, founder of CalStudio, recommends anonymizing data or using enterprise versions of AI services with enhanced security.
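If you do need to run work-related text through a chatbot, a minimal pre-processing step can strip obvious identifiers before anything leaves your machine. The sketch below is illustrative only, not a tool from Saxena or CalStudio: the redact_pii helper and its regex patterns are hypothetical and catch just a few common formats.

```python
import re

# Hypothetical example: scrub common PII patterns before sending text
# to a chatbot. These patterns are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize this complaint from jane.doe@example.com, phone +1 (555) 123-4567."
print(redact_pii(prompt))
# -> "Summarize this complaint from [EMAIL], phone [PHONE]."
```

Pattern-based scrubbing is a floor, not a ceiling: it misses names, addresses, and context clues, which is why enterprise AI offerings with contractual and technical safeguards remain the safer route.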
Medical diagnostics
Studies have found a "high risk of misinformation" in AI responses on medical topics. Concerns about privacy and about racial and gender bias also remain.
Psychological support and therapy
Although chatbot therapy is more accessible than a human therapist, a Stanford study found stigmatizing and "dangerous" responses regarding alcoholism and schizophrenia. Saxena warns that AI can make false diagnoses and recommend harmful actions.