7 types of data you should never share with ChatGPT
AI assistants can find information in seconds, but they also create new privacy risks. Experts warn that there are seven categories of data you should never disclose to ChatGPT or other chatbots if you want to avoid falling victim to data leaks or fraudsters.
TSN reports.
Personal data
Never share your passport or identification numbers, home address, or details about your family and place of work. This information makes it easy for attackers to track you.
Passwords
Passwords and access codes for email, social networks, or any other accounts are among the most valuable trophies for cybercriminals. A single leak can open the door to all of your services.
Financial information
Bank card and account details are doubly dangerous: they can be intercepted in a data leak, or they may even inadvertently surface in another user's conversation.
Working materials
Internal company documents, reports, and customer databases must remain within the corporate environment. A notable case: a Samsung engineer accidentally "leaked" source code via ChatGPT, forcing the company to ban the service.
Medical data
ChatGPT is not a doctor, and its advice is not a medical opinion. Incorrect advice can be harmful to your health, and no one will be held responsible for it.
Explicit content
Most chatbots automatically filter out explicit material, so sending it risks getting your account blocked, and you also lose control over where that content ends up.
Personal experiences
AI is not a substitute for a psychologist, nor is it a personal diary: everything you share can be stored and may later become accessible to others. It is safer to use temporary chats and to delete your chat history.
As a reminder, we previously reported that MIT Media Lab research raised concerns: scientists found that frequent use of ChatGPT can reduce brain activity. Compared with people who searched for information on their own, participants who used the chatbot showed significantly less cognitive strain while performing tasks.