It's not your friend — things you can't tell ChatGPT about

What not to enter in ChatGPT — 5 things you shouldn't tell a chatbot
The ChatGPT request page on a laptop screen. Photo: Unsplash

OpenAI's chatbot has become an everyday tool for millions of users: by some estimates, more than 100 million people put over a billion questions to ChatGPT every day. Experts warn, however, that the popular chatbot is turning into a privacy black hole into which highly personal data disappears.

Forbes lists the things you should never tell ChatGPT.


Data entered into a public chatbot can never be considered fully secure: OpenAI explicitly states that submitted information may be used to train its models and reviewed by humans. Anything written in the chat is therefore potentially public.

Illegal Or Unethical Requests

Most chatbots have built-in safeguards that block requests for illegal actions. Asking, say, for instructions on committing a crime can result not only in the bot refusing but also in legal trouble: many jurisdictions have strict rules on the use of AI for fraud, manipulation, or the distribution of prohibited content.

Logins And Passwords

With the spread of agentic AI, more and more services may ask for credentials to connect to third-party platforms. But by handing a password to a public bot, the user loses control over how and where it is stored. There have already been cases of private data accidentally surfacing in other people's responses, a genuine security nightmare.

Financial Information

Bank account and card numbers should only be entered into properly secured e-commerce or banking systems. Chatbots offer neither encryption nor automatic deletion, so disclosing such data raises the risk of fraud, identity theft, phishing, and ransomware attacks.

Confidential Information

Professional and business confidentiality are not empty words. Doctors, lawyers, accountants, and even rank-and-file employees are responsible for safeguarding the information entrusted to them. Leaking corporate documents into an open AI can violate employment agreements and trade secret laws and cause reputational damage, as happened to Samsung employees in 2023.

Medical Information

The temptation to turn to ChatGPT for a quick diagnosis or advice is strong, especially given its memory features and summaries of previous conversations. But the user has no control over what happens to the symptoms or medical records they enter, a risk that is especially acute for clinics, which face fines and lawsuits for breaching patient confidentiality.

Anything that enters the open digital space may sooner or later become public. Chatbots and AI agents only amplify this risk: they process huge volumes of data and store it in unknown locations for unknown periods. The rule is therefore simple: never enter anything into ChatGPT that you would not want to see in the public domain.

As a reminder, OpenAI has significantly upgraded ChatGPT's memory: the chatbot can now analyse previous dialogues and give context-aware answers.

We also wrote that every time you add "Please" or "Thank you" to a prompt, ChatGPT's servers consume a little more electricity. OpenAI CEO Sam Altman said politeness has already cost the company tens of millions of dollars, but that it is money well spent.
