Most people use ChatGPT wrong — here's how to fix it
The campaign around the release of GPT-5 promised a "tipping point," but the user response was muted, marked by confusion and complaints about the disappearance of favorite models. Sam Altman later inadvertently showed why OpenAI's expectations diverged from reality: the vast majority of people never use the reasoning models designed to give more accurate answers.
FastCompany reports.
Why do people use ChatGPT incorrectly?
OpenAI unveiled GPT-5 with fanfare, hosting an online presentation of its capabilities, but surprise and discontent dominated social media after several popular models were removed. The explanation came in a post by Sam Altman on X: commenting on reduced query limits for Plus subscribers (the plan costs $20 per month), he revealed that before GPT-5's release only about 1% of free users and 7% of paying ones ever turned to reasoning models like o3.
the percentage of users using reasoning models each day is significantly increasing; for example, for free users we went from <1% to 7%, and for plus users from 7% to 24%. i expect use of reasoning to greatly increase over time, so rate limit increases are important.
— Sam Altman (@sama) August 10, 2025
Reasoning models first "think through" a problem before responding. Ignoring them is like buying a car and never shifting out of first or second gear, then wondering why the ride is rough, or like taking a quiz and blurting out the first answer that comes to mind. In practice, most users value speed and convenience over quality, which is why the disappearance of GPT-4o triggered a flurry of complaints (the model was later restored for paid users after persistent community demands). But when you ask a chatbot for help, what matters is a good answer, not a lightning-fast one.
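The quiz analogy can be caricatured in a few lines of code. This is a toy sketch, not how actual models work: the "fast" path blurts out the first plausible candidate, while the "reasoning" path spends extra steps verifying candidates before committing (all names and the sample task here are invented for illustration).

```python
def fast_answer(candidates):
    """Blurt out the first candidate, like answering without pausing to think."""
    return candidates[0]

def reasoning_answer(candidates, verify):
    """Spend extra compute: check each candidate against the problem first."""
    for c in candidates:
        if verify(c):
            return c
    return candidates[0]  # fall back to a guess if nothing verifies

# Hypothetical task: which of these numbers divides 91 evenly?
candidates = [9, 11, 13]
verify = lambda n: 91 % n == 0

print(fast_answer(candidates))            # 9  (instant, but wrong)
print(reasoning_answer(candidates, verify))  # 13 (slower, but checked)
```

The verified path does strictly more work per query, which is the whole tradeoff: slower and more expensive, but more likely to be right.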
By design, reasoning models spend more computing resources on planning, testing, and iterating, which is why they run slower and cost more. Vendors therefore default to showing the "fast" versions and tuck the alternatives into a drop-down menu. Adding to the confusion is OpenAI's habit of giving its models obscure names: GPT-5 was supposed to fix this but only partly succeeded, since users still struggle to tell the "thinking" version from the simplified one, so the company is tweaking the labels yet again.
For some users, waiting "a minute instead of a second" is no problem: they run a query and turn to other things. For many, though, it is too long. Even after the release of GPT-5, which made the difference between the flagship GPT-5 and GPT-5 Thinking (pitched as "getting more detailed answers") more visible, only about one in four paying users switches "thinking" on.
Altman's figures help answer a broader question about AI's value to a mass audience: why only about a third of people who have tried chatbots call them "very useful" (half the rate among AI experts), while one in five calls them "not useful" (twice the rate among experts). The answer is simple: most people are using the tool incorrectly, handing it complex, multi-step tasks without giving it a "pause for thought."
Read also:
ChatGPT's forbidden topics you should never ask about
ChatGPT vs Google — AI gets 2.5B daily queries