Can you tell if an image is AI-made? Here's how

Google search engine on a smartphone screen. Photo: Unsplash

The more sophisticated generative models become, the harder it is to tell a "real" image from an AI-generated one. Google is developing both content-generation platforms (Gemini, Imagen, Veo, Lyria) and tools to detect their output.

SlashGear reports on the options available today.

How to determine that an image is not genuine

On June 4, 2025, the company announced testing of the SynthID portal, a service that determines whether an image was generated with Google's tools or those of partners such as Nvidia. It relies on digital watermarks, introduced in 2023, that are embedded directly into the pixels: invisible to the human eye and with negligible impact on quality. The user uploads a file, and the system reports whether the marker is present. So far, only a limited number of participants from a waiting list can try the portal, so the details are based on Google's official statements.

SynthID's biggest limitation is that it depends on the goodwill of content creators: the watermark appears only if the image was created by a service that has integrated the technology. For some generators, refusing to watermark at all may even be a commercial advantage. So, until SynthID becomes mainstream, it is worth relying on Google's existing tools.

One of these is Fact Check Explorer. By uploading a file or pasting a URL, you can find out whether it has already been reviewed by verified fact-checkers. During news surges, when old or entirely unrelated photos resurface under new headlines, this helps quickly separate truth from fiction. For example, the story about the "father in a down jacket" was quickly debunked thanks to the fact-check database.
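
Fact Check Explorer itself is a web tool, but Google also exposes a related Fact Check Tools API for searching existing fact-checks of text claims (it does not cover the image-upload side of the Explorer). Below is a minimal sketch of querying it; the API key placeholder and the example query string are illustrative assumptions.

```python
# Sketch: search Google's Fact Check Tools API (claims:search endpoint)
# for published fact-checks matching a text query.
import requests

API_KEY = "YOUR_API_KEY"  # assumption: your own key with the Fact Check Tools API enabled
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, language: str = "en") -> list[dict]:
    """Return fact-check claims matching the query text."""
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "languageCode": language, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])

# Illustrative query based on the example mentioned above.
for claim in search_fact_checks("father in a down jacket photo"):
    for review in claim.get("claimReview", []):
        print(review.get("publisher", {}).get("name"),
              review.get("textualRating"),
              review.get("url"))
```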

Detecting an AI-generated image. Photo: SlashGear

Another option is About This Image. In Google Images or Google Lens, tap the three dots next to the image; in Circle to Search, circle it with your finger. The service shows when the photo first appeared online and which websites it has circulated on. If the original has been around for more than five years, the chance that you are looking at AI-generated work is minimal.

Another source of data is metadata in Google Photos. Last year, Google added a visible "AI Info" section to the photo details card: after edits with Magic Eraser, Magic Editor, Zoom Enhance, or Reimagine on the Pixel 9, it shows the label "Edited using generative AI". You can view it via the three-dot menu. However, when an image is forwarded through messengers and social networks the label is usually lost along with the stripped metadata, and anyone who wants to hide an image's origin can still clear the metadata manually.
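
Labels of this kind generally draw on the IPTC photo-metadata standard, whose "digital source type" values flag AI-generated or AI-edited content. The rough sketch below simply scans a file's raw bytes for those markers; it is a crude check under that assumption (the file name is hypothetical), and a miss proves nothing, since metadata is routinely stripped in transit.

```python
# Sketch: look for IPTC digital-source-type markers that commonly flag
# AI-generated or AI-edited images in a file's embedded XMP metadata.
from pathlib import Path

# IPTC NewsCodes values used for AI-generated and AI-composited media.
AI_SOURCE_MARKERS = (
    b"trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",
)

def looks_ai_labelled(path: str) -> bool:
    """Return True if the file's embedded metadata mentions an AI source type."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in AI_SOURCE_MARKERS)

if __name__ == "__main__":
    # assumption: photo.jpg is a local file you want to inspect
    print(looks_ai_labelled("photo.jpg"))
```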

As a reminder, scientists at the MIT Media Lab found that using ChatGPT to write texts reduces brain activity. Compared with participants who searched for information on their own, those who relied on the chatbot showed poorer cognitive performance.

We also wrote that OpenAI is strengthening its position in the field of AI, preparing for the release of GPT-5 this summer. The company is simultaneously reviewing its relationship with Microsoft, strengthening its presence in the field of defence technologies, and winning a Pentagon tender for the first time.