Microsoft tests human ability to distinguish real and AI-created images

Last update: 05/08/2025
Author: Isaac
  • Microsoft launches a global test to differentiate real images from those generated by AI.
  • 12,500 participants analyzed 287,000 images from a variety of sources.
  • The success rate barely reaches 62%-63%, bordering on chance.
  • Machines far outperform people in this visual challenge.


The differences between the real and the artificial seem to blur day by day in today's digital environment. This phenomenon has motivated Microsoft to organize a global challenge, inviting thousands of Internet users to test their ability to distinguish photographs taken by real cameras from creations generated by artificial intelligence.

The experiment developed by Microsoft, accessible through the game 'Real or Not Quiz', has become one of the biggest online visual challenges of the moment. The system shows the user a series of images, some authentic and others artificially produced, and asks them to decide in each case whether the photograph is real or synthetic. The goal: to explore how much we can trust our own visual judgment in the age of artificial intelligence.

A global challenge that reveals our limitations


The mechanics of the test are simple: the participant views a selection of images and must classify each one as real or AI-generated. In total, the database managed by Microsoft gathers about 287,000 photographs, mixing authentic captures with content created by the most advanced visual generation engines (such as DALL-E 3, Midjourney v6, Stable Diffusion, Amazon Titan, and GAN variants).

The surprising finding is how poorly users perform. More than 12,500 people have completed the test, and the average success rate is only around 62%-63%. This figure shows that, as of today, correctly identifying the source of an image is not much easier than guessing at random. The quality of artificial photos has reached a level that challenges even the most experienced users.
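To see why 62% counts as "bordering on chance", it helps to compare that rate against the 50% baseline of pure guessing. The sketch below is illustrative only (the sample sizes are hypothetical, not Microsoft's exact counts): it uses a standard normal-approximation z-score for a proportion, showing that over a single 15-image round a 62% score is statistically indistinguishable from coin flipping, while across many thousands of judgments the small edge over chance becomes real.

```python
import math

def z_vs_chance(p_hat: float, n: int, p0: float = 0.5) -> float:
    """Normal-approximation z-score for an observed success proportion
    p_hat over n trials, tested against a chance baseline p0."""
    se = math.sqrt(p0 * (1.0 - p0) / n)  # standard error under the null
    return (p_hat - p0) / se

# One 15-image round at the reported ~62% accuracy:
print(round(z_vs_chance(0.62, 15), 2))    # ~0.93 -> indistinguishable from guessing
# The same rate over a hypothetical 1,000 judgments:
print(round(z_vs_chance(0.62, 1000), 2))  # ~7.59 -> a small but genuine edge
```

In other words, the aggregate data shows humans are slightly better than random, but no individual round of the quiz would let you tell the difference.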


The results are automatically collected and analyzed, allowing each participant to compare their own performance with that of other users. Simply visit https://www.realornotquiz.com/ to face 15 images at a time and discover how imprecise our gaze can be.

Why is it so hard for us to distinguish them?

The explanation goes far beyond simple generation errors. AI models have perfected their ability to imitate, with great precision, the textures, lighting, and compositions typical of traditional photography. In fact, a significant portion of today's correct answers come from users who have learned to recognize recurring visual patterns, such as the type of blur, color palette, or sharpness characteristic of certain image generators.

However, the task becomes harder when the generated image is based on a slightly modified authentic photograph or when the "imperfections" typical of real cameras are simulated. In these cases, the success rate drops to 21-23%, which reflects our limited ability to unmask digital artifice.

Some real-life photographs shown in the test are more misleading than the AI creations themselves. Images of military environments, unusual urban scenes, or extreme lighting confuse participants the most; they often rate them as fake because of their unusual characteristics.

Humans versus machines: the advantage of algorithms

In parallel with the user test, Microsoft submitted the same battery of images to its automatic detector of AI-generated content. The contrast is overwhelming: while human participants rarely exceed 63% accuracy, the detection system achieves rates above 95% in every image category.

This reveals not only the gap between technological evolution and our perception, but also the growing importance of automatic visual verification tools in a context where disinformation based on synthetic images is spreading rapidly. The problem is that these solutions are not yet accessible to everyone, nor are they present on most digital platforms.


Therefore, Microsoft insists on the need to introduce watermarks and robust verification systems so that users can identify the real origin of an image and not rely solely on their visual intuition.

What images deceive us the most?

The analysis reveals an interesting pattern: people tend to score better on human portraits, possibly because our brains are trained to notice details and errors in faces. Identification fails far more often with landscapes, objects, and everyday scenes, where AI can camouflage itself more easily. Furthermore, low-resolution images or those saved with generic filenames tend to go unnoticed as potential fakes.

Microsoft's own researchers have found that users who regularly work with generative tools develop a certain "intuition" for recognizing patterns and textures. In general, though, the technical quality of the new models makes errors minimal and almost impossible to spot without technological aid.

Any Internet user can take on the challenge and measure their visual ability by accessing Real or Not Quiz. Each game presents 15 random images from the massive database, so the results vary with every round. The goal is to see whether we can truly trust our eyes in the face of the unstoppable advance of artificial intelligence.

The experience is as educational as it is revealing: the line between reality and artificiality is increasingly blurred, and the need for visual literacy grows alongside the capabilities of generative systems. Only with new tools and greater awareness can we face the challenges of this new digital age, in which even a simple photo can shake our confidence.