AI‑Generated Images and Videos Evoke Uncanny Valley Reaction, Experts Explain

Key Points
- AI‑generated images and videos flood social media, often appearing realistic at first glance.
- Viewers frequently notice subtle flaws—lighting, skin texture, or anatomical errors—that create unease.
- The uncanny valley effect explains why near‑human creations trigger discomfort when imperfect.
- Dr. Steph Lay notes the evolutionary basis for our sensitivity to small irregularities.
- Prof. Christoph Bartneck highlights rising expectations as AI media looks more human‑like.
- Transparent labeling of AI content improves audience acceptance; unmarked content heightens the uncanny feeling.
- Repeated exposure may sharpen user discernment, but the underlying discomfort remains.
- Experts advise stepping away from the screen when the uncanny feeling becomes disturbing.

AI‑generated images and videos are increasingly populating social media feeds, often looking realistic at first glance but leaving viewers with a subtle sense of unease. Psychologists and human‑robot interaction researchers attribute this discomfort to the uncanny valley effect, in which near‑human creations trigger disquiet when they are not perfectly lifelike. Experts such as Dr. Steph Lay and Prof. Christoph Bartneck explain that our brains are wired to spot small irregularities, a skill that once helped detect danger. Even as the technology improves, the feeling of something being "off" persists, prompting users to stay vigilant and occasionally step away from the screen.
The Rise of AI‑Generated Media
Scrolling through social platforms today, users encounter a flood of AI‑generated images and videos. These pieces often appear realistic at a glance, yet prolonged viewing reveals oddities—lighting that doesn’t quite match, skin that seems overly smooth, or anatomical errors such as extra fingers. The content is becoming harder to distinguish from genuine media, and the sheer volume means that many people encounter it without any labeling or context.
Psychological Roots of the Uncanny Valley
Researchers link the unsettling feeling to the uncanny valley, a concept originally used to explain why almost‑human robots can be creepy. Dr. Steph Lay, a psychologist and horror writer, notes that the effect describes how people respond positively to human‑like entities up to a point, after which small imperfections trigger disquiet or even disgust. Evolutionary theories suggest that this sensitivity helped early humans spot signs of disease or danger, sharpening our ability to differentiate the real from the false.
Expert Insights on AI Content
Prof. Christoph Bartneck, a specialist in human‑robot interaction, emphasizes that expectations rise as an object becomes more human‑like. When a robot or AI‑generated character looks almost human, viewers apply human standards and become highly attuned to minor glitches in facial expressions, gestures, or posture. Dr. Lay adds that AI‑generated media often contains subtle errors that, while not immediately obvious, tip viewers into that uneasy valley. She argues that this sensitivity is unlikely to fade, even as generation algorithms improve.
Impact on Audiences and Behavior
People care about the authenticity of the content they consume, especially when the purpose behind its generation is unclear. Creators who transparently label AI‑made work, such as the "Into The Fog" YouTube channel, tend to find audiences more accepting. In contrast, unmarked AI content pushed by recommendation algorithms can catch viewers off guard, amplifying the uncanny response. The result can be a subtle echo chamber in which users repeatedly encounter unsettling media, reinforcing their unease.
Adapting to the New Visual Landscape
While the uncanny valley may persist, researchers believe that repeated exposure will make audiences more discerning. Dr. Lay suggests that when something looks too perfect, it is likely artificial, and recommends stepping away from the screen if the feeling becomes disturbing. This ongoing tension between advanced AI generation and human perception underscores the need for clearer labeling and user awareness as the technology continues to evolve.