Americans Struggle to Identify AI-Generated Content on Social Media, Survey Finds

CNET

Key Points

  • 94% of U.S. adults using social media say they encounter AI‑created or edited content.
  • Only 44% feel confident they can distinguish real images and videos from AI‑generated ones.
  • 72% take some action to verify suspicious content; 25% do nothing, especially older users.
  • 60% rely on close visual inspection; 30% check for labels; 25% search the content elsewhere.
  • 51% want better labeling of AI‑generated media; support strongest among Millennials and Gen Z.
  • 21% favor a complete ban on AI content on social platforms; 36% prefer strict regulation.
  • Only 11% find AI‑generated content useful, informative, or entertaining.
  • Platforms like Pinterest have introduced AI‑content filters; others are still testing solutions.

A recent CNET‑commissioned survey of U.S. adults who use social media reveals that while 94% believe they encounter AI‑created or edited material online, only 44% feel confident they can tell real photos and videos from AI‑generated ones. Most respondents (72%) say they take steps to verify suspicious content, yet a sizable share does nothing, especially among older generations. Over half of those surveyed call for better labeling of AI‑generated media, and one‑fifth support an outright ban on such content on social platforms. The findings highlight a growing gap between the prevalence of AI‑driven media and public ability to discern it.

Survey Overview

A CNET‑commissioned study that surveyed U.S. adults who use social media found that an overwhelming majority—94%—believe they encounter content that was created or altered by artificial intelligence. Despite this high exposure, confidence in distinguishing authentic images and videos from AI‑generated ones is low, with only 44% of respondents saying they feel sure they can spot the difference.

Public Confidence Across Generations

Confidence varies by age group. Older users are the least certain: 40% of Boomers and 28% of Gen X respondents feel able to identify AI‑generated media. Younger users, especially Gen Z, report higher confidence, though still short of a majority.

Verification Practices

When faced with potentially AI‑generated content, 72% of respondents report taking some form of action to verify its authenticity. The most common method—used by 60% of respondents—is close visual inspection for cues or artifacts. Other tactics include checking for labels or disclosures (30%) and searching for the content elsewhere online, such as through reverse‑image searches (25%). Only 5% have used a dedicated deepfake detection tool.

However, a notable portion of respondents—25%—do nothing to verify content, with inaction highest among Boomers (36%) and Gen X (29%).

Desire for Better Labeling

Just over half of the surveyed adults (51%) say the internet needs better labeling of AI‑generated and edited content. Support for stronger labeling is strongest among Millennials (56%) and Gen Z (55%). The rationale is that clear disclosures could help users make more informed decisions about what they see.

Opinions on Regulation and Bans

When asked about policy approaches, 21% of respondents believe AI‑generated content should be prohibited on social media altogether, with the highest support among Gen Z (25%). Conversely, 36% favor allowing AI content but with strict regulation. Only a small minority (11%) find AI‑generated media useful, informative, or entertaining.

Current Platform Responses

Major social platforms currently permit AI‑generated content as long as it does not violate existing content guidelines. Some, like Pinterest, have introduced filters to limit AI content in users’ feeds, while others, such as TikTok, are still testing similar tools. Users can also mute or filter AI‑driven features on devices and applications, including Meta AI on Instagram and Facebook, Apple Intelligence, and Google’s Gemini suite.

Practical Tips for Users

The survey’s authors recommend a multi‑layered approach: remain vigilant for visual oddities, check for any disclosed labels, and use reputable verification tools like the Content Authenticity Initiative’s detector. They also suggest reviewing the source account for red flags, such as a lack of genuine followers or a history of posting dubious content.
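One of the label checks described above can be roughly automated. The sketch below is a minimal heuristic—not a tool endorsed by the survey—that scans a media file's raw bytes for provenance strings such as those embedded by C2PA/Content Credentials manifests. The marker list is an illustrative assumption; a stripped or re-encoded file will carry no markers at all, so an empty result proves nothing.

```python
from pathlib import Path

# Assumed marker list: byte strings commonly associated with content
# provenance standards (C2PA / Content Credentials). Illustrative only.
PROVENANCE_MARKERS = [b"c2pa", b"contentauth", b"content credentials"]

def find_provenance_markers(path):
    """Naively scan a file's raw bytes for provenance markers.

    A crude heuristic, not a validator: it cannot verify cryptographic
    signatures and misses metadata that has been stripped or re-encoded.
    Returns the list of markers found (lowercased).
    """
    data = Path(path).read_bytes().lower()
    return [m.decode("ascii") for m in PROVENANCE_MARKERS if m in data]
```

Real verification should rely on tools that validate the signed manifest itself, such as those published by the Content Authenticity Initiative; a byte scan like this can only flag files that *declare* provenance, never confirm that a file lacking markers is authentic.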

Implications

The findings underscore a widening gap between the rapid advancement of AI‑generated media and the public’s ability to critically assess it. While many users are taking steps to verify content, a substantial share—particularly older adults—remains vulnerable. The call for better labeling reflects a growing demand for clearer standards that could help bridge this confidence gap.

Tags: artificial intelligence, deepfake, social media, digital verification, content labeling, online misinformation, AI-generated media, user trust, media literacy, survey