Google to Use StopNCII Hashes to Remove Nonconsensual Images from Search

Key Points
- Google partners with StopNCII.org to block non‑consensual intimate imagery in search results.
- Hashes (PDQ for images, MD5 for videos) will be used to identify and remove flagged content.
- The initiative follows similar commitments from Facebook, Instagram, TikTok, and Bumble, as well as Microsoft's integration of the system into Bing.
- Google has been criticized for lagging behind peers in adopting hash‑based solutions.
- Current removal tools still require victims to identify and flag offending material.
- Advocates call for easier removal processes that do not rely on victim‑generated hashes.
- The rollout is set to begin over the next few months.

Google announced a partnership with StopNCII.org to combat the spread of non‑consensual intimate imagery (NCII) by employing hash technology in its search results. Over the coming months, the company will use StopNCII's image and video hashes—PDQ for photos and MD5 for videos—to identify and block flagged content without storing the original files. The move follows similar commitments from Facebook, Instagram, TikTok, and Bumble, as well as Microsoft's integration of the system into Bing, and Google has faced criticism for lagging behind those peers. Advocates note the new tools still place the removal burden on victims, highlighting ongoing challenges in protecting survivors.
Background
Non‑consensual intimate imagery, often referred to as NCII, has proliferated across the open web, prompting calls for stronger safeguards. StopNCII.org offers a hash‑based system that generates unique identifiers for abusive images and videos, allowing platforms to block content without retaining the actual media. The organization uses PDQ hashes for images and MD5 hashes for videos, enabling precise detection while preserving privacy.
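The matching model described above can be illustrated with a minimal sketch. This is not StopNCII's actual implementation—the hash set, function names, and sample bytes below are hypothetical—but it shows the core idea for the MD5 video case: only digests are shared and compared, never the media itself.

```python
import hashlib

def md5_hex(media: bytes) -> str:
    """Return the MD5 hex digest of raw media bytes (the scheme StopNCII uses for videos)."""
    return hashlib.md5(media).hexdigest()

# Hypothetical shared database: platforms receive only these digests,
# never the original files, so the media itself stays on the victim's device.
flagged_hashes = {md5_hex(b"example-flagged-video-bytes")}

def is_flagged(media: bytes) -> bool:
    """Check candidate media against the shared hash database."""
    return md5_hex(media) in flagged_hashes
```

Note one design consequence: MD5 is a cryptographic hash, so this lookup only catches byte-for-byte identical files. PDQ, used for images, is a perceptual hash—visually similar images produce nearby hash values, so platforms can match near-duplicates (resized or re-encoded copies) by comparing hash distance rather than exact equality.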
Google's New Initiative
In a recent announcement, Google confirmed a partnership with StopNCII.org to incorporate these hashes into its search engine. Over the next few months, Google will proactively scan search results for content that matches StopNCII’s hash database and remove any matches. By leveraging hash technology, Google aims to block non‑consensual imagery without storing or sharing the original files, thereby reducing the exposure of victims to harmful content.
Industry Context
Google’s decision arrives after several major platforms have already adopted StopNCII’s hashes. Facebook, Instagram, TikTok, and Bumble signed on with the organization as early as 2022, and Microsoft integrated the system into Bing in September of last year. Bloomberg highlighted that Google had been slower than many of its industry peers to adopt this approach, a point acknowledged in Google’s blog post, which cited feedback from survivors and advocates about the need for broader action.
Challenges and Criticisms
Although Google has introduced tools that allow individuals to request removal of offending content, critics argue that the process still places the onus on victims to identify and flag the material themselves. Advocates stress that requiring victims to generate and submit hashes from their own devices is a significant barrier, especially as AI‑generated content becomes more prevalent. The difficulty of removing such content without victim‑initiated hashes remains a key concern.
Future Outlook
Google’s adoption of StopNCII hashes marks a notable step toward mitigating the spread of NCII through search results. However, the company faces ongoing pressure to streamline removal mechanisms and reduce the burden on survivors. As other platforms continue to refine their approaches, the broader tech industry will likely watch Google’s implementation closely to assess its effectiveness and potential as a model for future interventions.