Privacy Concerns Prompt Users to Quit ChatGPT and Gemini

Digital Trends

Key Points

  • 90% of surveyed users worry about AI using their data without consent.
  • 88% avoid sharing personal information with ChatGPT or Gemini.
  • 84% refuse to provide personal health data to AI chatbots.
  • 43% have stopped using ChatGPT; 42% have quit Gemini.
  • 44% have stopped using Instagram; 37% have left Facebook.
  • 82% opt out of data collection whenever possible.
  • 71% use ad blockers; 46% use VPNs to protect privacy.
  • Users are entering dummy data or using data‑removal services.
  • The trend signals growing distrust toward AI‑driven platforms.

A recent Malwarebytes survey finds that a large majority of respondents are uneasy about artificial-intelligence tools using their data without consent. Nearly nine in ten worry about AI privacy, and a similar share avoid sharing personal information with ChatGPT or Gemini. As a result, more than forty percent have stopped using each chatbot. Respondents are also pulling back from social platforms such as Instagram and Facebook, while adopting privacy measures including ad blockers, VPNs, and opting out of data collection.

Survey Highlights Growing Privacy Anxiety

The Malwarebytes survey shows a clear shift in public sentiment toward artificial‑intelligence chatbots. Ninety percent of participants expressed worry that AI systems might use their data without permission, and eighty‑eight percent reported they do not freely share personal details with services such as ChatGPT or Gemini. The reluctance extends to health information, with eighty‑four percent refusing to provide personal health data to these tools.

Significant Drop in Chatbot Usage

Consequently, the survey indicates that forty‑three percent of respondents have stopped using ChatGPT, while forty‑two percent have quit Gemini. These figures suggest a considerable portion of the user base is moving away from AI chat interfaces due to privacy concerns.

Broader Withdrawal from Digital Platforms

The privacy‑focused behavior is not limited to AI chatbots. Forty‑four percent of those surveyed have stopped using Instagram, and thirty‑seven percent no longer use Facebook. While the survey does not directly link these decisions to concerns about Meta’s AI, the pattern points to a wider distrust of how personal content might be employed for training AI models.

Proactive Measures to Protect Data

In response to these concerns, users are taking concrete steps to safeguard their digital footprints. Eighty‑two percent are opting out of data collection wherever possible, seventy‑one percent employ ad‑blocking tools, and forty‑six percent use virtual private networks (VPNs). Some respondents also enter dummy data, rely on personal data removal services, or otherwise limit the amount of personal information they share online.

Implications for AI Providers

The findings send a clear signal to companies like OpenAI and Google. As users become more cautious and less willing to share data, the effectiveness of AI models that rely on large datasets could be impacted. Providers may need to increase transparency about data usage and strengthen privacy safeguards to retain user trust.

#privacy #artificial intelligence #chatbots #data security #user behavior #OpenAI #Google #social media #ad blocker #VPN
Generated with News Factory - Source: Digital Trends