AI Adoption Surges Amid Growing Privacy and Security Concerns, Deloitte Survey Finds

Key Points
- Over half of U.S. consumers are experimenting with or regularly using generative AI.
- Four in ten respondents pay for AI services; many rely on free tools.
- 70% of respondents now worry about privacy and security, up from 60% a year earlier.
- Almost half have experienced a hack, breach, or identity theft.
- Consumers are more likely to verify AI outputs than trust them outright.
- Most users are unwilling to share biometric, communication, or financial data.
- Tech firms are perceived as focusing on competition rather than solving real problems.
- Building trust requires long‑term commitment to privacy and genuine value.

A Deloitte survey of U.S. consumers shows that while more than half are experimenting with or regularly using generative AI, a majority also express strong worries about privacy and security. About four in ten respondents pay for AI services, yet concerns about data misuse, inaccurate results, and companies’ focus on competition over problem solving persist. Users increasingly verify AI outputs and remain reluctant to share personal data, highlighting a trust gap that tech firms must address.

Rising Adoption of Generative AI
More than half of U.S. consumers surveyed by Deloitte say they are either experimenting with or regularly using generative AI. The technology appears in mobile apps, websites, online services, social media, and messaging platforms, reaching a broad audience. Approximately four in ten respondents pay for AI products, while many who use free tools find them sufficient. Usage patterns show 65% accessing AI through standalone mobile apps and 60% through AI websites, indicating that the technology is becoming commonplace across devices.

Persistent Privacy and Security Concerns
Despite growing usage, a majority of respondents voice serious concerns. The share of people worried about privacy and security rose to 70%, up from 60% the previous year, and almost half report having experienced a hack, account breach, or identity theft. Consumers are skeptical that tech companies will protect their data, especially biometric, communications, or financial information. When asked how willing they would be to share personal data in exchange for better digital experiences, more respondents answered "not at all willing" than "very willing" for every data type.

Trust Gap and Verification Practices
More than half of those surveyed say they mostly or always verify AI‑generated information against trusted sources or their own knowledge. This verification habit underscores a lack of confidence in AI outputs, which are often described as “notoriously inaccurate.” Users also feel that tech firms prioritize beating competitors over solving real problems: two‑thirds believe most new features don’t address their needs.

Implications for Tech Companies
The findings suggest that while consumers are willing to spend money on AI services they trust, building and maintaining that trust requires a long‑term commitment to privacy, security, and genuine problem solving. Deloitte’s Steve Fineberg notes that trust takes years to build but can be lost in seconds, emphasizing the urgency for companies to address these concerns if they wish to sustain growth in AI adoption.