Companies Ramp Up AI Security Assessments Amid Growing Threats
Key Points
- Nearly two‑thirds of firms now assess AI risks before deployment, up from about one‑third last year.
- CEOs cite fraud and AI vulnerabilities as top concerns; CISOs focus on ransomware and supply‑chain threats.
- Seventy‑seven percent of organizations use AI in cybersecurity, primarily for phishing detection, intrusion detection, and automated security operations.
- Key barriers to AI adoption include skill shortages, need for human validation, and uncertainty about risks.
- Future AI‑enabled threats are expected to center on convincing phishing, deep‑fake scams, and automated social engineering.
A recent World Economic Forum report shows that nearly two‑thirds of organizations now evaluate AI risks before deployment, up from just over a third last year. While executives acknowledge rising AI‑related vulnerabilities, many are also turning to AI tools to bolster their defenses, especially for phishing detection, intrusion monitoring, and automated security operations. Key barriers include skill shortages, the need for human validation, and lingering uncertainty about risks. The outlook highlights increasingly convincing phishing, deep‑fake scams, and automated social engineering as the most pressing AI‑enabled threats.
AI Risk Assessment Gains Traction
The World Economic Forum’s Global Cybersecurity Outlook reveals a notable shift in corporate attitudes toward artificial‑intelligence security. Approximately 64% of firms now assess AI risks before rolling out new tools, a marked rise from 37% the previous year. This change reflects growing awareness that AI‑related vulnerabilities have increased, with many leaders citing data leaks and technical security concerns as top priorities.
Executive Perspectives on AI Threats
Chief executive officers highlight fraud and AI vulnerabilities as their primary worries, while chief information security officers remain most concerned about ransomware and supply‑chain disruptions. Both groups rank software‑vulnerability exploits third among their concerns. Around two‑thirds of organizations also factor in geopolitically motivated attacks, and a number are exploring sovereign‑cloud options to mitigate that risk.
AI as a Defensive Tool
Despite the rising threat landscape, companies are increasingly deploying AI to defend against attacks. Seventy‑seven percent of respondents now use AI in cybersecurity, with the most common applications being phishing detection (52%), intrusion detection (46%), and automation of security operations (43%). These tools aim to counteract the very AI‑driven threats that organizations are confronting.
Barriers to Wider AI Adoption
Key obstacles hindering broader AI use include a lack of skilled personnel (54%), the need for human validation of AI decisions (41%), and lingering uncertainty about AI‑related risks (39%). These challenges underscore the importance of developing talent and establishing clear validation processes.
Emerging AI‑Enabled Threats
The outlook predicts that highly convincing phishing attacks, deep‑fake scams, and automated social‑engineering campaigns will become the most significant AI‑enabled threats. AI is accelerating these attack vectors, yet phishing remains the most common method, unchanged at its core despite technological advances.
Looking Ahead
Overall, the findings suggest a dual reality: organizations recognize the growing dangers posed by AI, yet they are also leveraging AI to strengthen their security posture. Addressing skill gaps, ensuring human oversight, and clarifying risk frameworks will be critical as businesses continue to integrate AI into their cybersecurity strategies.