Enterprise AI Security Gaps Surface at Runtime, Experts Warn

Key Points
- Traditional security protects data at rest and in transit, but not data in use during AI execution.
- Three vulnerable phases identified: training, inference and runtime, with runtime being the weakest link.
- Model weights and real‑time prompts can be exposed to the underlying system even in secured environments.
- Scaling AI across distributed, multi‑tenant infrastructures multiplies exposure opportunities.
- Confidential computing and hardware isolation are recommended to protect data during runtime.

A new analysis reveals that most organizations still rely on traditional security models that leave artificial intelligence workloads exposed at the moment they run. While data at rest and in transit enjoys encryption and access controls, the critical phase when AI models process information in memory, known as runtime, remains largely unprotected. The report highlights three vulnerable stages: training, inference and especially runtime, and it urges companies to adopt hardware‑based isolation and confidential computing to safeguard model weights and real‑time data.
Enterprises have embraced artificial intelligence at a breakneck pace, embedding models into customer support, fraud detection, software development and IT operations. Yet the security framework protecting these workloads has not kept up. Traditional defenses focus on data at rest and data in transit, applying encryption and identity controls to keep information safe while it sits on disks or moves across networks. That approach overlooks a third, far more complex state: data in use, the moment AI models execute.
When a model runs, its weights—often the most valuable intellectual property a company owns—load into memory, and prompts, responses and contextual data flow through the system in real time. In many environments, this sensitive information becomes visible to the underlying operating system and hardware. Even well‑secured infrastructures can inadvertently expose their most critical assets at the exact moment they are being processed.
Security gaps appear across three key phases. During training, data moves through storage systems, shared compute clusters, orchestration layers and debugging tools. The constant shuffling creates opportunities for accidental leaks, and model weights may be handled with less rigor than they deserve. Inference, the stage where inputs become outputs, also suffers from exposure. User prompts, generated answers and internal data are often logged in plaintext, captured by monitoring dashboards or retained longer than intended. Shared infrastructure further amplifies the risk.
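A common mitigation for the plaintext-logging problem at inference time is to strip or fingerprint sensitive payloads before events reach any log sink or monitoring dashboard. The sketch below is a minimal illustration in Python; the `log_inference` helper, the field names and the model name are hypothetical, not drawn from any particular product.

```python
import hashlib
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

def fingerprint(text: str) -> str:
    """Return a short SHA-256 digest so events stay correlatable
    without the raw prompt or response ever being stored."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

def log_inference(prompt: str, response: str, model: str) -> None:
    """Log inference metadata only; the sensitive payloads are
    replaced by one-way fingerprints before leaving the process."""
    event = {
        "model": model,
        "prompt_sha256": fingerprint(prompt),
        "response_sha256": fingerprint(response),
        "prompt_chars": len(prompt),
    }
    logger.info(json.dumps(event))

log_inference("Summarize Q3 revenue by region.",
              "Q3 revenue rose 8% across all regions...",
              "demo-model")
```

The fingerprints let operators correlate and count events without retaining the user's data, which also shortens the window in which shared infrastructure can leak it.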
The most dangerous blind spot, however, is runtime. At this point, encrypted data is decrypted, model weights sit in memory, and the workload depends on the trustworthiness of the host system. If that system is compromised or misconfigured, traditional security controls—identity management, encryption policies—offer little protection because the keys are already in use. The result is a vulnerable execution environment where sensitive assets can be accessed without detection.
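The "keys are already in use" problem can be made concrete with a toy example. In the sketch below (Python, using the widely available `cryptography` package), data that is safely encrypted at rest must be decrypted into ordinary process memory before a model can consume it; from that point on, anything able to read the process's memory, whether a compromised host, a debugger or a core dump, sees plaintext. The variable names and the stand-in model call are illustrative only.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
cipher = Fernet(key)

# Data at rest: encrypted, safe even if the disk image is copied.
ciphertext = cipher.encrypt(b"prompt: summarize the attached contract")

# Data in use: to run inference, the workload must decrypt.
plaintext = cipher.decrypt(ciphertext)

def run_inference(data: bytes) -> bytes:
    # Stand-in for a real model call operating on decrypted input.
    return data.upper()

# From here on, `plaintext` and `key` both live in ordinary RAM.
# A compromised OS, hypervisor or memory dump reads them directly;
# encryption at rest and in transit no longer helps.
print(run_inference(plaintext))
```

Nothing in this flow is a bug: decryption is required for the model to work. The exposure is structural, which is why the fix has to come from the execution environment itself rather than from stricter key policies.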
Scaling AI amplifies these vulnerabilities. As more models run across distributed, multi‑tenant environments, the volume of data and the number of execution points multiply, creating a larger attack surface. Proprietary models become core business assets, and the stakes of a breach rise dramatically.
Experts argue that the problem is not a shortage of security tools but a mismatch between legacy trust assumptions and the dynamic nature of AI workloads. Traditional models assume that once a workload enters a trusted perimeter, it remains secure. AI challenges that premise by constantly processing sensitive data and relying on complex, often opaque stacks.
To close the gap, the industry is turning to confidential computing and hardware‑based isolation. These technologies create protected execution environments that demand cryptographic proof of a workload's identity and environment, a process known as attestation, before secrets are released and the workload is allowed to run. By keeping data encrypted even while it is being processed and shielding model weights from unauthorized access, they shift security from perimeter‑based assumptions to verifiable trust.
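Confidential-computing platforms differ in their details, but the common pattern is attestation-gated key release: the workload proves, via a signed measurement of its code, that it is the expected build running in a genuine protected environment before a key service hands over decryption keys. The Python sketch below compresses that handshake into a few lines; the HMAC stands in for the hardware-rooted signature that real platforms such as Intel SGX/TDX or AMD SEV-SNP provide, and every name here is hypothetical.

```python
import hashlib
import hmac
import os

# In a real deployment this is a hardware-rooted signing key;
# here an HMAC key stands in for the attestation root of trust.
ATTESTATION_ROOT = os.urandom(32)

def measure(workload_code: bytes) -> bytes:
    """Hash of the workload, analogous to an enclave measurement."""
    return hashlib.sha256(workload_code).digest()

def quote(measurement: bytes) -> bytes:
    """Signed evidence the hardware would produce for a measurement."""
    return hmac.new(ATTESTATION_ROOT, measurement, hashlib.sha256).digest()

def release_key(evidence: bytes, measurement: bytes, expected: bytes) -> bytes:
    """Key service: release the model-decryption key only if the
    evidence verifies and the measurement matches the approved build."""
    valid = hmac.compare_digest(evidence, quote(measurement))
    if not (valid and measurement == expected):
        raise PermissionError("attestation failed; key withheld")
    return os.urandom(32)  # decryption key for the model weights

expected = measure(b"approved inference server v1.2")

# Legitimate workload: measurement matches, the key is released.
good = measure(b"approved inference server v1.2")
key = release_key(quote(good), good, expected)

# Tampered workload: measurement differs, the key is withheld.
try:
    bad = measure(b"backdoored inference server")
    release_key(quote(bad), bad, expected)
except PermissionError as err:
    print(err)
```

The design choice worth noting is that trust is earned per execution: a tampered binary produces a different measurement and simply never receives the keys, instead of being caught, or missed, after the fact.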
Organizations that recognize and address the runtime vulnerability early will be better positioned to scale AI safely. Those that continue to rely on outdated security models risk exposing their most valuable assets at the very moment those assets are delivering business value.