Amazon Web Services Outage Disrupts Major Apps and Websites Across US-East-1

Key Points
- AWS outage originated from a DNS resolution issue with the DynamoDB API in the US‑East‑1 region.
- Major consumer apps and services such as Venmo, Snapchat, Canva, Fortnite, and Alexa experienced outages or degraded performance.
- AWS mitigated the DNS problem by early morning, restoring most services, but EC2 instance launches remained rate‑limited.
- A wide range of services—including banking, airlines, Disney+, Reddit, Apple Music, and The New York Times—reported disruptions.
- The incident underscores the heavy reliance on AWS, which holds about 30 percent of the global cloud market share.
A major outage at Amazon Web Services (AWS) disrupted a broad swath of internet services on an October morning. The incident stemmed from a DNS resolution problem affecting the DynamoDB API in the US‑East‑1 region, leading to increased error rates and latency across multiple AWS services. Popular platforms such as Venmo, Snapchat, Canva, Fortnite, Alexa, Lyft, Reddit, Disney+, and many others experienced partial or complete outages. AWS identified the issue, applied mitigations, and eventually restored most services, though new EC2 instance launches remained rate‑limited for some time. The outage highlighted the extensive reliance on AWS infrastructure across the digital ecosystem.
Outage Emergence and Initial Impact
In the early hours of an October morning, Amazon Web Services began reporting “increased error rates and latencies for multiple AWS services” in its US‑East‑1 region, which houses data centers in Northern Virginia. By mid‑morning, users across the United States and beyond were encountering widespread service disruptions. Major consumer applications, including Venmo, Snapchat, Canva, and the popular game Fortnite, displayed error messages or became completely inaccessible. Even Amazon’s own voice assistant, Alexa, struggled to respond to basic commands such as weather inquiries or smart‑home controls.
Technical Root Cause
According to AWS’s service‑health page, the root cause was identified as a DNS resolution issue affecting the DynamoDB API. DynamoDB, a critical database service used by countless AWS customers, kept its stored data intact, but clients could not resolve the address of its API endpoint, leaving the service unreachable for several hours. This effectively gave applications that rely on real‑time data retrieval a temporary “amnesia,” as a university professor explained in coverage of the event.
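For illustration, the sketch below shows how such a failure would typically surface to application code calling DynamoDB through the boto3 SDK. The table name, key, and retry policy are hypothetical assumptions for the example, not details from AWS’s incident report.

```python
import time

import boto3
from botocore.exceptions import EndpointConnectionError

# DynamoDB client pointed at the affected region.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def get_item_with_backoff(table_name, key, attempts=5):
    """Read an item, retrying with backoff if the endpoint cannot be reached."""
    for attempt in range(attempts):
        try:
            return dynamodb.get_item(TableName=table_name, Key=key)
        except EndpointConnectionError:
            # During the outage the stored data was intact, but DNS failures
            # meant calls like this could not reach the DynamoDB endpoint.
            time.sleep(min(2 ** attempt, 30))  # capped exponential backoff
    raise RuntimeError(f"DynamoDB unreachable after {attempts} attempts")

# Hypothetical usage:
# item = get_item_with_backoff("orders", {"order_id": {"S": "12345"}})
```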
Mitigation Efforts and Ongoing Challenges
By early morning, AWS announced that it had fully mitigated the DNS issue and that “most AWS Service operations are succeeding normally now.” However, the ripple effect of the outage persisted. The EC2 service, which provides virtual machine capacity for many web‑based applications, continued to experience elevated errors for new instance launches. AWS responded by rate‑limiting new EC2 instance launches to aid recovery and advised customers not to tie new deployments to specific Availability Zones, allowing the system greater flexibility in allocating resources.
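As a rough illustration of that guidance, the sketch below launches an EC2 instance without pinning it to an Availability Zone and lets the SDK retry throttled requests. The AMI ID, instance type, and retry configuration are assumptions made for the example, not remediation code published by AWS.

```python
import boto3
from botocore.config import Config

# Let the SDK retry throttled or rate-limited API calls automatically.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

def launch_without_az_pin(ami_id, instance_type="t3.micro"):
    """Launch one instance without a Placement constraint, so EC2 is free to
    choose whichever Availability Zone currently has capacity."""
    response = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        # Deliberately no Placement={"AvailabilityZone": ...} and no SubnetId,
        # leaving the zone choice to EC2.
    )
    return response["Instances"][0]["InstanceId"]

# Hypothetical usage:
# instance_id = launch_without_az_pin("ami-0123456789abcdef0")
```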
Scope of Affected Services
Downdetector reports spiked for a broad array of platforms. Users of banking apps, airline reservation systems, Disney+, Reddit, Apple Music, Pinterest, Roblox, and The New York Times all reported sluggish performance or outright outages. Even Amazon’s own services, such as Alexa, were visibly affected, underscoring the depth of reliance on the US‑East‑1 region. Companies hosting workloads in this region faced a significant backlog of requests, and full recovery was projected to take additional time beyond the initial mitigation.
Industry Implications
The incident reinforced the central role of AWS in the modern internet architecture. As of mid‑2025, AWS held an estimated 30 percent share of the worldwide cloud infrastructure market, making it a backbone for a vast number of online services. The outage illustrated the systemic risk that arises when a large portion of internet traffic depends on a few key providers. While AWS’s response demonstrated technical competence in isolating and addressing the DNS fault, the episode also highlighted the importance of multi‑region strategies and diversified cloud deployments for critical applications.
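One simple form of such diversification is a read path that falls back to a replica in another region when the primary region fails. The sketch below assumes a DynamoDB global table replicated to a second region; the table name, key, and region list are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Primary region first, then a fallback region holding a replica of the table.
REGIONS = ["us-east-1", "us-west-2"]

def read_with_regional_fallback(table_name, key):
    """Try each region in order, returning the first successful read."""
    last_error = None
    for region in REGIONS:
        client = boto3.client("dynamodb", region_name=region)
        try:
            return client.get_item(TableName=table_name, Key=key)
        except (EndpointConnectionError, ClientError) as exc:
            last_error = exc  # record the failure and try the next region
    raise RuntimeError(f"All regions failed; last error: {last_error}")

# Hypothetical usage:
# item = read_with_regional_fallback("sessions", {"session_id": {"S": "abc"}})
```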
Current Status and Outlook
By late morning, AWS reported that most services had returned to normal operation, though some EC2 launch capacity remained limited. Companies continued to monitor their systems for residual issues, and users reported a gradual restoration of functionality across the previously affected platforms. The episode serves as a reminder of the fragility inherent in highly centralized cloud infrastructures and the need for robust contingency planning.