There have been 37 Amazon Web Services outages since January 2025; they are summarized below with incident details, durations, and resolution information.
Minor
: AWS Direct Connect US-East connectivity briefly degraded
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(1 hour and 35 minutes)
The AWS Direct Connect service in the US East (N. Virginia) region experienced a brief connectivity loss that affected customers using Direct Connect to reach AWS services. The outage began on and was resolved by , lasting about 1 hour and 35 minutes. The issue affected the N. Virginia / AWS Direct Connect component, causing degraded network paths for some customers. The AWS status page issued two updates during the incident, initially noting investigation into connectivity loss and later confirming restoration. The impact was localized to Direct Connect users accessing US-East-1, with limited user reports during the event.
Minor
: AWS CloudFront DNS issues and Route 53 delays impact users
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(7 hours)
The outage began on when Amazon CloudFront DNS resolution failures started causing increased error rates and latencies for several CloudFront distributions. Route 53 delays also slowed propagation of changes, amplifying disruption for users accessing content via CloudFront. Users reported outages and errors across multiple services, including 403 responses for certain actions. The incident lasted roughly 7 hours and was resolved by as CloudFront returned to normal operation.
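During DNS-related incidents like this one, a quick resolution check against an affected distribution's domain can help distinguish a CloudFront-side failure from a local network or resolver problem. The sketch below is a minimal Python example using only the standard library; the distribution domain shown is a placeholder, not one tied to this incident.

```python
import socket
import sys

def check_dns(hostname: str) -> bool:
    """Attempt to resolve a hostname and report whether resolution succeeds."""
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
        print(f"{hostname} resolved to: {', '.join(sorted(addresses))}")
        return True
    except socket.gaierror as exc:
        print(f"DNS resolution failed for {hostname}: {exc}", file=sys.stderr)
        return False

# Example: replace with your own CloudFront distribution domain.
check_dns("d1234example.cloudfront.net")
```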
Minor
: AWS EU-West-1 region degraded; image pull timeouts
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(4 hours and 20 minutes)
An incident in the AWS EU-West-1 region (Ireland) led to degraded performance with timeouts when pulling container images. User reports began around , and AWS acknowledged elevated latencies for network change propagation in EU-WEST-1 at . The issue impacted multiple Ireland-based components, including EC2, AWS Fargate, Amazon ECS, and AWS WAF.
AWS issued 3 status updates during the incident, at , , and . All affected components returned to normal by , for an outage duration of about 4 hours and 20 minutes, with impact limited to the Ireland (eu-west-1) region.
Minor
: AWS US East-1 degraded performance; throttling in use1-az2
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(7 hours and 20 minutes)
From , the US East-1 (N. Virginia) region experienced degraded performance with increased error rates and latencies for EC2 instance launches in the use1-az2 availability zone. Some requests were throttled in this AZ, affecting EC2 launches and related operations across multiple AWS services. The impact extended to many services in the region, including EC2, Fargate, ECS, EKS, DataSync, CodeBuild, EMR Serverless, App Runner, and AWS Batch, among others. User reports indicated broad geographic activity, suggesting widespread regional impact. The outage lasted about 7 hours and 20 minutes; conditions began to improve as mitigations were applied, and recovery was largely underway by .
AWS communications tracked the incident through multiple status updates noting increased error rates and latencies and ongoing throttling in use1-az2 as mitigations were deployed. As stability returned, services progressively recovered and throttling was lifted for many components. By around 06:00 UTC on 2025-10-29, most affected services had recovered and throttling in the region was subsiding. Existing instances were not affected, and AWS indicated throttles would be removed as confidence in stability grew.
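For workloads that must keep launching instances while launch requests are being throttled, retrying with backoff is the usual mitigation. The Python sketch below uses boto3's adaptive retry mode plus a small application-level backoff; the AMI ID, instance type, and error handling are illustrative assumptions, not details drawn from this incident.

```python
import time
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# Adaptive retry mode lets botocore back off automatically when the API throttles.
ec2 = boto3.client(
    "ec2",
    region_name="us-east-1",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

def launch_with_backoff(image_id: str, instance_type: str, max_tries: int = 5):
    """Attempt an EC2 launch, backing off when the region is throttling requests."""
    for attempt in range(1, max_tries + 1):
        try:
            return ec2.run_instances(
                ImageId=image_id,
                InstanceType=instance_type,
                MinCount=1,
                MaxCount=1,
            )
        except ClientError as exc:
            code = exc.response["Error"]["Code"]
            if code in ("RequestLimitExceeded", "InsufficientInstanceCapacity") and attempt < max_tries:
                time.sleep(2 ** attempt)  # simple exponential backoff on top of botocore's retries
                continue
            raise

# Example call with placeholder values.
# launch_with_backoff("ami-0123456789abcdef0", "t3.micro")
```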
Major
: AWS US-EAST-1 region hit by widespread latency and degraded services
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(11 hours and 45 minutes)
The outage began at in the US-EAST-1 region, with AWS reporting increased error rates and latencies across a broad set of services. Customers experienced issues launching EC2 instances and processing messages with Lambda and SQS, and workloads relying on compute, data services, and analytics were affected. The impact appeared region-wide, affecting dozens of services including EC2, SQS, Lambda, OpenSearch Service, CloudWatch, DynamoDB, and more. The outage continued for many hours as AWS worked to restore functionality.
Major
: AWS US-EAST-1 DNS outage disrupts DynamoDB and dozens of services
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(3 hours and 10 minutes)
The outage began around when AWS reported increased error rates and latencies across the US-EAST-1 Region. Core services including DynamoDB and EC2 showed degraded performance, impacting login, API requests, and console access for users in and around the region. User reports surged early in the incident, with noticeable outages and degraded functionality reported between and UTC as individuals and organizations tried to access AWS services from multiple locations.
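When a region is returning elevated error rates on core data services such as DynamoDB, applications can at least avoid hanging by pairing retries with short timeouts and falling back gracefully. Below is a minimal Python sketch of that pattern with boto3; the table name, key schema, and fallback behavior are placeholders, not details from this incident.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError, ConnectTimeoutError, ReadTimeoutError

# Tighter timeouts and standard retries keep callers from hanging when a region is degraded.
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",
    config=Config(connect_timeout=2, read_timeout=2, retries={"max_attempts": 8, "mode": "standard"}),
)

def get_user(user_id: str):
    """Fetch a record, returning None instead of blocking when the region is struggling."""
    try:
        resp = dynamodb.get_item(
            TableName="users",                # placeholder table name
            Key={"user_id": {"S": user_id}},  # placeholder key schema
        )
        return resp.get("Item")
    except (ClientError, ConnectTimeoutError, ReadTimeoutError) as exc:
        # During a regional event, log and fall back (cache, default value, queued retry).
        print(f"DynamoDB read failed for {user_id}: {exc}")
        return None
```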
Minor
: AWS IAM API degraded briefly due to increased latency
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(30 minutes)
The AWS Identity and Access Management (IAM) APIs experienced increased error rates and latencies, causing intermittent delays for authentication and other IAM operations. The incident began at and was resolved by , lasting roughly 30 minutes. AWS reported investigation into the elevated error rates and latencies on the IAM APIs, with the IAM component transitioning from a degraded state to normal as a result of mitigation efforts.
The IAM service status page showed the issue as a degraded condition before returning to normal, reflecting the temporary impact across IAM-related functionality.
Minor
: AWS EU-NORTH-1: SQS API errors impact multiple services
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(7 hours and 30 minutes)
An incident in the EU-NORTH-1 region caused elevated API error rates and latency for Amazon SQS, with the SQS ReceiveMessage API for queues using Server-Side Encryption particularly affected. The issue also impacted some APIs and workflows across other AWS services in the region, leading to degraded performance for applications relying on SQS and related services. The incident began at and persisted for several hours as engineers investigated and applied mitigations. AWS issued 15 status updates during the incident, reflecting ongoing analysis and gradual mitigation. The outage progressed toward recovery through the night, with some services returning to normal by and the remaining services on track for full restoration.
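Consumers that saw elevated ReceiveMessage errors during this incident generally benefit from long polling and tolerant error handling so that polling loops keep running rather than crashing. The following Python sketch shows that pattern with boto3; the queue URL and account ID are placeholders, and the pause length is an assumption.

```python
import time
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs", region_name="eu-north-1")
QUEUE_URL = "https://sqs.eu-north-1.amazonaws.com/123456789012/example-queue"  # placeholder

def poll_once():
    """Long-poll the queue once, tolerating elevated ReceiveMessage error rates."""
    try:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling reduces empty responses and API call volume
        )
    except ClientError as exc:
        print(f"ReceiveMessage failed: {exc}")
        time.sleep(5)            # brief pause before the caller polls again
        return []
    messages = resp.get("Messages", [])
    for msg in messages:
        # Process, then delete so the message is not redelivered after the visibility timeout.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return messages
```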
Minor
: AWS Glue and EventBridge in US-East-1 briefly degraded
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(40 minutes)
AWS Glue in the US-East-1 (N. Virginia) region and Amazon EventBridge experienced degraded performance with increased error rates. The issue began on and affected Glue job runs and event-driven workflows. The problem lasted until when services returned to normal.
The AWS status page published two updates during the incident, detailing the investigation and restoration progress. The impact was regional to US-East-1 and affected two components: N. Virginia / AWS Glue and N. Virginia / Amazon EventBridge. The outage duration was roughly 40 minutes.
Minor
: AWS US-East-1 API errors degrade multiple services
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(30 minutes)
The AWS US-East-1 (N. Virginia) region experienced increased API error rates, causing degraded performance across multiple services. Users attempting API calls or console access faced errors and slower responses. The incident affected 8 components, including EC2, SageMaker, Lightsail, ECS, EKS, MSK, Transit Gateway, and VPC Lattice. The outage began at and was resolved by , lasting about 30 minutes.