There have been 37 Amazon Web Services outages since January 2025. They are summarized below, including incident details, duration, and resolution information.
Minor
: AWS Management Console degraded in EU-CENTRAL-1 region
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(1 hour and 5 minutes)
The AWS Management Console experienced degraded performance in the EU-CENTRAL-1 region due to elevated error rates affecting sign-in and navigation. The outage began at and was resolved by , lasting about 1 hour and 5 minutes.
Minor
: AWS IAM brief degraded performance with latency spike
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(15 minutes)
AWS Identity and Access Management experienced a brief performance degradation, with increased error rates and latency affecting IAM-related operations. The incident began at and was resolved by , lasting roughly 15 minutes. The AWS Status Page issued two updates during the event and noted initial signs of recovery, while a single user report appeared at before official acknowledgment. According to the status updates, the impact was limited to the IAM component, with no broader AWS services affected.
Minor
: AWS IAM Identity Center sign-in issues in Sydney
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(50 minutes)
From to , AWS IAM Identity Center in the AP-SOUTHEAST-2 region (Sydney) reported increased sign-in errors, causing login difficulties for AWS SSO users. The outage lasted about 50 minutes, and the Sydney IAM Identity Center component was listed as degraded during this period. The provider published two status updates during the incident, noting the degraded sign-in issues and later the restoration of normal service. The impact was localized to the Sydney region with no broader global disruption.
AWS reported external network connectivity issues in the US-East-1 region caused by problems outside the AWS network. AWS said they were working with the external provider to mitigate the problem. Connectivity within the AWS region and to all other internet providers remained normal, so the impact was limited to external reachability to AWS resources. A small number of user reports appeared around , indicating visibility prior to the official acknowledgement.
The incident lasted about 30 minutes, beginning at and resolving by . The provider issued two status updates during the incident. The affected component was N. Virginia / AWS Internet Connectivity.
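Because connectivity within AWS remained healthy, the distinguishing symptom of this incident was external reachability to US-East-1 endpoints. The following is a minimal probe sketch, not taken from AWS guidance; the endpoint hostname, port, and timeout are illustrative assumptions.

```python
import socket

# Hypothetical reachability probe: the endpoint and port are illustrative,
# not taken from the incident report.
REGIONAL_ENDPOINT = "dynamodb.us-east-1.amazonaws.com"  # any public us-east-1 endpoint
PORT = 443
TIMEOUT_SECONDS = 5


def endpoint_reachable(host: str, port: int, timeout: float) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = endpoint_reachable(REGIONAL_ENDPOINT, PORT, TIMEOUT_SECONDS)
    print(f"{REGIONAL_ENDPOINT}:{PORT} reachable: {ok}")
```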
Major
: AWS outage impacts US-GOV regions and multiple services
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(1 hour and 20 minutes)
An AWS outage began at in the US-GOV-EAST-1 region with increased error rates and latencies, initially affecting DynamoDB in US-East and US-West. The issue then expanded to US-GOV-WEST-1 and broadened to a wide set of AWS services across US-East and US-West, including AWS Lambda, AWS EMR Serverless, AWS X-Ray, AWS Client VPN, Amazon Elastic Container Registry, Amazon Elastic MapReduce, AWS IoT Core, and AWS IoT Device Management, among others. Users reported degraded performance and intermittent failures across workloads. The outage lasted about 1 hour and 20 minutes, with signs of recovery beginning around 18:45 UTC; by , several services had returned to normal.
Major
: AWS Canada-Central outage impacts EC2 and services
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(3 hours and 50 minutes)
An outage in AWS Canada-Central (CA-CENTRAL-1) affected a single Availability Zone (cac1-az1), causing connectivity issues for EC2 instances and attached EBS volumes and impacting a broad set of AWS services in the region. Customers likely experienced degraded performance and intermittent access across resources, with some control-plane operations affected. The incident began at due to a power issue in cac1-az1.
AWS issued multiple status updates throughout the incident, noting the root cause as a power loss within the Availability Zone and that engineers were actively restoring power. By , power had been restored and signs of recovery appeared, and over the next several updates AWS reported progress toward full recovery for EC2, EBS, and other services. By , most affected resources had recovered and AWS continued monitoring for stability.
Scope: The disruption was confined to the CA-CENTRAL-1 region and impacted a range of services, including EC2, EBS, EMR Serverless, OpenSearch Service, RDS, Route 53, and related components. AWS provided ongoing communications and updates as recovery progressed, with power and service restoration gradually completed over the course of the event.
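For context on the cac1-az1 identifier: AZ IDs are stable across accounts, whereas zone names such as ca-central-1a are shuffled per account, so mapping the ID to your own zone name is how you would confirm whether your resources sat in the affected zone. A minimal boto3 sketch, assuming default credentials and not part of AWS's incident guidance:

```python
import boto3

# Map the AZ ID mentioned in the incident (cac1-az1) to this account's zone name
# in ca-central-1. AZ names are account-specific; the AZ ID is the stable identifier.
ec2 = boto3.client("ec2", region_name="ca-central-1")

response = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-id", "Values": ["cac1-az1"]}]
)
for zone in response["AvailabilityZones"]:
    print(f'{zone["ZoneId"]} maps to {zone["ZoneName"]} in this account '
          f'(state: {zone["State"]})')
```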
Minor
: AWS Glue job run failures in AP-NORTHEAST-1 region
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(40 minutes)
An outage in the AP-NORTHEAST-1 region caused AWS Glue 5.0 job runs to fail, likely impacting data pipelines relying on Glue in Tokyo. The AWS Status Page shows an investigation began at with the Tokyo AWS Glue component degraded, and by the issue was resolved with the component back to normal. The incident lasted about 40 minutes, and AWS issued two status updates during the event detailing the regional impact and restoration.
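For pipelines that may have been caught in this window, a quick check is to list recent Glue job runs and look for failures. A minimal boto3 sketch; the job name is a hypothetical placeholder, not taken from the report:

```python
import boto3

# List recent runs of a Glue job in ap-northeast-1 and surface any that ended in a
# FAILED state, which is how this incident would typically have shown up for
# affected pipelines. The job name below is a placeholder.
glue = boto3.client("glue", region_name="ap-northeast-1")

JOB_NAME = "example-etl-job"  # hypothetical job name

runs = glue.get_job_runs(JobName=JOB_NAME, MaxResults=25)["JobRuns"]
for run in runs:
    if run["JobRunState"] == "FAILED":
        print(run["Id"], run["StartedOn"], run.get("ErrorMessage", "no error message"))
```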
The AWS EU-CENTRAL-2 region (Zurich) experienced increased Lambda invoke errors, leading to degraded performance for Lambda-based workloads and related services. Early updates indicated invoke failures for OCI-based (container image) Lambda functions, while ZIP-based functions continued to operate normally. Over the course of the incident, multiple AWS services were listed as affected, including Amazon Cognito, Amazon API Gateway, Amazon Elastic Kubernetes Service, Amazon SageMaker, and VPC Lattice, with Elastic File System also noted in later updates.
The issue began at and was resolved by , lasting about 5 hours and 55 minutes. The provider issued six status updates during the incident, detailing investigation progress, root-cause identification, and deployment of targeted fixes. For critical workloads, customers were guided to fall back to ZIP-based Lambda functions while OCI-based functions were being remediated. By the end of the incident, services in the EU-CENTRAL-2 region had largely recovered and Lambda functionality returned to normal.
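Since the impact was reported as limited to container-image functions, one way to assess exposure is to inventory functions by package type. A minimal boto3 sketch, assuming default credentials; nothing here comes from AWS's incident updates:

```python
import boto3

# Inventory Lambda functions in EU-CENTRAL-2 by package type. Only container-image
# ("Image") functions were reported as affected; ZIP-packaged functions kept working.
lam = boto3.client("lambda", region_name="eu-central-2")

paginator = lam.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        # PackageType is "Image" for container-based functions, "Zip" otherwise.
        print(f'{fn["FunctionName"]}: {fn.get("PackageType", "Zip")}')
```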
The AWS status page reported increased error rates and latencies impacting DynamoDB in the US-East-1 region, with a wider impact across several AWS services and workflows. Users experienced higher latency and intermittent errors as services such as DynamoDB, SageMaker, EC2, ECS, Redshift, and others were affected, and inter-service dependencies encountered disruptions.
The incident began at and lasted about 1 hour, with recovery generally completed by . The issue was attributed to a recent change in a subsystem responsible for request routing. The AWS status page issued six updates during the incident to communicate progress and recovery.
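During elevated error rates like these, a common client-side mitigation is to lean on the SDK's adaptive retry mode so transient failures are retried with backoff rather than surfacing immediately. The sketch below is a general resilience pattern, not AWS's remediation for this incident; the table name is a hypothetical placeholder:

```python
import boto3
from botocore.config import Config

# General resilience sketch, not AWS's fix for this incident: configure the DynamoDB
# client with adaptive retries so transient elevated error rates are retried with
# backoff before reaching the application.
retry_config = Config(
    region_name="us-east-1",
    retries={"max_attempts": 10, "mode": "adaptive"},
)

dynamodb = boto3.client("dynamodb", config=retry_config)

# Hypothetical table name; the call is retried automatically on throttling or
# transient server-side errors, up to the configured attempt limit.
response = dynamodb.describe_table(TableName="example-table")
print(response["Table"]["TableStatus"])
```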
Minor
: AWS App Runner degraded in AP-SOUTHEAST-1 region
Detected by StatusGator:
Officially acknowledged:
Outage ended:
(55 minutes)
AWS App Runner APIs in the AP-SOUTHEAST-1 (Singapore) region experienced increased error rates, causing some requests to fail or respond slowly for users. The issue began at and affected the Singapore region for about 55 minutes before returning to normal. A small number of user reports appeared during the incident. The outage was resolved by , with App Runner services returning to normal operation in the region.
The AWS status page documented the incident with two updates: first noting increased error rates and ongoing investigation, then confirming recovery and restoration of services in AP-SOUTHEAST-1. Impact was regional, with no clear evidence of a broader global effect.