2025: The year of the global cloud outage

    Amazon Web Services Outage History

    There have been 37 Amazon Web Services outages since January 2025; they are summarized below, including incident details, durations, and resolution information.

    Minor

    AWS US-West-2 Athena degraded by queue delays

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (1 hour and 55 minutes)

    In the US-West-2 region (Oregon), Amazon Athena experienced degraded performance due to increased queue processing times. This impacted users running Athena queries in that region, resulting in slower query execution and longer wait times. The degradation began at and was resolved by , affecting the Oregon / Amazon Athena component during this period.
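    Queue delays like this surface to clients as queries that sit in a non-terminal state longer than usual. A minimal, hypothetical sketch of how a client can poll query state with an explicit timeout, so degraded queue processing produces a clear error instead of an indefinite hang (in a real Athena client, `get_state` would wrap the `GetQueryExecution` API; here it is any callable returning a state string):

    ```python
    import time

    def wait_for_query(get_state, timeout_s=300.0, poll_interval_s=1.0, sleep=time.sleep):
        """Poll get_state() until a terminal state or the timeout elapses."""
        deadline = time.monotonic() + timeout_s
        while True:
            state = get_state()
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                return state
            if time.monotonic() >= deadline:
                raise TimeoutError(f"query still {state} after {timeout_s}s")
            sleep(poll_interval_s)

    # Simulate a query that waits in the queue before running, as in this incident.
    states = iter(["QUEUED", "QUEUED", "RUNNING", "SUCCEEDED"])
    result = wait_for_query(lambda: next(states), timeout_s=10, sleep=lambda s: None)
    ```

    The function and its parameters are illustrative, not an official SDK interface; the state names match Athena's documented query execution states.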

    Minor

    AWS inter-region latency between US-EAST-1 and US-EAST-2

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (45 minutes)

    The AWS network experienced increased latency between the US-EAST-1 and US-EAST-2 regions, with Ohio / AWS Internet Connectivity listed as affected. The outage began at and was resolved by , lasting about 45 minutes. AWS investigated the issue, identified the scope of impact, and issued updates as they worked to restore normal network performance. This degradation affected cross-region traffic and could have impacted services relying on inter-region connectivity. The AWS status page published 2 updates during the incident.

    Minor

    AWS Tokyo AZ apne1-az1 network issues degraded services

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (5 minutes)

    The AWS Tokyo region experienced degraded network connectivity in a single Availability Zone (apne1-az1) within AP-NORTHEAST-1. Between and , users saw increased packet loss and latency, impacting eight Tokyo-region services including EC2, RDS, AWS Transit Gateway, VPCE PrivateLink, ElastiCache, VPC Lattice, API Gateway, and Direct Connect.

    Two official status updates were issued during the incident, and Tokyo services returned to normal by . The outage lasted approximately 47 minutes, and the impact was confined to the affected Availability Zone within the Tokyo AP-NORTHEAST-1 region, with no broader global disruption.

    Minor

    AWS Hyderabad AP-SOUTH-2 region degraded API and connectivity

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (45 minutes)

    An incident in the AP-SOUTH-2 Hyderabad region caused elevated API error rates and connectivity issues across dozens of AWS services, leading to degraded performance and intermittent failures for users in the region. The issue began at and was resolved by , lasting about 45 minutes. Affected services included API Gateway, EC2, S3, DynamoDB, Lambda, ECS, and many others.

    AWS documented the incident through nine status updates on the AP-SOUTH-2 status page, noting that the cause had been identified and mitigation was underway by 17:51 UTC, with recovery progressing in the following minutes. The outage was localized to the Hyderabad region with no global impact reported, and service returned to normal with the final update at .

    Minor

    AWS EMR launch failures cause brief degraded performance in us-east-1

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (1 hour and 25 minutes)

    Amazon Web Services experienced a brief period of degraded performance for Amazon Elastic MapReduce in the us-east-1 region as some cluster launches failed. The issue began at and was resolved by , lasting about 1 hour and 25 minutes. The incident affected the N. Virginia / Amazon Elastic MapReduce component, with users attempting to launch clusters in us-east-1 encountering errors or delays. The provider issued two status updates during the incident: an initial notice documenting EMR cluster launch failures and the ongoing investigation, followed by an update indicating that the issue had been resolved and service restored.

    Minor

    Amazon Web Services Bedrock Claude 3.7 Sonnet v2 API issues across regions

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (36 minutes)

    An outage affected the Amazon Web Services Bedrock API for Claude 3.7 Sonnet v2, causing degraded performance and intermittent errors across three regions: Oregon, Ohio, and N. Virginia. Applications relying on Claude 3.7 Sonnet v2 likely experienced failed invocations or elevated latency during the incident. The outage began on and was resolved by .

    Minor

    AWS CloudFront invalidation delays with limited impact

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (3 hours and 5 minutes)

    An incident affected AWS CloudFront where propagation times for cache invalidations were longer than usual. The issue began at and lasted until , with CloudFront invalidation changes taking longer to propagate. End-user content delivery from CloudFront edge locations remained unaffected, so customers continued to receive content normally while configuration updates were slower to take effect. The provider issued two status updates during the incident.
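    For context on what was delayed: an invalidation request tells CloudFront which cached paths to purge, and during this incident the requests succeeded but propagated slowly. A hedged sketch of the payload such a request carries (the `build_invalidation_batch` helper is hypothetical; the boto3 call in the comment is the standard API). A unique `CallerReference` makes retries idempotent, which matters when propagation is slow and a retry is tempting:

    ```python
    import uuid

    def build_invalidation_batch(paths, caller_reference=None):
        """Build the InvalidationBatch payload CloudFront expects."""
        return {
            "Paths": {"Quantity": len(paths), "Items": list(paths)},
            "CallerReference": caller_reference or str(uuid.uuid4()),
        }

    batch = build_invalidation_batch(["/index.html", "/assets/*"],
                                     caller_reference="deploy-2025-001")
    # With boto3 this would be submitted as:
    #   boto3.client("cloudfront").create_invalidation(
    #       DistributionId="EXAMPLE123", InvalidationBatch=batch)
    ```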

    Minor

    AWS Tokyo AZ connectivity issues in AP-NORTHEAST-1

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (40 minutes)

    The AWS status page reported connectivity issues in a single Availability Zone (apne1-az4) within the AP-NORTHEAST-1 region, affecting workloads in the Tokyo area. The outage began at and degraded multiple services across the region for about 40 minutes, with full recovery by .

    A total of four official status updates were issued during the incident. By , AWS indicated initial signs of recovery and that other services were recovering as well. By the time services returned to normal, 13 components across services such as Amazon EC2, Amazon CloudWatch, Amazon RDS, AWS Lambda, Amazon ECS, Amazon Redshift, and others were reported back to normal. The impact was localized to the Tokyo region and did not indicate a broader, global disruption.

    Minor

    AWS Bedrock and Transcribe degraded in US East-1 due to API errors

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (1 hour and 55 minutes)

    Amazon Web Services reported degraded performance in the US East (N. Virginia) region affecting Amazon Bedrock endpoints and model usage. Customers accessing Titan, Mistral, and Cohere models via Bedrock saw increased latencies and API error rates. The incident began at and was resolved by , lasting about 1 hour and 55 minutes. A later update expanded the scope to include Amazon Transcribe, with both Bedrock and Transcribe affected before service was restored. AWS issued three status updates during the incident, and the impact was limited to the US East-1 region.

    Minor

    AWS EC2 RunInstances API errors cause regional degradation in two regions

    Detected by StatusGator:
    Officially acknowledged:
    Outage ended: (45 minutes)

    The incident began at with increased error rates on the EC2 RunInstances API in SA-EAST-1 (Sao Paulo), where customers could see 500 responses from API calls and the Management Console when launching new instances. The issue was reported as potentially affecting other AWS services that call RunInstances in their workflows. By , the degradation had expanded to EU-WEST-1 (Ireland), and additional RunInstances-dependent components started experiencing errors. AWS communications indicated an ongoing investigation and recommended retrying requests where possible.

    Recovery progressed over the following minutes, with services returning to normal in stages, and by most components had been restored. AWS issued seven status updates during the incident to share progress and guidance, noting that the issue affected multiple services and regions and that retries could mitigate impact while the root cause was investigated. The outage primarily affected RunInstances workflows in two regions, with ripple effects across related EC2 services.
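    The retry guidance AWS issued during this incident is usually implemented client-side as exponential backoff with jitter. A minimal sketch under stated assumptions: `call` is any function that raises on a transient failure such as a 500 response, and the fake API below stands in for a real RunInstances call:

    ```python
    import random

    def retry_with_backoff(call, max_attempts=5, base_delay_s=0.5, cap_s=30.0,
                           sleep=None, rng=random.random):
        """Retry call() on exception, backing off exponentially with full jitter."""
        sleep = sleep or (lambda s: None)
        for attempt in range(max_attempts):
            try:
                return call()
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts: surface the last error
                # Full jitter: sleep a random fraction of the capped backoff.
                backoff = min(cap_s, base_delay_s * (2 ** attempt))
                sleep(rng() * backoff)

    # Simulate an API that returns 500 twice before succeeding.
    responses = iter([None, None, "instance-launched"])
    def fake_run_instances():
        r = next(responses)
        if r is None:
            raise RuntimeError("500 Internal Server Error")
        return r

    result = retry_with_backoff(fake_run_instances)
    ```

    Real AWS SDKs ship this behavior built in, so the sketch mainly illustrates why the "retry where possible" guidance mitigated impact during the incident.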

    Looking for more AWS outage stats?

    Get real-time insights and historical data on AWS outages.

    Looking to track Amazon Web Services downtime and outages?

    • Receive real-time status updates
    • Show current status on private or public status page
    • Be the first to know if service is down
    • Keep your team informed
    • Monitor alongside other services and websites
    Start monitoring for free