On May 7th and 8th, 2026, Amazon Web Services (AWS) experienced an outage affecting Amazon Elastic Compute Cloud (EC2) in the dreaded US East 1 region. AWS's original region, located in Northern Virginia and known as us-east-1 or simply “US East,” has been the site of some of the internet's most high-profile and destructive outages and remains Amazon's least reliable region.
The public AWS Health Dashboard described an event titled “Increased Error Rate and Latency” for Amazon EC2 in N. Virginia, with the issue centered on impaired EC2 instances and degraded EBS volumes in a single Availability Zone, use1-az4.
An Availability Zone (AZ) is an isolated physical location (one or more datacenters) within a region, and each region is made up of multiple AZs. Best practices dictate that customers deploy across more than one AZ to defend against issues in a single location. Because this outage was confined to a single AZ, the impact was not as widespread as past outages, such as the October 2025 AWS incident that took down half the internet.
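A note on zone identifiers: AZ names like us-east-1a and us-east-1b are mapped to physical zones differently in every AWS account, while zone IDs like use1-az4 identify the physical zone consistently. That may be why the providers below variously cite us-east-1a, us-east-1b, or use1-az4 when describing the same incident. Here is a minimal boto3 sketch for checking your own account's mapping:

```python
# Minimal sketch: list this account's AZ name -> zone ID mapping with boto3.
# AZ names (us-east-1a, ...) are shuffled per AWS account; zone IDs
# (use1-az4, ...) identify the physical zone consistently across accounts.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(f'{az["ZoneName"]} -> {az["ZoneId"]}')

# Example output (varies by account):
#   us-east-1a -> use1-az6
#   us-east-1b -> use1-az4
```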
StatusGator captured 2,370 status changes across 748 services between 12:00 AM and 7:00 AM on May 8, 2026. Some of those were unrelated incidents, scheduled maintenance, or status-page noise. After filtering for status changes that named AWS, us-east-1, use1-az4, EC2, EBS, AWS Health, or language such as “upstream provider,” “cloud provider,” “third-party provider,” or “infrastructure provider,” the downstream reach of the incident became much clearer: More than 150 different cloud services were affected.
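As a rough illustration of that filtering step, here is a hypothetical sketch in Python. The field names and sample data are illustrative, not StatusGator's actual schema or pipeline:

```python
import re

# Keep only status changes whose text references the AWS incident directly
# or through generic dependency language (hypothetical field names).
AWS_PATTERN = re.compile(
    r"\baws\b|aws health|\bus-east-1|use1-az4|\bec2\b|\bebs\b|"
    r"upstream provider|cloud provider|third-party provider|"
    r"infrastructure provider",
    re.IGNORECASE,
)

status_changes = [
    {"service": "ExampleCDN", "message": "Elevated errors in AWS us-east-1"},
    {"service": "ExampleCRM", "message": "Scheduled maintenance this weekend"},
    {"service": "ExamplePay", "message": "Latency traced to an upstream provider"},
]

incident_changes = [c for c in status_changes if AWS_PATTERN.search(c["message"])]
affected_services = {c["service"] for c in incident_changes}
print(f"{len(affected_services)} of {len(status_changes)} services matched")  # 2 of 3
```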
What AWS reported
As StatusGator reported, the official AWS status page initially stated that AWS was investigating instance impairments in a single Availability Zone, use1-az4, in US-East-1. Later updates described a “thermal event”, elevated temperatures, impaired EC2 instances, degraded EBS volumes, elevated error rates, and traffic being shifted away from the affected Availability Zone. The issue was zonal, not region-wide, but many services had workloads, dependencies, or vendor chains tied to that zone.
A single-AZ incident can still produce widespread symptoms when SaaS providers, payment systems, AI services, messaging providers, data platforms, and customer-facing applications depend on infrastructure or vendors concentrated in the affected area.
How we classified the incidents
As a status page aggregator, StatusGator uses both official status pages and user reports to detect outages. We analyzed all the status page data collected during the incident, along with user reports, and divided the affected providers into two confidence levels:
- Confirmed AWS reference: The provider status text specifically referenced AWS, AWS Health, US-East-1, use1-az4, EC2, EBS, N. Virginia, or an AWS Availability Zone.
- Likely AWS-related dependency reference: The provider did not name AWS directly, but referenced an upstream provider, cloud provider, infrastructure provider, third-party provider, vendor dependency, or platform provider during the same incident window.
With over 7,000 services monitored, StatusGator detected incidents from all around the world during this window, and not all of them were related to AWS. We excluded concurrent incidents that clearly pointed somewhere else, including the Canvas/Instructure security incident, IONOS and Fasthosts WordPress accessibility issues, Vocus and Optus local outages, TurboSMTP's Amsterdam data-center fire, Dirty Frag vulnerability notices, Microsoft webhook issues, and carrier-specific SMS incidents that did not tie back to AWS.
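Put together, the classification amounts to something like the following sketch, again illustrative rather than StatusGator's real implementation; the keyword lists mirror the rules above and the excluded services are examples from the list of unrelated incidents:

```python
import re

CONFIRMED = re.compile(
    r"\baws\b|aws health|\bus-east-1|use1-az4|\bec2\b|\bebs\b|"
    r"n\. virginia|availability zone",
    re.IGNORECASE,
)
LIKELY = re.compile(
    r"upstream provider|cloud provider|infrastructure provider|"
    r"third-party provider|vendor dependency|platform provider",
    re.IGNORECASE,
)
# Concurrent incidents known to point elsewhere (see exclusions above).
EXCLUDED_SERVICES = {"Canvas", "IONOS", "Fasthosts", "TurboSMTP"}

def classify(service: str, status_text: str) -> str | None:
    """Return 'confirmed', 'likely', or None for unrelated/excluded services."""
    if service in EXCLUDED_SERVICES:
        return None
    if CONFIRMED.search(status_text):
        return "confirmed"
    if LIKELY.search(status_text):
        return "likely"
    return None

print(classify("Hightouch", "Degraded functionality in AWS us-east-1"))  # confirmed
print(classify("Drip", "Delays caused by a third-party provider"))       # likely
```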
Services with confirmed AWS references
| Service | Reported impact |
|---|---|
| Acquia | Service interruption affecting multiple servers and balancers in AWS Availability Zone us-east-1b. |
| AlayaCare | Pre-emptive alert that parts of AlayaCare solutions hosted on AWS us-east-1 could be impacted. |
| Aptible | Instability in an AWS availability zone in us-east-1; provisioning into the affected zone was blocked. |
| Arkose Labs | Device ID service degradation in US-East-1 caused by a third-party cloud provider; failed over to a healthy region. |
| Braze | US05 processing latency affecting Campaign and Canvas processing, data processing, message sending, and dashboard navigation. |
| BrightSign | AWS thermal event caused periodic pod restarts, brief latency spikes, and possible connection resets. |
| Broadcom Cloud Secure Web Gateway, Broadcom VIP, VMware Cloud Services | AWS use1-az4 impact, including no new host provisioning, no new SDDC deployments, auto-remediation failures, and delayed CloudHealth reporting. |
| Buildkite | AWS single-AZ outage caused delayed notifications, brief latency, error blips, and migration away from the affected AZ. |
| Bullhorn | Bullhorn VMS users saw login latency and application errors; later updates cited an AWS connection issue. |
| Camunda | C8 SaaS in AWS US-East-1 had possible degraded service tied to use1-az4 and us-east-1b. |
| Canva | Some users had problems accessing Canva due to AWS service disruption. |
| Castle | Elevated errors and latency due to upstream provider issues in AWS US-East-1 / us-east-1a. |
| Chargebee | Intermittent US-region API request timeouts due to an AWS platform issue; impact described as minimal. |
| Chartbeat | Third-party AWS service outage affected Chartbeat data processing. |
| Ciptex | Ciptex reported Twilio API 5xx/degradation and later tied the issue to AWS US-East-1. |
| Circle | CCTP attestation delays across blockchains due to AWS degradation. |
| ClickHouse | Increased error rate and latency in AWS US-East-1. |
| Cockroach Labs | Degraded performance in one AWS us-east-1 AZ; clusters remained available while corrective action was taken. |
| Coinbase, Coinbase Prime | Coinbase acknowledged a degradation related to AWS; users were furious about being unable to trade during the incident. |
| Confluent | Elevated error rates and latency on produce and consume operations on AWS us-east-1 AZ4. |
| ConnectWise | Advisory that a third-party AWS degradation could affect NA platform performance, with possible delays if prolonged. |
| Convex | Intermittent downtime and error spikes while jobs and databases failed over or were moved away from affected AWS infrastructure. |
| CyberArk EMP, CyberArk Privilege Cloud, CyberArk Identity Cloud | Login issues for ISP customers in us-east-1; Identity Cloud also posted an AWS advisory with no observed customer impact at that time. |
| DataRobot | Intermittent issues launching new workloads in US Production, especially Custom Models and Custom Applications; existing workloads were not impacted. |
| Datadog US1 | Delayed monitor notifications, data processing delays, and AWS telemetry delays into Datadog. |
| Dutchie | Florida BioTrack issue affected dispensary operations and the ability to complete transactions due to active AWS outage impacting BioTrack. |
| Dynamic Yield | Real-time affinity, API feeds, and reporting delays in USE, later attributed to AWS outages. |
| Dynatrace | Degraded data ingestion for some US deployments due to AWS EC2 N. Virginia issue in use1-az4. |
| Ebanx | MPConnect instability due to AWS cloud provider instability. |
| Flare Systems | Global Search, Event Search, and Tenant Search unavailable due to AWS us-east-1 infrastructure impairment. |
| Flexera | Spot services in US-East-1 had degraded provisioning and instance launch failures, plus database connectivity and reduced request-handling capacity. |
| Gainsight | Rules Engine queue delays for a subset of US1 customers due to infrastructure issue at upstream cloud provider AWS us-east-1 AZ. |
| Gong | Intermittent performance issues, page-load delays, and call-processing impact due to a third-party network disruption within AWS. |
| Hightouch | Degraded functionality in AWS us-east-1; fix implemented and monitored. |
| HubSpot | U.S. portal degradation and loading issues related to AWS outage. |
| Huntress | Portal web errors and agent communication delays; mitigations applied for AWS outage. |
| Hyland | Degraded performance in DCA due to AWS thermal event in us-east-1. |
| Incode | SaaS US platform degradation caused by public cloud provider; failover to a secondary region was activated. |
| Instaclustr | Potential disruptions for customers in AWS US-East-1, especially use1-az4. |
| JFrog | Platform degradation for a subset of AWS US-East-1 customers due to AWS infrastructure issues; recovered through HA and configuration changes. |
| Kore.ai | US voice gateway partial disruption; an SBC server in US-East-1 was down and Kore was working with AWS. |
| LoanPro | Elevated 504 errors and Secure Payments timeouts; stabilized after failover to an unaffected AWS Availability Zone. |
| LogicMonitor | Delayed metric alert evaluation in US-East-1; portal access, log ingestion, and data collection were stated as unaffected. |
| Logz | US-East data ingestion delays for some customers due to AWS outage. |
| Marqeta | Services impacted by AWS us-east-1 outage; mitigations included moving traffic away from the affected region or AZ. |
| Materialize | Some us-east-1 clusters had degraded performance or replica restarts due to upstream thermal event; new provisioning redirected to healthy targets. |
| Mighty Networks | Elevated and intermittent errors caused by an AWS service incident. |
| MongoDB, MongoDB Cloud | Atlas clusters in AWS US-East-1 could have impaired nodes in use1-az4; scaling and configuration changes were delayed while healing actions ran. |
| Netlify | Elevated errors in IAD region for Standard and HP Edge Networks due to upstream issue at AWS. |
| Olo | Ordering disruption due to AWS outage; fix later implemented. |
| Open LMS | AWS operational issue affected Open LMS; performance improved during recovery. |
| Pagely | Subset of EC2 instances in AWS Virginia were unreachable; later updates mentioned restored EBS volumes and EC2 instances. |
| Palo Alto Networks | Cloud NGFW impacted by AWS EC2 N. Virginia use1-az4 issue. |
| Payhawk | Elevated card transaction declines due to card-processing partner issue caused by downstream AWS us-east-1 disruption. |
| Periscope, Sisense | Sisense servers were severely impacted by AWS N. Virginia hardware or power issues; users could see loss of access, dashboard failures, or high latency. |
| Perplexity | Website and API instability; root cause identified in underlying AWS infrastructure. |
| PlanetScale | AWS use1-az4 thermal event advisory; databases were expected to be unaffected. |
| Recorded Future | Application portal disruption due to AWS outage; mitigation restored platform availability. |
| Reddit, Segment | Reddit user experience degraded due to AWS us-east-1; Segment’s Reddit Conversions API destination saw delivery failures because Reddit infrastructure was affected. |
| Redis | Network issues in AWS US-East-1 affecting EC2/EBS-backed services, with possible elevated errors, latency, and intermittent connectivity. |
| Reducto | API latency degradation first tied to downstream cloud provider Modal, then explicitly marked as affected by AWS outage. |
| Rudderstack | US data processing pipeline delays; later AWS load balancer issue affected ingestion and workloads were moved away from the problematic AZ. |
| SavvyMoney | Core application unavailable or partially recovered from AWS us-east-1 infrastructure issue; response times remained elevated. |
| ScyllaDB | Elevated provisioning times and limited impact to AWS US-East-1 clusters; affected nodes replaced and no data loss observed. |
| SentinelOne | NA1 service interruption due to degradation of an AWS Availability Zone. |
| ShipStation | Store connectivity for merchants may have been affected by AWS outage. |
| Skai | Intermittent platform issues across a small number of servers due to AWS US-East-1 outage. |
| Smartling | Elevated API errors initially; later update said AWS outage was impacting some services while Smartling platform and API were stable. |
| Socure | 5xx errors across multiple modules; team worked with AWS on underlying performance issues and later recovered. |
| Splunk | Splunk Cloud Platform environment may have been inaccessible for a limited number of US-East customers due to AWS us-east-1. |
| Sprinklr | Monitored AWS US-East-1 service degradation and a Twilio voice-provider issue; one update said no impact observed at that time. |
| Sumo Logic, Sumo Logic US1, Sumo Logic US2 | Cloud provider AZ failure affected CSE APIs and UI, data indexing, metrics alerting and collection, and tracing collection. |
| Supabase | Supavisor and network connectivity in us-east-1a / us-east-1-az4; affected projects could not access databases or run auth-dependent services. |
| SuperTokens | Shared database stopped responding due to AWS infrastructure incident in us-east-1b, causing outage or partial outage of associated instances. |
| Syncro MSP | 500 errors accessing Organizations page; database disruption tied to active cloud infrastructure provider issue. |
| Synctera | AWS outage impacted card processor vendor; card transactions and digital wallet issuance had intermittent failures, then stabilized. |
| TaxBit | Elevated API errors and latency; shifted affected services to other AZs and recovered. |
| Temporal | Some AWS us-east-1 customers had elevated API latencies; Temporal proactively failed over impacted multi-region replication primary namespaces. |
| Thanx | Subset of orders saw slow or failed checkout due to upstream AWS networking issue in one us-east-1 AZ. |
| Thomson Reuters | ONESOURCE Global Trade login issue and other application degradation; later update said disruption was due to AWS US-East / N. Virginia outage. |
| Titan Mail | Intermittent network issues due to AWS us-east-1 incident; intermittent service impact possible. |
| Udemy | Service disruption from AWS US-East-1 issues; users could see errors signing in or watching content. |
| Vevo | Backstage features affected by AWS external service disruption; users could see slow responses or occasional errors. |
| Voiceflow | Elevated errors and latency across the Voiceflow Platform due to AWS US-East-1 AZ issue. |
| Voucherify | US1 increased API response time and error rate due to cloud provider operational issue in N. Virginia; limited increased latency for about 50 minutes. |
| WPMU DEV | HUB and website outage related to ongoing AWS incident. |
| Workato | Workato Go unavailable; knowledge base creation and retrieval impacted because ingestion services were down. |
| Xero | Third-party hardware issue with AWS impacted platform; fix implemented and customers should be able to use Xero again. |
| Yelp | Degraded reservation and waitlist experiences for East Coast users; mitigation applied while monitoring AWS. |
| Yotpo | Data ingestion from ecommerce platforms and webhook delivery delayed due to ongoing AWS incident; no data loss expected. |
Services that referenced upstream providers
These services did not always name AWS directly in the captured status text. We include them because their wording matched the incident pattern during the same time window. Treat these as lower-confidence than the confirmed AWS references above.
| Service | Reported impact |
|---|---|
| Affirm | Elevated latency across services, timeouts, and errors traced to cloud service provider. |
| Afterpay | Checkout outage and Afterpay Card decline-rate degradation tied to downstream provider latency. |
| Atlas, Atlassian Access, Atlassian Analytics, Atlassian Compass, Atlassian Developer, Bitbucket, Confluence, Guard, Jira Core, Jira Product Discovery, Jira Service Desk, Jira Service Management, Jira Software, Jira Work Management, Loom, Trello | Multiple Atlassian services experienced issues while Atlassian worked with a public cloud provider; asynchronous event processing and product features were affected. |
| AppFox | Workflows for Confluence macros showed broken macro errors due to Confluence infrastructure behavior, likely indirect through Atlassian. |
| Cartesia | Voice Agent service degradation caused by upstream infrastructure provider; mitigation rolled out. |
| Cash App Partners | Checkout and Cash App Pay disruption, plus increased latency, caused by third-party cloud provider. |
| Dispatch | Notification delivery and reports disrupted due to cloud provider issue. |
| DrFirst | US Live Messaging and Chat degraded due to outage in an external third-party platform. |
| Drip | Analytics and timeline event delays due to third-party vendor; core application unaffected. |
| Fivetran | HubSpot and Amazon Selling Partner connector sync failures due to third-party-side 500 errors. |
| FLORA | Generation workflows stuck in pending state due to infrastructure provider outage; backlog later processed. |
| Freshcaller, Freshchat, Freshdesk, Freshdesk Omni, Freshmarketer, Freshsales, Freshservice, Freshworks CRM | Freshworks services reported an issue with a third-party service provider; captured rows did not provide detailed component impact. |
| Genesys Cloud | Elevated error rates in Americas US-East affected ability to route interactions; engineering worked with platform provider. |
| Glia | Service provider disruption affected Glia Platform; reporting errors and intermittent reporting failures observed. |
| Help Scout, HelpScout | Inbound email delays and sluggish behavior due to degraded third-party provider. |
| Heroku | Although Heroku is widely known to run on AWS, it did not name AWS directly; it reported a partial outage caused by an upstream service provider that affected some US-region customers, including connectivity, database latency, and third-party add-on providers. |
| IPVanish | Some services unavailable due to upstream provider issue. |
| Klarna | US service disruption affected shopping, payments, customer service, communications, and Merchant Portal due to cloud infrastructure provider. |
| Klaviyo | Processing delays for segmentation, analytics, campaign and flow audience freshness, and reporting pages due to upstream service provider. |
| LivePix | AI voice and inference provider instability delayed message playback; affected voices were temporarily disabled. |
| Modal, Relevance AI | Modal had broad function, web endpoint, storage, sandbox, and image build degradation; Relevance AI reported Agent Builder and Chat degradation due to third-party providers Modal and Orb. |
| Motive | Card transaction declines due to degradation with a third-party service. |
| Netdata | False-positive node reachability alerts due to upstream cloud provider affecting Kubernetes nodes; underlying workloads remained operational. |
| Orb | Elevated API error rates caused by cloud provider issues; recovery observed. |
| PolygonIO | Delayed or stale crypto market data due to upstream Coinbase outage, likely indirect because Coinbase was AWS-impacted. |
| Rapid7 | Agent management UI intermittent issues and SIEM data processing delays due to vendor dependency. |
| RevenueCat | Increased API error rates and reduced service capacity due to cloud provider incident; recovered while provider incident remained open. |
| Riskified | Order reviews and notifications degraded due to cloud provider outage; resolved with up-to-1-hour visibility lag for impacted orders. |
| Salesforce, Tableau Cloud | Degradation related to Heroku (owned by Salesforce) and MuleSoft, later narrowed to an upstream service provider and underlying infrastructure network routing/connectivity issues. |
| Setu | Digilocker and BAV intermittent errors due to upstream provider. |
| TimeClock Plus | Elevated response times and intermittent errors due to infrastructure provider issue. |
| Tomorrow.io | Flight movement and ground movement data unavailable due to upstream data disruption; later resumed. |
| Twilio | Widespread 5xx errors, latency, and multiple-product degradation; later updates said recovery was being monitored while awaiting vendor updates. |
| V7 GO US, V7 GO EU, V7 GO UAE | V7 GO UAE region service disruption due to an upstream infrastructure provider; likely a separate AWS issue in the Middle East, though an upstream-provider incident appeared on multiple V7 GO regional pages. |
| Visier | API health-check endpoint errors for some US-hosted customers; problem originated from service provider. |
Early Warning Signals caught many outages first
Official status pages often lag or lie outright. Some providers don’t even publish a status page, a shocking thought in 2026. But during the same incident window, StatusGator’s Early Warning Signals also detected user-reported outages for services that had not necessarily acknowledged an incident on their own status page.
In the captured Early Warning Signals queue, high-volume examples included: FanDuel, Suno, Clubhouse, Epic Games, Moodle, Wattpad, Ancestry, Trello, Infor, Riot Games, Udemy, Signal, Garmin Connect, Upstart, Capital One, Webull, QGend, OverDrive, and D2L.
Some of these incidents would go on to be officially confirmed by providers. But Early Warning Signals use StatusGator’s enormous user base to alert before the providers do.
While these early outage alerts do not prove AWS causality on their own, they show what users were experiencing while official status pages were still quiet or delayed.
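The core detection idea is straightforward even though StatusGator's actual Early Warning Signals algorithm is not public. A hypothetical sketch, assuming a simple baseline-and-threshold model: alert when a service's user-report volume spikes well above its recent average.

```python
from collections import deque

class EarlyWarningSignal:
    """Hypothetical illustration: alert when user reports spike above baseline."""

    def __init__(self, window: int = 24, multiplier: float = 5.0, floor: int = 10):
        self.history = deque(maxlen=window)  # report counts per recent interval
        self.multiplier = multiplier         # how far above baseline triggers
        self.floor = floor                   # minimum absolute volume to alert

    def observe(self, reports: int) -> bool:
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        self.history.append(reports)
        # Require a minimum absolute volume so low-traffic services don't false-alarm.
        return reports >= self.floor and reports > self.multiplier * max(baseline, 1.0)

signal = EarlyWarningSignal()
for count in [2, 1, 3, 2, 140]:  # a sudden spike in user reports
    if signal.observe(count):
        print(f"early warning: {count} reports this interval")
```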
What this incident shows about cloud outage visibility
While not as widespread as some AWS outages, the May 7th-8th AWS US-East-1 incident was not simply an EC2 problem visible only to infrastructure teams. It showed up across payments, AI services, messaging, developer tools, analytics platforms, education tools, ecommerce infrastructure, reservations, data pipelines, and consumer apps.
The status-page language varied widely. Some providers named AWS directly. Some named us-east-1, use1-az4, EC2, or EBS. Others used softer dependency language such as “upstream provider,” “third-party provider,” or “cloud provider.” Still others had user-visible problems without acknowledging an incident at the time StatusGator detected reports.
For IT teams keeping track of their dependencies, StatusGator's unique advantage is that it continuously watches both official status pages and user reports and brings them together in one place. If your team depends on the cloud, give StatusGator a try.