On December 11, 2025, GitHub experienced intermittent disruptions that frustrated users across the globe. Developers everywhere started seeing seemingly random errors: 503s, unicorn error pages, and CI pipeline failures. It quickly became clear that something was wrong, even though GitHub’s status page still said ALL SYSTEMS OPERATIONAL.
After the incident was over, GitHub published a postmortem that revealed the cause: scrapers.
“GitHub experienced an increase in scraper activity on public parts of our website. This scraper activity caused a low priority web request pool to increase and eventually exceed total capacity resulting in users experiencing 500 errors.”
Automated tools hit GitHub with enough traffic to overwhelm key backend systems, causing failures in browsing repositories, viewing raw files, downloading releases, and running GitHub Actions CI workflows. The impact was large enough that GitHub eventually opened two separate incidents on its status page.
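To see why a saturated request pool produces errors for everyone, not just scrapers, here is a minimal sketch of the failure mode GitHub described: a bounded pool of workers for low-priority requests. The pool size and rates below are made up for illustration; this is our toy model, not GitHub's actual design. Once arrivals outpace completions, the pool stays full and every further request is rejected.

```python
POOL_SIZE = 16          # hypothetical capacity of the low-priority request pool
COMPLETE_PER_TICK = 5   # hypothetical requests the pool finishes each tick

def simulate(arrivals_per_tick: int, ticks: int = 50) -> float:
    """Return the fraction of requests rejected (503) at a given arrival rate."""
    in_flight, served, rejected = 0, 0, 0
    for _ in range(ticks):
        in_flight = max(0, in_flight - COMPLETE_PER_TICK)  # finish some work
        for _ in range(arrivals_per_tick):
            if in_flight >= POOL_SIZE:
                rejected += 1       # pool exhausted: the request gets a 503
            else:
                in_flight += 1      # slot occupied until the request completes
                served += 1
    return rejected / (served + rejected)

print(f"normal traffic: {simulate(4):.0%} rejected")   # under capacity: 0%
print(f"scraper surge:  {simulate(40):.0%} rejected")  # pool overwhelmed: ~87%
```

The key point the sketch illustrates: once scraper traffic fills the pool, legitimate user requests land in the same rejected bucket, which is why ordinary browsing broke alongside the scraping.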
In the timeline below, we piece together the progression of the outage using StatusGator data, real user reports, and public discussion, all of which help paint a clear picture of how it unfolded.
GitHub’s Postmortem: Scrapers Overloaded Core Systems
GitHub explained that aggressive scraping traffic bypassed caching layers and overloaded core backend services. This caused a range of errors that users saw during the outage, including:
- 503 and 504 responses
- Unicorn errors when trying to view files
- Failed artifact and release downloads
- CI jobs that could not fetch required resources
GitHub attached this root cause to only one of the two incidents, but the symptoms throughout the day consistently matched the pattern.
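Because failures like these are transient, one common mitigation (our suggestion, not something GitHub's postmortem prescribes) is to wrap CI download steps in retries with exponential backoff. A minimal sketch using only the Python standard library:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry(url: str, attempts: int = 5) -> bytes:
    """Download url, retrying transient 5xx errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            # Only retry the 5xx codes seen during this outage; 4xx is permanent.
            if err.code not in (500, 502, 503, 504) or attempt == attempts - 1:
                raise
        except urllib.error.URLError:
            if attempt == attempts - 1:
                raise                   # network-level failure, out of retries
        time.sleep(2 ** attempt)        # back off: 1s, 2s, 4s, 8s ...

# Example call with a real public archive URL, shown for illustration only.
data = fetch_with_retry(
    "https://github.com/octocat/Hello-World/archive/refs/heads/master.tar.gz"
)
```

A retry loop like this will not save a job during a multi-hour outage, but it does absorb the intermittent 503s and 504s that characterized the early phase of this one.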
Timeline of the December 11 Outage
13:39 UTC: Early Trouble Begins
The first StatusGator reports arrived at 13:39 UTC. Some examples include:
- “Service outage” – Poland
- “Error messages” – Switzerland
- “Service down” – Czechia

13:45 to 13:50 UTC: Reports Increase
More users started reporting problems:
- “Unable to download” – Germany
- “Server not responding” – United States
- “No server is currently available to service your request.” – India
At this point the first reports from users in the US arrived, and the overall volume of reports began to climb.
13:51 UTC: StatusGator Sends an Early Warning Signal
StatusGator’s Early Warning Signals algorithm recognized the pattern and issued an alert to teams monitoring GitHub at 13:51 UTC. At this time:
- GitHub’s status page still showed a green indicator
- Many developers were already seeing failures
- The outage was clearly global
This was the first authoritative alert that the incident was underway. Three minutes later, a thread appeared on Hacker News: “Is GitHub Down?”
14:00 to 16:00 UTC: Outage Peaks Around the World
StatusGator saw a surge of reports from Europe, North America, India, Southeast Asia, and Australia. Developers reported a similar set of problems during this window:
- “503 gateway timeout when downloading release artifacts”
- “Intermittent 504s on every page load”
- “GitHub Release 503. CI failing everywhere”
- “Service down 504 GW Timeout errors”
- “Unicorn Error, even with the status page showing green”
GitHub Posts the First Official Acknowledgement
At 15:47 UTC, roughly two hours after StatusGator sent its first notification, GitHub finally updated its status page with an “Investigating” message.
As we all know, status pages lag (some more than others), which is why StatusGator sends Early Warning Signals. We also rate providers on their transparency; at the time of writing, we rate GitHub a C.
Around this time the first threads began to appear on Reddit as well.
What This Outage Says About Today’s Internet
This was yet another GitHub outage that frustrated thousands of developers around the world, and such incidents are becoming more common. A Hacker News user even posted a “Days since the Last GitHub Outage” project.
The fact that it was caused by scrapers makes it even more noteworthy, as an ongoing battle is brewing between scrapers and publishers. Many scrapers are tied to AI tools, data collection systems, and companies racing to gather training material for LLMs. Public code and metadata have become valuable resources, and the pressure from this automated demand is growing quickly.
GitHub’s postmortem showed how scraping activity can overwhelm backend systems and create real problems for millions of developers. It also showed how fast these failures unfold. During this outage, users and independent monitors saw the impact long before official dashboards updated.
As AI ecosystems expand and the value of open code continues to rise, incidents like this may become more common. Platforms will keep adapting, but teams also need ways to detect outages early. StatusGator surfaced this one before the first official acknowledgement, offering visibility during the most uncertain part of the event.
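As a trivial illustration of the idea (not StatusGator's actual method, which is far more involved), a team can probe public endpoints directly rather than trusting the official dashboard alone. Even a few lines of standard-library Python would have surfaced the 5xx responses developers were seeing while the status page was still green; the endpoints below are examples:

```python
import urllib.error
import urllib.request

# Example endpoints to probe; a real synthetic check would cover the
# specific operations a team depends on (clones, releases, API calls).
ENDPOINTS = [
    "https://github.com",
    "https://api.github.com",
]

def probe(url: str) -> str:
    """Return a one-line health summary for a single endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return f"{url}: {resp.status}"
    except urllib.error.HTTPError as err:
        return f"{url}: {err.code}"                 # 5xx here = trouble upstream
    except urllib.error.URLError as err:
        return f"{url}: unreachable ({err.reason})"

for endpoint in ENDPOINTS:
    print(probe(endpoint))
```

Run on a schedule and wired to an alert, a check like this reports what users actually experience, which is exactly the gap between reality and official dashboards that this outage exposed.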
Understanding how and why these outages happen helps developers stay prepared in a world where automated traffic is only increasing. For an early notification about the next GitHub outage, try StatusGator.
December 11th GitHub Outage Timeline Summary
- GitHub states the scraping activity began at 13:25 UTC
- The earliest StatusGator reports arrived at 13:39 UTC
- StatusGator’s Early Warning Signal alerted at 13:51 UTC
- GitHub acknowledged the issue at 15:47 UTC
- Reports to StatusGator subsided around 18:30 UTC
- GitHub stated the scraping activity ended around 18:35 UTC
- The official GitHub incident was resolved at 20:05 UTC