Resolved -
On April 27, 2026 between 16:15 UTC and 22:46 UTC, GitHub search services experienced degraded connectivity due to saturation of the load balancing tier deployed in front of our search infrastructure. This resulted in intermittent failures for services relying on our search data, including Issues, Pull Requests, Projects, Repositories, Actions, Package Registry and Dependabot Alerts. The impact varied by search target, with services seeing up to 65% of searches timing out or returning an error between 16:15 UTC and 18:00 UTC.
We detected the drop in search results through our ongoing monitoring and declared an incident at 16:21 UTC, when we determined the issue would not self-heal. We tracked the incident as mitigated as of 21:33 UTC and monitored the systems until 22:46 UTC, when we declared the incident resolved. Our existing monitoring did not classify the increased scraping as a risk, and this dimension of the incident was only discovered while we worked to mitigate it.
The saturation was caused by a large influx of anonymous distributed scraping traffic that was crafted to avoid our public API rate limits. This scraping traffic made up 30% of the day’s total search traffic, but it was concentrated within a four-hour period. The traffic originated from over 600,000 unique IP addresses, with matching actor information across the board.
To mitigate, we immediately focused on relieving pressure on the load balancers while simultaneously scaling the load balancing tier, blocking the anomalous traffic, and tuning the balancers to fully resolve the incident.
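As a purely illustrative sketch, and not the mechanism actually used in this incident, throttling anonymous traffic per client at a proxy or middleware layer can look like the following Go handler. The client key, limits, and route below are hypothetical assumptions for illustration only.

```go
// Illustrative only: a per-client token-bucket limiter for anonymous requests.
// The client key, rate, and burst values are hypothetical, not production values.
package main

import (
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

var (
	mu       sync.Mutex
	limiters = map[string]*rate.Limiter{}
)

// limiterFor returns (creating if needed) the token bucket for a client key.
func limiterFor(key string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	l, ok := limiters[key]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1), 5) // 1 request/sec, burst of 5 (hypothetical)
		limiters[key] = l
	}
	return l
}

// throttleAnonymous rejects unauthenticated requests that exceed the per-client budget.
func throttleAnonymous(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") == "" { // treat as anonymous
			// Hypothetical client key; a real system would fingerprint more robustly
			// than the remote address alone.
			if !limiterFor(r.RemoteAddr).Allow() {
				http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", throttleAnonymous(mux))
}
```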
Looking ahead, we’ve not only scaled the load balancer tier but also applied optimizations to improve connection handling and re-use, reducing the possibility that a saturation event like this recurs. We’ve also added new monitors and controls within the platform that allow us to restrict anonymous traffic and mitigate the impact on our registered users.
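The connection handling and re-use changes are not detailed above; as a rough sketch of what connection re-use tuning can look like for a service calling a search backend, the Go client below keeps idle upstream connections pooled instead of opening new ones per request. The hostname, pool sizes, and timeouts are assumptions for illustration, not our actual configuration.

```go
// Illustrative only: tuning an HTTP client so bursts of search traffic re-use
// pooled upstream connections rather than saturating the tier with new ones.
package main

import (
	"log"
	"net/http"
	"time"
)

func newSearchClient() *http.Client {
	transport := &http.Transport{
		MaxIdleConns:        1000,             // total idle connections kept for re-use (hypothetical)
		MaxIdleConnsPerHost: 100,              // idle connections kept per backend host (hypothetical)
		IdleConnTimeout:     90 * time.Second, // how long an idle connection stays pooled
		// Keep-alives stay enabled (the default), so repeated requests re-use connections.
	}
	return &http.Client{
		Transport: transport,
		Timeout:   5 * time.Second, // per-request budget (hypothetical)
	}
}

func main() {
	client := newSearchClient()
	// Hypothetical backend endpoint used only to exercise the pooled client.
	resp, err := client.Get("https://search-backend.internal.example/_search")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```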
Apr 27, 22:46 UTC
Update -
The degradation has been mitigated. We are monitoring to ensure stability.
Apr 27, 22:44 UTC
Update -
The degradation has been mitigated. We are monitoring to ensure stability.
Apr 27, 22:35 UTC
Monitoring -
The degradation affecting Actions, Issues, Packages and Pull Requests has been mitigated. We are monitoring to ensure stability.
Apr 27, 21:33 UTC
Update -
We've applied a mitigation and are continuing to monitor.
Apr 27, 21:32 UTC
Update -
Pull Requests is experiencing degraded performance. We are continuing to investigate.
Apr 27, 20:06 UTC
Update -
We have identified the source of the additional load causing stress on our Elasticsearch clusters. We have disabled the source of that load and are seeing signs of recovery.
Apr 27, 19:50 UTC
Update -
Pull Requests is experiencing degraded availability. We are continuing to investigate.
Apr 27, 18:19 UTC
Update -
We're continuing to see connectivity issues reaching Elasticsearch. Impact on downstream services will be intermittent while we work to find the root cause.
Apr 27, 18:17 UTC
Update -
Users are experiencing intermittent failures to view issues, pull requests, projects and Actions workflow runs.
We are still investigating and attempting mitigations. We will provide further updates.
Apr 27, 17:35 UTC
Update -
Pull Requests is experiencing degraded performance. We are continuing to investigate.
Apr 27, 16:53 UTC
Update -
Packages is experiencing degraded performance. We are continuing to investigate.
Apr 27, 16:39 UTC
Update -
Issues is experiencing degraded performance. We are continuing to investigate.
Apr 27, 16:36 UTC
Update -
Customers across GitHub are experiencing failures with searches. Examples include workflow run failures, projects failing to load, and timed-out search requests. This is due to an ongoing infrastructure issue that we are investigating.
Apr 27, 16:33 UTC
Investigating -
We are investigating reports of degraded performance for Actions.
Apr 27, 16:31 UTC