GitHub Status

All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com

- Git Operations: Operational (99.78% uptime over the past 90 days)
- Webhooks: Operational (99.66% uptime over the past 90 days)
- API Requests: Operational (99.95% uptime over the past 90 days)
- Issues: Operational (99.77% uptime over the past 90 days)
- Pull Requests: Operational (99.45% uptime over the past 90 days)
- Actions: Operational (99.56% uptime over the past 90 days)
- Packages: Operational (99.97% uptime over the past 90 days)
- Pages: Operational (99.95% uptime over the past 90 days)
- Copilot: Operational (99.74% uptime over the past 90 days)
- Codespaces: Operational (99.67% uptime over the past 90 days)
- Copilot AI Model Providers: Operational (100.0% uptime over the past 90 days)
May 8, 2026

No incidents reported today.

May 7, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 7, 06:56 UTC
Update - Copilot code review and cloud agents are starting again for pull requests, and we are monitoring for full recovery.
May 7, 06:14 UTC
Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 7, 06:13 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
May 7, 05:02 UTC
May 6, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 6, 19:04 UTC
Update - Mitigations have been fully applied and we are seeing full recovery of functionality on Pull Request threads. We are continuing to monitor to ensure sustained recovery.
May 6, 19:04 UTC
Update - Creation of new Pull Request threads (including line and file comments) continues to be affected although we are seeing partial recovery.

A mitigation is being applied to continue to accelerate recovery with complete recovery expected by 8:00pm UTC.

Top-level comments on pull requests still function and should remain usable during recovery. Opening and merging pull requests, actions, and other pull request operations remain functional.
May 6, 17:52 UTC
Update - Creation of new Pull Request threads (including line and file comments) continues to be affected.

Top-level comments on pull requests still function and should remain usable during recovery. Opening and merging pull requests, actions, and other pull request operations remain functional.

A mitigation is being applied. Recovery is expected to be gradual, with complete recovery expected by 8:00pm UTC.
May 6, 16:20 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
May 6, 16:07 UTC
Update - Creation of new Pull Request threads (including line and file comments) continues to be affected. We have identified the cause and have started taking steps to mitigate it.
May 6, 15:55 UTC
Update - We are investigating failures for new thread creation on Pull Requests. Responses to existing pull request threads are unaffected.
May 6, 15:28 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests.
May 6, 15:25 UTC
Resolved - On May 6, 2026 between 11:02 UTC and 11:13 UTC, users were unable to start or view Copilot Cloud Agent or remote sessions. During this time, requests to the session API returned errors, preventing users from creating new sessions or viewing existing ones. The issue was caused by a configuration change to the service's network routing that inadvertently removed the ingress path for the service. The team reverted the change at 11:13 UTC which restored service. The incident remained open until 11:59 UTC while the team verified full recovery. We are taking steps to improve our deployment validation process to prevent similar configuration changes from impacting production traffic in the future.
May 6, 11:59 UTC
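
As a rough illustration of the kind of deployment validation this points to (a sketch only, not GitHub's tooling; the config shape and service name are invented), a pre-deploy check could refuse any routing change that leaves the session API with no ingress path:

    def validate_routing(config: dict) -> None:
        """Reject a routing config that removes all ingress for the service.

        Hypothetical check: "routes" and "cloud-agent-sessions" are invented
        names standing in for whatever the real config uses.
        """
        ingress = [
            route["path"]
            for route in config.get("routes", [])
            if route.get("service") == "cloud-agent-sessions"
        ]
        if not ingress:
            raise ValueError(
                "change removes every ingress path for cloud-agent-sessions; "
                "refusing to deploy"
            )
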
Update - We have applied a mitigation and Copilot services have recovered.
May 6, 11:59 UTC
Update - We are investigating issues with the ability to start Copilot Cloud Agent sessions and view them.
May 6, 11:25 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
May 6, 11:21 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 6, 09:44 UTC
Update - Actions wait times have fully recovered.
May 6, 09:44 UTC
Monitoring - The degradation affecting Actions has been mitigated. We are monitoring to ensure stability.
May 6, 09:19 UTC
Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
May 6, 09:08 UTC
Update - Actions is experiencing issues with Ubuntu standard hosted runners, leading to high wait times. We are actively investigating the issue.
May 6, 08:00 UTC
Investigating - We are investigating reports of degraded availability for Actions.
May 6, 07:19 UTC
May 5, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 5, 18:35 UTC
Update - We've completed our mitigation to prevent further impact. At this time the incident is considered resolved.
May 5, 18:35 UTC
Monitoring - The degradation affecting Git Operations has been mitigated. We are monitoring to ensure stability.
May 5, 18:25 UTC
Update - We're continuing to work on preventing further impact from the earlier issue. No SSH-based impact is expected at this time. We'll post new updates if impact recurs or once our mitigation is in place.
May 5, 17:26 UTC
Investigating - Git Operations is experiencing degraded performance. We are continuing to investigate.
May 5, 17:23 UTC
Monitoring - Between approximately 14:00 and 16:10 UTC, customers using SSH-based Git operations may have experienced elevated latency and failures. HTTP-based operations were not impacted. We've identified a suspected root cause and are working to implement a mitigation to prevent further impact.
May 5, 16:54 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
May 5, 16:49 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 5, 17:26 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
May 5, 17:11 UTC
Update - Standard hosted runners have now reached full recovery. Hosted Runners with Private Networking in the East US region remain degraded as we continue working with our compute provider to restore capacity. Hosted Runners with private networking can fail over to a different region to mitigate the issue.
May 5, 17:11 UTC
Update - We've seen signs of recovery for Standard Hosted Runners and are continuing to monitor for full recovery. Hosted Runners with Private Networking in the East US region remain affected as we continue working with our compute provider to restore capacity.
May 5, 16:33 UTC
Update - We've applied a mitigation for long queue times and failures on Standard Hosted Runners and are monitoring for full recovery. Hosted Runners with Private Networking in the East US region remain affected as we continue working with our compute provider to restore capacity.
May 5, 15:54 UTC
Update - We are working with our compute provider to alleviate elevated queue times and failures for Actions Jobs running on Hosted Runners in the East US region, affecting 10% of runs. Hosted Runners with private networking can fail over to a different region to mitigate the issue.
May 5, 15:12 UTC
Update - We are investigating elevated queue times and failures on Actions Jobs running on Hosted Runners in East US affecting 8% of runs. Hosted Runners with private networking can fail over to a different Azure region to mitigate the issue.
May 5, 14:14 UTC
Update - We are investigating elevated queue times on Actions Jobs running on Standard Hosted Runners in East US, affecting 10% of runs.
May 5, 13:48 UTC
Investigating - We are investigating reports of degraded availability for Actions.
May 5, 13:37 UTC
May 4, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 4, 16:40 UTC
Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 4, 16:36 UTC
Update - Webhooks is operating normally.
May 4, 16:35 UTC
Update - The degradation affecting Codespaces has been mitigated. We are monitoring to ensure stability.
May 4, 16:35 UTC
Update - The degradation affecting Issues has been mitigated. We are monitoring to ensure stability.
May 4, 16:34 UTC
Update - Pull Requests is operating normally.
May 4, 16:32 UTC
Update - Pages is operating normally.
May 4, 16:29 UTC
Update - Latency across services has normalized. We are continuing to investigate the root cause and prevent recurrence.
May 4, 16:29 UTC
Update - Actions and Packages are operating normally.
May 4, 16:28 UTC
Update - Git Operations is operating normally.
May 4, 16:25 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
May 4, 16:06 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
May 4, 16:05 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
May 4, 15:56 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
May 4, 15:51 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
May 4, 15:51 UTC
Update - Packages is experiencing degraded performance. We are continuing to investigate.
May 4, 15:50 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
May 4, 15:48 UTC
Update - We are investigating increased latency and timeouts across multiple GitHub services.
May 4, 15:48 UTC
Investigating - We are investigating reports of degraded performance for Issues and Webhooks.
May 4, 15:45 UTC
May 3, 2026

No incidents reported.

May 2, 2026

No incidents reported.

May 1, 2026
Resolved - On April 28, 2026, at approximately 14:07 UTC, GitHub received reports that pull requests were missing from search results across global and repository /pulls pages.

The issue was caused by a manually invoked repair job intended for a single repository, which was executed without the required safety flags. During execution of the repair job, the database query remained correctly scoped to the repo’s PR IDs. However, the Elasticsearch reconciliation logic did not apply the same scope. It interpreted the min and max PR IDs as a continuous range, causing unrelated PR documents across other repos to be marked for deletion. This resulted in the removal of 1,789,756,838 PR documents from the search index, approximately 49% of indexed PR documents.
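
As a rough sketch of that mismatch (illustrative Python only, not GitHub's repair code; the client, index name, and field names are assumptions, using an elasticsearch-py 8.x-style call):

    from elasticsearch import Elasticsearch  # assumed client library

    es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

    def reconcile_unscoped(pr_ids: list[int]) -> None:
        # Buggy shape described above: only the min/max PR IDs reach the
        # index query, so documents from unrelated repositories whose IDs
        # fall inside [lo, hi] are deleted as well.
        lo, hi = min(pr_ids), max(pr_ids)
        es.delete_by_query(
            index="pull_requests",
            query={"range": {"id": {"gte": lo, "lte": hi}}},
        )

    def reconcile_scoped(repo_id: int, pr_ids: list[int]) -> None:
        # Scope-aligned shape: the index query is pinned to the same
        # repository as the database query that produced pr_ids.
        lo, hi = min(pr_ids), max(pr_ids)
        es.delete_by_query(
            index="pull_requests",
            query={"bool": {"filter": [
                {"term": {"repository_id": repo_id}},
                {"range": {"id": {"gte": lo, "lte": hi}}},
            ]}},
        )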

Customer impact was limited to PR search and list discoverability. Primary storage was unaffected, and there was no impact to opening, updating, or merging PRs.

The issue was identified ~10 minutes after initial customer reports. Because it affected search index completeness rather than service availability, it was not caught by existing monitoring.

The root cause was a flaw in the search document repair framework: it allowed a scoped reconciliation to run without enforcing a matching Elasticsearch query scope. This created a destructive mismatch between the source-of-truth and the index. The issue was compounded by the ability to trigger the job from the production console without safety defaults. Prior testing focused only on safe backfill scenarios and did not cover this reconciliation path. Additionally, there was no automated detection for large-volume deletions in Elasticsearch.

We mitigated the incident through three parallel actions: (1) deployed a MySQL-backed search fallback for the most active repos by traffic to restore PR visibility for highly impacted users; (2) initiated a snapshot restore and reindex process to repopulate missing pull request documents in Elasticsearch; (3) added a degradation notice on PR pages to inform users of incomplete search results while recovery was in progress. The incident was resolved on May 1, 2026 at 04:15 UTC, following completion and validation of the reindex process.
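
For context on step (2), a snapshot restore in Elasticsearch is driven by its standard snapshot API; a minimal sketch (the repository, snapshot, and index names here are invented) looks like:

    import requests

    # Restore only the affected index from a known-good snapshot.
    resp = requests.post(
        "http://localhost:9200/_snapshot/backups/snap-2026-04-27/_restore",
        json={"indices": "pull_requests", "include_global_state": False},
    )
    resp.raise_for_status()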

To prevent recurrence, we are prioritizing improvements to the repair framework and safeguards. These include enforcing scoped query alignment between primary storage and Elasticsearch, preventing destructive operations without explicit opt-in, strengthening guardrails for manual repair jobs, and evaluating restrictions on production console access.

In parallel, we are expanding automated test coverage for reconciliation safety invariants and introducing detection for anomalous deletion patterns in Elasticsearch so similar issues can be identified or blocked earlier.
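
One simple form that deletion-anomaly detection could take (a sketch under assumptions, not GitHub's monitoring; the threshold and polling interval are invented) is to watch document counts and alert on large drops between samples:

    import time
    from elasticsearch import Elasticsearch  # assumed client library

    es = Elasticsearch("http://localhost:9200")  # placeholder endpoint
    ALERT_FRACTION = 0.05  # flag if more than 5% of documents disappear

    def watch_index(index: str, interval_s: int = 60) -> None:
        """Alert when an index loses an anomalously large share of docs."""
        previous = es.count(index=index)["count"]
        while True:
            time.sleep(interval_s)
            current = es.count(index=index)["count"]
            if previous and (previous - current) / previous > ALERT_FRACTION:
                print(f"ALERT: {index} fell from {previous} to {current} docs")
            previous = current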

We are committed to improving the safety and reliability of our repair systems and ensuring that operational workflows are resilient to both software defects and manual invocation risks.
May 1, 04:15 UTC
Update - This incident has been resolved. Search and indexing functionality for pull requests are now fully restored. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 1, 04:11 UTC
Update - We have repaired the missing search records for affected Pull Requests and are working to identify and repair records left in a stale state after the recovery.
Apr 30, 03:49 UTC
Update - We have restored search/indexing functionality for over 99% of impacted pull requests. We are continuing to address the remaining affected pull requests and are reviewing outstanding gaps as part of the restoration process.
Apr 29, 22:22 UTC
Update - Mitigation is in progress, with full recovery of impacted pull request listings expected within approximately 24 hours.
Apr 29, 00:40 UTC
Update - We have made an interim mitigation to improve availability for some impacted repositories while reindexing continues, and we are actively monitoring the indexing progress.
Apr 28, 22:46 UTC
Update - Elasticsearch reindexing of pull requests is continuing. All data is preserved, but may not be available on pages relying on Elasticsearch until the reindex is complete.

Pages and APIs that do not rely on Elasticsearch, including the GitHub CLI (gh pr list) and API (/repos/{owner}/{repo}/pulls), are not impacted and can be used to retrieve pull request data in the interim.
Apr 28, 21:43 UTC
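
For reference, the unaffected API path mentioned above is the standard REST endpoint for listing pull requests; a minimal Python sketch (the owner/repo below are placeholders) would be:

    import requests

    # /repos/{owner}/{repo}/pulls does not depend on Elasticsearch, so it
    # returned complete results during the reindex.
    resp = requests.get(
        "https://api.github.com/repos/octocat/hello-world/pulls",
        headers={"Accept": "application/vnd.github+json"},
        params={"state": "open", "per_page": 50},
    )
    resp.raise_for_status()
    for pr in resp.json():
        print(pr["number"], pr["title"])
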
Update - We are actively reindexing the remaining Elasticsearch indices. Our priority is ensuring correctness and avoiding further impact. We are taking a measured approach to safely backfill data and will share additional updates as progress continues.
Apr 28, 15:58 UTC
Update - After yesterday’s incident, we are investigating cases where /pulls and /repo/pulls pages are not showing all indexed pull requests. This is because our Elasticsearch cluster does not currently contain all indexed documents.

No pull request data has been lost. As pull requests are updated, they will be reindexed. We are also working on accelerating a full reindex so these pages return complete results again.
Apr 28, 14:51 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests.
Apr 28, 14:17 UTC
Apr 30, 2026
Apr 29, 2026
Apr 28, 2026
Resolved - On April 28, 2026, from approximately 12:41 UTC to 17:09 UTC, GitHub Actions jobs using Standard Ubuntu 22 and Ubuntu 24 hosted runners experienced run start delays. Approximately 8% of hosted runner jobs using Ubuntu 22 and Ubuntu 24 experienced delays greater than 5 minutes or failures. Larger and self-hosted runners were not impacted.

This was caused by a performance regression introduced in the VM reimage process. That reimage delay lowered the overall capacity of runners available to pick up new jobs. This was mitigated with a rollback to a known good image version.

We are addressing the core issue with reimage performance and improving the granularity of reimage telemetry across our services and our compute provider to more quickly diagnose similar issues in the future. Finally, we are evaluating other rollout changes to automatically detect similar regressions.
Apr 28, 17:09 UTC
Monitoring - Actions is operating normally.
Apr 28, 17:08 UTC
Update - Less than 1% of hosted ubuntu-latest runs are delayed. We’re working through remaining steps to restore runner capacity.
Apr 28, 16:36 UTC
Update - Currently less than 2% of hosted ubuntu-latest and ubuntu-24.04 runs are delayed or failing. We are continuing to monitor for full recovery.
Apr 28, 15:41 UTC
Update - We've applied a mitigation to unblock running Actions. We're continuing to monitor.
Apr 28, 15:20 UTC
Update - We're still investigating the root cause of run start delays and failures for Actions hosted Ubuntu jobs; around 5% of jobs are currently impacted.
Apr 28, 14:49 UTC
Investigating - Actions is experiencing degraded performance. We are continuing to investigate.
Apr 28, 14:02 UTC
Monitoring - Actions is experiencing capacity constraints with hosted ubuntu-latest and ubuntu-24.04, leading to high wait times. Other hosted labels and self-hosted runners are not impacted.
Apr 28, 13:59 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Apr 28, 13:59 UTC
Apr 27, 2026
Resolved - On April 27, 2026 between 16:15 UTC and 22:46 UTC, GitHub search services experienced degraded connectivity due to saturation of the load balancing tier deployed in front of our search infrastructure. This resulted in intermittent failures for services relying on our search data, including Issues, Pull Requests, Projects, Repositories, Actions, Package Registry, and Dependabot Alerts. The impact varied by search target, with services seeing up to 65% of searches timing out or returning an error between 16:15 UTC and 18:00 UTC.

We detected the drop in search results through our ongoing monitoring and declared an incident at 16:21 UTC when we determined the issues would not self-heal. We tracked the incident as mitigated as of 21:33 UTC and monitored the systems until 22:46 UTC, when we declared the incident resolved. Our existing monitoring did not classify the increased scraping as a risk, and this dimension of the incident was only discovered while working to mitigate.

The saturation was caused by a large influx of anonymous distributed scraping traffic that was crafted to avoid our public API rate limits. This scraping traffic made up 30% of the day’s total search traffic, but it was concentrated within a four-hour period. The traffic originated from over 600,000 unique IP addresses, with matching actor information across the board.

To mitigate, we immediately focused on relieving pressure from the load balancers while simultaneously scaling the load balancing tier, blocking the anomalous traffic, and applying tuning to the balancers to fully resolve the incident.

Looking ahead, we’ve not only scaled the load balancer tier but also applied optimizations to improve our connection handling and reuse, reducing the possibility that a saturation event like this recurs. We’ve also added new monitors and controls within the platform that allow us to restrict anonymous traffic and mitigate the impact to our registered users.
Apr 27, 22:46 UTC
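
As an illustration of the "restrict anonymous traffic" control (a sketch only; GitHub's actual mechanism is not described beyond the summary above, and the window, limit, and fingerprint key are invented), a limiter keyed on a shared request fingerprint rather than source IP would not be defeated by rotating across 600,000+ addresses:

    import time
    from collections import defaultdict, deque

    WINDOW_S = 60           # invented sliding window
    MAX_ANON_REQUESTS = 30  # invented per-window budget

    _buckets: dict[str, deque] = defaultdict(deque)

    def allow(fingerprint: str) -> bool:
        """Admit or reject an anonymous search request by fingerprint."""
        now = time.monotonic()
        bucket = _buckets[fingerprint]
        while bucket and now - bucket[0] > WINDOW_S:
            bucket.popleft()  # drop requests that fell out of the window
        if len(bucket) >= MAX_ANON_REQUESTS:
            return False
        bucket.append(now)
        return True
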
Update - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 27, 22:44 UTC
Update - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 27, 22:35 UTC
Monitoring - The degradation affecting Actions, Issues, Packages and Pull Requests has been mitigated. We are monitoring to ensure stability.
Apr 27, 21:33 UTC
Update - We've applied a mitigation and are continuing to monitor.
Apr 27, 21:32 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Apr 27, 20:06 UTC
Update - We have identified the source of the additional load causing stress on our Elasticsearch clusters. We have disabled the source of that load and are seeing signs of recovery.
Apr 27, 19:50 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
Apr 27, 18:19 UTC
Update - We're continuing to see connectivity issues reaching Elasticsearch. Impact on downstream services will be intermittent while we find the root cause.
Apr 27, 18:17 UTC
Update - Users are experiencing intermittent failures to view issues, pull requests, projects and Actions workflow runs.

We are still investigating and attempting mitigations. We will provide further updates.


Apr 27, 17:35 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Apr 27, 16:53 UTC
Update - Packages is experiencing degraded performance. We are continuing to investigate.
Apr 27, 16:39 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Apr 27, 16:36 UTC
Update - Customers across GitHub are experiencing failures with searches. Examples include: workflow run failures, projects failing to load, and timed out search requests. This is due to an ongoing infrastructure issue that we have been investigating.
Apr 27, 16:33 UTC
Investigating - We are investigating reports of degraded performance for Actions.
Apr 27, 16:31 UTC
Resolved - On April 22, 2026 from 18:49 to 19:32 UTC, the Copilot Cloud Agent service began failing during session execution for users running the Agent HQ Codex agent. Codex agent sessions failed to start for all entry points (issue assignment, @copilot comment mentions). 0.5% of total Copilot Cloud Agent jobs were impacted (~2,000 failed jobs). Copilot and other agent sessions were unaffected.

This was caused by a model resolution mismatch in Codex agent sessions, resulting in an incompatible model being used at runtime. A mitigation was deployed to select a stable default model for Codex agent sessions.

We are working to harden the underlying model-resolution path so it correctly scopes to the requesting agent's supported models to prevent similar failure modes in the future.
Apr 27, 19:02 UTC
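
As a rough sketch of the hardened resolution path described above (illustrative only; the catalog, model names, and defaults are invented), resolving strictly within the requesting agent's supported models keeps an incompatible model from reaching runtime:

    # Hypothetical per-agent model catalog.
    AGENT_MODELS = {
        "codex": {"supported": {"model-a", "model-b"}, "default": "model-a"},
        "copilot": {"supported": {"model-c"}, "default": "model-c"},
    }

    def resolve_model(agent: str, requested: str | None) -> str:
        """Pick a model scoped to the requesting agent's supported set."""
        cfg = AGENT_MODELS[agent]
        if requested in cfg["supported"]:
            return requested
        # Fall back to the agent's own stable default rather than letting
        # another agent's model through unvalidated.
        return cfg["default"]
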
Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 27, 19:01 UTC
Update - We've found the issue and are working on deploying a solution to get Codex agent runs working again.
Apr 27, 17:26 UTC
Update - Copilot Cloud Agent (CCA) jobs using the Codex agent are failing after starting. To avoid this issue, please choose a different agent. We are investigating the cause and working towards remediation.
Apr 27, 17:01 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Apr 27, 16:48 UTC
Apr 26, 2026

No incidents reported.

Apr 25, 2026
Resolved - On April 24, 2026, from approximately 11:39 UTC to April 25, 2026 at 00:15 UTC, GitHub Actions experienced delays and timeouts for Larger Hosted Runner jobs using VNet injection in the East US region without a failover region configured. Standard and self-hosted runners were not impacted. This was caused by backend failures in our compute provider’s provisioning, scaling, and update operations for VMs in the East US region, and was mitigated by a rollback across all affected Availability Zones. More detail is available at https://azure.status.microsoft/en-us/status/history/?trackingId=5GP8-W0G.

We are working to improve the reliability of our annotations for jobs impacted by regional issues and are adding system log notifications as an additional customer communication channel alongside annotations.

VNet Failover is also now in public preview, allowing customers to evacuate Larger Hosted Runners using VNet injection in cases like this.
Apr 25, 00:36 UTC
Update - This is related to the public impact, "Multiservice impact for Azure Workloads in East US" shared at https://azure.status.microsoft/
Apr 24, 19:14 UTC
Update - We are investigating reports of degraded performance for Larger Runners with VNet injection in East US, and we are working with our service provider on mitigation.
Apr 24, 19:09 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Apr 24, 19:02 UTC
Apr 24, 2026