[2025-12-10] Incident Thread #181577
-
❗ An incident has been declared: Some Actions customers are experiencing run start delays. Subscribe to this Discussion for updates on this incident. Please upvote or emoji-react instead of commenting "+1" on the Discussion to avoid overwhelming the thread. Any account-specific guidance for this incident will be shared in the thread and on the Incident Status Page.
Replies: 6 comments
-
Update
We're investigating Actions workflow runs taking longer than expected to start.
-
Update
The team continues to investigate issues with some Actions jobs being queued for a long time. We will continue providing updates on the progress towards mitigation.
-
gh repo clone tytydraco/KTweak
/293f748e2ff80af8fd8675bc16869e8eda8fe4922ed6cffb3679b4c29a9d234299ca1b98df69a347c0549277437b7892e96c9adc4c8f42004b062df8
main
pkg install tsu -y && hash -r
-
Hello,

We still have Actions jobs queued waiting for larger runners (ubuntu-latest-4-cores) after this incident, even though the status has gone back to green. Switching to the regular ubuntu-latest runner works, but not for jobs that need the additional resources of the larger runner. Do you have any idea what the issue with the larger runners might be?

[EDIT]: Switching to the Ubuntu 24.04 4-core runner instead of the ubuntu-latest 4-core one seems to work. We would still like to know why latest doesn't.
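For anyone hitting the same thing, here is a minimal workflow sketch of the workaround above. It assumes the organization's larger runners are exposed under the label ubuntu-latest-4-cores (as in the comment) and a hypothetical version-pinned label like ubuntu-24.04-4-cores; the actual names depend on how the larger runners were set up, so treat them as placeholders:

```yaml
name: build

on: [push]

jobs:
  build:
    # Pin the larger runner to a specific Ubuntu version label instead of the
    # ubuntu-latest-4-cores alias, which was still queueing after the incident.
    # The label below is a placeholder; use the label your organization
    # assigned when creating the larger runner.
    runs-on: ubuntu-24.04-4-cores   # previously: ubuntu-latest-4-cores
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build   # placeholder for the actual build step
```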
-
Hi! I'm still experiencing deployment issues with GitHub Actions and wanted to share an update.

I have one specific repository where the deploy workflow used to work normally, but since yesterday's instability it has stopped updating the Docker container on my server. The workflow itself runs successfully and reports no errors, but the container is never redeployed unless I manually restart the VM or use a previous deploy tag, which doesn't help me deploy my testing branch.

I've compared the successful and failing runs, and the logs look almost identical: GitHub Actions sends the SSM command, but the server never receives or executes the update correctly for this repository. The problem only affects this one repo; all other repositories using the exact same deploy structure continue to work normally.

I'm still facing this issue and would really appreciate any guidance or suggestions on what might be happening. Thank you!
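Not sure what changed on the GitHub side, but one thing that can make a run look green while the deploy never happens is that aws ssm send-command returns as soon as SSM accepts the request, not when the instance has actually executed it. Below is a minimal sketch of a deploy step that also waits for the invocation result, assuming the deploy goes through AWS SSM Run Command with the AWS-RunShellScript document and that AWS credentials are configured earlier in the job; the instance ID, paths, and docker compose commands are placeholders, not details from this thread:

```yaml
name: deploy

on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Assumes AWS credentials were configured in an earlier step
      # (for example with aws-actions/configure-aws-credentials).
      - name: Send deploy command via SSM and wait for the result
        run: |
          # send-command only queues the request, so a green workflow run
          # does not by itself prove the instance ran the deploy.
          COMMAND_ID=$(aws ssm send-command \
            --instance-ids "i-0123456789abcdef0" \
            --document-name "AWS-RunShellScript" \
            --parameters 'commands=["docker compose -f /srv/app/docker-compose.yml pull","docker compose -f /srv/app/docker-compose.yml up -d"]' \
            --query "Command.CommandId" --output text)

          # Fail the workflow if the command never reaches Success on the instance.
          aws ssm wait command-executed \
            --command-id "$COMMAND_ID" \
            --instance-id "i-0123456789abcdef0"

          # Print the invocation status and output for debugging.
          aws ssm get-command-invocation \
            --command-id "$COMMAND_ID" \
            --instance-id "i-0123456789abcdef0"
```

With the waiter in place, a run that finishes green at least confirms the instance reported the command as executed, which should help narrow down whether the problem is on the GitHub Actions side or on the server.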
-
Incident Summary

On December 10, 2025 between 08:50 UTC and 11:00 UTC, some GitHub Actions workflow runs experienced longer-than-normal wait times for jobs starting or completing. All jobs successfully completed despite the delays. At peak impact, approximately 8% of workflow runs were affected.

During this incident, some nodes received a spike in workflow events that led to queuing of event processing. Because runs are pinned to nodes, runs being processed by these nodes saw delays in starting or showing as completed. The team was alerted to this at 08:58 UTC. Impacted nodes were disabled from processing new jobs to allow queues to drain.

We have increased overall processing capacity and are implementing safeguards to better balance load across all nodes when spikes occur. This is important to ensure our available capacity can always be fully utilized.