Put pods preempted in WaitOnPermit into the backoff queue #135719
base: master
Conversation
What type of PR is this?
/kind bug
What this PR does / why we need it:
This PR changes the scheduler's behavior when it preempts pods that are in the "WaitOnPermit" phase. Currently, such a victim pod is Rejected by the preemption plugin, which moves it to the unschedulable queue and records the preemption in a PodCondition. The pod then stays in the unschedulable queue until some cluster event moves it back to the backoff/active queue. This is undesirable because, unlike default preemption (which deletes the pod entirely), it does not give the component that owns the pod a strong signal to recreate it. After this PR, the victim pod is moved to the backoff queue instead, so the scheduler retries scheduling it after the backoff period expires.
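To make the queueing difference concrete, below is a minimal, self-contained Go sketch. It does not use the real kube-scheduler types or APIs; `Pod`, `SchedulingQueue`, `AddToBackoff`, `AddToUnschedulable`, and `preemptVictim` are hypothetical stand-ins that only illustrate where a WaitOnPermit victim ends up before and after this change.

```go
// Illustrative sketch only; not the actual kube-scheduler implementation.
package main

import "fmt"

// Pod is a stand-in for v1.Pod with just the fields this sketch needs.
type Pod struct {
	Name         string
	WaitOnPermit bool // true if the pod is currently waiting at the Permit stage
}

// SchedulingQueue models the scheduler's three sub-queues.
type SchedulingQueue struct {
	active        []*Pod
	backoff       []*Pod
	unschedulable []*Pod
}

// AddToBackoff places the pod in the backoff queue; once its backoff period
// expires it moves back to the active queue and is retried automatically.
func (q *SchedulingQueue) AddToBackoff(p *Pod) {
	q.backoff = append(q.backoff, p)
	fmt.Printf("%s -> backoff queue (retried after backoff expires)\n", p.Name)
}

// AddToUnschedulable parks the pod; it only returns to the backoff/active
// queue when some cluster event triggers a move, which may take a long time.
func (q *SchedulingQueue) AddToUnschedulable(p *Pod) {
	q.unschedulable = append(q.unschedulable, p)
	fmt.Printf("%s -> unschedulable queue (waits for a cluster event)\n", p.Name)
}

// preemptVictim shows the behavioral difference for a victim still in
// WaitOnPermit: the old path effectively parked it in the unschedulable
// queue, the new path puts it in the backoff queue.
func preemptVictim(q *SchedulingQueue, victim *Pod, backoffForWaitOnPermit bool) {
	if victim.WaitOnPermit {
		if backoffForWaitOnPermit {
			q.AddToBackoff(victim) // behavior after this PR
		} else {
			q.AddToUnschedulable(victim) // behavior before this PR
		}
		return
	}
	// Already-bound victims are deleted (API call elided), which is a clear
	// signal to the owning controller to recreate them.
	fmt.Printf("%s deleted; owning controller recreates it\n", victim.Name)
}

func main() {
	q := &SchedulingQueue{}
	victim := &Pod{Name: "victim-pod", WaitOnPermit: true}

	preemptVictim(q, victim, false) // old: stuck until a cluster event
	preemptVictim(q, victim, true)  // new: retried after backoff
}
```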
Which issue(s) this PR is related to:
This was found as part of #132332.
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
N/A
/sig scheduling