✨ Remediate unhealthy MachinePool machines #11392
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
// Calculate how many in flight machines we should remediate.
// By default, we allow all machines to be remediated at the same time.
maxInFlight := len(unhealthyMachines)
You'll find this mostly copied from the MachineSet remediation code, except that I don't like changing the meaning of a variable throughout the function, so I split up the inFlight/maxInFlight variables.
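For context, here is a minimal sketch of that split; the helper name selectMachinesToRemediate and its signature are illustrative assumptions, not the exact code in this PR:

```go
package machinepool

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// selectMachinesToRemediate sketches the idea: maxInFlight stays fixed as the
// remediation budget, while inFlight counts how much of that budget is already
// used by machines that are being deleted.
func selectMachinesToRemediate(unhealthyMachines []*clusterv1.Machine, maxInFlight int) []*clusterv1.Machine {
	// Machines that are already being deleted count against the budget.
	inFlight := 0
	for _, m := range unhealthyMachines {
		if !m.DeletionTimestamp.IsZero() {
			inFlight++
		}
	}

	// Start new remediations only while the budget allows it.
	machinesToRemediate := []*clusterv1.Machine{}
	for _, m := range unhealthyMachines {
		if inFlight >= maxInFlight {
			break
		}
		if m.DeletionTimestamp.IsZero() {
			machinesToRemediate = append(machinesToRemediate, m)
			inFlight++
		}
	}
	return machinesToRemediate
}
```

Keeping maxInFlight constant and tracking usage in inFlight is what avoids a single variable changing meaning halfway through the function.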
@@ -1299,7 +1299,10 @@ func (r *Reconciler) reconcileUnhealthyMachines(ctx context.Context, s *scope) (
 	for _, m := range machinesToRemediate {
 		log.Info("Deleting unhealthy Machine", "Machine", klog.KObj(m))
 		patch := client.MergeFrom(m.DeepCopy())
-		if err := r.Client.Delete(ctx, m); err != nil && !apierrors.IsNotFound(err) {
+		if err := r.Client.Delete(ctx, m); err != nil {
Small bugfix that I took over from my code: don't go on with conditions.MarkTrue/Patch if the object doesn't exist.
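A minimal sketch of that intent (the helper name remediateMachine and the exact condition are assumptions, not the PR's literal code):

```go
package machinepool

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// remediateMachine deletes an unhealthy Machine and records the remediation,
// but skips MarkTrue/Patch entirely when the object no longer exists.
func remediateMachine(ctx context.Context, c client.Client, m *clusterv1.Machine) error {
	patch := client.MergeFrom(m.DeepCopy())

	if err := c.Delete(ctx, m); err != nil {
		if apierrors.IsNotFound(err) {
			// Already gone: nothing to mark or patch.
			return nil
		}
		return err
	}

	conditions.MarkTrue(m, clusterv1.MachineOwnerRemediatedCondition)
	return c.Patch(ctx, m, patch)
}
```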
Force-pushed from 792b24a to a7fc58f.
Not sure what's up with the build. I ran
Before looking into this PR, it would be great to better understand the overall status of MachinePool machines.
Also, it's probably worth thinking about E2E test coverage of this feature; however, the key decision to be made is whether we want to keep adding MP tests to existing test pipelines or have dedicated test pipelines for MP.
I have concerns that adding separate dedicated test pipelines for MP will increase the maintenance effort for all of us. Instead, I would simply disable MachinePools for e2e tests where they lead to flakiness (until the flakes are fixed).
Ideally those should be maintained by folks taking care of MP, but I agree with you that without a clear commitment from a small set of new maintainers, this could become an additional burden for the existing maintainers. What I have in mind is not to add complexity, but just to expand a little on what we have today, where we have a single test func that is used for different test scenarios by simply switching templates or changing a few inputs.
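To make that concrete, a rough Ginkgo-style sketch of the "one test func, several scenarios" idea; runRemediationSpec and the flavor names are hypothetical, not the existing CAPI e2e helpers:

```go
package e2e

import (
	"context"

	. "github.com/onsi/ginkgo/v2"
)

// runRemediationSpec stands in for a shared test body that would apply the
// cluster template for the given flavor, make a machine unhealthy, and wait
// for MachineHealthCheck remediation to replace it.
func runRemediationSpec(ctx context.Context, flavor string) {
	// ... template application and assertions would go here ...
}

// One spec, several scenarios: only the template flavor (and a few inputs) change.
var _ = Describe("MachineHealthCheck remediation", func() {
	for _, tc := range []struct {
		name   string
		flavor string
	}{
		{name: "MachineDeployment machines", flavor: "md-remediation"},
		{name: "MachinePool machines", flavor: "machine-pool"},
	} {
		tc := tc
		It("remediates unhealthy "+tc.name, func(ctx context.Context) {
			runRemediationSpec(ctx, tc.flavor)
		})
	}
})
```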
Force-pushed from a7fc58f to 2f706f2.
@AndiDog: The following tests failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@fabriziopandini @sbueringer can we start a chat thread about flaky tests? I couldn't find exactly which ones those are. Regarding this PR, I get strange differences from a local
@AndiDog we usually use https://storage.googleapis.com/k8s-triage/index.html?date=2024-03-08&pr=1&job=pull-cluster-api-e2e-* and play around with filter criteria to find flakes. AFAIK, we did a good job of reducing flakes in the last few months (and my note above about E2E was more a forward-looking statement about tests that we should add for this work). As per the office hours discussion, what could really help to move this discussion forward is to get a better understanding of the overall status of MachinePool machines (and machine pools in general). #9005 (comment) is the best shot that we have at this, but having someone like you, with concrete experience in this area, take a look at open issues and the last few proposals and figure out the current state could really help to gain confidence for moving forward.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What this PR does / why we need it:
With infra providers implementing the MachinePool machines specification, users want to use MachineHealthCheck to select, check and remediate single machines within a machine pool. Right now, only KubeadmControlPlane and MachineSet machines get deleted once they are marked unhealthy by the MachineHealthCheck controller.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

Part of #9005 because this is a major gap in machine pool machines support.
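For illustration of the use case described above, a MachineHealthCheck targeting only the machines of one machine pool could be constructed roughly like this; the pool-name label key, names and timeout are assumptions, not something this PR prescribes:

```go
package example

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// machinePoolMHC selects the Machines backing one machine pool via a label
// selector and marks them unhealthy when the Node stays not-Ready too long.
func machinePoolMHC() *clusterv1.MachineHealthCheck {
	return &clusterv1.MachineHealthCheck{
		ObjectMeta: metav1.ObjectMeta{Name: "worker-pool-mhc", Namespace: "default"},
		Spec: clusterv1.MachineHealthCheckSpec{
			ClusterName: "my-cluster",
			// Assumed label identifying the pool's Machines.
			Selector: metav1.LabelSelector{
				MatchLabels: map[string]string{"cluster.x-k8s.io/pool-name": "worker-pool"},
			},
			UnhealthyConditions: []clusterv1.UnhealthyCondition{{
				Type:    corev1.NodeReady,
				Status:  corev1.ConditionUnknown,
				Timeout: metav1.Duration{Duration: 5 * time.Minute},
			}},
		},
	}
}
```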
Related to the CAPA implementation of AWSMachines for AWSMachinePool (PR), since that's the provider where we want to use the remediation feature first, as soon as possible. If both PRs have good chances of getting accepted, my company will integrate them first into their fork and test them extensively.
/area machinepool