
✨ Remediate unhealthy MachinePool machines #11392

Open
wants to merge 1 commit into base: main
Conversation

AndiDog
Contributor

@AndiDog AndiDog commented Nov 7, 2024

What this PR does / why we need it:

With infra providers implementing the MachinePool machines specification, users want to use MachineHealthCheck to select, check, and remediate individual machines within a machine pool. Right now, only KubeadmControlPlane and MachineSet machines get deleted once they are marked unhealthy by the MachineHealthCheck controller.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

Part of #9005 because this is a major gap of machine pool machines support.

Related to the CAPA implementation for AWSMachinePool AWSMachines (PR), since that's the provider where we want to use the remediation feature first, as soon as possible. If both PRs have a good chance of being accepted, my company will first integrate them into its fork and test them extensively.

/area machinepool

@k8s-ci-robot k8s-ci-robot added area/machinepool Issues or PRs related to machinepools cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Nov 7, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign chrischdi for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 7, 2024

// Calculate how many in flight machines we should remediate.
// By default, we allow all machines to be remediated at the same time.
maxInFlight := len(unhealthyMachines)
Contributor Author

You'll find this mostly copied from the MachineSet remediation code, except that I don't like changing the meaning of a variable throughout the function, so I split it into separate inFlight/maxInFlight variables.

@@ -1299,7 +1299,10 @@ func (r *Reconciler) reconcileUnhealthyMachines(ctx context.Context, s *scope) (
 	for _, m := range machinesToRemediate {
 		log.Info("Deleting unhealthy Machine", "Machine", klog.KObj(m))
 		patch := client.MergeFrom(m.DeepCopy())
-		if err := r.Client.Delete(ctx, m); err != nil && !apierrors.IsNotFound(err) {
+		if err := r.Client.Delete(ctx, m); err != nil {
Contributor Author

Small bugfix that I carried over from my code: don't continue with conditions.MarkTrue/Patch if the object doesn't exist.

@AndiDog AndiDog force-pushed the machine-pool-machine-remediation branch from 792b24a to a7fc58f Compare November 7, 2024 22:13
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 7, 2024
@AndiDog
Contributor Author

AndiDog commented Nov 7, 2024

Not sure what's up with the build. I ran make generate again locally with no differences.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 8, 2024
@fabriziopandini
Member

Before looking into this PR, it would be great to better understand the overall status of MachinePool machines.
See #9005 (comment) and more specifically

It's not entirely clear to me (us) how much of the proposal is implemented

Also, it's probably worth thinking about E2E test coverage for this feature; however, the key decision to be made is whether we want to keep adding MP tests to existing test pipelines or have dedicated test pipelines for MP.
I'm bringing this up because in the last few releases we have always struggled to clean up our release signal from MP flakes.

@sbueringer
Member

I have concerns that adding separate dedicated test pipelines for MP will increase the maintenance effort for all of us.

Instead, I would simply disable MachinePools for e2e tests where they lead to flakiness (until the flakes are fixed).

@fabriziopandini
Member

fabriziopandini commented Nov 11, 2024

I have concerns that adding separate dedicated test pipelines for MP will increase the maintenance effort for all of us.

Ideally those should be maintained by folks taking care of MP, but I agree with you that without a clear commitment from a small set of new maintainers, this could become an additional burden for the existing maintainers.

edited 23/11

What I have in mind is not to add complexity, but just to expand a little on what we have today, where we have a single test func that is used for different test scenarios by simply switching templates or changing a few inputs.
The idea is to have dedicated test scenarios for new MP features like the one discussed in this PR, and not merge these scenarios into the existing test scenarios until the new features are stable/without flakes.

@AndiDog AndiDog force-pushed the machine-pool-machine-remediation branch from a7fc58f to 2f706f2 Compare November 18, 2024 08:33
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 18, 2024
@k8s-ci-robot
Contributor

@AndiDog: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-cluster-api-build-main | 2f706f2 | link | true | /test pull-cluster-api-build-main |
| pull-cluster-api-e2e-blocking-main | 2f706f2 | link | true | /test pull-cluster-api-e2e-blocking-main |
| pull-cluster-api-test-main | 2f706f2 | link | true | /test pull-cluster-api-test-main |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@AndiDog
Contributor Author

AndiDog commented Nov 18, 2024

@fabriziopandini @sbueringer can we start a chat thread about flaky tests? I couldn't find exactly which ones those are.

Regarding this PR, I get strange differences from a local make generate, which lead to the build errors. What am I doing wrong? Why would it delete the Convert_v1beta1_MachinePoolSpec_To_v1alpha4_MachinePoolSpec function, for instance? This happens on macOS, even after cleaning hack/tools.

@fabriziopandini
Member

fabriziopandini commented Nov 23, 2024

@AndiDog we usually use https://storage.googleapis.com/k8s-triage/index.html?date=2024-03-08&pr=1&job=pull-cluster-api-e2e-* and play around with the filter criteria to find flakes.
It might also help to look at the CI team notes: https://github.com/kubernetes-sigs/cluster-api/tree/main/docs/release/role-handbooks/ci-signal#continuously-reduce-the-amount-of-flaky-tests

AFAIK, we did a good job of reducing flakes in the last few months (and my note above about E2E was more of a forward-looking statement about tests that we should add for this work).

As per the office hours discussion, what could really help move this discussion forward is a better understanding of the overall status of MachinePool machines (and machine pools in general).

#9005 (comment) is the best shot we have at this, but having someone like you, with concrete experience in this area, take a look at the open issues / last few proposals and figure out the current state could really help us gain confidence in moving forward.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 20, 2024
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
