
🐛 create bootstrap token if not found in refresh process #11037

Closed
wants to merge 1 commit

Conversation

archerwu9425
Contributor

@archerwu9425 archerwu9425 commented Aug 12, 2024

What this PR does / why we need it:
For the refreshBootstrapTokenIfNeeded function: if the token is not found, it should create a new one instead of just raising an error.
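
A minimal sketch of the idea (not the actual PR diff): re-create the standard kubeadm bootstrap-token secret in the workload cluster when the one referenced by the KubeadmConfig is gone. The helper name ensureBootstrapToken, the hard-coded kube-system secret layout, and the parameters are illustrative assumptions; the real change lives inside refreshBootstrapTokenIfNeeded and reuses the controller's own helpers such as getToken.

```go
package tokensketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureBootstrapToken returns the bootstrap-token secret for tokenID, or
// re-creates it when the workload cluster has already deleted it (e.g. the
// TokenCleaner removed it after expiry while the Cluster was paused).
func ensureBootstrapToken(ctx context.Context, remoteClient client.Client, tokenID, tokenSecret string, ttl time.Duration) (*corev1.Secret, error) {
	name := "bootstrap-token-" + tokenID // standard kubeadm secret naming

	existing := &corev1.Secret{}
	err := remoteClient.Get(ctx, client.ObjectKey{Namespace: metav1.NamespaceSystem, Name: name}, existing)
	if err == nil {
		return existing, nil // token still present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return nil, err // any other error is surfaced as before
	}

	// Not found: mint a fresh token instead of failing the reconcile, so new
	// MachinePool instances can still join the cluster.
	s := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: metav1.NamespaceSystem},
		Type:       corev1.SecretTypeBootstrapToken,
		StringData: map[string]string{
			"token-id":                       tokenID,
			"token-secret":                   tokenSecret,
			"expiration":                     time.Now().Add(ttl).UTC().Format(time.RFC3339),
			"usage-bootstrap-authentication": "true",
			"usage-bootstrap-signing":        "true",
		},
	}
	if err := remoteClient.Create(ctx, s); err != nil {
		return nil, fmt.Errorf("failed to re-create bootstrap token: %w", err)
	}
	return s, nil
}
```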

why we need it:

  1. The bootstrap token is created by the bootstrap controller, but it is deleted by the workload cluster.
  2. The MachinePool is scaled up/down by the cluster autoscaler.
  3. When the cluster has the paused: true field set, reconciliation stops. During this period the bootstrap token may be deleted, but the KubeadmConfig will not be updated.
  4. Instances created in the machine pool during this period will not be able to join the cluster, even after the paused field is removed from the cluster and reconciliation resumes.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #11034

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Aug 12, 2024
@k8s-ci-robot k8s-ci-robot added the do-not-merge/needs-area PR is missing an area label label Aug 12, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign fabriziopandini for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Aug 12, 2024
@k8s-ci-robot
Contributor

Hi @archerwu9425. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@archerwu9425
Contributor Author

/area bootstrap

@k8s-ci-robot k8s-ci-robot added area/bootstrap Issues or PRs related to bootstrap providers and removed do-not-merge/needs-area PR is missing an area label labels Aug 12, 2024
Member

@neolit123 neolit123 left a comment

Seems like this creates a "secret" (bootstrap-token) on demand, i.e. when the user is trying to join a node with some invalid BST, the BST will be created for them.
That doesn't seem like the correct thing to do, but it would be interesting to hear more opinions.

@archerwu9425
Contributor Author

archerwu9425 commented Aug 12, 2024

Seems like this creates a "secret" (bootstrap-token) on demand, i.e. when the user is trying to join a node with some invalid BST, the BST will be created for them.

That doesn't seem like the correct thing to do, but it would be interesting to hear more opinions.

If the token kept in the KubeadmConfig cannot be found in the remote cluster, the controller creates a new one and updates the token in the KubeadmConfig and the launch template. This is still handled by the bootstrap controller and used for the machine pool. This is also the existing logic for rotating the machine pool bootstrap token; the only difference is whether the machine pool has a nodeRef or not. Code blocks to refer to:

https://github.com/kubernetes-sigs/cluster-api/blob/main/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go#L276-L286

https://github.com/kubernetes-sigs/cluster-api/blob/main/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go#L379-L391
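
A paraphrased sketch of the distinction being described (the names and the decision table below are illustrative assumptions, not the code behind the two links): Machines that have already joined keep their existing bootstrap data, while MachinePool-owned configs keep getting their token refreshed or rotated so newly scaled-up instances can still join.

```go
package tokensketch

// tokenAction is an illustrative distillation of the logic discussed above.
type tokenAction int

const (
	actionNone    tokenAction = iota // owner already joined; leave everything alone
	actionRefresh                    // extend the existing token's expiration
	actionRotate                     // mint a new token and update the bootstrap data
)

// decideTokenAction sketches how the controller treats Machines vs. MachinePools:
// a Machine with a nodeRef has joined and never needs a new token, whereas a
// MachinePool may still scale up, so its token must stay valid.
func decideTokenAction(isMachinePool, hasNodeRef, tokenMissingOrNearExpiry bool) tokenAction {
	if !isMachinePool && hasNodeRef {
		return actionNone
	}
	if !tokenMissingOrNearExpiry {
		return actionNone
	}
	if isMachinePool {
		return actionRotate
	}
	return actionRefresh
}
```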

@archerwu9425 archerwu9425 requested a review from neolit123 August 14, 2024 05:37
@archerwu9425
Contributor Author

@neolit123 @ncdc @greut Could you please help review? Thanks

@neolit123
Member

Seems like this creates a "secret" (bootstrap-token) on demand, i.e. when the user is trying to join a node with some invalid BST, the BST will be created for them. That doesn't seem like the correct thing to do, but it would be interesting to hear more opinions.

my comment is here. waiting for comments from more maintainers.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 13, 2024
@@ -326,6 +325,20 @@ func (r *KubeadmConfigReconciler) refreshBootstrapTokenIfNeeded(ctx context.Cont

	secret, err := getToken(ctx, remoteClient, token)
	if err != nil {
		if apierrors.IsNotFound(err) {
Member

I think we should limit this behaviour (re-creating the token) to only when configOwner.IsMachinePool() or when the config owner is a Machine and it doesn't have the data secret field set.

This is required because Machines, once the data secret field is set, are never going to pick up new data secrets (and having re-created but unused data secrets around will be noisy/possibly confusing when triaging issues).

We also need test coverage for this change.
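
A rough sketch of the suggested guard (the parameters stand in for configOwner.IsMachinePool() and the owner's bootstrap data secret reference; the real ConfigOwner helper in cluster-api differs in detail):

```go
package tokensketch

// shouldRecreateToken sketches the constraint suggested above: only re-create a
// deleted bootstrap token when the config owner is a MachinePool, or when it is
// a Machine whose bootstrap data secret has not been written yet.
func shouldRecreateToken(isMachinePool bool, dataSecretName *string) bool {
	if isMachinePool {
		// MachinePool instances may still be created (e.g. by the autoscaler),
		// so they always need a valid join token.
		return true
	}
	// A Machine that already has its data secret will never pick up a new
	// token; re-creating one would only leave unused secrets around.
	return dataSecretName == nil
}
```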

Contributor Author

@fabriziopandini Any advice for the issue I met? The current logic bug here is:

  1. The token is created and tracked by the cluster-api controller, but its lifecycle is handled by the workload cluster: it will be deleted by the workload cluster without informing the cluster-api controller.
  2. The Machine Pool is created by the cluster-api controller, but the cluster-autoscaler is used to set the expected replica number for the pool.

So whenever the cluster is paused, things may get out of control and we can see node join issues, especially when we use the default 15 min token TTL.

In our case, I have changed the token TTL to 7 days to avoid the issue, but I still think we should find a way to handle it. If auto refresh is not recommended, would it be acceptable to have some kind of ctl command to do the refresh?
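
For context on why a long pause bites with the default TTL, a small illustrative helper (not cluster-api code; only the field name follows the standard bootstrap-token secret format): the token lives in the workload cluster as a secret whose "expiration" key holds an RFC 3339 timestamp, and the workload cluster's TokenCleaner deletes the secret once that time has passed, regardless of what the management cluster knows.

```go
package tokensketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// tokenExpired reports whether a bootstrap-token secret has passed its
// "expiration" timestamp; once it has, the workload cluster's TokenCleaner will
// delete the secret, even though the KubeadmConfig still references the token.
func tokenExpired(s *corev1.Secret, now time.Time) (bool, error) {
	raw, ok := s.Data["expiration"]
	if !ok {
		return false, nil // no expiration set, the token never expires
	}
	exp, err := time.Parse(time.RFC3339, string(raw))
	if err != nil {
		return false, err
	}
	return now.After(exp), nil
}
```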

Member

What is the reason for pausing the cluster? Maybe when pausing the cluster, you should also pause the autoscaler for this cluster in some way?!

Contributor Author

@archerwu9425 archerwu9425 Nov 28, 2024

In our case:
We have two controller clusters, so we can migrate workload clusters to the other controller cluster during the CAPO controller upgrade, which pauses the cluster. If we run into any issue during the migration, the pause may last longer.

As I said, we have used a longer TTL to work around this, but I still think this issue should be handled.

Cluster API supports the paused feature, so I think it's reasonable to handle this kind of case. Just my opinion 😄

Member

While the approach you mention here is valid, the feedback from Fabrizio

limit this behaviour (re-create the token) to only when configOwner.IsMachinePool() or when the config owner is a machine and it doesn't have the data secret field set.

should still apply, given that we don't want to renew every token regardless of whether the owner has joined or not.

Contributor Author

The issue I met is with a machine pool, so if I limit the token re-creation on not-found to machine pools created by Cluster API, would that be acceptable?

Member

@fabriziopandini fabriziopandini Dec 2, 2024

Sure, it is ok to improve how MP recovers after a Cluster is paused for a long time.
What we want to make sure of is that the current change doesn't impact anything which isn't owned by a MP (the kubeadmconfig_controller also serves regular Machines, not only MPs).

Also, please do not consider pause a regular Cluster API feature.
It is an option that we introduced to allow extraordinary (emphasis on extraordinary) maintenance operations, and deep knowledge of the system is assumed for whatever happens while the cluster is paused.

Contributor

I've covered this initial request (only recreate for machine pools) in my dupe PR #11520. Sorry that I didn't find this PR earlier – maybe I got distracted by closed PRs or so. Our PRs are now quite similar.

Contributor Author

@AndiDog Great to know, and you have all the tests ready, so please go ahead with your PR. It will be nice to have the issue fixed soon.

@fabriziopandini
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Dec 3, 2024
@fabriziopandini
Member

@AndiDog Great to know, and you have all the tests ready, so please go ahead with your PR. It will be nice to have the issue fixed soon.

Considering the comment above
/close

@k8s-ci-robot
Contributor

@fabriziopandini: Closed this PR.

In response to this:

@AndiDog Great to know, and you have all the tests ready, so please go ahead with your PR. It will be nice to have the issue fixed soon.

Considering the comment above
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels
  • area/bootstrap Issues or PRs related to bootstrap providers
  • cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
  • lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.
  • ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
  • size/S Denotes a PR that changes 10-29 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Instance in machine pool failed to join cluster with error bootstrap token not found
8 participants