🐛 create bootstrap token if not found in refresh process #11037
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has not been approved by any approvers yet. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Hi @archerwu9425. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/area bootstrap
seems like this creates a "secret" (bootstrap token) on demand, i.e. when the user is trying to join a node with some invalid BST, the BST will be created for them.
that doesn't seem like the correct thing to do, but it's interesting to hear more opinions.
If the token kept in
@neolit123 @ncdc @greut Could you please help review? Thanks
My comment is here; waiting for comments from more maintainers.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to its standard inactivity rules.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
```diff
@@ -326,6 +325,20 @@ func (r *KubeadmConfigReconciler) refreshBootstrapTokenIfNeeded(ctx context.Cont
 	secret, err := getToken(ctx, remoteClient, token)
 	if err != nil {
 		if apierrors.IsNotFound(err) {
```
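The hunk above only preserves the unchanged context lines; the roughly twenty added lines of the patch are not visible in this extract. As a hedged reconstruction of the approach under review (the helper createToken, the error wrapping, and the requeue interval are assumptions, not the verbatim patch):

```go
secret, err := getToken(ctx, remoteClient, token)
if err != nil {
	if apierrors.IsNotFound(err) {
		// The token secret is gone (e.g. its TTL expired while the Cluster was
		// paused): re-create it instead of failing the refresh.
		// createToken and the requeue interval below are assumptions.
		if err := createToken(ctx, remoteClient, token); err != nil {
			return ctrl.Result{}, errors.Wrap(err, "failed to re-create bootstrap token")
		}
		return ctrl.Result{RequeueAfter: r.TokenTTL / 3}, nil
	}
	return ctrl.Result{}, errors.Wrap(err, "failed to get bootstrap token secret to check if it should be refreshed")
}
// secret continues to be used further down in refreshBootstrapTokenIfNeeded
// to decide whether the token is close to expiring and needs a refresh.
```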
I think we should limit this behaviour (re-create the token) to only when configOwner.IsMachinePool() or when the config owner is a machine and it doesn't have the data secret field set.
This is required because machines, once the data secret field is set, are never going to pick up new data secrets (and having re-created but unused data secrets around will be noisy/possibly confusing when triaging issues).
We also need test coverage for this change.
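A minimal sketch of the guard suggested above, assuming the configOwner helper used by the kubeadm bootstrap controller; IsMachinePool() is named in the comment, while GetKind() and DataSecretName() are assumed method names that may differ upstream:

```go
// Hedged sketch: only allow re-creating a missing bootstrap token for owners
// that may still need to join, i.e. MachinePools, or Machines whose bootstrap
// data secret has not been set yet. Method names are assumptions.
canRecreateToken := configOwner.IsMachinePool() ||
	(configOwner.GetKind() == "Machine" && configOwner.DataSecretName() == nil)
if !canRecreateToken {
	// The owner has already consumed its bootstrap data; a re-created token
	// would never be picked up and would only add noise when triaging issues.
	return ctrl.Result{}, errors.Wrap(err, "bootstrap token secret not found")
}
```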
@fabriziopandini Any advice for the issue I met? The current logic bug here is:
- The token is created and tracked by the cluster-api controller, but its lifecycle is handled by the workload cluster: it will be deleted by the workload cluster without informing the cluster-api controller.
- The Machine Pool is created by the cluster-api controller, but cluster-autoscaler is used to drive the expected replica count for the pool.
So whenever the cluster is paused, things may get out of control and we can see node join issues, especially with the default 15 min token TTL.
In our case, I have changed the token TTL to 7 days to avoid the issue, but I still think we should find a way to handle it. If auto refresh is not recommended, would it be acceptable to have some kind of ctl command to do the refresh?
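For context on the split lifecycle described above: kubeadm bootstrap tokens are stored as Secrets named bootstrap-token-&lt;id&gt; in the kube-system namespace of the workload cluster, and kube-controller-manager's tokencleaner removes them once their expiration passes, with no notification to the management cluster. Below is a small, hedged client-go sketch for inspecting that expiration (clientset construction omitted; this is illustrative and not part of the PR):

```go
package tokencheck

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PrintTokenExpiry looks up the bootstrap token Secret in the workload cluster
// and prints its expiration. Bootstrap tokens have the form "<id>.<secret>";
// the backing Secret is named "bootstrap-token-<id>" in kube-system.
func PrintTokenExpiry(ctx context.Context, cs kubernetes.Interface, token string) error {
	id := strings.SplitN(token, ".", 2)[0]
	s, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "bootstrap-token-"+id, metav1.GetOptions{})
	if err != nil {
		return err // a NotFound error here is exactly the situation discussed in this PR
	}
	fmt.Printf("token %s expires at %s\n", id, string(s.Data["expiration"]))
	return nil
}
```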
What is the reason for pausing the cluster? Maybe when pausing the cluster, you should also pause the autoscaler for this cluster in some way?!
In our case:
We have two controller clusters, so we can migrate workload clusters to the other controller cluster during the CAPO controller upgrade, which pauses the cluster. If we hit any issue during the migration, the pause may last longer.
As I said, we have used a longer TTL to work around this, but I still think the issue should be handled.
Cluster API supports the paused feature, so I think it's reasonable to handle this kind of case. Just my opinion 😄
While the approach you mention here is valid, the feedback from Fabrizio
limit this behaviour (re-create the token) to only when configOwner.IsMachinePool() or when the config owner is a machine and it doesn't have the data secret field set.
should still apply, given that we don't want to renew every token regardless of whether the owner has joined or not.
The issue I met is for MachinePool, so would it be acceptable if I limit the token re-creation (when the token is not found) to MachinePools created by cluster-api?
Sure, it is ok to improve how MP recovers after a Cluster is paused for a long time.
What we want to make sure is that the current change doesn't impact anything which isn't owned by a MP (kubeadmconfig_controller also serves regular machines, not only MPs).
Also, please do not consider pause a regular Cluster API feature.
It is an option that we introduced to allow extraordinary (emphasis on extraordinary) maintenance operations, and deep knowledge of the system is assumed for whatever happens while the cluster is paused.
I've covered this initial request (only recreate for machine pools) in my dupe PR #11520. Sorry that I didn't find this PR earlier – maybe I got distracted by closed PRs or so. Our PRs are now quite similar.
@AndiDog Great to know. Since you already have all the tests ready, please go on with your PR; it would be nice to have the issue fixed soon.
/ok-to-test
Considering the comment above
@fabriziopandini: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What this PR does / why we need it:
For the refreshBootstrapTokenIfNeeded function, if the token is not found, create a new one instead of just raising an error.
Why we need it:
- When the cluster has the paused: true field set, reconcile stops. During this period the bootstrap token may be deleted, but the KubeadmConfig will not be updated.
- When the paused field is removed and reconcile starts again, the missing token causes the refresh to fail and nodes cannot join.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #11034