Placement constraints #3063
Comments
Possible workaround: Use two services service_AZ1 and service_AZ2 that both have the same network alias and then route to the network alias?
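A minimal sketch of that workaround, assuming nodes carry a `zone` label and using placeholder service names and image:

```yaml
version: "3.8"

networks:
  app_net:
    driver: overlay

services:
  service_az1:
    image: myorg/myservice:latest      # placeholder image
    networks:
      app_net:
        aliases:
          - myservice                  # both services share this alias
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.zone == az1    # hard constraint: only run in AZ1

  service_az2:
    image: myorg/myservice:latest
    networks:
      app_net:
        aliases:
          - myservice
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.zone == az2    # hard constraint: only run in AZ2
```

Clients on `app_net` resolve the shared alias `myservice` and can reach whichever service currently has a healthy task; if one zone goes down, its service simply has no running task instead of being rescheduled into the other zone.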
Thanks for the suggestion, this is what I ended up doing.
Curious though, is there any ongoing development on Swarmkit? Or is it (mostly) abandoned due to Kubernetes?
This is why I implemented
Kubernetes has become the industry standard, so most developers are focusing on it. However, there is still some development going on on the SwarmKit side from time to time, depending on how interested people are. For example, I just created PR #3072, so it will be interesting to see whether that gets merged. The issue about the Swarm roadmap can be found at #2665.
What I want to achieve:
Ensure replicas are spread across availability zones and never run in the same.
Current approach:
I am currently doing this through placement preferences.
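For reference, a minimal sketch of this preference-based setup (node label and image are placeholders):

```yaml
version: "3.8"

services:
  myservice:
    image: myorg/myservice:latest   # placeholder image
    deploy:
      replicas: 2
      placement:
        # Soft preference: the scheduler tries to spread replicas evenly
        # across values of node.labels.zone, but will still co-locate them
        # if only one zone has schedulable nodes.
        preferences:
          - spread: node.labels.zone
```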
The problem:
However, they are just that: preferences. If one zone becomes unavailable, its replica will be rescheduled in the remaining zone, which already contains a replica.
This is particularly troublesome, as the service is backed by persistent storage (EBS), which is not replicated across availability zones, meaning that the rescheduled replica will start up with an empty volume in the new zone.
Is it possible to make a placement preference behave like a constraint, so that I can guarantee exactly one replica always runs in each distinct zone, and if a zone becomes unavailable, its replica is simply not started elsewhere? If not, is there an alternative way of achieving this?