Concurrent flag evaluation bursts the back-end #588
Comments
If I understand correctly, you are talking about concurrent evaluations happening at the same time with the same context and the same flag. Based on all your options, I am not sure where it makes the most sense to do it. I am also wondering whether it should be part of the SDK or left up to the provider to decide.
I think the OFREP provider approach would be the best one because it does not interfere with provider specifics. The trade-off is that providers remain responsible for managing their own back-end communication strategies. However, since the decision to manage caches is currently made within each provider, we can keep following this approach and impose as few constraints as possible on provider implementations regarding back-end requests.
Could we perhaps build a "wrapper" provider that does this and takes another provider in its constructor? That way we would gain some composability.
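A minimal sketch of that wrapper idea, assuming a hypothetical cut-down `Provider` interface (the real OpenFeature provider interface has more methods and different signatures), might look like:

```go
package main

import "fmt"

// Provider is a hypothetical, minimal slice of a feature-flag provider
// interface, just enough to show the wrapping idea.
type Provider interface {
	BooleanEvaluation(flag string, defaultValue bool) bool
}

// SingleFlightProvider wraps any other Provider; the deduplication (or
// caching) policy lives in the wrapper, not in each concrete provider.
type SingleFlightProvider struct {
	inner Provider
	// a singleflight.Group (or equivalent) would live here
}

func NewSingleFlightProvider(inner Provider) *SingleFlightProvider {
	return &SingleFlightProvider{inner: inner}
}

func (p *SingleFlightProvider) BooleanEvaluation(flag string, def bool) bool {
	// concurrent identical evaluations would be collapsed here,
	// then delegated to the wrapped provider
	return p.inner.BooleanEvaluation(flag, def)
}

// staticProvider is a trivial inner provider used only for the demo.
type staticProvider struct{ value bool }

func (s staticProvider) BooleanEvaluation(flag string, def bool) bool { return s.value }

func main() {
	p := NewSingleFlightProvider(staticProvider{value: true})
	fmt.Println(p.BooleanEvaluation("my-flag", false)) // true
}
```

Any provider could then be composed with the dedup behavior without the provider itself knowing about it.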
@toddbaert This is definitely something I can do in the GO Feature Flag wrapper, but for OFREP if this is something we see often, could it be a configuration of the provider directly maybe? |
Hello, consider the following program:
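The original snippet is not preserved here; a minimal sketch of such a program, with hypothetical provider names (not the real go-feature-flag SDK API), could be:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// fakeProvider stands in for a flag provider; every evaluation here hits
// the simulated back-end (caching is omitted for brevity — on the very
// first request the cache would be empty anyway).
type fakeProvider struct {
	backendCalls int64 // number of simulated HTTP requests
}

func (p *fakeProvider) BooleanEvaluation(flag string) bool {
	atomic.AddInt64(&p.backendCalls, 1) // one back-end request per call
	return true
}

// burst launches n workers that all evaluate the same flag concurrently
// and returns how many back-end requests were made.
func burst(n int) int64 {
	p := &fakeProvider{}
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			p.BooleanEvaluation("my-flag")
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&p.backendCalls)
}

func main() {
	fmt.Println("back-end requests for 1000 concurrent evaluations:", burst(1000))
}
```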
We know that the goff provider caches the flag result.
However, during the first request, if there are concurrent jobs getting the flag value, we get the following effect:
You can see multiple unnecessary concurrent requests coming from the application. Now imagine thousands of goroutine workers.
I propose leveraging https://pkg.go.dev/golang.org/x/sync/singleflight here.
You may argue that if the context is the same, the flag should be retrieved in advance, but consider applications that would need a big refactoring to allow that, or distant parts of the application that need the same flag at the same moment.
Using a singleflight group would be an elegant way to prevent this from any part of the application, by using a unique key based on the evaluation context, just like the cache does.
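To illustrate the idea, here is a small, dependency-free sketch of what `singleflight.Group` does (the real implementation in golang.org/x/sync/singleflight is more complete; all other names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// group collapses concurrent calls that share a key into a single
// execution of fn, in the spirit of golang.org/x/sync/singleflight.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	done chan struct{}
	val  bool
}

func (g *group) Do(key string, fn func() bool) bool {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = make(map[string]*call)
	}
	if c, ok := g.calls[key]; ok {
		// someone is already fetching this key: wait and share the result
		g.mu.Unlock()
		<-c.done
		return c.val
	}
	c := &call{done: make(chan struct{})}
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only this goroutine performs the back-end request
	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	close(c.done)
	return c.val
}

// dedupBurst runs n concurrent evaluations of the same flag through the
// group and returns how many back-end requests were actually made.
func dedupBurst(n int) int64 {
	var backendCalls int64
	var g group
	fetch := func() bool {
		atomic.AddInt64(&backendCalls, 1)
		time.Sleep(100 * time.Millisecond) // simulated HTTP latency
		return true
	}
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("my-flag|user=123", fetch) // key built from flag + context
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&backendCalls)
}

func main() {
	fmt.Println("back-end requests for 1000 concurrent evaluations:", dedupBurst(1000))
}
```

Overlapping workers share one in-flight request, so the back-end sees one request (or a handful, depending on scheduling) instead of one per goroutine.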
The question is in which layer we should apply this logic:
Per-provider approach
In that case, each provider is responsible for cache warm-up and will need to prevent bursts like this itself. In our example, we might add the code to the go-feature-flag provider, using the same cache key strategy for the flight key.
OFREP provider approach
The single flight would always occur in the OFREP provider.
Client approach (provider independent)
We could add the single flight directly to the client, thus avoiding bursts on any kind of back-end provider.
For any of these approaches, we might want to move the cache key strategy to a func on the FlattenedContext struct and use it both for caching and for the single-flight key.
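Such a key func could look like the following sketch, using a hypothetical minimal `FlattenedContext` stand-in (the real SDK type differs):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// FlattenedContext is a hypothetical, minimal stand-in for the SDK's
// flattened evaluation context (a flat map of attributes).
type FlattenedContext map[string]interface{}

// Key builds a deterministic string from the flag name and the context
// attributes, usable both as a cache key and as a single-flight key.
func (c FlattenedContext) Key(flag string) string {
	keys := make([]string, 0, len(c))
	for k := range c {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for determinism
	var b strings.Builder
	b.WriteString(flag)
	for _, k := range keys {
		fmt.Fprintf(&b, "|%s=%v", k, c[k])
	}
	return b.String()
}

func main() {
	ctx := FlattenedContext{"targetingKey": "user-123", "plan": "pro"}
	fmt.Println(ctx.Key("my-flag")) // my-flag|plan=pro|targetingKey=user-123
}
```

Sorting the attribute names makes the key independent of map insertion order, so the cache and the single-flight group agree on what counts as "the same" evaluation.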