It seems that the scaling metric is not precise enough.
The scaler is reporting items to process:
Lease [0] owned by host Instance-cosmosdb-order-processor-585d7b9bc5-gnb88 reports 37 as estimated lag.
Lease [1] owned by host Instance-cosmosdb-order-processor-585d7b9bc5-5jdb2 reports 38 as estimated lag.
There are 2 partitions with estimated lag.
This may have more to do with the order processor not finding any items to process than with the scaler reporting an incorrect estimated lag. Is it possible that the old processor pod finished processing all changes in its change feed, then moved on to the second change feed and processed the items there too, before KEDA could increase the replica count for the order processor? Is the estimated lag not going down to zero, even after giving it some time, say 5 minutes or so?
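For that last question, here is a minimal probe written as a sketch only: it is not the scaler's or the processor's actual code, it assumes the azure-cosmos Java SDK v4, placeholder environment variables COSMOS_ENDPOINT/COSMOS_KEY, and hypothetical database/container names (orders-db, orders, leases). It polls the per-lease estimated lag every 30 seconds for roughly 5 minutes to see whether the lag ever drains to zero.

```java
import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;

import java.time.Duration;
import java.util.Map;

public class LagProbe {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical endpoint/key and container names; substitute your own.
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildAsyncClient();

        CosmosAsyncContainer feed  = client.getDatabase("orders-db").getContainer("orders");
        CosmosAsyncContainer lease = client.getDatabase("orders-db").getContainer("leases");

        // Build a processor only to read the estimator; it is never started here.
        // (Depending on SDK behavior, the estimate may require existing lease documents
        // written by the real processor.)
        ChangeFeedProcessor probe = new ChangeFeedProcessorBuilder()
                .hostName("lag-probe")
                .feedContainer(feed)
                .leaseContainer(lease)
                .handleChanges(docs -> { /* not used: probe only */ })
                .buildChangeFeedProcessor();

        // Poll the per-lease estimated lag every 30 s for ~5 minutes.
        for (int i = 0; i < 10; i++) {
            Map<String, Integer> lag = probe.getEstimatedLag().block(Duration.ofSeconds(30));
            System.out.println("estimated lag per lease: " + lag);
            Thread.sleep(30_000);
        }
        client.close();
    }
}
```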
It seems that the scaling metric is not precise enough.
The scaler is reporting items to process:
But the processor is not getting any items:
I verified that both are connected to the same database, using a test app that used the same client for the estimator and the feed processor.
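A test app of that shape could look roughly like the sketch below (not the actual test app; it again assumes the azure-cosmos Java SDK v4 and the same hypothetical names). A single CosmosAsyncClient backs both the change feed processor and the estimator, so any gap between the reported lag and the batches handed to handleChanges cannot be explained by the two pointing at different accounts or containers.

```java
import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.fasterxml.jackson.databind.JsonNode;

import java.time.Duration;
import java.util.List;

public class SharedClientTest {
    public static void main(String[] args) throws InterruptedException {
        // One client for both the processor and the estimator, so both are
        // guaranteed to point at the same account, database, and containers.
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildAsyncClient();

        CosmosAsyncContainer feed  = client.getDatabase("orders-db").getContainer("orders");
        CosmosAsyncContainer lease = client.getDatabase("orders-db").getContainer("leases");

        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                .hostName("shared-client-test")
                .feedContainer(feed)
                .leaseContainer(lease)
                .handleChanges((List<JsonNode> docs) ->
                        System.out.println("processed batch of " + docs.size() + " items"))
                .buildChangeFeedProcessor();

        processor.start().block();

        // While the processor runs, keep reporting what the estimator sees,
        // so the two numbers can be compared side by side in the logs.
        for (int i = 0; i < 10; i++) {
            processor.getEstimatedLag()
                     .doOnNext(lag -> System.out.println("estimator reports: " + lag))
                     .block(Duration.ofSeconds(30));
            Thread.sleep(30_000);
        }

        processor.stop().block();
        client.close();
    }
}
```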
Expected Behavior
Pods are scaled to 0 when there is nothing to process.
Pods are processing items when the scaler reports estimated changes.
Actual Behavior
The scaler reports changes, but the processor is not doing anything.
Steps to Reproduce the Problem
Specifications