I know the difference is negligible, but it is influencing my model training with Prophet. Could you please suggest a way to make the results identical without using `compute()`?
I *think* that floating point inaccuracies are just a fact of life when you’re doing things in chunks, at least with the algorithms that dask.array uses today. I don’t think there’s anything we can do in dask-ml to address that (but maybe check the source to be sure).
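The chunked-summation effect described here can be reproduced with plain NumPy. The sketch below only mimics the spirit of how a chunked array library combines per-block statistics; it is not dask.array's actual reduction algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)

# One-pass mean over the whole array.
full_mean = x.mean()

# Per-chunk means combined afterwards (10 equal-sized chunks, so the
# mean of per-chunk means is mathematically the same quantity).
chunked_mean = np.mean([c.mean() for c in np.split(x, 10)])

# Mathematically identical, but the summation order differs, so the
# floating-point results can disagree in the last few bits.
print(abs(full_mean - chunked_mean))
```

Because the discrepancy comes from summation order, not from a bug, the usual practice is to compare results with a tolerance (e.g. `np.allclose`) rather than expecting bit-for-bit equality.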
I am getting different results from the scikit-learn `StandardScaler` and the dask-ml `StandardScaler`.

Dask scaler: (output screenshot not recoverable)
Sklearn scaler: (output screenshot not recoverable)