Machine Learning
MLflow + Weights & Biases
Experiment tracking, model registry, reproducibility
What it is
MLflow handles the model lifecycle: tracking, registry, deployment. W&B handles experiment dashboards: gorgeous loss curves, hyperparameter sweeps, comparison tables. I use both, depending on what the team already runs.
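For contrast, the W&B side of a run is a few lines. A minimal sketch, assuming a hypothetical project name and placeholder metric values:

import wandb

# "vaaani-demo" is an illustrative project name; metrics are placeholders
run = wandb.init(project="vaaani-demo", config={"lr": 0.001})
for epoch in range(3):
    wandb.log({"epoch": epoch, "f1": 0.90 + 0.01 * epoch})
run.finish()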
How Vaaani uses it
- Tracking every training run with hyperparameters and metrics
- Model registry: staging → production with one click (promotion sketch below)
- Hyperparameter sweeps with parallel agents (sweep sketch below)
- Comparing runs across teams visually
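The registry promotion from the second bullet is a single client call. A sketch assuming a hypothetical model name and version:

from mlflow.tracking import MlflowClient

client = MlflowClient()
# "churn-classifier" and version "3" are placeholders for your registered model
client.transition_model_version_stage(
    name="churn-classifier", version="3", stage="Production"
)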
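And the sweep bullet in code. A minimal sketch where the search space, project name, and stub train() are my assumptions, not a real training loop:

import wandb

def train():
    run = wandb.init()
    lr = run.config.lr  # value supplied by the sweep controller
    wandb.log({"f1": 0.9})  # stand-in for real training and evaluation

sweep_config = {
    "method": "random",
    "metric": {"name": "f1", "goal": "maximize"},
    "parameters": {"lr": {"min": 0.0001, "max": 0.1}},
}
sweep_id = wandb.sweep(sweep_config, project="vaaani-demo")
# Launch this same agent on several machines to parallelize the sweep
wandb.agent(sweep_id, function=train, count=10)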
Why it makes the cut
Without these, after 50 experiments nobody knows which model is in production or how it was trained. With them, audits and rollbacks are trivial.
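Rollbacks really are trivial because registered models are addressable by URI. A sketch, assuming the same hypothetical model name as above:

import mlflow.pyfunc

# Load whatever is currently in Production...
model = mlflow.pyfunc.load_model("models:/churn-classifier/Production")
# ...or pin an earlier version to roll back
model = mlflow.pyfunc.load_model("models:/churn-classifier/2")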
Sample code
import mlflow
import mlflow.pytorch

with mlflow.start_run():
    mlflow.log_param("lr", 0.001)
    mlflow.log_metric("f1", 0.92)
    # `model` is the trained torch.nn.Module from your training loop
    mlflow.pytorch.log_model(model, "model")
Have a project that needs MLflow?
30-min discovery call. You describe the busywork; I map it to an AI worker and a budget.