I have added support for MLflow Tracking: the callback records each experiment run's parameters and metrics, and stores the notebook used to run the experiment. This makes it easy to compare runs with different configurations in the MLflow GUI, and fastai's callback functions make the integration straightforward to use.
This does require adding MLflow as a dependency, which users will need to install to use the feature. Let me know if this support is of interest to the community in general. If so, how should we proceed? The PR template requires that we agree on the feature here before creating a PR.
@sgugger Here is an example notebook adapted from lesson 1 which shows how to use MLflow for tracking. Remember to install MLflow and start `mlflow server` before running the notebook. This will create a view in the MLflow UI, available at http://localhost:5000. Let me know if you have any comments or questions.
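For reference, the setup the notebook assumes can be sketched as the following commands (assuming a pip-based install; port 5000 is MLflow's default):

```shell
# Install the optional dependency
pip install mlflow

# Start the tracking server (UI served at http://localhost:5000)
mlflow server
```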
Error rate in two runs
Love how you’re using MLflow with this project. We would love to help out and get your feedback on MLflow. If you shoot me an email at email@example.com, I can connect you with our team here to assist with any questions you may have. We can even set you up with a trial run on https://databricks.com/mlflow
Thanks and keep up the good work!
Databricks - The Unified Analytics Platform for AI and ML.
@sgugger I have updated the code: the integration now lives in a separate file, like the TensorBoard integration, and uses the same method to resolve the optional dependency. I have also updated the behavior accordingly. To use MLflow tracking, the user now adds a partial function call:
```python
from functools import partial

learn = create_cnn(data, models.resnet34, metrics=error_rate,
                   callback_fns=[partial(MLFlowTracker,
                                         exp_name="lesson1-pets",
                                         params=params,
                                         nb_path="/data/course-v3/nbs/dl1/lesson1-pets-Copy1.ipynb")])
```
and the new code takes care of recording all parameters and metrics, as well as the notebook at the end of training.
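To illustrate the shape of such a callback, here is a minimal, self-contained sketch. It is not the actual implementation: the class and the injected `client` are illustrative stand-ins (assumed to expose `log_param`, `log_metric`, and `log_artifact`, mirroring MLflow's tracking API), and fastai's real callback signatures differ in detail.

```python
class MLFlowTrackerSketch:
    """Illustrative sketch of a tracking callback (not the actual PR code).

    `client` stands in for an MLflow tracking client; injecting it keeps the
    sketch runnable without MLflow installed.
    """

    def __init__(self, client, exp_name, params=None, nb_path=None):
        self.client = client
        self.exp_name = exp_name
        self.params = params or {}
        self.nb_path = nb_path

    def on_train_begin(self):
        # Record hyperparameters once, at the start of training.
        for name, value in self.params.items():
            self.client.log_param(name, value)

    def on_epoch_end(self, epoch, metrics):
        # Record each metric value, using the epoch as the step.
        for name, value in metrics.items():
            self.client.log_metric(name, value, step=epoch)

    def on_train_end(self):
        # Store the notebook as a run artifact so it can be retrieved later.
        if self.nb_path:
            self.client.log_artifact(self.nb_path)


class RecordingClient:
    """Fake client used only to demonstrate what the callback logs."""

    def __init__(self):
        self.params, self.metrics, self.artifacts = {}, [], []

    def log_param(self, name, value):
        self.params[name] = value

    def log_metric(self, name, value, step=None):
        self.metrics.append((name, value, step))

    def log_artifact(self, path):
        self.artifacts.append(path)


client = RecordingClient()
tracker = MLFlowTrackerSketch(client, "lesson1-pets", params={"lr": 1e-3})
tracker.on_train_begin()
tracker.on_epoch_end(0, {"error_rate": 0.05})
tracker.on_train_end()
```

In the real integration the callback is passed via `callback_fns`, as shown above, so fastai invokes these hooks during `fit`.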
Thanks, Sam, for the offer, but I am using a local installation of MLflow and it serves my needs well.
Great job @gurvinder
If you want to go one step further and share your MLflow experiment runs with others you can have it hosted on neptune.ml (free tool unless you are a for-profit organization).
All you need to do is change `mlflow ui` to
For example I can share this experiment run link with all of you and let you access my model weights.
You can read more about it in this blog post.
Hi, I love this callback. However, I’m having some problems: the whole left side of the run log (user, code version, source, run name) stays empty. Do you have any idea what could be the reason?
Hi, what is the state of this? Will it be merged? I’m really interested in having a callback for logging metrics on MLflow.