I was just wondering if anyone has used the library at https://ray.readthedocs.io/en/latest/tune.html to tune models or hyperparameters in fastai. I'm looking to do this and was hoping to find repos or a starting point for merging the two.
Recently, I searched for hyperparameter tuning libraries. I found Optuna, Tune, Ax, and BayesianOptimization, among others. After several hours, I found Optuna to be more concise and clear than Tune; it also has much better docs. I think Optuna is almost as powerful as Tune while being extremely easy to use. From reading both sets of docs, I believe Tune is the better choice if you need to scale your training up to hundreds or thousands of machines.
Finally, Optuna has a built-in callback for fastai v1.
Interesting, thanks for that I’ll check it out.
Hey @vferrer, I'd love to learn about your experience with Tune and the Tune documentation. Is there a place I can reach you to chat? I'm not trying to sell you anything, just trying to find out what we can do better (I work on Tune).
You can also reach out to me at my email found here: https://github.com/richardliaw
@rliaw I'm not vferrer, but for my purposes it would be pretty amazing if there were a fastai callback for use with Tune. A teammate of mine who uses Keras highly recommended it, and it looks like it would be very useful, but I don't currently have the time to develop a fastai callback myself, so I couldn't use it. Based on the amount of code it took Optuna to implement pruning, it doesn't look like it would be a giant lift to support, especially since Tune already supports PyTorch. I'm not saying it's worth your time (that depends on the overlap between fastai and Tune users), but it would have helped me out.
Hi @rliaw, I have a notebook in which I was trying to tune a fastai v1 model with FLAML. Unfortunately, it's still pretty buggy.
It seems FLAML uses Ray Tune under the hood.
I would love to chat with you sometime soon.