Get_transforms hyperparameter tuning

Hi guys, what are some hyperparameters of get_transforms that we should always consider tuning for our ImageDataBunch object?

There are so many of them, and I wonder if anyone has had success improving a model's accuracy by tuning only a few.

Thank you.

Hi @wjsheng,

Jeremy noted in September last year that "get_transforms now defaults to reasonable defaults for side-on photos" - so changing the transforms a lot when working with that kind of photo will probably not give you a much better result.

As far as I know, which transforms you should check depends a lot on the dataset you are using. I usually go through every transform and ask whether it makes sense to apply it.
For example, flip_vert makes sense for satellite images, but not for pictures of animals.
An important part of deciding which transforms make sense is looking at the kind of data you're working with: what is the lighting like, the angle, etc.? Knowing these things will help you make better decisions.
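To make that concrete, here is a small sketch of how that reasoning could be encoded. The helper function and the `orientation_invariant` flag are my own illustration (not part of fastai); the default values mirror fastai v1's documented defaults for get_transforms, and the commented-out lines at the end show how the result would be passed to an ImageDataBunch.

```python
# Sketch: picking get_transforms keyword arguments per dataset.
# transform_kwargs() and orientation_invariant are illustrative helpers,
# not part of the fastai API; the defaults mirror fastai v1's
# get_transforms() defaults for side-on photos.

def transform_kwargs(orientation_invariant=False, max_rotate=10.0,
                     max_zoom=1.1, max_lighting=0.2, max_warp=0.2,
                     p_affine=0.75, p_lighting=0.75):
    """Build keyword arguments for fastai v1's get_transforms().

    orientation_invariant: True for top-down data such as satellite
    tiles, where a vertical flip is still a valid image; False for
    side-on photos (e.g. animals), where it is not.
    """
    return dict(
        do_flip=True,                     # horizontal flips are almost always safe
        flip_vert=orientation_invariant,  # vertical flips only if up/down is arbitrary
        max_rotate=max_rotate,
        max_zoom=max_zoom,
        max_lighting=max_lighting,
        max_warp=max_warp,
        p_affine=p_affine,
        p_lighting=p_lighting,
    )

# Satellite tiles: allow vertical flips and larger rotations.
satellite = transform_kwargs(orientation_invariant=True, max_rotate=45.0)
# Pet photos: keep the side-on defaults.
pets = transform_kwargs()

# With fastai v1 installed you would then do something like:
# from fastai.vision import get_transforms, ImageDataBunch
# tfms = get_transforms(**satellite)
# data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=224)
```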

I know that fellows from fellowship.ai have been working on 'optimal transforms' - see https://platform.ai/blog/page/3/new-food-101-sota-with-fastai-and-platform-ais-fast-augmentation-search/. Their method searches for the optimal hyperparameters for each transformation - I would love to hear more about the status of that work. I can imagine that a process that scans the dataset for lighting or angles and advises on the optimal transforms could be really helpful (no clue if that's what they're doing).


Thank you for the detailed reply, gietema. You are right: the defaults have been working so well that I am not sure how to further improve my model. I will explore the other transformations some more. Thanks, once again!