How to fine-tune hyperparameters after seeing results?


Hi all,

So I recently finished watching lesson 3 and tried my hand at a couple of Kaggle comps, specifically the State Farm and plant seedlings competitions. I basically used fastai’s 8 steps to build an image classifier and have been pretty enthusiastic about the results I’m getting.

However, after I finish running learn.TTA() and calculating accuracy/metrics, I’m stuck, unsure of what to do next. I’ve been randomly changing the batch size, architecture, and image size, then rerunning the whole process, which is time consuming. I’m not sure how to tell whether I should switch to a more complex model, or which batch sizes / image sizes to use. What I do instead is browse Kaggle and the forums, see what hyperparameters others use, and use those to get a higher ranking.
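For what it’s worth, one thing I’ve been considering instead of changing things at random is looping over a small grid of settings and recording the validation metric for each, so at least the comparison is systematic. A minimal sketch of the idea — `train_and_eval` here is a hypothetical stand-in for the full fastai run (build data with the given batch/image size, fit, score with `learn.TTA()`), and the scores are made up for illustration:

```python
from itertools import product

def train_and_eval(bs, sz):
    # Hypothetical stand-in for a full training run; in practice this
    # would rebuild the data/learner and return validation accuracy.
    fake_scores = {(32, 224): 0.91, (64, 224): 0.93,
                   (32, 299): 0.94, (64, 299): 0.92}  # made-up numbers
    return fake_scores[(bs, sz)]

# Try each (batch size, image size) combination and record the metric.
results = {}
for bs, sz in product([32, 64], [224, 299]):
    results[(bs, sz)] = train_and_eval(bs, sz)

# Pick the setting with the best recorded validation score.
best = max(results, key=results.get)
print(best, results[best])
```

At minimum this gives you a table of settings vs. scores you can reason about, rather than a memory of what you tried last week.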

  • Should I instead be trying to find trends between the loss and hyperparameters (like the number of epochs, etc.)?
  • Would I be able to tell whether these hyperparameters are working as soon as I train the pretrained model with precompute=False, given that unfreezing the model makes training take a lot longer?
  • How does the community generally work on improving their model after getting the metrics?

Also, how do I apply what I learn from plotting the incorrect/uncertain images the model predicted and from the confusion matrix? One approach I can think of is to grab more images similar to the uncertain ones from Google and add them to my training set, but is there also a way to correlate these findings with the hyperparameters?
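To make that concrete, this is roughly how I’d turn the confusion matrix and prediction probabilities into something actionable: find the most-confused class pair and the least-confident images, so I know which classes to inspect or collect more data for. A plain-numpy sketch with toy probabilities (all numbers made up):

```python
import numpy as np

# Toy predicted probabilities for 6 validation images over 3 classes.
probs = np.array([
    [0.90, 0.05, 0.05],
    [0.20, 0.70, 0.10],
    [0.40, 0.45, 0.15],  # uncertain: top probability only 0.45
    [0.10, 0.80, 0.10],
    [0.30, 0.30, 0.40],  # uncertain: top probability only 0.40
    [0.05, 0.05, 0.90],
])
y_true = np.array([0, 1, 0, 1, 2, 2])
y_pred = probs.argmax(axis=1)

# Confusion matrix: rows = true class, columns = predicted class.
n_classes = probs.shape[1]
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

# The largest off-diagonal entry is the most-confused class pair.
off = cm.copy()
np.fill_diagonal(off, 0)
worst_true, worst_pred = np.unravel_index(off.argmax(), off.shape)

# Least-confident images: lowest top probability. These are the ones
# worth eyeballing, and maybe the classes to gather more images for.
uncertain_idx = probs.max(axis=1).argsort()[:2]
print(worst_true, worst_pred, uncertain_idx)
```

My hope is that this at least tells me *where* the model struggles, even if it doesn’t directly say which hyperparameter to change.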