Hi Everyone,
I want to understand a bit more about how transfer learning works in practice and was hoping that someone here might be able to point me in the right direction.
I have my own set of data (images of a forest taken by a drone) with annotations. I have successfully adapted the lesson 3 notebook so I can do transfer learning from the ImageNet weights. So, ImageNet + fine-tuning is done; the results are nice but not great (the main problems are most likely the small number of images and the poor annotations).
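For reference, this is roughly what my current setup looks like, adapted from the lesson 3 notebook (fastai v1 style; the path, folder layout and hyperparameters below are just placeholders for my local setup):

```python
from fastai.vision import *

# My drone images, one folder per class (5 classes); path and sizes are placeholders
drone_data = (ImageDataBunch.from_folder('data/forest', valid_pct=0.2,
                                         ds_tfms=get_transforms(), size=224)
              .normalize(imagenet_stats))

# ResNet-34 body with ImageNet weights plus a new head for my 5 classes
learn = cnn_learner(drone_data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)                             # train only the head first
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-4))   # then fine-tune the whole model
```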
However, I would like to go one step further: I would like to fine-tune the ImageNet model on a problem (with lots of data) that is closer to mine, and then fine-tune the result on my images. So I want to do transfer learning "twice".
So, I have trained the same ImageNet model on the "Planet" dataset, as explained in the lesson. From now on I will call the model I trained this way (again, via transfer learning from ImageNet) the "planet" model.
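Concretely, I train on Planet following the lesson and then save the whole fine-tuned model so I can reuse its weights afterwards (the names, paths and epoch counts below are just what I use locally):

```python
from fastai.vision import *

# Planet DataBunch built as in the lesson (CSV with space-separated labels)
planet_data = (ImageList.from_csv('data/planet', 'train_v2.csv',
                                  folder='train-jpg', suffix='.jpg')
               .split_by_rand_pct(0.2)
               .label_from_df(label_delim=' ')
               .transform(get_transforms(), size=128)
               .databunch()
               .normalize(imagenet_stats))

# Transfer learning from ImageNet, exactly as in the lesson
planet_learn = cnn_learner(planet_data, models.resnet34,
                           metrics=[accuracy_thresh, fbeta])
planet_learn.fit_one_cycle(5)
planet_learn.unfreeze()
planet_learn.fit_one_cycle(5, max_lr=slice(1e-5, 1e-3))

# Save body + planet head so the weights can be loaded later
planet_learn.save('planet-stage-2')
```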
Now what is left is to fine-tune the "planet" model on my images. However, I have not yet been able to figure out how to do it.
CONCERNS:
- I have to be careful not to discard the fine-tuned part of the planet model (if I cut off too much I will end up back at the plain ImageNet model).
- I cannot directly load the planet model onto my problem: the planet model outputs seven categories and my data only has five, so at the very least I need to change the head (what I am currently trying is sketched right after this list).
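What I have been trying for that last step looks roughly like this, continuing from the snippets above (so drone_data and planet_learn are my own names). I am not at all sure this is the right way to keep the fine-tuned planet body while swapping in a 5-class head:

```python
# A fresh learner on my drone data: same ResNet-34 body, new 5-class head
drone_learn = cnn_learner(drone_data, models.resnet34, metrics=accuracy)

# Copy every parameter from the planet model whose shape matches; the final
# 7-way linear layer is skipped, so the new 5-way output keeps its fresh init
planet_state = planet_learn.model.state_dict()
drone_state = drone_learn.model.state_dict()
compatible = {k: v for k, v in planet_state.items()
              if k in drone_state and v.shape == drone_state[k].shape}
drone_learn.model.load_state_dict(compatible, strict=False)

drone_learn.fit_one_cycle(4)   # fine-tune on the drone images
```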
I have been looking at several things; create_body and create_head, together with the thread "Transfer learning with different model shapes (error: size mismatch)", seem the most promising so far, but I have not had much success yet. The sketch below is as far as I have gotten. Any pointers will be greatly appreciated.
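This is the create_body / create_head route, again continuing from the snippets above (fastai v1). The nf=1024 value assumes a ResNet-34 body with fastai's concat pooling, and layer groups / learning rates are left out:

```python
from fastai.vision import *
import torch.nn as nn

# Rebuild the same architecture cnn_learner produces: pretrained body + fastai head
body = create_body(models.resnet34, pretrained=True)
head = create_head(nf=1024, nc=5)   # 512 body features * 2 (concat pool) -> 5 classes
model = nn.Sequential(body, head)

# The two bodies are identical modules, so the planet body weights load directly;
# the new 5-class head keeps its fresh initialisation
model[0].load_state_dict(planet_learn.model[0].state_dict())

learn = Learner(drone_data, model, metrics=accuracy)
learn.fit_one_cycle(4)
```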