Hi Fastians! (Fastaians?)
I have been thinking about tweaking some large pretrained models and then retraining them on their original ImageNet task, to see whether my tweaks improve anything.
Has anyone tried this as well?
NB: this is different from the usual ‘finetuning’ approach in this course, where we throw out the final layers of the model, replace them with our own, and apply the result to our own data. This is really about fiddling with the architecture of the entire model (without breaking its current accuracy) and then resuming training on the original data.