How to retrain the Bear Classifier model after cleaning the data?

I’ve been playing with the Bear Classifier notebook. I’ve trained the model and cleaned the data using the ImageClassifierCleaner.
But now, how do I retrain the model on the newly cleaned data? Do the previously learned weights carry over?
And do I use fine_tune or fit_one_cycle when retraining?

You can retrain your model from scratch by running

learn = cnn_learner(dls, resnet18, metrics=error_rate)  
learn.fine_tune(4)

cnn_learner() allocates a new learner with freshly initialized weights (the pretrained ResNet backbone plus a new random head).
fine_tune() then trains the model, i.e. it learns by adjusting those weights.
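
To answer the fine_tune vs fit_one_cycle part: since you are starting from a pretrained ResNet, fine_tune is the right call. Roughly (a sketch; the epoch counts are just examples):

# fine_tune: freezes the pretrained body, trains the head for one epoch,
# then unfreezes and trains the whole model - right for pretrained backbones
learn.fine_tune(4)

# fit_one_cycle: trains whatever is currently unfrozen, with no freezing schedule
learn.fit_one_cycle(4)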

What error are you getting when you “simply retrain the same model”?
You should be able to just run fine_tune() again without errors.

@meanpenguin Thank you for your reply. I was able to run fine_tune() again, but I’m confused about what inputs to give cnn_learner. How will it be different from the lines below:

learn = cnn_learner(dls, resnet18, metrics=error_rate)  
learn.fine_tune(4)

Sorry. Not sure what you are asking.

You said you were able to run fine_tune().
I’m assuming that you ran the two lines you quoted.
Those two lines would have retrained the model.

What difference are you looking for?

@meanpenguin I’m sorry I wasn’t clear enough. I’m new at this.
What I want to know is this:
I train the model using the code:

learn = cnn_learner(dls, resnet18, metrics=error_rate)  
learn.fine_tune(4)

Then I use the ImageClassifierCleaner.
Then I use the two lines of code in the book to delete images and move them to a different category, respectively.
Now, the book says that “Once we’ve cleaned up our data, we can retrain the model.”
My question is: what code do I use to retrain the model with this clean data? Will it be exactly the same as the code above, or will there be some difference?
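
(For reference, the two cleanup lines from the book look roughly like this, assuming the widget is stored in cleaner and the images live under path:)

import shutil
# delete the images marked for deletion in the cleaner widget
for idx in cleaner.delete(): cleaner.fns[idx].unlink()
# move re-labelled images into their new category folder
for idx,cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)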

Thank you for your help.

You will also need to recreate your DataLoaders object before you run those two lines of code.

The reason is that even though DataLoaders and their Datasets are lazy objects (i.e. they load the images only when called/required, such as during batch creation while training), in this circumstance the file path for each image in your dataset has already been collected and stored in the dls. Therefore you need to provide a new list of file paths (one that reflects the updated labels and drops the deleted images), and since the DataBlock knows how to get these (via get_items=get_image_files), you simply recreate the dls.
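
In code, that is just re-running the DataLoaders cell (a sketch, assuming your DataBlock is called bears and your images live under path):

# .dataloaders calls get_image_files again, so the new dls
# reflects the deleted images and the moved/re-labelled ones
dls = bears.dataloaders(path)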

Now, when you reinitialise the learn object with learn = cnn_learner(dls, resnet18, metrics=error_rate), you are erasing the parameter updates made in the previous training loop, when you had poorer-quality data. You could in theory keep training that model by pointing the existing learn object at the new dls and simply running learn.fine_tune(4) again, but the prior updates would still be there, and the model would need to unlearn the images you discarded or reclassified. So by including the reinitialising line with the updated dls, you return to the pretrained ResNet weights and start training from scratch, albeit with a better dataset (the updated dls), which should give better accuracy, faster.
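
Side by side, the two options look like this (a sketch; the epoch counts are just examples):

# option 1 (recommended here): start fresh from the pretrained ResNet weights
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)

# option 2: keep the previously learned weights and continue on the cleaned data
learn.dls = dls
learn.fine_tune(4)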

Hope this helped


@stantonius Thank you! This is what I was looking for.

Hello!

I have the same problem as @shiro12. I did what you said, but after cleaning (deleting some images) I run the following code again:
animals3 = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),  # splits the dataset into a training set and a validation set
    get_y=parent_label,
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())

and then:
dls = animals3.dataloaders(root)
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)

The problem is that I get different results: it trains the model again, but with different errors. Some randomness is involved here, and I think there shouldn’t be.

Thank you for your help.


What I mean by different results is that my confusion matrix is different than before. Maybe each training run uses a different training/validation split, but that shouldn’t happen because of the RandomSplitter (I set seed=42).
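
One quick way to check the split itself (a sketch using the animals3 block above):

# with seed=42 in RandomSplitter, the validation file list should be identical across runs
dls_a = animals3.dataloaders(root)
dls_b = animals3.dataloaders(root)
assert [str(o) for o in dls_a.valid_ds.items] == [str(o) for o in dls_b.valid_ds.items]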

I think I might be experiencing the same problem. I fine-tune the same model on the same data and get similar, but different, results.

I believe that is because the final layer of weights is randomly initialized when you create the learner (before fine_tune() even runs).

I’m trying to find a way to seed that random initialization to get the same results, making my experiments more consistent.

Anyone know how to seed the random initializations?

aha! I found a solution.

I added:
set_seed(42)

just before my learn = cnn_learner() call, and it trains consistently now on the same datasets.
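
For fuller reproducibility, you can use fastai’s set_seed with reproducible=True (a minimal sketch; the epoch count is just an example):

from fastai.vision.all import *

set_seed(42, reproducible=True)  # seeds python/numpy/torch; reproducible=True also makes cuDNN deterministic
dls = animals3.dataloaders(root)  # rebuild the batches after seeding so shuffling is repeatable
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)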