Using the full training set (no validation)

#1

Hi,
As suggested by Jeremy Howard, when there is no separate validation set we can hold out approximately 20% of the training set for validation.
When doing the “dog breeds” exercise in Lesson 2, he mentions that after all the tuning he would train on the whole training set, i.e. reclaim that 20%… but how can we “fit” our network if 100% of the data is used for training and there is no validation? Or does he mean to pick a new random 20% for validation and train the network a bit more?

Thanks!
Bliss

0 Likes

(Malcolm McLean) #2

If I understand your question… all the fitting of the network is done using the training set. The validation set is used only to calculate the metrics, which show how the network performs on samples it has never seen, for example to assess accuracy and to detect over-fitting to the training set.

0 Likes

#3

So after a good training run with 20% of the training set used for validation… should we train a few more epochs “blindly”, i.e. without validation, so we maximize the training set size?

For Example, we obtained the 20% like so:
val_idxs = get_cv_idxs(n)

And then used it here:
data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv', test_name='test',
                                    val_idxs=val_idxs, suffix='.jpg', tfms=tfms_from_model(arch, sz), bs=64)

Do we just not use val_idxs?

Hope my question makes sense 🙂

Thanks
Rodrigo

0 Likes

(Malcolm McLean) #4

So after a good training run with 20% of the training set used for validation… should we train a few more epochs “blindly”, i.e. without validation, so we maximize the training set size?

I believe that’s right conceptually. You will still have the training loss to go by.

As for the best way to implement it, sorry, I don’t know. I’m using v1.0 of fastai; there you can specify the validation/train split fraction.

0 Likes

(Yijin) #5

Actually, once you have arrived at a good training process (hyperparameters, unfreezing, number of epochs, etc.), you can re-run your training from scratch with val_idxs (or val_pct) set so the validation set holds zero / near-zero data. Your model will then be able to train on the full set of your training data, and you just ignore the “validation accuracy” metric output, since it is no longer meaningful. More training data should mean that you can train to a lower loss with better general accuracy.

Of course, this depends on how long it takes/took to train the model, as you might not want to re-train from scratch if it took you aaaaages previously. In that case, as you said, you can just train a few more epochs with the full training set, which should still give you some benefit.

Regarding val_idxs (and val_pct), there are default values in the fast.ai code (v0.7 and v1). You will need to manually override the defaults to give the validation set nearly no data. I have not looked at all the code in detail, but it might not like having no validation sample at all, so it is probably safest to give it a single validation sample to load.
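To make that concrete, here is a minimal sketch of the idea in plain NumPy. `get_cv_idxs_like` is a hypothetical stand-in that mimics what v0.7’s `get_cv_idxs` does (a random `val_pct` slice of the indices); it is not the library function itself:

```python
import numpy as np

def get_cv_idxs_like(n, val_pct=0.2, seed=42):
    """Mimic fastai v0.7's get_cv_idxs: a random val_pct slice of 0..n-1."""
    rng = np.random.RandomState(seed)
    n_val = int(val_pct * n)
    return rng.permutation(n)[:n_val]

val_idxs = get_cv_idxs_like(10000)   # normal run: ~20% (2000 samples) held out
tiny_val_idxs = [0]                  # "full training" run: one throwaway sample
```

Passing something like `tiny_val_idxs` in place of the usual `val_idxs` leaves virtually all of the data for training while keeping the data loaders happy.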

@Pomo has already answered your initial question about ‘how can we “fit” our network if 100% is training and there is no validation?’ : )

Yijin

1 Like

(shweta) #6

Hi. I had the same question. Pardon the basic question, as I am new to deep learning.

So does this mean we have to go through the whole process of choosing the number of epochs, cycle length, cycle multiplier, and freezing and unfreezing again? Or should we just re-run the last fit cycle?

0 Likes

(Yijin) #7

My understanding is that, let’s say with an 80% training set (20% validation set), from your experiments you arrived at a training process of, say, 10 epochs at a certain LR, then unfreeze and 4 epochs at certain LRs, then freeze again and 2 more epochs, giving accuracy and losses that you are happy with. Then you can make a copy of your Jupyter notebook, set the training set to 100% of your data (~0% validation set), and just run the whole notebook from start to end without changing anything else (i.e. 10, 4, 2 epochs with the LRs found in the example above). You should ignore the validation metric this time round, but you should get a better-trained model at the end of it, for actual inference on the test set.

Hope that helps?

0 Likes

(shweta) #8

Oh. Thanks. It makes sense.

1. Can you also clarify one more thing: how many epochs should I run? Should I keep training each time until the accuracy stops improving?
2. Also, did you face any issue when you don’t give a validation set to ImageClassifierData.from_arrays? I am getting an error for that.

0 Likes

(Yijin) #9
  1. You should experiment with the number of epochs, and stop training and/or start using other best practices from fast.ai when: (a) your losses are no longer decreasing and your accuracy is no longer improving (indicating that you have reached the limit); OR (b) your training loss is getting much lower than your validation loss (indicating overfit).

  2. I have not looked into that function in detail. If you are getting an error, try giving it a validation set with just one item? Doesn’t matter what that item is, since you are going to ignore the validation metrics when you are just re-running your established training process (but now with the full set of training data).
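Those two stopping signals can be sketched as a small heuristic. This is just an illustration of the criteria described above, with made-up names and thresholds (`should_stop`, `patience`, `overfit_ratio`), not fast.ai code:

```python
def should_stop(train_losses, val_losses, patience=3, overfit_ratio=0.5):
    """Return True when training looks done, per the two signals above."""
    # (a) validation loss has not improved over the last `patience` epochs
    if len(val_losses) > patience:
        recent_best = min(val_losses[-patience:])
        earlier_best = min(val_losses[:-patience])
        if recent_best >= earlier_best:
            return True
    # (b) training loss far below validation loss: likely overfitting
    if train_losses[-1] < overfit_ratio * val_losses[-1]:
        return True
    return False
```

In practice you would eyeball the loss curves rather than automate this, but it captures the two conditions in one place.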

0 Likes

(shweta) #10

Thank you so much for the help. I really appreciate it.

0 Likes

(Yijin) #11

Please also note this tweet from Jeremy, regarding overfitting. Your training loss will likely be much much lower than your validation loss, and your accuracy will deteriorate, if your classification model is really overfitting. An “epoch limit” when training your classification model is more likely to be caused by lack of GPU resource / time (or when you have reached your accuracy target), rather than for the avoidance of overfitting : )

0 Likes

#12

Did anyone figure out how to give no validation examples? Setting val_idxs=None falls back to a random 20% split, and setting it to 0 makes it use index 0 as the validation set.
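If a truly empty validation set is not accepted, the pragmatic fallback suggested earlier in the thread is to accept a one-sample validation set and simply ignore its metrics. An explicit one-element list makes that intent clear (a sketch reusing the names from the from_csv call earlier in the thread, untested here against the v0.7 code):

```python
# Hypothetical sketch: hold out only image index 0 as "validation",
# so the data loaders are never handed an empty set.
val_idxs = [0]

# data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv',
#                                     test_name='test', val_idxs=val_idxs,
#                                     suffix='.jpg',
#                                     tfms=tfms_from_model(arch, sz), bs=64)
```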

0 Likes