How do I skip validation loss calculation?

Hi! I’m using learner.fine_tune(..) and it takes a very long time on my Mac. The validation loss calculation (which happens every epoch by default) takes up a good chunk of this time. To speed up training, I would like to skip this calculation until the final epoch. How can I achieve this?

I found this post from 2019 mentioning a new Callback system to do this, but couldn’t find anything about it in the docs. I’d be grateful if anyone could help me out.

I don’t have experience writing custom callbacks but gave it a shot in this Colab notebook. I got it to skip the validation step with the following callback:

from fastai.callback.core import Callback, CancelValidException

class SkipValidationCallback(Callback):
    def before_validate(self):
        # Raising CancelValidException tells the Learner to skip validation
        raise CancelValidException()

The downside, though, is that after training I’m unable to use get_preds or predict.
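Since the goal was to skip validation only until the final epoch, a variant that compares self.epoch to self.n_epoch (both of which the fastai Learner sets on the callback during training) might work. Below is a self-contained sketch: Callback and CancelValidException are stand-in stubs so it runs without fastai, and the epoch-comparison logic is my assumption about what the OP wants, not tested against a real Learner.

```python
class CancelValidException(Exception):
    """Stand-in for fastai's CancelValidException."""

class Callback:
    """Stand-in for fastai's Callback base class."""

class SkipValidationUntilLastEpochCallback(Callback):
    """Skip validation on every epoch except the final one."""
    def before_validate(self):
        # fastai sets self.epoch and self.n_epoch on callbacks at runtime
        if self.epoch < self.n_epoch - 1:
            raise CancelValidException()

# Simulate three epochs of a training loop
cb = SkipValidationUntilLastEpochCallback()
cb.n_epoch = 3
validated = []
for cb.epoch in range(cb.n_epoch):
    try:
        cb.before_validate()
        validated.append(cb.epoch)
    except CancelValidException:
        pass
print(validated)  # only the final epoch runs validation
```

In a real fastai setup the class would subclass fastai's own Callback and the stubs would come from fastai.callback.core instead.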

Not sure if any of this will help but hoping it nudges you in the right direction!


Your callback solution works when I pass it to fine_tune instead of to vision_learner.

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(1, cbs=[SkipValidationCallback()])

Doing this lets me use get_preds as usual.
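For anyone curious why the placement matters, here is a self-contained sketch (a stub Learner, not real fastai) of the behavior I believe is at play: callbacks passed to a fit method such as fine_tune are added only for the duration of that fit and removed afterwards, whereas callbacks attached to the learner itself stay registered and would keep cancelling validation inside get_preds.

```python
class Learner:
    """Stand-in stub mimicking how fastai scopes callbacks."""
    def __init__(self, cbs=()):
        self.cbs = list(cbs)          # learner-level callbacks persist
    def fine_tune(self, epochs, cbs=()):
        self.cbs += list(cbs)         # fit-time callbacks added...
        # ...training runs here...
        for cb in cbs:
            self.cbs.remove(cb)       # ...then removed when fit returns

skip_cb = object()
learn = Learner()
learn.fine_tune(1, cbs=[skip_cb])
print(skip_cb in learn.cbs)  # False: callback is gone, get_preds unaffected
```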

Thanks a lot @vbakshi. You saved me hours in training time! :smile:

Awesome! Glad to hear it!