Kerem, thanks for the PR; it's a great idea.
I am trying to use it in Lesson 14’s Carvana example.
When I call learn.lr_find(), it gives this error message:
ValueError: Target size (torch.Size([8, 256, 256])) must be the same as input size (torch.Size([8, 3, 256, 256]))
This is quite odd. The target is the ground-truth mask, so it's a black-and-white image with just one channel, while the input is an RGB image. That's also how Unet34 from Lesson 14 works.
I tried to follow your code from GitHub, and the only thing I didn't copy was the loss function (which is exactly where the error happens). My loss is a plain BCEWithLogitsLoss.
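For what it's worth, BCEWithLogitsLoss requires the prediction (what PyTorch calls "input" in that error) and the target to have exactly the same shape, so a [8, 3, 256, 256] output against a [8, 256, 256] mask will fail just like above. A minimal sketch reproducing the mismatch (shapes are hypothetical, scaled down from the error message):

```python
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()

# Model output with one channel vs. a mask missing the channel dim:
logits = torch.randn(2, 1, 4, 4)  # prediction ("input" in the error)
mask = torch.rand(2, 4, 4)        # ground-truth mask, no channel axis

try:
    loss_fn(logits, mask)  # shapes differ -> ValueError, as in the report
except ValueError as e:
    print("mismatch:", e)

# Adding the missing channel dimension to the target fixes that half;
# the model itself would also need to output 1 channel, not 3.
loss = loss_fn(logits, mask.unsqueeze(1))
print(loss)  # scalar loss tensor
```

So it may be worth checking whether the final conv layer outputs 3 channels where a single-channel mask is expected.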