I have been implementing progressive resizing for an image-domain regression problem (predicting a continuous value from an input image): I train the model on images at 120×160, then create a new databunch with the same images at 240×320 and train again.
However, when I do this the model immediately starts with a much worse loss; it's almost as if the weights had been reset or pushed badly out of place.
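To make the setup concrete, here is a minimal sketch of what I mean in plain PyTorch (not the exact fastai calls I'm using; `RegressionNet` and its layers are just illustrative). The point is that the model is size-agnostic via adaptive pooling, so the same weights should in principle apply at both resolutions:

```python
import torch
import torch.nn as nn

class RegressionNet(nn.Module):
    """Toy size-agnostic regression model (illustrative, not my real architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapses any spatial size to 1x1
        )
        self.head = nn.Linear(32, 1)  # single continuous regression target

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RegressionNet()
small = torch.randn(2, 3, 120, 160)  # first training stage
large = torch.randn(2, 3, 240, 320)  # second stage, doubled resolution
# Same weights handle both resolutions; only the data pipeline changes.
out_small, out_large = model(small), model(large)
print(out_small.shape, out_large.shape)
```

Since nothing in the architecture depends on the input size, I expected the loss to carry over smoothly between the two stages.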
Has anyone else experienced this issue before or something like it?
Is this behaviour expected, and am I just missing something in the theory?
Any information on this topic would be greatly appreciated.