Once you unfreeze and retrain with one cycle again, if your training loss is still higher than your validation loss (likely underfitting), do you retrain it unfrozen again (which will technically be more than one cycle), or do you redo everything with more epochs per cycle?
Your training loss being lower than your validation loss is normal behavior. You are likely overfitting.
I’d often find that running fit_one_cycle over multiple epochs, the accuracy and valid_error would start off terrible in the first epoch (worse than at the end of the previous call of fit_one_cycle!) and get better over subsequent epochs. That makes me think they’re not the same.
Sorry, typo. I meant it the other way around.
I will ask the three most highly voted questions when Jeremy finishes this explanation.
Why do we need to run fit_one_cycle with a certain learning rate before unfreezing? Why don’t we just unfreeze directly?
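For reference, here is a minimal sketch of the workflow this question is about (fastai v1 API as used in the lesson; the data path is a hypothetical placeholder):

```python
from fastai.vision import (ImageDataBunch, cnn_learner, get_transforms,
                           models, error_rate)

# 'data/bears' is a hypothetical ImageNet-style folder with train/ and valid/
# subfolders of labeled images.
data = ImageDataBunch.from_folder('data/bears', ds_tfms=get_transforms(),
                                  size=224, bs=64)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)

learn.fit_one_cycle(4)      # 1) train only the new head; pretrained body frozen
learn.unfreeze()            # 2) make the whole network trainable
learn.lr_find()             # 3) probe learning rates before fine-tuning
learn.fit_one_cycle(4, max_lr=slice(1e-6, 1e-4))  # 4) fine-tune everything
```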
For human-assigned labels: what if humans have different criteria for assigning labels? As a result, the labels may be “mislabeled” (not necessarily wrong; they could be different opinions), which may confuse the model and in turn affect its accuracy. What do you do with those cases if you don’t want to delete them?
Being able to correct incorrect labels is a feature we plan to add in the future.
“Matrix multiplication” = “dot product”?
I think the explanation for this is that the optimizer resets, and it’s using momentum, so by the end of the previous training it had a better “idea” of which direction to head, whereas when you restart it starts off by heading in a less optimal direction.
No, absolutely not.
The cycle of one cycle follows this structure: the total move up and down in learning rate is spread across the number of epochs. So if you do one epoch and then run it again, it will do the “triangle” twice. @edwardjross, this also explains why the learning rate at the beginning is higher than at the end of the previous cycle!
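A minimal sketch of that triangular schedule, just to illustrate the shape (the function and its linear ramps are my assumptions; fastai’s actual fit_one_cycle also anneals momentum and, in later versions, uses cosine segments):

```python
# Assumed triangular one-cycle shape: ramp up for the first half of the
# cycle, ramp back down for the second half.
def one_cycle_lr(step, total_steps, lr_max=1e-3, div=25):
    lr_min = lr_max / div
    half = total_steps / 2
    if step < half:  # first half: ramp up from lr_min to lr_max
        return lr_min + (lr_max - lr_min) * step / half
    # second half: ramp back down from lr_max to lr_min
    return lr_max - (lr_max - lr_min) * (step - half) / half

# Two separate one-epoch calls trace the triangle twice, so the second call
# restarts at a high LR; a single two-epoch call stretches one triangle
# across both epochs.
two_calls = [one_cycle_lr(s, 100) for s in range(100)] * 2
one_call  = [one_cycle_lr(s, 200) for s in range(200)]
```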
Sorry, @sandmann, I somehow replied to the wrong question; this was in answer to you.
Also, for metrics, you can use the F1 score instead of the error rate… and as @wdhorton mentioned, sampling should be done based on the class distribution.
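A quick sketch of why F1 helps on imbalanced data, using scikit-learn and made-up predictions:

```python
from sklearn.metrics import f1_score

# Hypothetical predictions on an imbalanced validation set:
# 6 real bears (class 0) vs 2 teddy bears (class 1).
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0]

# 'macro' averages the per-class F1 scores, so the minority class counts
# as much as the majority class; plain error rate would not.
print(f1_score(y_true, y_pred, average="macro"))
```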
Never thought that the simple equation y = mx + c could be written as a matrix dot product.
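For anyone who wants to see it concretely, here is a small NumPy illustration (the variable names are mine):

```python
import numpy as np

# y = m*x + c as a dot product: append a column of ones to x so the
# intercept c becomes just another coefficient.
m, c = 2.0, 1.0
x = np.array([0.0, 1.0, 2.0, 3.0])

X = np.stack([x, np.ones_like(x)], axis=1)  # shape (4, 2)
w = np.array([m, c])                        # coefficient vector [m, c]

y = X @ w                                   # matrix product
assert np.allclose(y, m * x + c)
```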
Not only the momentum, but also the LR itself!
Sorry, I meant underfitting; I just made a mistake about the losses. Morning with no coffee over here.
Question: about the image size, the size parameter in ImageDataBunch that is set to 224. Is it the case that you will get a lower error when images are at a higher resolution (it will take more time, and bs will need to be smaller)? Or does the image resolution need to be 224 for resnet34?
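A hedged sketch of the trade-off being asked about (fastai v1; the folder path is a hypothetical placeholder, and 224 is a convention from ImageNet pretraining, not a hard requirement of resnet34):

```python
from fastai.vision import ImageDataBunch, get_transforms

# 'data/bears' is a hypothetical folder with train/ and valid/ subfolders.
# Higher resolution often lowers error but uses more memory per image,
# so the batch size usually has to shrink to fit on the GPU.
data_224 = ImageDataBunch.from_folder('data/bears', ds_tfms=get_transforms(),
                                      size=224, bs=64)
data_352 = ImageDataBunch.from_folder('data/bears', ds_tfms=get_transforms(),
                                      size=352, bs=32)
```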
Is there a number-of-epochs finder, like the learning rate finder?
For the imbalanced classes question: would you balance your validation set, or check on an unbalanced validation set?
Jeremy’s answer on handling unbalanced data, e.g. 200 real bears and 50 teddy bears, was “just try it, it always just works fine”… is that only when starting with a well-trained net, or also when starting fresh?
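If “just try it” ever isn’t enough, one common fallback is oversampling the minority class; here is a minimal PyTorch sketch (the label counts match the question’s hypothetical 200 vs 50 split):

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical labels: 200 real bears (class 0), 50 teddy bears (class 1).
labels = torch.tensor([0] * 200 + [1] * 50)

class_counts = torch.bincount(labels).float()   # tensor([200., 50.])
sample_weights = 1.0 / class_counts[labels]     # rarer class -> larger weight
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels),
                                replacement=True)
# Passing sampler=sampler to a DataLoader draws roughly balanced batches.
```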
If consciousness arises from sufficiently complex data processing, at what point do you give up on ML training and obsess over AI sentience?