I’m trying to use CSVLogger to log metrics during training. Although I see the loss for both train and valid, the saved log file only contains the metrics (e.g. accuracy) for valid at each epoch.
How can I extract the metrics for the train data at each epoch as well?
It’s not recommended, since the metrics on the validation set are more important (you want to see your validation metrics improve; training metrics improving doesn’t tell you whether your model is overfitting).
If you really want it, you can do learn.validate(ds_idx=0)
Thanks. It’s beneficial to have the train and valid metrics together, to see the gap between them and check, for example, for overfitting.
As I understand it, learn.validate(ds_idx=0) only gives the metrics of the final model (after the last epoch). Is there any way to record the metrics of both train and valid at each epoch during training?
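Not fastai-specific, but a minimal plain-Python sketch of the idea: collect a per-epoch history of both train and valid metrics yourself (e.g. from a callback) and write one CSV row per epoch. The `log_metrics` helper, the column names, and the toy history values here are all made up for illustration:

```python
import csv
import io

def log_metrics(rows, out):
    """Write one CSV row per epoch with both train and valid metrics."""
    writer = csv.DictWriter(out, fieldnames=["epoch", "train_loss", "valid_loss",
                                             "train_accuracy", "valid_accuracy"])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

# Toy per-epoch records, as a callback might collect them during training.
history = [
    {"epoch": 0, "train_loss": 0.9, "valid_loss": 1.0,
     "train_accuracy": 0.60, "valid_accuracy": 0.55},
    {"epoch": 1, "train_loss": 0.5, "valid_loss": 0.7,
     "train_accuracy": 0.80, "valid_accuracy": 0.72},
]

buf = io.StringIO()
log_metrics(history, buf)
print(buf.getvalue())
```

With both columns side by side per epoch, the train/valid gap (one overfitting signal) is easy to plot or eyeball.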
Hi @welloilel @ilovescience
Thanks to both of you! I finally found a way to output a value for the metrics and loss. But which value is it? The last? The best?
My problem: I need to evaluate the fitting on slightly changed randomized datasets.
I have a couple of questions:
1- Where can I find some documentation on learn.validate or ?
2- Should learn.validate also return the best value of accuracy…? Or do I need to use a callback? Because apparently it is not compatible with EarlyStoppingCallback.
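If learn.validate only reports the final model's metrics, one option is to pick the best value out of a recorded per-epoch history yourself. A hedged sketch, assuming you already have such a history (the `track_best` helper and the history structure are made up here, not fastai API):

```python
def track_best(history, metric="valid_accuracy", higher_is_better=True):
    """Return (best_epoch, best_value) for a per-epoch metric history."""
    best = max if higher_is_better else min
    best_epoch, best_value = best(
        ((i, h[metric]) for i, h in enumerate(history)),
        key=lambda pair: pair[1],
    )
    return best_epoch, best_value

# Toy history: accuracy peaks at epoch 1, then dips.
history = [
    {"valid_accuracy": 0.55},
    {"valid_accuracy": 0.72},
    {"valid_accuracy": 0.70},
]
print(track_best(history))  # → (1, 0.72)
```

Set `higher_is_better=False` to track a loss instead of an accuracy. This is essentially what early-stopping callbacks do internally: compare each epoch's monitored value against the best seen so far.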