@rachel People in the previous thread were asking why the ConvLearner name was changed to create_cnn.
I do not know if it is true in fastai, but in Keras, if you have a dropout layer, your training loss can be higher than your validation loss, since dropout is not applied during validation.
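To illustrate the point (a minimal PyTorch sketch, since fastai sits on top of PyTorch; the tensor size and p=0.5 are just for demonstration): dropout zeroes activations in training mode but is a no-op in eval mode, so the training loss is computed on a "handicapped" network while the validation loss is not.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

drop.train()             # training mode: ~half the units zeroed, rest scaled by 1/(1-p)
train_out = drop(x)

drop.eval()              # eval mode: dropout is the identity
eval_out = drop(x)

print((train_out == 0).float().mean())  # roughly 0.5
print(torch.equal(eval_out, x))         # True
```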
Looking at the teddy/grizzly/black bear classification problem, what can I do if I have pictures with another animal, e.g. a zebra, but no zebra training data? Is there a general “other” option in classification?
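One common workaround (not a fastai built-in; the function name, class list, and 0.8 threshold below are my own illustration) is to threshold the top predicted probability and label anything too uncertain as "other":

```python
import numpy as np

def predict_with_other(probs, classes, threshold=0.8):
    """probs: per-class probabilities for one image (should sum to ~1)."""
    top = int(np.argmax(probs))
    # If even the most likely class is below the threshold,
    # the image probably belongs to none of the trained classes.
    if probs[top] < threshold:
        return "other"
    return classes[top]

classes = ["teddy", "grizzly", "black"]
print(predict_with_other(np.array([0.95, 0.03, 0.02]), classes))  # teddy
print(predict_with_other(np.array([0.40, 0.35, 0.25]), classes))  # other
```

Note that a plain softmax classifier tends to be overconfident on unseen classes, so the threshold may need tuning on held-out "other" images.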
Rachel I can confirm this one had a lot of likes.
That pixel-to-numbers graphic is from this article: https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721
It’s part of a series that looks really good. The author has more great-looking deep learning articles here: https://medium.com/@ageitgey
Hey, I have upgraded using the commands in https://forums.fast.ai/t/faq-resources-and-official-course-updates/27934 and it shows:

```
Name: fastai
Version: 1.0.18
```

But I am still getting a “not defined” error for download_images. I am on salamander.ai.
Can anyone help with this error: “NameError: name ‘download_images’ is not defined”?
Is it always fine to just randomly split the training and validation sets?
I am getting the error even after a successful upgrade.
Check your `fastai.__version__`.
From my experience, no, since dropout is not applied during validation, at least in Keras.
Not in my experience on tabular data.
@Rachel There are a few likes on the question about the mysterious 3s in the endpoints of the range of values passed to the learning rate finder:
```
import fastai
print(fastai.__version__)
# prints: 1.0.15
```
Do we have to reinstall every time the version changes? I assume git pull just gets the course code, not the update of the fastai library.
Might be different on tabular data, but for CNNs, nine times out of ten the validation error still goes below the training error by the end.
```
Name: fastai
Version: 1.0.18
Summary: fastai makes deep learning with PyTorch faster, more accurate, and easier
```
Can learn.recorder.plot_losses() be explained in a bit more detail? Why is train_loss plotted for each iteration, whereas val_loss is plotted only after each epoch?
Thanks! I wonder if there is anyone who looks at this rigorously…
git pull is just for the notebooks/docs/course materials. The library itself should be updated with pip or conda.
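For concreteness, the usual commands (assuming a standard fastai v1 install; use whichever matches how you originally installed):

```shell
# update the course notebooks / docs (does not touch the library)
git pull

# update the fastai library itself -- one of:
conda install -c fastai fastai    # if installed via conda
pip install --upgrade fastai      # if installed via pip
```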