This feels like a newbie question, but can anyone explain what the scale is for the “probabilities” returned by the learner.predict method? It looks to me like the output is the raw result of the last Linear layer in the model, but it’d be nice to know whether each number is, say, in the range 0 to 512 or something.
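If what you are seeing are raw activations from the last linear layer, they can be any real numbers; a softmax maps them to values in [0, 1] that sum to 1. Here is a minimal pure-Python sketch of the standard softmax formula (not fastai’s internals, just the math):

```python
import math

def softmax(logits):
    """Convert raw linear-layer outputs (logits) to probabilities.

    Subtracting the max first is a standard numerical-stability
    trick; it does not change the result.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw activations can be arbitrary real numbers...
probs = softmax([2.0, 1.0, 0.1])
# ...but after softmax each value is in [0, 1] and they sum to 1.
print(probs)
```

So if the numbers you see don’t sum to 1, you are likely looking at pre-softmax activations.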
I’m confused about training loss vs validation loss! I’ve noticed that in the lesson2-download nb and many examples I’ve seen, train_loss is higher than valid_loss. I’ve found it difficult to tweak the model with my data and get train_loss consistently lower than valid_loss, especially with a good error rate. Often the best I can get is something like this:
Can someone please clarify this! @lesscomfortable, I nominate you!
Yes, this feature was added in fastai-1.0.20, but “pip install fastai” installs fastai-1.0.18. So I tried installing from source, i.e. the developer installation, and it’s working now. Thanks!
Hey! Have you tried tweaking the dropout probability (‘ps’) and weight decay (‘wd’)? Try increasing them for a few epochs and tell us what you find!
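For intuition on what ‘wd’ does: weight decay shrinks every weight a little on each update, which discourages the model from fitting noise. A toy sketch of one SGD step with L2-style decay (plain Python, not fastai’s internals; the function name is made up for illustration):

```python
def sgd_step_with_wd(weights, grads, lr=0.1, wd=0.01):
    """One SGD update with L2-style weight decay.

    Each weight moves against its gradient and is additionally
    shrunk by lr * wd * weight -- that shrinkage is the decay.
    """
    return [w - lr * (g + wd * w) for w, g in zip(weights, grads)]

w = [1.0, -2.0, 0.5]
# With zero gradients, the update reduces to pure shrinkage,
# pulling every weight slightly toward zero:
w_next = sgd_step_with_wd(w, [0.0, 0.0, 0.0], lr=0.1, wd=0.01)
print(w_next)
```

Raising ‘wd’ strengthens that pull toward zero, which is why it can help when train_loss drops well below valid_loss.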
Hey,
I also faced the same problem: val_loss was way lower than train_loss when I started, but as training progresses the train loss goes down enough to drop below val_loss. What I do then is lower my learning rate. Say you ran learn.fit_one_cycle(12, max_lr=slice(1e-5, 1e-3)) and that is what you got; I would then try learn.fit_one_cycle(2, max_lr=slice(None, 1e-5)). In my case this always lowered the train_loss (and yes, it will be slow). The moment val_loss > train_loss, you can do a learn.lr_find() and then decide how to reiterate. This has worked for me, assuming we are only using what we have been taught to date; I haven’t used weight decay yet. It also becomes important to check the metric: if after 15 epochs your performance metric indicates you are performing poorly, even though val_loss > train_loss, then your model might be overfitting.
You probably have a (semi) corrupt image. Try finding which one it is by writing a small script where PIL opens each and every image in your dataset, and see which image it fails on. Alternatively, you could try setting PIL.ImageFile.LOAD_TRUNCATED_IMAGES = True.
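A sketch of such a script. The helper takes the opener as a parameter so it isn’t tied to any one library; with Pillow installed you would pass a small wrapper around PIL.Image.open (the function and variable names here are just illustrative):

```python
def find_bad_images(paths, open_image):
    """Try to open every image; return the paths that fail to decode."""
    bad = []
    for path in paths:
        try:
            open_image(path)
        except Exception:
            bad.append(path)
    return bad

# With Pillow you would run something like:
#   from PIL import Image
#   def opener(p):
#       with Image.open(p) as im:
#           im.load()  # force a full decode; truncated files fail here
#   bad = find_bad_images(my_image_paths, opener)
#   print(bad)
```

Calling .load() matters: Image.open alone is lazy and may not surface truncation errors until the pixels are actually read.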
Hi all. This could be just an issue related to the fact that I’m a huge newb, or maybe because I’m on a Mac. Quoted from lecture: “you hit ctrl-shift-J, or command-option-J, and you paste this into the console. I hit enter and it downloads my file for me…”
For me, it just returns null. Any ideas of what I may be doing wrong? Huge thanks!
Is anybody else facing this problem with download_images, where it stops partway through and keeps giving a content-length error? Any help or suggestion would be highly appreciated.
It’s been a few days, but I think on Windows I also didn’t get any indication that the file had downloaded, even though it had. You may just need to check wherever a Mac puts downloaded files; that I don’t know.
I just checked: the “ModuleNotFoundError: No module named ‘ipywidgets’” error was fixed last night.
I am also facing the “Runtime disconnected” error while running the widget.
Looks like Colab currently doesn’t support ipywidgets. I ended up manually inspecting and deleting the top-loss images.
Hi everybody, I heard Jeremy mention there are free GPU options, Crestle maybe, that are free to use thanks to VC money. Which one? How can we use it? Thank you.
Right now I am just using google colab, but nice to have an option.