What I’ve been doing is replacing valid_ds with train_ds and using my own defined range of indices to clean up the data.
The animation is a whole lot nicer with the following command; you can step backwards and forwards.
animation.FuncAnimation(fig, animate, frames=100, interval=20)
In Colab that works with no further installs.
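As a sketch of how that call fits together (the `animate` function and the sine-wave data here are invented for illustration; the Agg backend line just keeps it runnable outside a notebook):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs outside a notebook
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(x, np.sin(x))

def animate(i):
    # redraw the line for frame i: shift the sine wave a little each frame
    line.set_ydata(np.sin(x + i / 10))
    return (line,)

anim = animation.FuncAnimation(fig, animate, frames=100, interval=20)
# In a notebook, HTML(anim.to_jshtml()) renders a player with
# step-forward / step-backward controls.
```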
This feels like a newbie question, but can anyone explain what the scale is for the “probabilities” returned by the
learner.predict method? It looks to me like the output is the result of the last Linear layer in the model, but it would be nice to know that each number is, say, in the range 0 to 512 or something.
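For what it's worth, classification models usually pass the last Linear layer's raw scores through a softmax, which squashes every value into [0, 1] with the values summing to 1 — so a quick diagnostic is to check whether the numbers you get back sum to 1. A small NumPy sketch (the score values here are made up):

```python
import numpy as np

def softmax(z):
    """Map raw scores (logits) to probabilities in [0, 1] that sum to 1."""
    z = z - z.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])  # made-up last-layer activations
probs = softmax(logits)
# probs sums to 1; raw linear-layer outputs generally don't,
# which is one way to tell which of the two you are looking at.
```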
I’m confused about training loss vs validation loss! I’ve noticed that in the lesson2-download nb, and in many examples I’ve seen, train_loss is higher than valid_loss. I’ve found it difficult to tweak the model with my data and get train_loss consistently lower than valid_loss, especially with a good error rate. Often the best I can get is something like this:
Can someone please clarify this? @lesscomfortable, I nominate you!
Yes, this feature was added in fastai-1.0.20, but “pip install fastai” installs fastai-1.0.18. So I tried installing from source, i.e. the developer installation, and it’s working now. Thanks!
Hey! Have you tried tweaking the dropout probability (‘ps’) and weight decay (‘wd’)? Try increasing them for a few epochs and tell us what you find!
Lesson 2 Download nb - underfitting
I also faced the same problem: val_loss was way lower than train_loss when I started, and as training progresses the train loss goes down enough to drop below the val_loss. What I do then is lower my learning rate. E.g. say you did this:
learn.fit_one_cycle(12, max_lr=slice(1e-5,1e-3)) and this is what you got. Then I try
learn.fit_one_cycle(2, max_lr=slice(None,1e-5)); in my case it always lowered the train_loss (and yes, it will be slow). However, the moment val_loss > train_loss, you can do a
learn.lr_find() and then decide how to iterate from there. This has worked for me, assuming we are only using what we’ve been taught so far; I haven’t used weight decay yet. It therefore becomes important to check the metric: if after 15 iterations your performance metric says you are doing poorly even though val_loss > train_loss, your model might be overfitting.
For prediction: is there a way to normalize + resize a folder of test images (rather than just 1) and predict on multiple?
I also faced this issue on Google Colab. Installing ipywidgets fixed the issue.
Run the command below to install:
!pip install ipywidgets
When I tried to run
I got the following error
OSError: image file is truncated (8 bytes not processed)
Any idea? Thanks!
You probably have a (semi-)corrupt image. Try finding which one it is by writing a small script in which PIL opens each and every image in your dataset, and see which image it fails on. Or you could try
PIL.ImageFile.LOAD_TRUNCATED_IMAGES = True
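Along those lines, a minimal sketch of such a checking script (the function name and folder layout are mine, not from the thread):

```python
import os
from PIL import Image, ImageFile

def find_corrupt_images(folder):
    """Return paths of files in `folder` that PIL cannot fully decode."""
    bad = []
    for root, _, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            try:
                with Image.open(path) as img:
                    img.load()  # force a full decode, not just a header read
            except OSError:     # unreadable or truncated image
                bad.append(path)
    return bad

# Alternatively, tell PIL to tolerate truncated files instead of raising:
# ImageFile.LOAD_TRUNCATED_IMAGES = True
```

Deleting (or re-downloading) whatever this returns is usually enough to get past the truncation error.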
Which learning rate should I choose from this graph?
Thanks for the update. I just found it at this link too: https://stackoverflow.com/questions/12984426/python-pil-ioerror-image-file-truncated-with-big-images
Not sure whether it affects the accuracy, though…
Could we remove noise from the dataset using distances between images (k-means or PCA)?
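One way to sketch that idea with plain NumPy (the component count and the "flatten each image to a row" setup are my assumptions, not anything from the lesson): project the flattened images onto their top principal components and treat the samples with the largest reconstruction error, i.e. distance to the PCA subspace, as outliers worth inspecting.

```python
import numpy as np

def pca_outlier_scores(images, n_components=10):
    """Score each flattened image by its distance to the top-k PCA subspace.

    images: array of shape (n_samples, n_pixels). Higher score = more unusual.
    """
    X = images - images.mean(axis=0)           # center the data
    # SVD gives the principal directions in the rows of Vt
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_components].T                    # (n_pixels, k) basis
    X_proj = X @ V @ V.T                       # projection onto the subspace
    return np.linalg.norm(X - X_proj, axis=1)  # reconstruction error

# Images far from the subspace are candidates to inspect or remove, e.g.
# scores = pca_outlier_scores(flat_images); np.argsort(scores)[::-1][:20]
```

This only flags statistically unusual images, though; it can't tell a mislabeled-but-normal image from a clean one, so it complements rather than replaces the top-losses inspection.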
I did that, but when I execute this code:
fd = FileDeleter(file_paths=top_loss_paths)
it says “Runtime disconnected” every time.
Not sure what happened?
Do you mind sharing your Google Colab code for lesson 2? Thanks!
Try running it again with a range starting from an earlier learning rate.
Hi all. This could be just an issue related to the fact that I’m a huge newb, or maybe because I’m on a Mac. Quoted from lecture: “you hit ctrl-shift-J, or command-option-J, and you paste this into the console. I hit enter and it downloads my file for me…”
For me, it just returns null. Any ideas of what I may be doing wrong? Huge thanks!
Is anybody else facing this problem with download_images, where it stops partway through and constantly gives a content-length error? Any help or suggestion would be highly appreciated.
It’s been a few days but I think on Windows I also didn’t get any indication that the file had downloaded - but it had downloaded. You may just need to check wherever a Mac downloads files - that I don’t know.
I just checked, and the “ModuleNotFoundError: No module named ‘ipywidgets’” error was resolved as of last night.
I am also facing the “Runtime disconnected” error while running the widget.
Looks like Colab currently doesn’t support ipywidgets. I ended up manually inspecting and deleting the top-loss images.