The video and notes are supposed to match this, but the latest notebook has gone astray, so I don't understand anything about the code shown in the image. I'm just showing that it's different.
Hello, first post (on Lesson 2), hope it's in the right place.
Subject: extra ‘meta’ category
In the original notebook for lesson2-download, there are 3 classes [‘teddys’,‘grizzly’,‘black’].
I did something similar for birds [‘chickadee’,‘titmouse’,‘squirrel’]. The first time I worked through the notebook, everything went fine and it was able to correctly recognize my own image of a squirrel.
Today, I re-ran everything from scratch, and the line…
I’m trying to figure out where the 4th class ‘birds’ is coming from. My image folders are in a parent folder named ‘birds’ and I see that the function ImageDataBunch.from_folder() seems to work from a folder hierarchy. So I’m assuming it’s coming from that. But I am unable to pinpoint the exact cause.
Any thoughts as to why it’s picking up the additional class?
Hi tlo, hope you are well.
Have you checked the number of directories in your birds directory?
I had a similar error when I created an extra directory by mistake.
This could have occurred if the path was set to birds already when you ran the folder creation sections of the notebook.
I don't know if this will be helpful to anyone, but I had some hiccups downloading images with the provided JavaScript (I'm running a Pixelbook, so that may be the cause) and figured out a fix.
I was hitting "Not allowed to navigate top frame to data URL:", which I believe comes from a Chrome security restriction against XSS or the like. I ended up modifying the script like so:
I used the techniques from lesson 2 to create https://floret-finder.onrender.com/ to help those of us who can't distinguish between broccoli, cauliflower, and romanesco (my favorite). Enjoy!
I am getting this error:
AttributeError: ‘NoneType’ object has no attribute ‘detach’
on every epoch end with learn.fit()…
Any suggestions? I've been stuck for a while…
I have the same problem, and it makes it really hard to understand what is going on. I feel quite lost when the notebook material differs from what is being taught in the lesson.
With too many epochs, what happens is that the error rate decreases up to a point, but beyond that the model cannot reduce it any further. After that it starts to learn the noise in the training data as well, which makes the model generalize less well and leads to overfitting.
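As a rough illustration (a plain-Python sketch with made-up loss numbers, not fastai code), this is the pattern early stopping guards against: stop once the validation loss has not improved for a few epochs.

```python
def early_stop_epoch(valid_losses, patience=2):
    # Return the epoch at which training would stop because the validation
    # loss has not improved for `patience` consecutive epochs.
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(valid_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return None  # never triggered

# Hypothetical run: the loss improves, then starts creeping up (overfitting).
print(early_stop_epoch([0.9, 0.6, 0.5, 0.52, 0.55, 0.61]))  # -> 4
```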
I have a silly question which bothered me when I applied lesson 2 to my own dataset.
I was retraining the model many times on the same amount of data, and I found that the learning rate finder often gave different graphs. Sometimes they were straightforward: you could easily spot a long downward slope in the losses. Other times the curve never decreased at all; the loss either stayed flat or kept increasing.
My question is: if the data is the same, why does the learning rate finder return a different curve?
One thing to note: by retraining I actually mean re-running the notebook. So perhaps a different set of images gets grouped into the training and validation sets? Is that right?
Hi warun, hope you are well!
I believe this occurs because, when you train a model in lesson 2, batches are drawn randomly from the training set, while the validation set is kept constant by using a seed value. This random selection causes the learning rate curve to come out differently each time you retrain the model.
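To illustrate the point (a plain-Python sketch of the idea, not the actual fastai split code): with a fixed seed the validation indices are reproducible across runs, but everything else about training still uses fresh randomness.

```python
import random

def split_indices(n, valid_pct=0.2, seed=None):
    # Shuffle indices with a seeded RNG and hold out `valid_pct` of them
    # for validation, similar in spirit to passing a seed in the notebook.
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    cut = int(n * valid_pct)
    return idx[cut:], idx[:cut]  # train, valid

# Same seed -> same validation set on every run. But batch order, weight
# init, and augmentation remain random, so lr_find can trace a different
# curve each time even on identical data.
```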
Hi,
I was trying to deploy my web app to Heroku (free tier). After compilation my slug size is 936 MB, which is over the 500 MB maximum; PyTorch alone takes about 734 MB. If anyone has successfully deployed their web app on Heroku, please share your experience.
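One workaround that has worked for others (a sketch — the exact version pins are assumptions, so check which wheel matches your Python build): point requirements.txt at the CPU-only PyTorch wheel instead of the default one. Since the CUDA libraries are not bundled, the slug shrinks dramatically.

```
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.4.0+cpu
torchvision==0.5.0+cpu
```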
A lot of the time your error will be a good amount higher than what Jeremy achieves in the video, partly depending on your data. But it also looks like training is taking a really long time; 5-6 minutes per epoch is a lot, so I would experiment with a higher learning rate to reduce the wait. I would also recommend running more epochs: keep increasing the number of epochs until your training loss drops below your validation loss.