Where do we get the YouTube link to the Lesson 4 live stream? I can't find it on the forum. I'm subscribed to the class and received emails from Jeremy for Lessons 1 and 2, but I can't seem to find an announcement in this forum. Can someone point me in the right direction, please?
This is true, but the difference is small; it would not be enough to explain significant differences between validation and training losses, especially if you run several epochs.
By the way, on the CamVid notebook I get good results matching those from the class most of the time, but every now and then I land in a weird accuracy zone that looks like this:
Simply rerunning the cells gets back into the proper 80-90% accuracy zone from the first pass, but occasionally they will once again produce these low accuracies… Has anyone had a similar experience, and if so, any explanation? Does the learning rate somehow nondeterministically land in a strange region of the loss surface that it can't get out of?
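One way to check whether this is just run-to-run randomness is to fix the random seeds before training and see if the bad runs become reproducible. A minimal sketch, using only Python's standard `random` module (in a real fastai notebook you would also seed NumPy and PyTorch, shown here only as comments, and even then cuDNN can remain nondeterministic unless configured otherwise):

```python
import random

def set_seed(seed: int) -> None:
    """Fix the Python RNG so repeated runs draw identical values.

    In a training notebook you would additionally seed the other RNGs:
        import numpy as np; np.random.seed(seed)
        import torch; torch.manual_seed(seed)
        torch.backends.cudnn.deterministic = True
    (left as comments here since those libraries may not be installed).
    """
    random.seed(seed)

# Two runs with the same seed produce identical draws.
set_seed(42)
a = [random.random() for _ in range(3)]
set_seed(42)
b = [random.random() for _ in range(3)]
assert a == b  # reseeding reproduces the exact same sequence
```

If seeded runs still occasionally collapse to low accuracy, the problem is more likely the learning rate or data than randomness.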
Hello, I was running the notebook for the regression problem (head-pose) without any changes on my side, and I got the following error when the data object is created:
I'm facing the same issue. Can someone help out?
It used to work fine until I pulled and conda-installed the latest updates (fastai version 1.0.27).
I'm working on Google Cloud, if that's any help.
I hope your issue has been resolved. I have an issue earlier than yours, in the data_lm part. Could you share your notebook showing how you got past this? I think it has to do with my untar_data call on IMDB: I have the following 3 files instead of what Jeremy has in his output, so I don't have a train or test folder for running "data_lm = (TextList.from_folder…":
data_lm = (TextList.from_folder(path)  # Inputs: all the text files in path
           .filter_by_folder(include=['train', 'test'])  # We may have other temp folders that contain text files, so we only keep what's in train and test
           .random_split_by_pct(0.1)  # We randomly split and keep 10% (10,000 reviews) for validation
           .label_for_lm()  # We want to do a language model, so we label accordingly
           .databunch(bs=bs))
data_lm.save('tmp_lm')
@angelinayy, it was not something for you and me to fix. The URL for IMDB used to point to a tgz file that contained only 3 text files. The current URL (https://s3.amazonaws.com/fast-ai-nlp/imdb.tgz) has the right tgz file. If you click the link and open the tgz file, you will find that it contains directories like train, test, etc.
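To see why `filter_by_folder(include=['train', 'test'])` needs those subdirectories, here is a small self-contained illustration (not fastai code, just `pathlib` on a throwaway directory layout) of the filtering step: only files living under the expected folder names are kept, which is why a tgz without `train`/`test` directories breaks the pipeline.

```python
import tempfile
from pathlib import Path

# Build a hypothetical layout: train/ and test/ hold data, tmp/ is noise.
root = Path(tempfile.mkdtemp())
for folder in ("train", "test", "tmp"):
    (root / folder).mkdir()
    (root / folder / "review.txt").write_text("sample review text")

# Mimic filter_by_folder: keep only files whose parent folder is allowed.
keep = {"train", "test"}
kept = sorted(p for p in root.rglob("*.txt") if p.parent.name in keep)

print([p.parent.name for p in kept])  # ['test', 'train'] -- tmp/ was dropped
```

With the old tgz there were no `train`/`test` directories at all, so this filter kept nothing, which matches the empty-data error people were seeing.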