Lesson 1 - Official topic

My example was not a good one… But funny :slight_smile:
The example in the book is
learn.predict("I really liked that movie!")
which gives positive, and
learn.predict("I did not like that movie!")
also gives positive.
Seems like the model does much better on longer reviews.

Somehow I feel that "I did not like that movie!" doesn't sound very negative… it would be interesting to see whether anyone writing a negative review ever used such a phrase. Maybe the model has a similar intuition? :slight_smile:

Access the notebook via an ipywidgets-enabled JupyterLab, as shown here

Hi fastai community, any insight into why there seem to be two passes (two blocks of epoch iterations) during the training and fine-tuning of the models in the lesson 1 notebook?

Is this because there is one for each layer?

On a separate note, how are folks exploring the fastai Python code to get a sense of what is being done? I am working on Paperspace and was planning to clone the GitHub repo to my local machine and then use a Python IDE to start exploring. Is this the best way?

In the case of transfer learning there are two phases. First, the pretrained part is frozen and only the newly added parameters are trained (by default for 1 epoch); then the whole model, including the pretrained part, is trained together. See the code here.
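The two blocks of epoch iterations in the training log can be sketched as a toy schedule. This is plain Python, not fastai's actual implementation; `fine_tune_schedule` and its arguments are hypothetical names chosen to mirror `fine_tune(epochs, freeze_epochs=1)`:

```python
# Toy sketch of why fine_tune prints two separate blocks of epochs:
# one short frozen pass for the new head, then the full unfrozen run.
def fine_tune_schedule(epochs, freeze_epochs=1):
    """Return the list of (phase, epoch) steps the training would run."""
    steps = []
    # Phase 1: pretrained body frozen, only the newly added head trains.
    for e in range(freeze_epochs):
        steps.append(("frozen", e))
    # Phase 2: everything unfrozen, the whole model trains together.
    for e in range(epochs):
        steps.append(("unfrozen", e))
    return steps

print(fine_tune_schedule(4))
# [('frozen', 0), ('unfrozen', 0), ('unfrozen', 1), ('unfrozen', 2), ('unfrozen', 3)]
```

So `fine_tune(4)` shows one epoch in the first block and four in the second, which matches the two tables printed in the notebook.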

As for your general question, I’ve actually created a tutorial for that


thanks @slawekbiel

Just my personal preference: I would prefer lessons’ official topics to remain pinned :pray:

Who’s with me? :slightly_smiling_face::handshake:


I have had a similar experience. Generally speaking, I think it is just the way things are.

Is there any specific example you have in mind?

Maybe this addresses your issue? In short: what class probs[1] is, what it contains, etc.

Any update on this?
Both your link to https://book.fast.ai/ above and the relevant link in the GitHub fastbook just send me straight to the course homepage.
Recommended tutorials would be really useful.

(The same happened for the questionnaire solutions link, but happily you made a wiki for this: Fastbook Chapter 1 questionnaire solutions (wiki) - thanks)

@wlw if you go to the very top of this posting, under Lesson resources, click the link for The fastai book, which will take you to the GitHub version

@foobar8675 thanks. I think you might have misunderstood (maybe my bad though!)…

I have been reading the github version of the book already. This online book refers to a “book website” with helpful resources, and includes a corresponding link to https://book.fast.ai

Unfortunately this link just redirects to course homepage - which I have also already been using.


The resources will be added to the course website. See this issue.

Well, I tried the code and it worked, though the upload button did not display the image I was uploading. And when I changed the image, the count on the button went up, though it still references the (0) item. So quite confusing.

I’ve set up my Linux server with an NVIDIA 1080 Ti, with software installed to go through the Jupyter notebooks of the course. I’m going through the 01_intro notebook and successfully evaluated every cell in sequence until I reached this cell:
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test', bs=4)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)

epoch train_loss valid_loss accuracy time
0 0.727598 0.495284 0.785680 15:27
epoch train_loss valid_loss accuracy time
0 0.734474 00:01

Even though I’ve reduced the batch size to 4, it still errors out with the following message:
RuntimeError: CUDA out of memory. Tried to allocate 92.00 MiB (GPU 0; 10.92 GiB total capacity; 3.25 GiB already allocated; 63.75 MiB free; 3.31 GiB reserved in total by PyTorch)
Exception raised from malloc at /opt/conda/conda-bld/pytorch_1595629395347/work/c10/cuda/CUDACachingAllocator.cpp:272 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f54c568d77d in /opt/anaconda3/envs/fastai2/lib/python3.8/site-packages/torch/lib/libc10.so)

Is there a workaround, or should I give up on running this model training on my Linux computer because it is underpowered?

Cheers,
Nasir

Hello Nasir,
Welcome to the community.
As mentioned in lesson 1, there is absolutely no need to set up your own machine yet, as it will only take time and energy away from learning.
I for one also struggled with this in a previous attempt to finish this course (I believe 2018). Believe me, it is not worth the hassle. Any free cloud service machine like Google Colab or Paperspace will easily outperform your 1080 Ti. And your own card will never be able to scale up when the time arrives that you need the performance.
Unless you have really sensitive data, I would advise doing as Jeremy says and using a cloud provider with ready-to-use fast.ai images. It is worth it.
Cheers!
Michael

A 1080 Ti should be able to handle bs=32 on that cell. I should know, as I have the same configuration.
It sounds like the GPU memory is not clearing properly. I would restart the kernel, run the first cell of the notebook to do the imports, then skip ahead and run this cell.

Maybe have a second shell going to watch the GPU memory usage with nvidia-smi?
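If the OOM persists, a common pattern is to retry with a halved batch size until training fits in memory. Below is a hedged sketch of that pattern in plain Python: `find_fitting_batch_size` and the `fake_train` stand-in are hypothetical names, not real fastai or PyTorch calls (PyTorch does raise a `RuntimeError` containing "out of memory" on CUDA OOM, which is what the sketch keys on):

```python
# Sketch: retry a training callable with progressively smaller batch sizes
# whenever a CUDA out-of-memory RuntimeError is raised.
def find_fitting_batch_size(train, bs=32, min_bs=1):
    while bs >= min_bs:
        try:
            train(bs)       # attempt one training run at this batch size
            return bs       # it fit: report the batch size that worked
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise       # some other error: don't swallow it
            bs //= 2        # OOM: halve the batch size and retry
    raise RuntimeError("could not fit even the minimum batch size")

# Toy stand-in that only "fits" at bs <= 8, to exercise the loop:
def fake_train(bs):
    if bs > 8:
        raise RuntimeError("CUDA out of memory")

print(find_fitting_batch_size(fake_train))  # 8
```

In a real notebook you would also restart the kernel (or call gc.collect() and torch.cuda.empty_cache()) between attempts, since fragmented cached memory can keep a size from fitting even when it otherwise would.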


Thanks Michael for the advice. I’ll fall back on using a cloud platform if the problem persists. However, I find it curious that I was able to run all the other cells in the notebook.

Cheers,
Nasir

Thanks for the advice. I’ll plow through the remaining notebooks and see if the problem recurs elsewhere. Despite multiple kernel restarts, re-importing the libraries, and reducing the batch size to 4, I get the same error.
Cheers,
Nasir