General course chat

No, still stuck. Help me out :neutral_face:
@matejthetree

Well first check if you did all the steps from the notebook at the beginning, where it says to accept all the kaggle agreements?
https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson3-planet.ipynb

You can check if you really downloaded the dataset by playing with
ls
cd {nameOfTheFolder}
ls -al

etc.

You can also go to Kaggle, download the dataset, untar it locally, and upload it to Google Drive, if you know how to use Google Drive on Colab.

I would recommend doing it inside Kaggle kernels:
https://www.kaggle.com/hortonhearsafoo/fast-ai-v3-lesson-3-planet

Fork it and change ImageItemList to ImageList, as the API changed.

I’ve downloaded the dataset, but I’m not able to unzip it.
I used the same code as mentioned in the lesson:
! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}

But it returned a syntax error.
Anyway, I tried the Kaggle environment and it worked out well.
Thank you.
@matejthetree

Can a DataBunch object be converted to another type of object, like a numpy array or an object that a Keras model takes as input?

You can get the dataset from a DataBunch using .dl: https://docs.fast.ai/basic_data.html#DataBunch.dl

Once you have the torch dataset, iterate over it to create a numpy array.
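As a minimal sketch of that iteration: the `dataset` below is just a stand-in list of (x, y) pairs (with fastai you would pull the real dataset out of the DataBunch's DataLoader), but the stacking into numpy arrays works the same way.

```python
import numpy as np

# Stand-in for a torch dataset: any iterable of (x, y) pairs behaves the same.
dataset = [(np.random.rand(3, 8, 8).astype("float32"), i % 2) for i in range(10)]

# Iterate once and stack into arrays that e.g. a Keras model could consume.
xs = np.stack([x for x, _ in dataset])   # shape (10, 3, 8, 8)
ys = np.array([y for _, y in dataset])   # shape (10,)
```

The same loop works on a real torch dataset, since indexing it also yields (x, y) pairs.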

thanks

Stuck again. Running notebook: http://192.168.1.11:8888/notebooks/nbs/dl1/lesson7-superres.ipynb

All goes well until this step:

learn.lr_find()

I get an out-of-memory error:

RuntimeError: CUDA out of memory. Tried to allocate 5.21 GiB (GPU 0; 8.00 GiB total capacity; 814.63 MiB already allocated; 5.46 GiB free; 1019.50 KiB cached)

As you can see there is lots of free memory, so I figured it was fragmentation; I rebooted and tried again, but hit the same problem.

I also tried reducing the batch size:
bs,size=32,128
to
bs,size=8,128
This didn’t help, which makes me think I am not understanding it…

Any ideas?
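For anyone debugging the same error, a quick sketch (assuming a CUDA build of PyTorch; it prints nothing useful on CPU-only machines) for inspecting what the GPU actually holds before calling lr_find:

```python
import torch

# Inspect GPU memory: "allocated" is what tensors currently use,
# "reserved" is what PyTorch's caching allocator holds from the driver.
if torch.cuda.is_available():
    print(f"allocated: {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 2**30:.2f} GiB")
    torch.cuda.empty_cache()  # release cached blocks back to the driver
else:
    print("no CUDA device available")
```

Note that in the superres notebook the image `size` drives memory use at least as much as `bs`, so reducing only the batch size may not be enough.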

ChrisP

Quick follow-up. I used the line of code “DataBunch.save(‘ImageDataBunch’)” analogous to “learn.save(“model_stage_1”)” to save a model. Is that correct?

Also, you wrote, “When you have torch dataset, iterate over it to create numpy array”. Are you saying that the line of code “data = ImageDataBunch(path, …)” creates a PyTorch dataset named “data” that can be used to create a numpy array by iterating over it?

Sorry if I seem “thick”, but I am a newbie. My first post!

I think I should clarify even more. The reason I want to convert the “data” created by ImageDataBunch to a more readable format is that I am assuming “data” tells me which image (by id from the image filename) is in each set of data, i.e. training vs. validation. Also, I understand that transformations/augmentations are applied at the time the images are loaded into the learner. Is there any way to track that, so that I can tell whether a transform, and of what type, was applied to an image used in the learner?

data.show_batch(rows=3, figsize=(10,8), ds_type=DatasetType.Valid)

Use the corresponding constant (DatasetType.Train) for the train set.

@WinstonDodson

matej - thanks! From this, it looks like you knew some facts about the structure of the ImageDataBunch for this method, i.e. “rows=3, figsize=(10,8)”. Are there references in the fastai docs or Jeremy’s lectures that explain this?

ImageDataBunch extends DataBunch, so you need to look at those docs as well:

https://docs.fast.ai/basic_data.html#DataBunch.show_batch

You rock! After reading these docs again, I now remember Jeremy’s lecture. I will look through the detailed lecture notes people have published, look for this subject, and really drill down. Thanks again!

Hi!

I’m confused and don’t know where to start. I have working knowledge of Python and know a bit of math. Should I start with the Deep Learning course or the Intro to Machine Learning course? Any help is appreciated!

Hi @at14,

Start by going to https://course.fast.ai/
Read through the first page to get started. I use the Google Cloud Platform (GCP).
Then after that, click on Lesson 1 under Lessons at the top left of the page. In the video, you’ll be given all the other information you need.

It is pretty straightforward and very cool.
Cheers

Hi,

In Lecture 4 (Collaborative Filtering), the y_range is specified as follows:

y_range = [0,5.5]

Why is the range not [0,5]? Am I missing something here?

Hi, I’m trying to make a TensorFlow implementation of the lesson 1 pet classification, and I could not get it to work. Can anyone please help me out?

Hi, in lesson 4, in the part where he draws the matrix multiplication in the neural network,
he says multiplying a 3x1 matrix and a 3x5 matrix yields a 5x1 matrix, but it actually yields a 3x1 matrix.
Is this correct?
@Prowton
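On the shape question above: the general rule is that an (m, n) matrix times an (n, p) matrix gives an (m, p) result, which is easy to check with numpy (a quick sketch, not taken from the lecture):

```python
import numpy as np

# Matrix multiplication shape rule: (m, n) @ (n, p) -> (m, p).
a = np.ones((5, 3))   # 5x3
b = np.ones((3, 1))   # 3x1
c = a @ b             # inner dims (3 and 3) match, result is 5x1
print(c.shape)        # (5, 1)

# A (3, 1) @ (3, 5) product is not even defined: the inner dims (1 vs 3) differ.
```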

@SiddharthGadekar
It is because the activation function we use is a sigmoid, which has asymptotes at the boundaries of the range (i.e. 0 and 5), so we could never actually reach a value of 5 (an asymptote approaches the line, but never touches it).
But movies may have a rating of 5, so to include that value we set the upper limit at 5.5.
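The scaled sigmoid described above can be sketched like this (`scaled_sigmoid` is a hypothetical name; fastai does the equivalent internally when you pass y_range):

```python
import numpy as np

def scaled_sigmoid(x, y_min=0.0, y_max=5.5):
    # Squash an unbounded activation into (y_min, y_max), as y_range does.
    return y_min + (y_max - y_min) / (1.0 + np.exp(-x))

# Even a huge activation only *approaches* the upper limit of 5.5,
# so a true rating of 5 sits comfortably inside the reachable range.
print(scaled_sigmoid(0.0))    # 2.75, the midpoint
print(scaled_sigmoid(10.0))   # ~5.4998: above 5, below 5.5
```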

In lesson 7, model5 replaces nn.RNN with nn.GRU, so I tried to replace nn.GRU with nn.LSTM

    self.rnn = nn.LSTM(nh,nh, batch_first=True,num_layers=1)

But I then get this error (in fit_one_cycle):

RuntimeError: Expected hidden[0] size (1, 64, 64), got (64, 64)

Can anyone suggest how I should resolve this? It seems like I need to unsqueeze the data in some way to make it the right shape, but really I’ve no idea.

Any advice appreciated.
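For what it’s worth, that error message usually means the hidden state is being passed as a single (batch, hidden) tensor, while nn.LSTM expects a tuple of (hidden, cell) states, each shaped (num_layers, batch, hidden). A minimal sketch with the sizes from the error (bs=64, nh=64 assumed):

```python
import torch
import torch.nn as nn

bs, seq_len, nh = 64, 10, 64
rnn = nn.LSTM(nh, nh, batch_first=True, num_layers=1)

x = torch.randn(bs, seq_len, nh)

# Unlike nn.GRU, nn.LSTM carries *two* states (hidden and cell),
# each with a leading num_layers dimension: (1, bs, nh), not (bs, nh).
h0 = torch.zeros(1, bs, nh)
c0 = torch.zeros(1, bs, nh)

out, (hn, cn) = rnn(x, (h0, c0))
print(out.shape)  # torch.Size([64, 10, 64])
print(hn.shape)   # torch.Size([1, 64, 64])
```

So a stored (bs, nh) state would need `.unsqueeze(0)`, plus a matching cell state, before being passed in.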
