General course chat

I found that if I restart the kernel, I can train much faster. I think the reason is GPU memory: it is not being freed after training finishes. I will read up on how to free the memory without restarting the kernel.
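In case it helps, here is a minimal sketch of one way to attempt this, assuming PyTorch under the hood (`gc.collect` and `torch.cuda.empty_cache` are real calls; the function name and the idea of wrapping them are mine):

```python
import gc

def free_gpu_memory():
    """Run Python's garbage collector, then ask PyTorch (if installed)
    to release its cached GPU memory. Returns True if the CUDA cache
    was emptied, False otherwise.

    Note: you still need to drop your own references first
    (e.g. `del learn`), otherwise the tensors remain reachable
    and cannot be collected."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached blocks back to the driver
            return True
    except ImportError:
        pass
    return False
```

Usage would be something like `del learn; free_gpu_memory()` after a training run.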

1 Like

What human overfitting feels like :wink:

1 Like

After banging my head against this for a long time, I couldn’t identify the issue causing this unusual behaviour. I did a system reboot and it looks like that has fixed it.

Yeah, I also have problems understanding it. I referred to the dev_nb in the fastai_old repo, but couldn’t get myself to understand it. It would help a lot if Jeremy covers them in the coming lessons.

1 Like

I noticed that lr_find() in version 1.0 is significantly slower than the previous one. I also noticed that it runs 4 times, so does that mean it adjusts the lr for each mini-batch over 4 epochs? I’m confused.

The lesson 3 head-pose notebook currently throws an error when the data object is created.
@jm0077 has posted an error trace.
The notebook worked fine until I pulled and conda-installed the new updates (fastai version 1.0.27).

I pondered YouTube’s auto-caption feature while watching lesson 4 (NLP).
My guess is that they use speech-to-text ML models to generate those captions.
Is there any possibility that, in addition, they might also be using NLP to enhance the predictions?
I think speech recognition + NLP would further improve the auto-caption performance.

Just want to go on record saying I LOVE Jeremy’s Excel demos. Not that I’m a heavy Excel user, but I find they so clearly emphasize “what’s really going on” without any possible “magic”.

6 Likes

Are biases also updated by the product of the gradient and the learning rate, like this:

b = b - db * learning_rate

or just this

b = b - db

The first one, like all parameters that are trainable.
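A tiny numeric sketch of that update, with made-up values (the same rule applies to weights and biases alike):

```python
# Hypothetical scalar weight, bias, and gradients from one backward pass.
w, b = 0.5, 0.1
dw, db = 0.2, -0.4
learning_rate = 0.1

# Every trainable parameter, bias included, takes the same step:
w = w - dw * learning_rate
b = b - db * learning_rate

print(round(w, 4), round(b, 4))  # -> 0.48 0.14
```

Without the learning rate, the raw gradient step `b = b - db` would usually be far too large and training would diverge.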

1 Like

This might be a relatively basic question, but why is the batch size parameter always set to values that are powers of 2?

How do I use a test CSV file with learn.predict()?

How do we fetch the list of filenames from our dataset with the new updates to the library?

I’m running on planet-amazon. 10 days ago I was able to just call:

idx2class = {v:k for k,v in learn.data.train_ds.ds.class2idx.items()}

to convert class indices back to class-names, and:

fnames = [f.name.split('.')[0] for f in learn.data.test_ds.ds.x]

to get the filenames. But the .data.<xyz>_ds no longer has the .ds attribute I was using.

I checked the changelog and searched around the forums, but anything I found was from about a month ago. I’ll edit this post with the answer if I find it.


edit:

so looks like you can call:

learn.data.train_ds.x.items

to get the list of filepaths (and also for .valid_ds and .test_ds).

Is this the ‘right’ way to do it? And is this guaranteed to match up with predictions on the validation and test sets?


edit2:

Think I found how to get your class-to-index mapping:

learn.data.train_ds.y.c2i

It’s gotten more intuitive: “where can I find filenames?” → take a look at where the data comes from: .<blah>_ds.x.<blah>; “where can I find how classes are one-hot encoded?” → look where the labels are stored: <blah>_ds.y.<blah>.

2 Likes

Is there any way I can read a zip file directly through the fast.ai libs and read all the image files within it?

Thanks
Amit

I think there’s no fastai method to extract zip files right now; you have to decompress them yourself with another library first.

1 Like

Thanks… I’ve now used Python’s zipfile module to unzip it.
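For anyone else hitting this, a minimal sketch using only the standard-library zipfile module (the function name and paths are mine, not fastai API):

```python
import zipfile
from pathlib import Path

def unzip_dataset(zip_path, out_dir):
    """Extract every file from zip_path into out_dir and return the
    extracted file paths, ready to hand to fastai's image loaders."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
        return [out_dir / name for name in zf.namelist()]
```

You would then point your data block / ImageDataBunch at `out_dir` as usual.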

The unfreeze() function unfreezes all layers. I could not locate any fast.ai function that lets me unfreeze only the last x layers rather than all of them. Can anyone please help me with this?

Thanks
Amit

Look at the imdb notebook. I think you want to use

learn.freeze_to(-x), where x is the number of final layer groups of your choosing to leave unfrozen.

3 Likes

A note: It looks like freeze_to freezes by layer group (learn.layer_groups). For resnet34 and resnet50 there are 3 layer groups.

So for finer control I guess you can just take a look at the function and apply

if not self.train_bn or not isinstance(l, bn_types): requires_grad(l, False)

to the layers (the l’s) you want to freeze.
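A torch-free sketch of what that line does (the Layer class and freeze_layers function here are hypothetical stand-ins, not fastai API): walk the layers, turn off the requires-grad flag, and skip batchnorm layers when train_bn is True.

```python
# Stand-in for an nn.Module layer: just a name, a batchnorm marker,
# and a requires_grad flag.
class Layer:
    def __init__(self, name, is_bn=False):
        self.name, self.is_bn = name, is_bn
        self.requires_grad = True

def freeze_layers(layers, train_bn=True):
    """Mirror of the condition above: freeze each layer unless it is a
    batchnorm layer and train_bn is True (batchnorm stays trainable)."""
    for l in layers:
        if not train_bn or not l.is_bn:
            l.requires_grad = False

layers = [Layer('conv1'), Layer('bn1', is_bn=True), Layer('conv2')]
freeze_layers(layers)
print([(l.name, l.requires_grad) for l in layers])
# -> [('conv1', False), ('bn1', True), ('conv2', False)]
```

Keeping batchnorm trainable while the rest of a group is frozen is the behaviour the `train_bn` flag controls.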

I always insert fastai.__version__ in my notebooks.

1 Like