General course chat

Using 1.0.24 worked for me. Thanks @ademyanchuk.

Next question. Is there an interpretation class for multi-label classification? What’s the right way for us to do similar analyses here?

I would like to see a confusion matrix, top losses, etc. for my multi-label classification of pizza toppings, but the single-image-classification interp object doesn’t seem to work for this.
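While waiting for a proper multi-label interpretation class, one workaround is to compute per-example losses yourself and sort them to get top losses. A minimal sketch in plain Python; the `preds`/`targets` values below are made-up sigmoid outputs and multi-hot targets, not anything from the fastai API:

```python
import math

def bce_per_example(preds, targets, eps=1e-7):
    """Mean binary cross-entropy over the labels of one example."""
    losses = []
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        losses.append(-(t * math.log(p) + (1 - t) * math.log(1 - p)))
    return sum(losses) / len(losses)

# made-up sigmoid outputs and multi-hot targets: 3 examples, 4 toppings
preds   = [[0.9, 0.1, 0.8, 0.2], [0.2, 0.9, 0.1, 0.7], [0.5, 0.5, 0.5, 0.5]]
targets = [[1,   0,   1,   0  ], [1,   0,   0,   1  ], [0,   1,   0,   1  ]]

losses = [bce_per_example(p, t) for p, t in zip(preds, targets)]
# indices of examples sorted by loss, worst first (like plot_top_losses)
top_losses = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
```

From there you could look up the corresponding images yourself; the confusion-matrix part is trickier for multi-label since each label is its own binary problem (one 2x2 matrix per topping).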

2 Likes

I didn’t dig deep into that case, so I have nothing to say, sorry :)

Even more helpful would have been to stick with a single version for a course, IMHO.

I got the reply from Jithin James. It worked.

The correct syntax is:

data = (TextList.from_csv(path, 'texts.csv', col='text')
        .split_from_df(col=2)
        .label_from_df(cols=0)
        .databunch())

This is for fastai version:

import fastai
fastai.__version__
Out[9]: '1.0.24'

1 Like

On running the example bear classifier, I’m consistently classified as a black bear and my wife a teddy. Guess we know who to look out for :bear::bearded_person::teddy_bear:
Too funny.

Running the camvid segmentation notebook from lesson 3 as-is from the repo, but not able to get the same results… The accuracy is going almost to zero. Using fastai version 1.0.24.


Anyone else facing this issue?

Anyone know why training is sometimes super slow? With the same configuration at two different times, the training time is very different. Sometimes I can fix it by rebooting my computer, but I don’t know why.

I found that if I restart the kernel, I can train much faster. I think the reason is GPU memory: it is not being freed after training is over. I will read up on how to free the memory without restarting the kernel.
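In PyTorch-based setups like fastai, the usual recipe is to drop every Python reference to the learner/model, force a garbage-collection pass, and then return cached CUDA memory to the driver. A hedged sketch; the `Learner` class here is a stand-in object, not fastai's, and the `torch.cuda.empty_cache()` call (left commented) only applies when CUDA is actually in use:

```python
import gc
import weakref

class Learner:  # stand-in for a fastai Learner holding GPU tensors
    pass

learn = Learner()
probe = weakref.ref(learn)  # lets us observe when the object is collected

# drop every reference to the model/learner, then force a GC pass
del learn
gc.collect()

assert probe() is None  # the object (and any tensors it held) can now be freed

# with PyTorch installed, also release cached CUDA memory back to the driver:
# import torch
# torch.cuda.empty_cache()
```

Note that `empty_cache()` only releases memory PyTorch's caching allocator is holding but no longer using; tensors still referenced from the notebook namespace stay allocated, which is why the `del` step matters.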

1 Like

What human overfitting feels like :wink:

1 Like

After banging my head against this for a long time, I couldn’t identify the issue causing this unusual behaviour. Did a system reboot, and it looks like that has fixed it.

Yeah, I also have problems understanding it. I referred to the dev_nb in the fastai_old repo, but couldn’t get myself to understand it. It would help a lot if Jeremy covers them in the coming lessons.

1 Like

I noticed that lr_find() in version 1.0 is significantly slower than the previous one. I also noticed that it runs 4 times, so does that mean it adjusts the lr for each mini-batch through 4 epochs? I’m confused.
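For what it's worth, an LR finder typically *increases* the learning rate geometrically after every mini-batch, recording the loss, and stops once the loss diverges. A minimal sketch on a toy 1-D quadratic (not the fastai implementation; all numbers are illustrative):

```python
# toy LR range test: fit w to minimize (w - 3)^2, raising the lr each step
def lr_range_test(start_lr=1e-5, end_lr=10.0, num_steps=100):
    w = 0.0
    mult = (end_lr / start_lr) ** (1 / (num_steps - 1))  # geometric growth factor
    lr, lrs, losses = start_lr, [], []
    for _ in range(num_steps):
        loss = (w - 3.0) ** 2
        grad = 2.0 * (w - 3.0)
        lrs.append(lr)
        losses.append(loss)
        if loss > 4 * losses[0]:  # stop once the loss clearly blows up
            break
        w -= lr * grad            # one SGD step at the current lr
        lr *= mult                # raise the lr for the next mini-batch
    return lrs, losses
```

Plotting `losses` against `lrs` gives the familiar lr_find curve; you would pick an lr a bit below where the loss starts climbing.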

The lesson 3 head-pose notebook currently gives an error when the data object is created.
@jm0077 has posted an error trace.
The notebook worked fine until I pulled and conda-installed the new updates (fastai version 1.0.27).

I pondered YouTube’s auto-caption feature while watching lesson 4 (NLP).
My guess is that they use speech-to-text ML models to generate those captions.
Is there any possibility that, in addition, they might also be using NLP to enhance the prediction?
I think speech recognition + NLP would further improve auto-caption performance.

Just want to go on record saying I LOVE Jeremy’s Excel demos. Not that I’m a major Excel-worker, but I find it so clearly emphasizes “what’s really going on” without any possible “magic”.

6 Likes

Are biases also updated by the product of the learning rate with the gradient?

b = b - db * learning_rate

or just this:

b = b - db

The first one, like all parameters that are trainable.
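To make that concrete, here is one gradient-descent step on y = w*x + b with a squared-error loss (made-up numbers, just to show that the bias is scaled by the learning rate exactly like the weight):

```python
def sgd_step(w, b, x, y, lr):
    """One SGD step for a single (x, y) pair with loss (pred - y)^2."""
    pred = w * x + b
    dloss = 2 * (pred - y)   # d(loss)/d(pred)
    dw = dloss * x           # gradient w.r.t. the weight
    db = dloss * 1           # gradient w.r.t. the bias (d(pred)/db = 1)
    return w - lr * dw, b - lr * db   # both updates scaled by lr

w, b = sgd_step(w=0.0, b=0.0, x=2.0, y=5.0, lr=0.1)
```

The only difference between the two updates is the gradient itself (`dw` picks up a factor of `x`, `db` does not); the learning rate multiplies both.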

1 Like

This might be a relatively basic question, but why is the batch size parameter always set to values that are powers of 2?

How to use test csv file with learn.predict() ?

How do we fetch the list of filenames from our dataset with the new updates to the library?

I’m running on planet-amazon. 10 days ago I was able to just call:

idx2class = {v:k for k,v in learn.data.train_ds.ds.class2idx.items()}

to convert class indices back to class-names, and:

fnames = [f.name.split('.')[0] for f in learn.data.test_ds.ds.x]

to get the filenames. But the .data.<xyz>_ds no longer has the .ds attribute I was using.

I checked the changelog and searched around the forums, but anything I found was from about a month ago. I’ll edit this post with the answer if I find it.


edit:

so looks like you can call:

learn.data.train_ds.x.items

to get the list of filepaths (and also for .valid_ds and .test_ds).

Is this the ‘right’ way to do it? And is this guaranteed to match up with predictions on the validation and test sets?


edit2:

Think I found how to get your class-to-index mapping:

learn.data.train_ds.y.c2i

It’s gotten more intuitive: “where can I find filenames?” → look at where the data comes from: .<blah>_ds.x.<blah>; “where can I find how classes are one-hot encoded?” → look where the labels are stored: .<blah>_ds.y.<blah>.
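The index-to-class inversion used above is just a dict comprehension, and the filename extraction is plain pathlib; a standalone sketch with made-up labels and paths (no fastai required):

```python
from pathlib import Path

# invert a class-to-index mapping into index-to-class (as in the snippet above)
class2idx = {"haze": 0, "primary": 1, "clear": 2}  # made-up planet-style labels
idx2class = {v: k for k, v in class2idx.items()}

# strip the extension from each filename, as f.name.split('.')[0] does
fnames = [Path("train_0.jpg"), Path("train_1.jpg")]
ids = [f.name.split(".")[0] for f in fnames]
```

Note this inversion assumes the mapping is one-to-one, which class-to-index mappings always are.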

2 Likes