Share your work here ✅

Val loss lower than train loss means you are under-fitting. Val loss should always be higher than train loss when you are finished fitting.

edit: fix over->under

3 Likes

Can you tell us what you changed to make it more accurate?

I believe there is a typo here. You said today that ‘when train loss > val loss, it means you have not fitted enough’. I think what you meant here is: ‘Val loss lower than train loss means you are under-fitting’.

I cropped them locally and created a separate dataset for that. See: https://github.com/arunoda/fastai-courses/releases/tag/fastai-vehicles-dataset

(Check the filename: fastai-vehicles-crops.tgz)

Read the end of this blog post on how I cropped those images with ImageMagick.
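The blog post itself does the cropping with ImageMagick on the command line; as a rough Python equivalent (the paths and crop box below are hypothetical), a PIL version might look like this:

```python
# Rough Python/PIL equivalent of an ImageMagick-style crop;
# paths and the crop box are hypothetical placeholders.
from pathlib import Path
from PIL import Image

src = Path("vehicles/raw")
dst = Path("vehicles/crops")
dst.mkdir(parents=True, exist_ok=True)

for img_path in src.glob("*.jpg"):
    img = Image.open(img_path)
    # (left, upper, right, lower) -- same idea as ImageMagick's -crop WxH+X+Y
    img.crop((100, 50, 100 + 224, 50 + 224)).save(dst / img_path.name)
```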

Used a mel spectrogram:
https://librosa.github.io/librosa/generated/librosa.feature.melspectrogram.html
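In case it helps, a minimal sketch of that approach using the librosa call linked above (the file names are hypothetical): turn each clip into a log-scaled mel spectrogram and save it as an image a CNN can consume.

```python
# Minimal sketch: audio clip -> mel spectrogram -> image for a CNN.
# "clip.wav" / "clip.png" are hypothetical file names.
import numpy as np
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("clip.wav")                             # waveform + sample rate
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)   # mel power spectrogram
S_db = librosa.power_to_db(S, ref=np.max)                    # log scale for contrast
plt.imsave("clip.png", S_db, origin="lower", cmap="magma")
```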

3 Likes

I am asking how you passed these multiple squares of one image into the CNN to make the classification. I read your blog, but this is not mentioned there.

They all belong to a single category, and that is enough.
There is no need to relate them back to a given source image.
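In other words, each crop is saved into its class folder and treated as an independent labelled example. A minimal fastai v1 sketch of that idea (the folder names and hyper-parameters are hypothetical):

```python
# Each crop lives in a folder named after its class, e.g.
#   crops/car/crop_001.jpg, crops/bus/crop_002.jpg, ...
# The crop's source image is irrelevant -- only the class label matters.
from fastai.vision import ImageDataBunch, cnn_learner, models, accuracy, get_transforms

data = ImageDataBunch.from_folder("crops", train=".", valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)
```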

Great work and insights, Alex! Might be a good idea to start a new thread on the topic of working with huge datasets in fastai v1.

After listening again to the part 2 v2 lectures, I realize that what I’m trying to do may be done better with the U-Net architecture. It gives per-pixel classification while using multiple levels of detail to generate the result. The ground truth would be trivial - all pixels are the same class (the artist of that painting). I’ll get back to this project later.
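A quick sketch of that “trivial ground truth” idea, generating one constant-class mask per painting (the folder layout and artist list below are hypothetical):

```python
# Build segmentation targets where every pixel of a painting
# carries the artist's class index; layout and classes are hypothetical.
from pathlib import Path
import numpy as np
from PIL import Image

artists = ["monet", "van_gogh", "rembrandt"]   # hypothetical class list
src = Path("paintings")                        # paintings/<artist>/<image>.jpg
dst = Path("masks")
dst.mkdir(exist_ok=True)

for class_idx, artist in enumerate(artists):
    for img_path in (src / artist).glob("*.jpg"):
        w, h = Image.open(img_path).size
        mask = np.full((h, w), class_idx, dtype=np.uint8)  # constant-class mask
        Image.fromarray(mask).save(dst / f"{img_path.stem}_mask.png")
```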

How can we apply 10-fold cross-validation in fastai?

Thanks
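As far as I know there is no built-in k-fold helper, but one common approach (a hedged sketch; the DataFrame, path, and epoch count are hypothetical) is to let sklearn generate the folds and rebuild the DataBunch and learner for each split:

```python
# 10-fold CV sketch for fastai v1: sklearn makes the folds, fastai
# rebuilds the data and learner per fold. df/path/epochs are hypothetical.
import pandas as pd
from sklearn.model_selection import KFold
from fastai.vision import ImageList, cnn_learner, models, accuracy

df = pd.read_csv("labels.csv")   # hypothetical: filename column + label column
path = "images"

kf = KFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, valid_idx in kf.split(df):
    data = (ImageList.from_df(df, path)
                     .split_by_idx(valid_idx)
                     .label_from_df()
                     .databunch())
    learn = cnn_learner(data, models.resnet34, metrics=accuracy)
    learn.fit_one_cycle(4)
    scores.append(learn.validate()[1])   # metric on the held-out fold

print(sum(scores) / len(scores))
```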

This is neat! :clap:

I am also a bit confused now.

This is counter-intuitive. When we say val loss > train loss, it means my model did well on training (low train loss) but worse on validation (high val loss); it learned the training data well but is not generalizing to the validation set, so it is “over-fitting”.
On the other hand, when val loss < train loss, I am doing well on validation but not so well on training, so I have room for improvement; I am “under-fitting”.

Am I missing something? (Sorry to tag you directly @jeremy)
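For concreteness, a small helper (the function name is mine) for reading the two quantities from a fitted fastai v1 learner:

```python
# Hypothetical helper: compare the latest train and valid losses
# recorded by a fastai v1 learner after fitting.
from fastai.basic_train import Learner

def loss_gap(learn: Learner) -> None:
    train_loss = float(learn.recorder.losses[-1])   # last training batch's loss
    valid_loss = learn.recorder.val_losses[-1]      # last epoch's validation loss
    print(f"train {train_loss:.3f} vs valid {valid_loss:.3f}")
    # valid < train suggests under-fitting; train far below valid suggests over-fitting
```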

2 Likes

Many apologies - not enough sleep and I didn’t notice I’d typed the opposite of what I meant! Fixed my post now, and removed most of the replies of people that I confused in the process, so as to avoid confusing people even more… :blush:

10 Likes

Greetings! Sorry for the late post; I was busy with school.
Below are links to my work for Assignment 1.
nb: https://gist.github.com/imbibekk/651b43aa5b4772442311515244c3cd8c
blog: https://medium.com/@bibekchaudhary/are-you-chinese-japanese-or-korean-93e4bf270a5

2 Likes

@Bibek from your post:

But take a look at the validation loss. This is higher than the training loss which means that our classifier is over-fitting

Listen again to yesterday’s lesson. You are not over-fitting! :slight_smile:

@navjots I edited the post with a link to the Jupyter notebook - link.

1 Like

Ciao @Kaspar, I’m also interested in analyzing medical images for “radiomics”.
It would be great to join efforts on some experiments.

3 Likes

Super interesting!
How did you manage the segmentation?

1 Like

Here is a result I got from trying this on elephants and mammoths. (It took me a while to figure out something cool I wanted to classify.)

Am I on the right track?

@jeremy
Thank you so much for reading the post and pointing that out.
I watched the video and understood that training loss < validation loss is actually a good sign, and that we want to train the model that way.
But I’m finding it hard to understand the model’s performance when I unfreeze the layers. It’s supposed to improve, but at best it matches the performance of the frozen-layers case.
Could you enlighten me on that?
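For reference, the usual fastai v1 recipe after unfreezing, as covered in the lesson (a hedged sketch assuming `learn` is the learner already trained with frozen layers; the slice values are examples to be read off the LR plot):

```python
# After unfreezing, retrain with a much lower, discriminative learning rate;
# unfreezing alone, at the old LR, often fails to improve on the frozen model.
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()   # pick a rate well before the loss shoots up
learn.fit_one_cycle(4, max_lr=slice(1e-6, 1e-4))   # example slice -- tune from the plot
```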

1 Like