Share your work here ✅

Maybe it would make sense to add a ReLU and an additional linear layer on top, so that instead of just a weighted combination of the NLP and tabular model activations you would also get their interactions?

Also, you mention that the same learning rates are used for both models, which sounds like an important limitation. Maybe the learned bottom-layer weights should be kept frozen and only the top layers trained?
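Roughly, the suggestion could be sketched like this in PyTorch (all names and dimensions here are illustrative, not from the original post): a small head that concatenates the two sets of activations, passes them through a ReLU and an extra linear layer so interactions can be learned, while the pretrained bottom models stay frozen.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Hypothetical head over concatenated NLP + tabular activations."""
    def __init__(self, nlp_dim, tab_dim, hidden_dim, n_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(nlp_dim + tab_dim, hidden_dim),  # mixes the two sources
            nn.ReLU(),                                 # non-linearity -> interactions
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, nlp_act, tab_act):
        # Concatenate along the feature dimension before the head.
        return self.head(torch.cat([nlp_act, tab_act], dim=1))

# Freezing the pretrained bottom layers would look like:
# for p in nlp_model.parameters(): p.requires_grad = False
# for p in tab_model.parameters(): p.requires_grad = False

head = FusionHead(nlp_dim=400, tab_dim=50, hidden_dim=100, n_classes=2)
out = head(torch.randn(8, 400), torch.randn(8, 50))
print(out.shape)  # torch.Size([8, 2])
```

Only the head's parameters would then be passed to the optimizer, sidestepping the shared-learning-rate issue.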

1 Like

You may want to try Docker Toolbox for Windows (the legacy version) instead of Docker for Windows, which only supports Windows Pro and Windows Enterprise. Try this article.

This is looking amazing already - looking forward to the next stage! :slight_smile:

2 Likes

I love this! This is a great example of how to create your custom ItemBase and ItemList :slight_smile:

Just one remark to help you port your code to the latest version of fastai, the show_batch method in AudioItem should now be show_xys in AudioItemList (and you can code show_xyzs if you want show_results to work).

For all the ways you can customize your own ItemList, there is now a tutorial, just in case you didn’t see it before.

13 Likes

Really nice. I guess baseline wander, noise and phase shift could be augmentation candidates?

Great work, it's very interesting. However, I am not able to reproduce your results; it's giving me the following error: the web page is not getting loaded.

Hi,

That all seems valid and I don’t see an error there. Can you check the console in the browser? Also, are you serving via HTTPS? Most browsers require this to access the camera.

According to your code, if the request is non-HTTPS then the code forces it to be served over HTTPS. The issue is that when I run python server.py and try to access it, the page loads for a while without the webcam turning on, and then suddenly the web page dies.
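For reference, the HTTPS-forcing behaviour described above can be sketched like this (the real server.py is not shown in this thread, so the function name and host list are illustrative). Browsers only expose getUserMedia (the webcam) on secure origins, which is why such a server redirects plain-HTTP requests:

```python
from typing import Optional
from urllib.parse import urlsplit

def force_https(url: str) -> Optional[str]:
    """Return a redirect target if `url` is plain HTTP, else None."""
    parts = urlsplit(url)
    # Local development is exempt: browsers treat localhost as a secure origin.
    if parts.scheme == "http" and parts.hostname not in ("localhost", "127.0.0.1"):
        # Swap only the scheme; path and query string are preserved.
        return url.replace("http://", "https://", 1)
    return None  # already secure (or local): serve as-is

print(force_https("http://example.com/webcam"))   # https://example.com/webcam
print(force_https("https://example.com/webcam"))  # None
print(force_https("http://localhost:5000/"))      # None
```

If the page dies after the redirect, the HTTPS endpoint itself (certificate, port) is the first thing to check.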

Which browser/os? Any errors showing up in the browser? Please feel free to message me directly so we don’t spam this thread.

Please check your inbox; let's talk privately for the convenience of others.

Hello All,

I am working on a Kaggle project (https://www.kaggle.com/c/plant-seedlings-classification/data). I have used ResNet101 model weights and further fine-tuned them, but I am not able to increase the accuracy beyond 87%.

Can anyone please guide me on how I can improve the accuracy further? The GitHub link for the notebook is https://github.com/amitkayal/PlantSeedlingsClassification/blob/master/Plant_Seedlings_Classification_fast_ai.ipynb

epoch  train_loss  valid_loss  accuracy
1      0.904559    0.602475    0.826715
2      0.886423    0.490211    0.830325
3      0.717062    0.447640    0.814079
4      0.600986    0.386214    0.877256
5      0.514590    0.415294    0.873646

Thanks
Amit

Hi, it looks like you are using the default cross-entropy loss function, which is meant for multiple classes but only one class per input/output pair.

There is a long discussion here about using MultiLabelSoftMarginLoss for multiple classes per input/output pair: https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/56.

Hope this can help

Sorry, I am not an expert and need some more guidance on this. My dataset is multi-class and each example belongs to only one specific class. So can't I use cross entropy as the loss function for this? I thought cross entropy could be used?

Thanks
Amit

@amitkayal here is the documentation for PyTorch's CrossEntropyLoss; it says that cross entropy should be used for multi-class classification problems, which is your case, I guess.

1 Like

When you do classification, the usual loss function that is used is called Cross Entropy.

It can be used if you have 2 classes, and is in this case sometimes called Binary Cross Entropy, defined by the function: -(y \log(\hat{y}) + (1-y) \log(1-\hat{y})). Since y is either 0 or 1, only one of the two log terms is active for each example, which is why it behaves like an if/else statement.

If you have more than 2 classes, it is sometimes called Categorical Cross Entropy and is defined in a general manner by the function: - \sum_c y_c \log(\hat{y_c}).

So in your case you should use Categorical Cross Entropy, simply called CrossEntropyLoss in PyTorch.
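As a quick sanity check of the two formulas above, here is a tiny numeric example in plain Python (the probability values are made up):

```python
import math

def binary_cross_entropy(y, y_hat):
    # -(y*log(y_hat) + (1-y)*log(1-y_hat)), with y in {0, 1}
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

def categorical_cross_entropy(y, y_hat):
    # -sum_c y_c * log(y_hat_c), with y a one-hot vector
    return -sum(yc * math.log(yhc) for yc, yhc in zip(y, y_hat))

print(binary_cross_entropy(1, 0.9))                           # ≈ 0.105
print(categorical_cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))  # ≈ 0.223
```

Note that for y = 1 the binary formula reduces to -\log(\hat{y}), which is exactly the categorical formula with a one-hot target: the binary case is just the 2-class special case.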

5 Likes

Thanks a lot. So this means I need to override the default loss function in my create_cnn call.

thanks a lot…

@NathanHub …I have made the changes and the loss is now explicitly set. Still, I can only get up to 88.3% accuracy. I have uploaded the script; the link is https://github.com/amitkayal/PlantSeedlingsClassification/blob/master/Plant_Seedlings_Classification_fast_ai_categorical_crossentropy.ipynb

Maybe now I need to update the FC layers that fast.ai's create_cnn function adds?

epoch  train_loss  valid_loss  accuracy
1      0.217084    0.274144    0.895307
2      0.217529    0.282573    0.891697
3      0.238252    0.296328    0.884477
4      0.227515    0.311445    0.880866
5      0.226270    0.307705    0.881769
6      0.210381    0.293696    0.888087
7      0.203054    0.297964    0.882670

Thanks
Amit

1 Like

Hello All,

I have created a flask app and deployed on Heroku, here is the link:
http://water-classifier1.herokuapp.com/

I have managed to get an error rate of 10%, but there is still a lot of improvement I have to make in order to make my model robust.

A lot of people are struggling to deploy their Flask apps on Heroku because of the app size and library installation issues, so I have written a guide on GitHub in case anyone needs it.

I am also writing a blog which will be coming soon :slight_smile:

12 Likes

Cross entropy is the default for image classification in fastai, so you should expect the same results as before. MultiLabelSoftMarginLoss is a loss function in PyTorch that you can find here: https://pytorch.org/docs/stable/nn.html#multilabelmarginloss.
So you could try setting learn.loss_func = torch.nn.MultiLabelMarginLoss() and see if that works.
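For what it's worth, here is a tiny PyTorch sketch (toy tensors, not the actual notebook) of the single-label multi-class case, where CrossEntropyLoss takes raw logits and integer class indices directly:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 5)            # batch of 4 examples, 5 classes
targets = torch.tensor([0, 2, 4, 1])  # exactly one class index per example

# Single-label multi-class: CrossEntropyLoss applies log-softmax internally,
# so the model should output raw, unnormalized logits.
loss = nn.CrossEntropyLoss()(logits, targets)
print(loss.item())
```

Note that the multi-label losses expect a different target format (one label vector per example, not a single index), so for a dataset where each image belongs to exactly one class, cross entropy is usually the right call anyway.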

2 Likes