Share your work here ✅

Thanks a lot. So this means I need to override the default loss function on the learner created by create_cnn.

thanks a lot…

@NathanHub …I have made the changes and the loss function is now set explicitly. Still, I can only get up to 88.3% accuracy. I have uploaded the script; the link is https://github.com/amitkayal/PlantSeedlingsClassification/blob/master/Plant_Seedlings_Classification_fast_ai_categorical_crossentropy.ipynb

Maybe now I need to update the FC layers that fast.ai's create_cnn function adds?

| epoch | train_loss | valid_loss | accuracy |
|-------|------------|------------|----------|
| 1     | 0.217084   | 0.274144   | 0.895307 |
| 2     | 0.217529   | 0.282573   | 0.891697 |
| 3     | 0.238252   | 0.296328   | 0.884477 |
| 4     | 0.227515   | 0.311445   | 0.880866 |
| 5     | 0.226270   | 0.307705   | 0.881769 |
| 6     | 0.210381   | 0.293696   | 0.888087 |
| 7     | 0.203054   | 0.297964   | 0.882670 |

Thanks
Amit

1 Like

Hello All,

I have created a Flask app and deployed it on Heroku; here is the link:
http://water-classifier1.herokuapp.com/

I have managed to get an error rate of 10%, but there is still a lot of improvement to make before the model is robust.

A lot of people are struggling to deploy their Flask apps on Heroku because of the size limit and library installation, so I have written a guide on GitHub in case anyone needs it.

I am also writing a blog which will be coming soon :slight_smile:

12 Likes

Cross entropy is the default loss for image classification in fastai, so you should expect the same results as before. MultiLabelSoftMarginLoss is a loss function in PyTorch that you can find here: https://pytorch.org/docs/stable/nn.html#multilabelmarginloss.
So you could try setting learn.loss_func = torch.nn.MultiLabelMarginLoss() and see if that works.
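
If it helps, here is a minimal sketch of what overriding the loss on a fastai v1 Learner can look like. The `data` object and the choice of resnet34 are placeholders for your own setup, not taken from your notebook:

```python
# Minimal sketch, assuming fastai v1 and an existing ImageDataBunch called `data`.
import torch.nn as nn
from fastai.vision import create_cnn, models
from fastai.metrics import accuracy

learn = create_cnn(data, models.resnet34, metrics=accuracy)
# The default for single-label image classification is cross entropy;
# you can set it explicitly, or swap in another criterion to experiment:
learn.loss_func = nn.CrossEntropyLoss()
# learn.loss_func = nn.MultiLabelSoftMarginLoss()  # alternative to try
learn.fit_one_cycle(4)
```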

2 Likes

Walt Whitman Poetry Generator app: https://leaves-of-ai.now.sh/
Github: https://github.com/btahir/leaves-of-ai

It's more style than substance at the moment, but it was a fun project. I have set up the web app to take a string input and run learn.predict on it, rather than the image upload/classification in the default Zeit sample app. Feel free to use it if you are doing something similar. :slight_smile:
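
In case it's useful, here is a rough sketch of that kind of change, assuming the Starlette-based Zeit sample app from the course and a text Learner exported to export.pkl. The route name and n_words value are purely illustrative, not taken from my repo:

```python
# Rough sketch, not the actual app code: a Starlette endpoint that takes a string
# and runs learn.predict on it instead of classifying an uploaded image.
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from fastai.basic_train import load_learner
import uvicorn

app = Starlette()
learn = load_learner('.', 'export.pkl')  # exported text / language-model Learner

@app.route('/predict', methods=['POST'])
async def predict(request):
    form = await request.form()
    prompt = form['text']
    # For a language-model Learner, predict() continues the prompt
    result = learn.predict(prompt, n_words=40)
    return JSONResponse({'result': str(result)})

if __name__ == '__main__':
    uvicorn.run(app, host='0.0.0.0', port=8000)
```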

16 Likes

Progressive resizing is a nice trick to get very good results. Since those are pictures of plants taken by hand and from the top, you could use more data augmentation such as warping (the plants are not all exactly straight and the pictures are not taken from exactly above), and you could also use some random rotations, since plants seen from above do not depend much on orientation (I saw that you used vertical flipping, which is also a good idea).
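
For reference, here is a hedged sketch of what such augmentation could look like with fastai v1's get_transforms; the parameter values are illustrative rather than tuned, and the data path is a placeholder:

```python
# Illustrative augmentation setup for top-down plant photos (values are not tuned).
from fastai.vision import get_transforms, ImageDataBunch

tfms = get_transforms(
    flip_vert=True,     # vertical flips make sense for photos taken from above
    max_rotate=180.0,   # orientation carries little meaning from the top
    max_warp=0.2,       # mild perspective warp; the camera is rarely exactly overhead
)
data = ImageDataBunch.from_folder('data/seedlings', valid_pct=0.2,
                                  ds_tfms=tfms, size=224)
```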

Also, why wouldn't you update the FC layers? After fine-tuning ResNet a little to avoid having random weights in the FC layers, you certainly want to update them. Hope that helps :slight_smile:

1 Like

I generated Grad-CAM heat maps (from lesson 6) at different points along the network to see how they shape up. Here's how they look:

The way these heat maps look (for different input images) at all points except the last few layers seems pretty random to me. I'd love to hear from some of the more experienced folks here whether they can draw any generalizations from these images.
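
For anyone who wants to reproduce this kind of plot, below is a hedged sketch of the lesson-6 style Grad-CAM computation; it assumes a trained fastai v1 Learner, a single preprocessed input batch x, and leaves the choice of layer to you:

```python
# Sketch of a lesson-6 style Grad-CAM heat map using fastai's output hooks.
# `learn` is a trained CNN Learner, `x` a single input batch on the right device,
# `layer` e.g. learn.model[0] for the ResNet body, `target_class` an int label index.
from fastai.callbacks.hooks import hook_output

def gradcam(learn, x, target_class, layer):
    m = learn.model.eval()
    with hook_output(layer) as hook_a, hook_output(layer, grad=True) as hook_g:
        preds = m(x)
        preds[0, target_class].backward()
    acts  = hook_a.stored[0].cpu()      # activations at the chosen layer [ch, h, w]
    grads = hook_g.stored[0][0].cpu()   # gradients w.r.t. those activations
    grad_chan = grads.mean(1).mean(1)   # average gradient per channel
    # Channel-weighted activation map: bright regions drove the target class most
    return (acts * grad_chan[..., None, None]).mean(0)
```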

Notebooks here.

8 Likes

Proof of preventing overfitting by implementing dropout
Hi, this week I tried to implement my own classifier from scratch using PyTorch, building my own four-layer neural network and comparing two architectures, one with dropout and one without. Please check out the Medium post below to see my full implementation, along with the evidence that dropout prevents overfitting.
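
To make the comparison concrete, here is a minimal sketch of that kind of setup; the layer sizes and dropout probability are illustrative and not taken from my post:

```python
# Minimal PyTorch sketch: the same fully connected classifier built with and without
# dropout, so the two can be trained side by side and their validation curves compared.
import torch.nn as nn

def make_classifier(n_in, n_classes, use_dropout=True, p=0.5):
    sizes = [n_in, 512, 256, 128]   # three hidden layers; the output layer is added below
    layers = []
    for a, b in zip(sizes, sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
        if use_dropout:
            layers.append(nn.Dropout(p))   # randomly zero activations during training
    layers.append(nn.Linear(sizes[-1], n_classes))
    return nn.Sequential(*layers)

model_with_dropout = make_classifier(784, 10, use_dropout=True)
model_without_dropout = make_classifier(784, 10, use_dropout=False)
```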

All feedback is welcome! :slight_smile:

Link:

Hi everyone,

In lessons 5 and 6, Jeremy introduced a few regularization techniques such as dropout and weight decay, and optimization methods such as momentum and Adam. So I decided to implement them all from scratch (using only Python and NumPy) just to see how hard it would be (I implemented a bare-bones neural net a while ago, so it wasn't too bad). It helped me a lot in exploring the architecture and its hyperparameters in depth, and in troubleshooting as well (e.g. knowing what vanishing and exploding gradients look like and how to fix them). So I decided to share the code with you, and I hope it helps you as much as it helped me.
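
To give a flavour of what the from-scratch code involves, here is a simplified, illustrative sketch of one SGD step with momentum and L2 weight decay; it is not taken verbatim from the repo, and the names and default values are just placeholders:

```python
# NumPy sketch of SGD with momentum and L2 weight decay, one update step.
import numpy as np

def sgd_momentum_step(params, grads, velocities, lr=0.01, momentum=0.9, wd=1e-4):
    """Update each parameter array in place and return the updated velocities."""
    new_velocities = []
    for p, g, v in zip(params, grads, velocities):
        g = g + wd * p              # weight decay: add wd * p to the gradient
        v = momentum * v + g        # exponentially decaying history of gradients
        p -= lr * v                 # move the parameters along the velocity
        new_velocities.append(v)
    return new_velocities

# Tiny usage example
w, g, v = np.zeros(3), np.array([0.1, -0.2, 0.3]), np.zeros(3)
new_v = sgd_momentum_step([w], [g], [v])
```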

Here is the notebook where I do my experiments and training: part 1 on neural nets + regularization / part 2 on optimizers. The code itself is in neural_network.py and optimizers.py.

If you are interested, I have implemented some other machine learning algorithms as well. You can take a look here: http://quantran.xyz/projects/#scratch-ml

9 Likes

Hey lovely people,

parts 3 and 4, and with them my whole series about the inner workings of convolutional neural networks, are finished. I am very happy with how the whole thing turned out, and I hope it will be helpful for some of you. These parts focus on the backward pass, i.e. backpropagation in general, gradient descent, and backpropagation in the convolutional layers.


Cheers,
Marvin

4 Likes

Hello!

I wanted to share my work on image classification, with different problems:

  1. Happy vs Angry dogs: getting 95% accuracy on validation (dogs_CM)
  2. Taekwondo vs Judo matches: getting 98% accuracy (martial_CM)
  3. Trashies vs Shopkins toys: with 95% accuracy (toys_CM)

In all cases I used transfer learning, tuning the last layer only (no unfreeze), with 10 epochs and all hyper-parameters at their defaults.

I downloaded the images from Google, as explained in the class, with somewhere between 100 and 400 images per class. I spent a significant amount of time curating the images (removing duplicates, cropping when needed).

For notebooks and more details: https://github.com/martin-merener/deep_learning/tree/master/quick_transfer_learning

1 Like

I also worked on a multi-class problem, classifying 12 different kinds of sushi.

getting an accuracy of 93%

Approach: transfer learning from resnet34, no unfreeze, all hyper-parameters at their defaults, 10 epochs.
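
For completeness, this is roughly what that recipe looks like in fastai v1 (a sketch only; `data` stands in for the ImageDataBunch built from the sushi images):

```python
# Sketch of the approach above: pretrained resnet34, train only the new head,
# defaults everywhere else, 10 epochs, no unfreeze.
from fastai.vision import create_cnn, models
from fastai.metrics import accuracy

learn = create_cnn(data, models.resnet34, metrics=accuracy)  # body frozen by default
learn.fit_one_cycle(10)
```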

And I built the corresponding web app demo: https://sushiclassifier-xiylbmtwnr.now.sh/

Details and notebook: https://github.com/martin-merener/deep_learning/tree/master/quick_transfer_learning

Cheers!

2 Likes

Hey,
I've written up a blog post about a side project at work: finding wrongly conjoined words like "thespace" or "helloworld" and splitting them up, so we can reduce the number of unk tokens for NLP problems.
The implementation is based on two character-level language models which read the text forwards and backwards and try to identify likely split points, all implemented with fastai and PyTorch!
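
To sketch the idea (an illustrative toy version, not the code from the blog post; forward_lm, backward_lm and char_prob are hypothetical helpers): each position inside a suspicious token is scored by how plausible both models find a word boundary there, and the best-scoring position becomes the split point.

```python
# Toy sketch of scoring split points with a forward and a backward character-level LM.
# `forward_lm` / `backward_lm` are hypothetical wrappers around the trained models and
# `char_prob(context, next_char)` is an assumed helper returning P(next_char | context).
def split_scores(token, forward_lm, backward_lm, boundary=' '):
    scores = []
    for i in range(1, len(token)):
        left, right = token[:i], token[i:]
        p_fwd = forward_lm.char_prob(left, boundary)          # boundary after the left part
        p_bwd = backward_lm.char_prob(right[::-1], boundary)  # boundary before the right part
        scores.append((i, p_fwd * p_bwd))
    return sorted(scores, key=lambda t: t[1], reverse=True)

# e.g. the top-scoring index for "thespace" should be 3, giving "the" + "space"
```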
Blogpost:


Notebook:

13 Likes

Paul, I tried to set up VS Code on my local machine (Mac) to browse the fastai code, but the .ipynb files only load as raw JSON… how did you work around that? I have already installed the Microsoft Python extension as well as two Jupyter extensions inside VS Code. Can you guide me please? Thanks.

The source code of the fastai library is in the *.py files.

In VS Code you can add the fastai subfolder to your workspace so you can easily search it.

There may also be options to view Jupyter notebooks properly, but I am not aware of them. Maybe somebody knows how to view/run them in VS Code?

Thanks Michael… I was looking at the course repo earlier. You're right, I'm able to view the fastai library Python code just fine. But my goal is to be able to start at a course notebook and go back and forth into the modules and function calls from there.

Also, within VS Code, would you know how to build tags so I can jump in and out of functions? Do folks usually clone the PyTorch repo as well, so they can trace calls all the way through?

Thanks!

I've written the following short Medium post diving deeper into the PyTorch library, doing some code review, and looking at the concepts we went deeper into during Lesson 5.

Following @jeremy's recommendation, I have focused mainly on the torch.nn.modules.linear class.
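
As a quick companion to the post, here is a simplified re-implementation of what nn.Linear does under the hood (a sketch only; the real class also handles an optional bias flag and more careful initialization):

```python
# Simplified sketch of torch.nn.Linear: a weight matrix, a bias vector,
# and y = x @ W^T + b in the forward pass.
import math
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Kaiming-style uniform init, similar in spirit to the PyTorch default
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

    def forward(self, x):
        return x @ self.weight.t() + self.bias
```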

Nalini: I’ve installed the extension in the picture and it’s quite usable:

I'm an emacs user by habit, and I typically do my development in emacs; for notebooks I sometimes use "ein" in emacs, but often I just use whatever browser is available (Safari/Firefox/Chrome). I've played with VS Code on the Mac and on Linux, and I like it, but I'm not an expert.

4 Likes

I did some research on this for another multi-class problem, and I think that multi-label soft margin loss can be unstable. Using BCEWithLogitsLoss is a good default in PyTorch.
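
For reference, a minimal sketch of what that looks like in plain PyTorch (the shapes and number of labels are made up for the example):

```python
# BCEWithLogitsLoss for multi-label classification: targets are multi-hot float
# tensors, one column per candidate label; the loss applies a sigmoid internally.
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits  = torch.randn(4, 12)                     # batch of 4, 12 candidate labels
targets = torch.randint(0, 2, (4, 12)).float()   # multi-hot ground truth
loss = criterion(logits, targets)
```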