Share your work here ✅

Hello everyone, I want to ask: what is the better way to share a Jupyter notebook on GitHub, using a Gist or a repo? Thanks.

Hey everyone, I just wanted to share some work I’ve been doing. At my university, I have been preaching fastai to both undergraduate and graduate students. As a result, I “teach” the course there through my club. Essentially I use the lessons as a base and expand from there. Through this, I’ve been able to get four research projects going for other students using the fastai library, and the professors love it. I wanted to share my lecture/meetup material with you in case anyone else finds it useful. This year I made it two days a week: on the first day we go over a particular type of problem (tabular, images, etc.), and the second day is focused on state-of-the-art practices with code examples, along with helpful tips/resources/functions for applying fastai and deep learning to research. If anyone wants to take a look, my notebooks are here :slight_smile:

It may look slightly disorganized; I’m still preparing for the next class this semester. I should be completely done with the new material in the next week or so.

The notebooks are all finished :slight_smile:

9 Likes

Hi muellerzr, hope all is well!

Thanks once again for sharing your work.

Have you got any time management tips or a specific work ethic I could learn or emulate?
You seem to create and help so much.


Thank you.

mrfabulous1 :smiley::smiley:

1 Like

Hey @mrfabulous1! Sure :slight_smile: I usually find some project I can just get lost in, explore it until it frustrates me, and continue until it doesn’t. Also, trying to teach and guide others at my school has really helped me make sure I know the material, as the people I am helping sometimes come in having never even touched Python. That takes a lot of prep work and thought about how to steer them in the right direction.

For the past few months I have also worked roughly 1-2 hours a day on smaller projects (this was before the meetup work), just exploring what some functions do, how they work, and applying them to any dataset I could find. Since most of my research is tabular, I was going through datasets found in the UCI repository.

Then, I’d explore pure PyTorch code from papers and try to migrate it to fastai. Sometimes this is easy, e.g. the new optimizer LessW2020 got working, which is a simple port of a function; other times it means pulling full architectures from papers such as NTS-Net or Deep High-Res. Again, I work at most two hours a day so I don’t get too frustrated.

I also explore the source code and lecture notebooks. Often. How does x work? Why does x work? And why does doing y break x’s code? (What did I do wrong?) Most of the time, simply tracing back what a function does answers my questions. As for the course notebooks, I still can’t remember how to write an image DataBunch from memory, so I cheat (oh no!). I try not to, and if something doesn’t quite work, the course notebooks show an example for almost any problem, so I debug there.
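
For reference, here’s roughly the fastai v1 call I keep having to look up; a minimal sketch from memory with an illustrative path, so double-check against the docs:

from fastai.vision import ImageDataBunch, get_transforms, imagenet_stats

# Illustrative layout: 'data/my_images' contains one subfolder per class
data = ImageDataBunch.from_folder(
    'data/my_images',
    valid_pct=0.2,             # hold out 20% of the images for validation
    ds_tfms=get_transforms(),  # default augmentation
    size=224,                  # resize everything to 224x224
).normalize(imagenet_stats)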

I write (or try to) when I can. I haven’t written for my blog lately as things have been crazy, but I’ve found that writing blog posts helps me figure out the most important bits from the lectures, the library, etc., and also helps me explain them to others.

And lastly, the lectures (the actual fastai course). Honestly, I didn’t complete course v3 for four months. Why? I focused on what I needed at the time and slowly worked my way through. Doing this allowed me not to get overwhelmed right away by the super advanced topics at the end of the course, and instead to focus on what I needed to learn and do at the time for my various tasks.

I know I said lastly, but one more just came to me: don’t be afraid to be curious. Einstein once said, “The important thing is not to stop questioning. Curiosity has its own reason for existing.” This can take many forms, such as feature engineering, playing around with the number of variables or classes, hyperparameter tuning, etc. Even if someone’s done it, assume their way may not be the best, and try to see if you can outthink it. Even if that somebody is yourself! :slight_smile: I had a research project where I was trying to beat a baseline with random forests. I spent two months on it and couldn’t quite do it; I always fell 1-2% short. Then, a few months later, I discovered a paper on feature engineering for sensor data, revisited the project with my new knowledge and practices, and wound up blowing the baseline out of the water! Patience, persistence, and curiosity are everything. While I know a decent amount about the library, there is much I don’t know, and I always remember that to stay level-headed. Every day I’m learning something new just by playing around.

So basic sum-up:

  • Spend 1-2 hours a day on mini projects that I can get deep into, for a month or two at most.
  • Look over the source code and notebooks often.
  • Write blog posts and lectures geared towards those who either barely know what fastai is or are just getting the basics, to make sure I know the material well enough to explain it.
  • Go through the lectures and courses slowly, relistening and rerunning the notebooks often.
  • You are your own rival. Try to outperform yourself on your projects and you will see growth.
  • Read the forum daily, even just a casual browse of a topic. I may not understand something people are talking about, but I know it exists and I can revisit it later if I need to.

Hope some of that will help you or others keep going :slight_smile: I’ve only been in this for 9 months now, and doing the above has helped me solidify my comprehension of the material to a point where it’s allowed me to teach and help others at a young age (I’m still 21) and opened many research and job opportunities. It doesn’t take much to get there :slight_smile:

13 Likes

Hi muellerzr, thank you for providing a comprehensive reply. :smiley:

I’m happy to say I have always had a lot of perseverance, curiosity, and patience with others, but according to my partner, not with myself. I do a few of the things you mentioned, but from your reply I can see I can do a lot more. I will endeavor to add some of your tips to my repertoire.

Many Thanks mrfabulous1 :smiley::smiley:

1 Like

I’ve been going through lessons 1 and 2. I think I got the ideas okay; my problem has been trying to deploy to a web app for free. I tried Heroku a few times but have been having problems. I’d like to do it, but it’s been hard, so I’ll check Android next.

But anyway, the thing I’m doing is a basic art classifier that tells you the artistic movement. Here’s a sample of the dataset:

My current error_rate is down to between 2-5%.

2 Likes

Hi @LauraB. Great job! In fact, wow, you did a LOT of work adding and trimming fastai to fit it into the Lambda. But why? :slight_smile:

I just deployed a small project (I’ll share it soon), but I didn’t have to add fastai, so I saved a lot of time there. I just exported the model to PyTorch and then used the Dockerfile from PyTorch, which had all the modules I needed ( https://github.com/brunosan/iris-ai/blob/master/iris-aws-lambda/pytorch/app.py#L4 ). What made you need fastai? Just curious; you must have spent A LOT of time on that bit, but I don’t know why. I think the reason is that you didn’t port your model from the fastai format to the PyTorch format (explained here: https://course.fast.ai/deployment_aws_lambda.html#export-your-trained-model-and-upload-to-s3 )

2 Likes

Hi @brunosan

You are right that you don’t need the fast.ai library for inference if you export your model to the PyTorch format.

I wanted to see if it was possible to have the fast.ai library running on AWS lambda, and it was a good learning experience for me :slight_smile:

Laura

2 Likes

Weeds vs Grass

So there is great interest in reducing the use of herbicides in parks where children play, so I thought I would take a walk in the park, take photos (25 weeds / 25 grass) with my iPhone, and see whether the resnet34 classifier would work. And it did! The error rate was 12.5%. I had 8 images in my validation set and 17 in my training set. I did it again with a different mix of train and validation and got an error rate of 25%. But I then swapped to resnet50 and that dropped back down to 12.5%.
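
For anyone curious, the experiment is essentially the lesson 1 recipe with the backbone swapped; a rough sketch (the folder name and epoch counts are illustrative, not exactly what I ran):

from fastai.vision import (ImageDataBunch, cnn_learner, get_transforms,
                           models, imagenet_stats)
from fastai.metrics import error_rate

# Illustrative: 'weeds_grass' holds 'weed' and 'grass' subfolders of the iPhone photos
data = ImageDataBunch.from_folder('weeds_grass', valid_pct=0.2,
                                  ds_tfms=get_transforms(),
                                  size=224).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)

# Swapping to the bigger backbone is a one-line change
learn50 = cnn_learner(data, models.resnet50, metrics=error_rate)
learn50.fit_one_cycle(4)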

BTW, I’m running all of this on my Windows machine, and I spent very little effort installing fastai and PyTorch in my virtual environment using Visual Studio and pip. I’ve only really tested all of lesson 1 though - fingers crossed.

1 Like

I’m a non-engineer business guy diving into this world of deep learning, and I’m loving it. After the 1st lesson I have created (painstakingly) my first trained model AND web app (it took me hours to figure this out). You can input any picture of a human face and my model will tell you if it is smiling, frowning, or sad! It has an 82% accuracy rate. I’m not sure if that’s a bad or good accuracy rate, but I’m proud of it. Looking forward to lessons 2-7!

Here is my web app! https://expressive.onrender.com/

I wrote about it here: https://www.instagram.com/p/B1afnSgjzY3/

2 Likes

Hey everyone, I tried to see if I could beat the IMDB results by including SentencePiece and ensembling four different models (forward + backward, SentencePiece + SpaCy). I did not quite achieve state of the art, and I need to see what I missed since I did not match Jeremy’s results, but they look promising! CrossPost, Article, Notebook
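
The ensembling itself is nothing fancy; conceptually it’s just averaging the class probabilities from the four learners, roughly like this sketch (the variable names are illustrative):

import numpy as np

def ensemble_preds(prob_arrays):
    """Average (n_samples, n_classes) probability arrays from several models,
    e.g. forward/backward x SentencePiece/SpaCy, and return class indices."""
    avg = np.mean(prob_arrays, axis=0)
    return avg.argmax(axis=1)

# Usage (illustrative): each p_* comes from one learner's get_preds() on the same test set
# labels = ensemble_preds([p_fwd_spacy, p_bwd_spacy, p_fwd_sp, p_bwd_sp])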

1 Like

Hi Ramon,
thank you very much for sharing this app! I learned a lot deploying it on a local server. It took a while since I had to learn about Docker and other stuff.
Anyway, I got it running, but I am doing something wrong.
I compared my version with the Heroku version and everything works fine except for the heatmap: the Heroku version is fine, but my version looks a bit scrambled.
This is the Heroku version:


This is mine:

Any help will be appreciated.

Lorenzo

1 Like

Hi Iphattori,

Thanks for your appreciation!

My best guess right now is that I made some changes (and committed them without deploying) after the last Heroku deploy. Maybe I upgraded the model or changed the predict.py file. I’m not able to dig into this right now, but you might want to check the latest commits to see if a change broke this part of the app.

Best regards,
Ramon

Continuing the discussion from Share your work here :white_check_mark::

Should SentencePiece help on an English corpus? I treat it as a necessary evil for Polish, as we have too many forms of each word for a standard vocabulary to work, but I wasn’t aware this is needed/helpful for English…

In the NLP class, Jeremy discussed trying a blend of all four; that’s why I did it. Overall I noticed SentencePiece performing slightly worse, but only barely.

1 Like

That’s cool - what did you use for data? How many images and did you manually label?

I scraped images of the members of congress from https://congress.gov using Beautiful Soup and built a classifier model using the lesson 2 notebook to determine whether an image was of a Republican or Democrat. It is deployed on render at https://repubordem.onrender.com.

I used images of 304 Republican members of Congress and 249 Democratic members of Congress. I got the overall error rate down to 35%, which I interpreted to mean the model was picking up something meaningful to distinguish Republicans from Democrats, though it’s not that great, since you could get an error rate of 45% just by picking Republican every time.
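
For reference, the scraping boils down to something like this bare-bones sketch; the selectors and output paths here are generic placeholders rather than the exact scraper used for this project:

import requests
from bs4 import BeautifulSoup
from pathlib import Path
from urllib.parse import urljoin

def download_page_images(page_url, out_dir):
    """Grab every <img> found on a page (simplified placeholder version)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    soup = BeautifulSoup(requests.get(page_url).text, 'html.parser')
    for i, img in enumerate(soup.find_all('img')):
        src = img.get('src')
        if not src:
            continue
        data = requests.get(urljoin(page_url, src)).content
        (out / f'img_{i:04d}.jpg').write_bytes(data)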

1 Like

Hi all,

New member here. Thanks for this open source software!

I recently completed lessons 1 and 2 of the fast.ai course and wanted to get stuck in. I decided to classify ships and wrote a blog post on this at https://sites.google.com/view/raybellwaves/blog/classifying-ship-classes

I had a few hiccups along the way, but as a result I am more familiar with the software. Here’s a list of some of my stalling points and how I got around them:

1 Like

Banknote detection for blind people

I wanted to share a banknote detector I made. It recognizes the currency (euro or US dollar) and the denomination (5, 10, 20, …). The social-impact purpose is to help blind people, so I took care to make “real-life” training images, holding the banknotes in my hand, sometimes folded, sometimes with part of the note covered.

It is deployed on iris.brunosan.eu

As others have shared, the fast, fun, and easy part is the deep learning (congrats fastai!), and the production server took roughly 10x the time (I also had to learn some details about Docker and serverless applications).

The challenge

I found just a few efforts to identify banknotes for blind people. Some attempts use computer vision and “scale-invariant features” (with ~70% accuracy) and some use machine learning (with much higher accuracy). On the machine learning side, it’s worth mentioning one by Microsoft Research last year and one by a Nepali programmer, Kshitiz Rimal, with support from Intel, this year.

  • Microsoft announced their version at an AI summit last year; it “has been downloaded more than 100,000 times and has helped users with over three million tasks.” Their code is available here (sans training data). Basically, they use Keras and transfer learning, as we do in our course, but they don’t unfreeze for fine-tuning, and they create a “background” class of non-relevant pictures (which, as Jeremy says, is an odd thing to do). They used a mobile-friendly pre-trained net, “MobileNet”, to run the detection on-device, and 250 images per banknote (plus data augmentation). They get 85% accuracy.

  • The Nepali version from Kshitiz: 14,000 images in total (taken by him), and it gets 93% accuracy. He started with VGG19 and Keras for the neural net and “React Native” for the app (a framework that can create both an iOS and Android app from the same code), but then he switched to TensorFlow with MobileNetV2 and native apps on each platform. This was a 6-month effort. Kudos!! He has the code for the training, AND the code for the apps, AND the training data on GitHub.

My goal was to replicate a similar solution, but I will only make a functioning website, not the app or on-device detection (I’m leaving that for now). Since I wanted to handle several currencies at once, I decided to try multi-label classification. All the solutions I’ve seen use single-class detection, e.g. “1 usd”, whereas I wanted to break it into two classes, “1” and “usd”. The reasoning is that I think there are features to learn across currencies (all USD notes look similar) and also across denominations (the 5 USD and 5 EUR notes have the digit in common). The commonalities should help the net reinforce those features for each class (e.g. a big digit “5”).

The easy part, Deep learning

I basically followed the multi-label lesson for satellite imagery, without many changes:
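
Roughly speaking, the data block ends up looking like the planet notebook’s, just with banknote tags. A sketch assuming a labels.csv with space-separated tags like “5 usd” (file names and folder layout are illustrative):

from fastai.vision import ImageList, get_transforms, imagenet_stats

# Illustrative: 'banknotes/labels.csv' lists image names plus tags such as "5 usd"
data = (ImageList.from_csv('banknotes', 'labels.csv', folder='images')
        .split_by_rand_pct(0.2)            # 20% validation split
        .label_from_df(label_delim=' ')    # multi-label: one image, several tags
        .transform(get_transforms(), size=256)
        .databunch()
        .normalize(imagenet_stats))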

The data

It is surprisingly hard to get images of single banknotes in real-life situations. After finishing this project I found the Jordan paper and the Nepali project, which both link to their datasets.

I decided to lean on Google Image searches, which I knew would give me unrealistically good images of banknotes, plus some that I took myself with money I had at home for the low denominations (sadly I don’t have $100 or 500 EUR notes lying around). In total I had between 14 and 30 images per banknote denomination. Not much at all. My dataset is here.

Since I didn’t have many images, I used data augmentation with widened parameters (I wrongly added flips; it’s probably not a good idea):

# Aggressive augmentation to compensate for the small dataset
# (the flips were probably a mistake, since banknotes are rarely seen mirrored)
tfms = get_transforms(do_flip=True, flip_vert=True,  # horizontal and vertical flips
                      max_rotate=90,                  # rotate up to 90 degrees
                      max_zoom=1.5,                   # zoom up to 1.5x
                      max_lighting=0.5,               # strong lighting/contrast changes
                      max_warp=0.5)                   # strong perspective warping

In the end, the training/validation set looked like this:

It’s amazing that one can get such good results with so few images.

The training

I used a 20% split for validation, 256-pixel images, and resnet50 as the pre-trained model. With the resnet frozen, I did 15 epochs (2 minutes each) and got an fbeta of 0.87, pretty good already. Then I unfroze and did 20 more epochs with sliced learning rates (bigger on the last layers) to get 0.98. I was able to squeeze out some more accuracy by freezing the pre-trained model again and doing some more epochs. The best was fbeta = 0.983. No signs of over-fitting, and I used the default dropout parameters.
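
A rough sketch of that training schedule, assuming the multi-label data bunch above (the learning-rate slice and the final epoch count are illustrative):

from fastai.vision import cnn_learner, models
from fastai.metrics import fbeta

learn = cnn_learner(data, models.resnet50, metrics=fbeta)  # pretrained resnet50
learn.fit_one_cycle(15)                                    # head only, body frozen
learn.unfreeze()
learn.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))          # discriminative learning rates
learn.freeze()
learn.fit_one_cycle(5)                                     # a few more frozen epochs for the last bit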

Exporting the model and testing inference.

Exporting the model to a PyTorch TorchScript file for deployment is just a few lines of code.
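
For reference, the export boils down to something like this sketch (paths and image size are illustrative; see the course deployment guide for the exact steps):

import torch
from fastai.vision import load_learner

learn = load_learner('banknotes')   # loads the export.pkl saved with learn.export()
model = learn.model.eval().cpu()

# Trace the underlying PyTorch model with a dummy 256x256 RGB batch and save TorchScript
example = torch.rand(1, 3, 256, 256)
traced = torch.jit.trace(model, example)
traced.save('model.pt')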

I did spend some time testing the exported model and looking at the outputs (both the raw activations and the softmax). I then realized that I could use them to infer confidence:

  • positive raw activations (which always translate to high softmax) usually meant high confidence
  • negative raw activations but non-zero softmax probabilities happened when there was no clear identification, so I could use them as “tentative alternatives”.

e.g. this problematic image of a folded 5 USD note with most of the 5 covered:

{'probabilities': {
    'classes': ['1', '10', '100', '20', '200', '5', '50', '500', 'euro', 'usd'],
    'softmax': ['0.00', '0.00', '0.01', '0.04', '0.01', '0.20', '0.00', '0.00', '0.00', '99.73'],
    'output': ['-544.18', '-616.93', '-347.05', '-246.08', '-430.36', '-83.76', '-550.20', '-655.22', '-535.67', '537.59']},
 'summary': ['usd'],
 'others': {'5': '0.20%', '20': '0.04%', '100': '0.01%', '200': '0.01%'}}

Only the activation for class “usd” is positive (last in the array), but the softmax also correctly brings class “5” up, together with some doubt about class “20”.
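
That rule can be turned into a small helper, sketched here with the class order from above (the cutoff for the “tentative alternatives” is illustrative):

import torch.nn.functional as F

CLASSES = ['1', '10', '100', '20', '200', '5', '50', '500', 'euro', 'usd']

def summarize(raw_outputs, alt_threshold=1e-4):
    """Positive raw activations go into the confident 'summary'; negative ones
    with non-negligible softmax become 'tentative alternatives'."""
    probs = F.softmax(raw_outputs, dim=0).tolist()
    raw = raw_outputs.tolist()
    summary = [c for c, o in zip(CLASSES, raw) if o > 0]
    others = {c: f'{p:.2%}' for c, o, p in zip(CLASSES, raw, probs)
              if o <= 0 and p > alt_threshold}
    return {'summary': summary, 'others': others}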

Deployment

This was the hard part.

Basically you need two parts: the client and the server.

  • The front-end is what people see: it gives you a page to look at (I use Bootstrap for the UI) and the code to select an image, and finally it displays the result. I added some code to downsample the image on the client using JavaScript, because camera pictures are quite heavy nowadays and all the inference process needs is a 256-pixel image. These are the 11 lines of code to downsample on the client. Since this is all static code, I used GitHub Pages on the same repository.

  • The back-end is the one that receives the image, runs the inference code on our model, and returns the results. It’s the hard part of the hard part :slight_smile:, see below:

I first used Google Cloud Engine (GCE), as instructed here. My deployment code is here, and it includes code to upload and save a copy of the user images with the inferred class, so I can check false classifications and use them for further training.

Overall it was very easy to deploy. It basically creates a Docker container that deploys whatever code you need and spins up instances as needed. My problem was that the server is always running, at least two copies of it, actually. GCE is meant for very high scalability and responsiveness, which is great, but it also meant I was paying all the time, even if no one was using it. I think it would have been $5-10/month. If possible, I wanted to deploy something that could remain online for a long time without costing much.

I decided to switch to AWS Lambda (course instructions here). The process looks more complicated, but it’s actually not that hard, and the huge benefit is that you only pay per use. Moreover, at this usage level, we will be well within the free tier (except for the cost of keeping the model on S3, which is minimal). My deployment code is here. Since you are deploying a TorchScript model, you just need the PyTorch dependencies, and AWS has a nice Docker file with all that you need. I had to add some libraries for formatting the output and logging, and they were all there. That means your actual Python code is minimal and you don’t need to bring fastai (in this thread Laura shared her deployment tricks IF you do need to bring fastai into the deployment).
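
To give an idea, the handler ends up being something like this very rough sketch; the event format and preprocessing here are simplified placeholders, not the actual deployed code (that’s in the linked repo):

import base64, io, json
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

# Loaded once per container (outside the handler), so warm invocations skip this cost
model = torch.jit.load('model.pt', map_location='cpu').eval()
prep = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

def handler(event, context):
    # Simplified: assumes the client POSTs a base64-encoded image as the request body
    img = Image.open(io.BytesIO(base64.b64decode(event['body']))).convert('RGB')
    with torch.no_grad():
        out = model(prep(img).unsqueeze(0))[0]
    return {'statusCode': 200,
            'body': json.dumps({'output': out.tolist(),
                                'softmax': F.softmax(out, dim=0).tolist()})}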

UX, response time.

Inference takes roughly 0.2 seconds, which is really fast, but the overall time for the user, from selecting the image to getting the result, can be up to 30 s, or the request can even fail. The extra time is partly spent uploading the image from the client to the server, and downscaling it before uploading if needed. In real-life tests the response time was roughly 1 s, which is acceptable… except that the first request sometimes took up to 30 s. I think this is called a “cold start”, as AWS pulls the Lambda from storage. To minimize the impact I added some code that triggers a ping to the server as soon as you load the client page. The ping just returns “pong”, so it doesn’t consume much billing time, but it prompts AWS to get the Lambda function ready for the real inference call.

Advocacy

This summer I have had a small weekly slot about Impact Science on a Spanish national radio show, and we dedicated the last one to Artificial Intelligence and its impact on employment and society. I presented this tool as an example. You can listen to it (in Spanish) here (timestamp 2h31m): Julia en la Onda, Onda Cero.

Next steps

I’d love to get your feedback and ideas. And if you try to replicate it and have problems, let me know.

  • Re-train the model using a mobile-friendly architecture like MobileNetV2.
  • Re-train the model using as many currencies (and coins) as possible. The benefits of multi-category classification to detect the denomination should become visible as you add more currencies.
  • Add server code to upload a copy of the user images, as I did with the GCE deployment.
  • Smartphone apps with on-device inference.
21 Likes

I did a cucumber detection model that distinguishes English, Field, and Lemon cucumbers. It was fun to actually collect the images from Google. Because I used Google Colab I apparently couldn’t use the widgets, so I looked at the downloaded images on Google Drive to view and delete the bad ones.
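
(Side note: when the cleaning widgets aren’t available, fastai v1’s verify_images can at least prune files that fail to open, though it won’t catch mislabeled images. A minimal sketch with illustrative folder names:)

from fastai.vision import verify_images

# Illustrative: one folder of downloaded images per cucumber class
for cls in ['english', 'field', 'lemon']:
    verify_images(f'cucumbers/{cls}', delete=True, max_size=500)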

1 Like