Share your work here ✅

That seems as good a way to do it as any! Thank you!

I am familiar only with Apple Core ML, and have only played around with TensorFlow Lite once.

So I'd like to speak from my own experience. It is easy only in a few simple cases (e.g. single-label classification); as soon as you venture off a bit, the difficulty and technical requirements skyrocket very steeply. Very few people will do anything real and new in a couple of hours. It's worse if direct GPU programming is required; then even fewer people have that intersection of skills. The good news is that at this stage, it doesn't take a very sophisticated UI to build something effective and impress users.

I'm not familiar with Xamarin, but cross-platform mobile AI is a bit early, given that dedicated single-platform work is hard enough. Tools are still very buggy. ONNX may offer a bridge when it eventually matures. I would focus on a single platform for now; I assume you only need to take care of Core ML and TensorFlow Mobile.

I think Andrej Karpathy talked about a summer course last year where they showed how to do a complete end-to-end pipeline: data -> training -> test -> mobile deployment. I don't know what mobile platform he used, though. Yes, it would be an interesting exercise to provide a template project that does at least single-label classification. For Apple, it is almost a drop-in/drag-and-drop sort of thing using their sample app. The only hazard is migrating the PyTorch model to an Apple Core ML model.
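For reference, here is a minimal sketch of that PyTorch-to-Core ML migration using coremltools, assuming a recent version that converts TorchScript models directly; the torchvision ResNet is just a stand-in for your trained model:

```python
import torch
import torchvision
import coremltools as ct

# A trained PyTorch classifier; a pretrained torchvision ResNet stands in here.
model = torchvision.models.resnet34(pretrained=True)
model.eval()

# Core ML conversion works from a traced (TorchScript) model.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# ImageType lets the iOS app feed images to the model directly.
mlmodel = ct.convert(traced, inputs=[ct.ImageType(shape=example_input.shape)])
mlmodel.save("Classifier.mlmodel")
```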

I deployed a working version of the classifier on GCP at https://dinosaur-finder.appspot.com/.

This version has been trained on the dinosaurs below:

  • Tyrannosaurus rex
  • Velociraptor
  • Spinosaurus
  • Triceratops
  • Mosasaurus
  • Stegosaurus
  • Pterodactyl
  • Brachiosaurus
  • Allosaurus
  • Apatosaurus

Nice. But I thought they said some dinosaurs were supposed to have hair?

Hi Kelvin,
Yes, that is one of the challenges I was curious to see if deep learning could tackle. I wrote about it a bit in the notebook I used for research and plan to post it after it has a little more polish (draft linked in my original post). Dinosaurs are depicted in a variety of ways, yet we can still look at different drawings of t-rexes and triceratops and know what they are. Can a deep learning algorithm pick up on the same traits that we do?

It is not really related to the deep learning stuff, but recently I realized that I use the argparse module a lot while writing various data preprocessing scripts, simple servers to host my models, etc. So I've decided to create a little guide on how one can use Python's standard library to implement a CLI for their scripts and utilities (including a small overview of more high-level third-party solutions), as a single reference point for myself.

It's a very handy thing when you need to write a simple script to preprocess your data. The post could also help one better understand what is going on in fastai's built-in snippet intended to wrap Python utilities with a CLI, especially the argparse tricks that allow one to write a custom and quite flexible solution.
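As a tiny taste of what the guide covers, here is a minimal argparse sketch; the script's purpose and options are made up for illustration:

```python
import argparse
from pathlib import Path

def parse_args():
    # Build a small CLI for a hypothetical image-preprocessing script.
    parser = argparse.ArgumentParser(description="Resize images for training.")
    parser.add_argument("src", type=Path, help="folder with raw images")
    parser.add_argument("--size", type=int, default=224,
                        help="target side length in pixels")
    parser.add_argument("--out", type=Path, default=Path("processed"),
                        help="output folder")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(f"Resizing images from {args.src} to {args.size}px into {args.out}")
```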


Hi Shawn,
Well, the funny thing is no one has ever seen a hairy dinosaur. ;-) You can try your app on depictions of hairy ones. If it doesn't perform well, I'm guessing you need an "add hair" data augmentation API. I don't mean this as a joke; there are researchers looking into super-sophisticated data augmentation coming from the video game industry.

WaWoo

Thanks a lot, @jeremy, for your awesome MOOC on fastai. It has helped me a lot with practical deep learning.

It was a shocker when the kernel I made got into Kaggle’s Newsletter.


(https://www.kaggle.com/devilsknight/malaria-detection-with-pytorch)

This is my latest kernel, on classifying audio using deep learning. I found an interesting technique for converting audio to spectrograms and, to my surprise, trained a model with 97% accuracy. Do check it out if you're working in the audio space.


(https://www.kaggle.com/devilsknight/sound-classification-using-spectrogram-images)
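For the curious, here is a minimal sketch of the audio-to-spectrogram idea using librosa; it is not necessarily the exact method in the kernel, and the filenames are placeholders:

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load an audio clip (the filename is a placeholder).
y, sr = librosa.load("clip.wav")

# Compute a mel spectrogram and convert power to decibels.
mel = librosa.feature.melspectrogram(y=y, sr=sr)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Save it as an image so an ordinary CNN image classifier can train on it.
librosa.display.specshow(mel_db, sr=sr)
plt.axis("off")
plt.savefig("clip.png", bbox_inches="tight", pad_inches=0)
```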


Augmentation is one route and an interesting idea. But another approach is just to ensure that the test set includes dinosaurs drawn in different ways. My training set did have a few dinosaurs with hair and feathers, so it should perform well with them. Unfortunately all of the images I’ve found to test it are already in my training set.

Getting a representative test set just ensures you are measuring accuracy well, but I don't think it will improve the performance of your system.

It is considered cheating if some of your exact test images are also in the training set, so I hope you didn't do that. I think you mean the same distribution, i.e. other hairy ones but not the exact same images. But done this way, it isn't that interesting overall…

The data augmentation route is the one that interests me, because it becomes slightly more practical. That is, if you have a sample of clean-shaven faces and never any bearded ones, can you still recognize the person if he grows a beard? Doing it with dinosaurs is a bit odd, but it may still yield some interesting insight.

This isn't real work like most here have posted, but Happy Valentine's Day:


Yes, that's what I was trying to say. Your comment prompted me to search for a few "hairy" dinosaur pictures to test with, but the first hits all looked familiar from the set I used for training and validation. They wouldn't be a true test of the model's ability to generalize.
So far I have been using images of my kid’s toys to test, and have asked friends with kids to do the same.

Perfect!
btw: the GitHub page link is broken

I set up a resnet34 learner to distinguish between photos of my identical twin sons – it didn’t work. Both training and validation losses were persistently high. The photo quality is pretty variable and there were only 50 photos of each. I subsequently tried resnet50, which also didn’t work.
I decided that I wanted some positive results, so I set up a photo dataset of sons vs. daughter, which I thought would be easier to learn. I started with resnet50 on this set, and after 10 epochs the training loss had decreased but the validation loss had increased. So I tried unfreezing and running 10 more epochs. At the very end, the validation loss started to decrease, so I ran 10 more, and that worked (resnet50, unfrozen, w/20 epochs). Great! But I wonder how it would perform now on an independent test set…
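For anyone who wants to reproduce this, here is a minimal sketch of that workflow in fastai v1; the data path, folder layout, and epoch counts are illustrative:

```python
from fastai.vision import *

# Photos arranged in one folder per class, e.g. data/sons and data/daughter.
data = (ImageDataBunch.from_folder(Path("data"), valid_pct=0.2, size=224,
                                   ds_tfms=get_transforms())
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(10)   # train only the new head first

learn.unfreeze()          # then fine-tune the whole network
learn.fit_one_cycle(10, max_lr=slice(1e-5, 1e-3))
```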

Thanks, @muhajir. You can find the notebook here; the code is the same, just with a different dataset. Basically, you can experiment with any data you want; it just requires you to provide text files with image URLs.
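In case it helps anyone, a minimal sketch of downloading from such a URL file with fastai v1; the file and folder names are placeholders:

```python
from fastai.vision import *

# One image URL per line in the text file (names here are placeholders).
dest = Path("data/class_a")
dest.mkdir(parents=True, exist_ok=True)
download_images(Path("urls_class_a.txt"), dest, max_pics=200)

# Remove any images that fail to open before training on them.
verify_images(dest, delete=True)
```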

I see. OK, got it.

Actually, the toys may be a good source of training data. I assume you can use your phone to snap photos of them from different angles.

I just finished the first lesson and, as part of the exercise, made a simple classifier that differentiates images of my favorite football team, Manchester United FC, and our rivals, Liverpool FC.
I created the dataset myself via Google Images and achieved roughly 95% accuracy after about an hour of tinkering with it.

Here are some images from my validation set.

Here’s the notebook:

Hi everyone,
I gave a presentation on image augmentations at a meetup hosted by @aakashns, where I presented the various transforms supported by the library.
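For anyone who wants to play with them, here's a minimal sketch of applying those transforms in fastai v1; the image path is a placeholder:

```python
from fastai.vision import *

# get_transforms() returns (train_tfms, valid_tfms): random flips, rotation,
# zoom, lighting, and warp changes, all with sensible defaults.
tfms = get_transforms(flip_vert=False, max_rotate=10.0, max_zoom=1.1)

# Preview a single augmented image (the path is a placeholder).
img = open_image(Path("images/dog.jpg"))
img.apply_tfms(tfms[0], size=224).show()
```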

I hope @sgugger will excuse me for using doggy images this time :smiley:

Link to Kernel/Code
Link to the video
Blog Post: TBA

Regards,
Sanyam.


I have uploaded the Ancient Language data set on Kaggle. Check it out here:


It contains a total of 400 images of 8 languages, with approximately 50 images for each. I had to write a script to split the images into train, valid and test data sets.
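A minimal sketch of such a split script (not necessarily the original one; it assumes the images sit in one folder per class and uses illustrative 70/15/15 ratios):

```python
import random
import shutil
from pathlib import Path

random.seed(42)
src, dst = Path("images"), Path("dataset")

# Each class lives in its own subfolder under src/.
for cls in (d for d in src.iterdir() if d.is_dir()):
    files = sorted(cls.glob("*"))
    random.shuffle(files)
    n = len(files)
    splits = {"train": files[:int(0.7 * n)],
              "valid": files[int(0.7 * n):int(0.85 * n)],
              "test":  files[int(0.85 * n):]}
    for split, split_files in splits.items():
        out = dst / split / cls.name
        out.mkdir(parents=True, exist_ok=True)
        for f in split_files:
            shutil.copy(f, out / f.name)
```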

Here is the kernel for the ancient language classifier made using fast.ai:
https://www.kaggle.com/nitron/ancient-language-classifier
It was an amazing experience to work on this! I will further improve the classifier after completing the next fast.ai lessons.
I will now move on to fast.ai lesson 2 :slight_smile:
