Share your work here ✅


(Vedran Grcic) #967

Thank you :slight_smile: Glad you like it.
I’ve been thinking about a mobile app but will probably postpone it or let it pass, as I currently suck at mobile development and would probably waste a day or two learning the ropes, when someone half-competent could do it in a couple of hours. Also, I’d prefer it to be cross-platform (e.g. Xamarin), and I’m under no illusions that I’m good at UI.

That being said, if anyone is up for making a nice-looking Xamarin shell of a mobile app, it should be relatively easy to integrate, and I’d love to do so.

Also, and perhaps more interestingly, such a project would be a useful mobile starter for all the OTHER projects (so, like Jeremy’s starter but for mobile), and looked at that way it would be extremely useful.

Re sediments: I like the idea. I don’t know where you could get data, though. Sorry :slight_smile:


(Vedran Grcic) #968

Probably the easiest way would be to put them side by side (so, create a new image combining two alike, or two not-alike, images) and label according to type.

If I were doing it (it is slightly unusual), I’d do a small proof of concept so as not to waste too much time collecting data. I’d also take care to exclude image order. So let’s say A and B are similar while C and D are not. You’d want to create:

AB, similar
BA, similar
CD, not
DC, not

(Note that I’m doubling each pair: since you are stitching images, you don’t really care that AB and BA are differently ordered.)
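In case it helps, here’s a minimal sketch of that stitching step using PIL. The `stitch` helper and the file names are illustrative, not from any particular project:

```python
from PIL import Image

def stitch(path_a, path_b, out_path, size=(224, 224)):
    """Combine two images side by side into one labeled training image."""
    a = Image.open(path_a).convert("RGB").resize(size)
    b = Image.open(path_b).convert("RGB").resize(size)
    combined = Image.new("RGB", (size[0] * 2, size[1]))
    combined.paste(a, (0, 0))
    combined.paste(b, (size[0], 0))
    combined.save(out_path)

# Double each pair so the model can't exploit left/right order:
stitch("A.jpg", "B.jpg", "similar_AB.jpg")
stitch("B.jpg", "A.jpg", "similar_BA.jpg")
stitch("C.jpg", "D.jpg", "not_CD.jpg")
stitch("D.jpg", "C.jpg", "not_DC.jpg")
```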


(Gavin Stewart) #969

That seems as good a way to do it as any! Thank you!


(kelvin chan) #970

I am familiar only with Apple Core ML, and have only played around with TensorFlow Lite once.

So I’d like to state this from my own experience: it is easy only in a few simple cases (e.g. single-label classification). As soon as you venture off a bit, the difficulty and technical requirements skyrocket very steeply, so very few people will do anything real and new in a couple of hours. It’s worse if direct GPU programming is required; even fewer people have that intersection of skills. The good news is that at this stage it doesn’t take a very sophisticated UI to have something effective and impress users.

I’m not familiar with Xamarin, but cross-platform AI on mobile is a bit early, given that dedicated single-platform work is hard enough and the tools are still very buggy. ONNX may offer a bridge to that when it eventually comes. I would focus on a single platform for now; I assume you only need to take care of Core ML and TensorFlow Mobile.

I think Andrej Karpathy talked about a summer course last year where they showed how to do a complete end-to-end pipeline: Data -> Training -> Test -> Deploy to Mobile. I don’t know which mobile platform he used, though. Yes, it is an interesting exercise to provide a template project that does at least single-label classification. For Apple it is almost a drop-in/drag-and-drop sort of thing using their sample app. The only hazard is migrating the PyTorch model to an Apple Core ML model.
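For anyone attempting that migration, here is a hedged sketch of one possible route, going PyTorch -> ONNX -> Core ML. The `onnx_coreml` package and all file/model names here are assumptions on my part; check the current coremltools docs for the officially supported path:

```python
import torch
import torchvision

# Export a trained model to ONNX by tracing it with a dummy input
# of the expected shape.
model = torchvision.models.resnet34(pretrained=True)
model.eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["image"], output_names=["scores"])

# Convert the ONNX graph to Core ML (assumes the onnx-coreml package).
from onnx_coreml import convert
mlmodel = convert(model="model.onnx")
mlmodel.save("model.mlmodel")
```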


(Shawn P Emhe II) #971

I deployed a working version of the classifier on GCP at https://dinosaur-finder.appspot.com/.

This version has been trained on the dinosaurs below:

  • Tyrannosaurus rex
  • Velociraptor
  • Spinosaurus
  • Triceratops
  • Mosasaurus
  • Stegosaurus
  • Pterodactyl
  • Brachiosaurus
  • Allosaurus
  • Apatosaurus

(kelvin chan) #972

Nice. But I thought they said some dinosaurs were supposed to have hair?


(Shawn P Emhe II) #973

Hi Kelvin,
Yes, that is one of the challenges I was curious to see whether deep learning could tackle. I wrote about it a bit in the notebook I used for research and plan to post it after it has a little more polish (draft linked in my original post). Dinosaurs are depicted in a variety of ways, yet we can still look at the different drawings of T. rexes and triceratops and know what they are. Can a deep learning algorithm pick up on the same traits that we do?


(Ilia) #974

It is not really related to the deep learning stuff, but recently I realized that I use the argparse module a lot while writing various data preprocessing scripts, simple servers to host my models, etc. So I’ve decided to create a little guide on how one can use Python’s standard library to implement a CLI for scripts and utilities (including a small overview of more high-level third-party solutions), as a single reference point for myself.

It’s a very handy thing when you need to write a simple script to preprocess your data. The post could also help one better understand what is going on in the built-in fastai snippet intended to wrap Python utilities with a CLI, especially some interesting argparse tricks that let you write a custom and quite flexible solution.
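For reference, here is a minimal sketch of what such a CLI looks like with plain argparse (the script and its arguments are made up for illustration):

```python
import argparse

def main():
    parser = argparse.ArgumentParser(description="Resize images for training.")
    parser.add_argument("input_dir", help="folder with source images")
    parser.add_argument("-s", "--size", type=int, default=224,
                        help="target side length in pixels")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print each processed file")
    args = parser.parse_args()

    if args.verbose:
        print(f"Resizing images in {args.input_dir} to {args.size}x{args.size}")

if __name__ == "__main__":
    main()
```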


(kelvin chan) #975

Hi Shawn,
Well, the funny thing is that no one has ever seen a hairy dinosaur. ;-) You can try your app on depictions of hairy ones. If it doesn’t perform well, I am guessing you need an “add hair” data augmentation API. I don’t mean this as a joke: there are researchers looking into super-sophisticated data augmentation coming from the video game industry.


(ebby) #976

WaWoo


(Sachin Dev) #977

Thanks a lot, @jeremy, for your awesome fastai MOOC. It has helped me a lot with practical deep learning.

It was a shocker when the kernel I made got into Kaggle’s newsletter.


(https://www.kaggle.com/devilsknight/malaria-detection-with-pytorch)

This is my latest kernel, on classifying audio using deep learning. I found an interesting technique for converting audio to spectrograms and, to my surprise, trained a model with 97% accuracy. Do check it out if you’re working in the audio space.


(https://www.kaggle.com/devilsknight/sound-classification-using-spectrogram-images)
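For anyone curious about the audio-to-spectrogram step, here is a hedged sketch using librosa and matplotlib (the kernel’s exact pipeline may differ; the file names are illustrative):

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load the audio clip and compute a mel-scaled spectrogram in decibels.
y, sr = librosa.load("sample.wav")
S = librosa.feature.melspectrogram(y=y, sr=sr)
S_db = librosa.power_to_db(S, ref=np.max)

# Save the spectrogram as a plain image so an ordinary CNN image
# classifier can be trained on it.
plt.figure(figsize=(3, 3))
librosa.display.specshow(S_db, sr=sr)
plt.axis("off")
plt.savefig("sample.png", bbox_inches="tight", pad_inches=0)
plt.close()
```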


(Shawn P Emhe II) #978

Augmentation is one route and an interesting idea. But another approach is simply to ensure that the test set includes dinosaurs drawn in different ways. My training set did have a few dinosaurs with hair and feathers, so it should perform well on them. Unfortunately, all of the images I’ve found to test it with are already in my training set.


(kelvin chan) #979

Getting a representative test set just ensures you are measuring accuracy well; I don’t think it will improve the performance of your system.

It is considered cheating if some of your exact test images are also in the training set, so I hope you didn’t do that. I think you mean the same distribution, i.e. other haired ones but not the exact same images. But done that way, overall it isn’t that interesting…

The data augmentation route is the one that interests me, because it becomes slightly more practical. I.e., if you have a sample of clean, never-bearded faces, can you still recognize the person if he grows a beard? Doing it with dinosaurs is a bit odd, but it may still yield some interesting insight.


(kelvin chan) #980

This isn’t real work like most of what’s been posted here, but Happy Valentine’s:


(Shawn P Emhe II) #981

Yes, that’s what I was trying to say. Your comment prompted me to search for a few “hairy” dinosaur pictures to test with, but the first hits all looked familiar from the set I used for training and validation. They wouldn’t be a true test of the model’s ability to generalize.
So far I have been using images of my kid’s toys to test, and have asked friends with kids to do the same.


(Muhammed Talo) #982

Perfect!
BTW: the GitHub page link is broken.


(Ben) #983

I set up a resnet34 learner to distinguish between photos of my identical twin sons. It didn’t work: both training and validation losses were persistently high. The photo quality is pretty variable, and there were only 50 photos of each. I subsequently tried resnet50, which also didn’t work.
I decided that I wanted some positive results, so I set up a photo dataset of sons vs. daughter, which I thought would be easier to learn. I started with resnet50 on this set, and after 10 epochs the training loss had decreased but the validation loss had increased. So I tried unfreezing and running 10 more epochs. At the very end, the validation loss started to decrease, so I ran 10 more, and that worked (resnet50, unfrozen, with 20 epochs). Great! But I wonder how it would perform now on an independent test set…
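For context, a hedged sketch of the fastai v1 workflow described above; the folder layout, transforms, and sizes are illustrative, and the original code may differ:

```python
from fastai.vision import *

# Images arranged in class folders, e.g. data/sons and data/daughter.
data = ImageDataBunch.from_folder("data", train=".", valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet50, metrics=accuracy)

learn.fit_one_cycle(10)  # head only: train loss fell, valid loss rose
learn.unfreeze()         # fine-tune the whole network
learn.fit_one_cycle(10)  # valid loss finally started dropping...
learn.fit_one_cycle(10)  # ...and kept improving with more epochs
```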


#984

Thanks, @muhajir. You can find the notebook here; the code is the same, just the dataset is different. Basically, you can experiment with any data you want; it just requires you to provide text files with image URLs.
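As a hedged sketch of that URL-file workflow, using fastai v1’s download_images (the file and folder names are illustrative):

```python
from fastai.vision import download_images, verify_images

# One image URL per line in the text file; one destination folder per class.
download_images("urls_class_a.txt", "data/class_a", max_pics=200)
verify_images("data/class_a", delete=True)  # remove corrupt downloads
```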


(kelvin chan) #985

I see. OK, got it.

Actually, the toys may be a good source of training data. I assume you can use your phone to snap photos of them at different angles.


(Aman) #986

I just finished the first lesson and, as part of the exercise, made a simple classifier that differentiates images of my favorite football team, Manchester United FC, and our rivals, Liverpool FC.
I created the dataset myself via Google Images and achieved an accuracy of about 95% after about an hour’s worth of tinkering.

Here are some images from my validation set.

Here’s the notebook: