Share your V2 projects here

Thanks! That app is cool, have you come across others? I haven't seen much.

The colors are the activations the model learned from the best swings, like what Jeremy did with the pets in v3's lesson 6 notebook:
[heatmap image]

For my model, the colors mapped out like this:

[color key image]

So if a swing has yellow/white squares, it's saying it matches the best swings. No colors means a 59% or lower match to the best swings.
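In case anyone wants to build a similar overlay, here's a rough sketch of the lesson-6-style heatmap using a plain PyTorch forward hook (model, x and img are placeholders for your own learner's model, input tensor and original image, not my exact code):

import torch
import matplotlib.pyplot as plt

acts = []
def store_acts(module, inp, out): acts.append(out.detach().cpu())

body = model[0]                           # convolutional body of a cnn_learner model
handle = body.register_forward_hook(store_acts)
model.eval()
with torch.no_grad(): model(x[None])      # x: one normalized image tensor (3, H, W)
handle.remove()

heatmap = acts[0][0].mean(0).numpy()      # average the activations across channels
h, w = img.shape[:2]                      # img: the original image as an HWC array
plt.imshow(img)
plt.imshow(heatmap, alpha=0.6, extent=(0, w, h, 0), interpolation='bilinear', cmap='magma')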

3 Likes

Had some more fun with my bird classifier app from last week’s assignment.
Added a searchable entry box with the 150 bird species from the dataset.
When selecting a species, the URL entry box is populated with a sample URL that can be submitted to the model for identification.
The app is deployed on a CentOS 7 server.
You can find it here: https://birds.smbtraining.com/bird_name
And the code: https://github.com/sylvaint/fastai2-bird-classifier
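Under the hood, the predict-from-URL step boils down to roughly this (just a sketch, not the exact code in the repo; 'export.pkl' and url are placeholder names):

import io, requests
from fastai.vision.all import load_learner, PILImage   # 'fastai2.vision.all' in the pre-release package

learn = load_learner('export.pkl')                      # placeholder path to the exported model
resp = requests.get(url, timeout=10)                    # url: the sample image URL from the entry box
img = PILImage.create(io.BytesIO(resp.content))
species, idx, probs = learn.predict(img)
print(species, float(probs[idx]))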

8 Likes

Very nice. I especially like the URL feature so I don’t have to go out on the web and get an image to test it! Also I found out that I am a California Condor with 83% probability. Perhaps my big nose resembles a beak… :wink:

5 Likes

That looks awesome. I would be happy to learn about your experience using Seeme and where we can improve.

Feel free to reach out if you need help anywhere…

FileUpload Widget 101:

When doing my project, which uploads non-image data (audio), I was forced to figure out how the FileUpload widget really works. This isn't as easy as it sounds, because the online documentation is a work in progress. But lucky for you, I've got some working code you can use.

In the example below I've declared a global metadata variable for debugging purposes, so you can see what it contains after you upload something. To get this function to work, we observe the ‘data’ trait on the widget as follows:

 btn_upload.observe(_handle_upload, names=['data'])

Now here's the code. You can substitute your own code for my processAudioData function:

metadata = None   # populated by the callback, kept global for debugging/inspection
def _handle_upload(change):
    global metadata
    # btn_upload.value is a dict keyed by filename; take the most recent upload
    lastkey = list(btn_upload.value.keys())[-1]
    uploadDict = btn_upload.value[lastkey]
    values = list(uploadDict.values())
    metadata = values[0]           # the 'metadata' dict: name, type, size, ...
    filename = metadata['name']
    size = metadata['size']
    fileData = values[-1]          # the 'content' entry: the raw uploaded bytes

    processAudioData(filename, fileData, size)
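And a minimal wiring sketch to go with it: the widget itself plus a stub processAudioData so the callback above has something to call (the accept filter and the ipywidgets 7.x dict-style value are assumptions about your setup):

import ipywidgets as widgets

btn_upload = widgets.FileUpload(accept='.wav', multiple=False)   # accept filter is just an example

def processAudioData(filename, fileData, size):
    # stub: replace with your own audio handling
    print(f'received {filename} ({size} bytes)')

btn_upload.observe(_handle_upload, names=['data'])
btn_upload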
2 Likes

fastGUI (just following the naming based on a number of projects that start with ‘fast’ :slight_smile: ) is a project inspired by last week's lesson using voila and binder. It's an easy way to view images, the number of classes, and the number of images per class, and to choose an image and view the effects of augmentations (work in progress, as there seems to be an issue with rendering in voila, posted here: https://forums.fast.ai/t/issue-viewing-images-in-voila/68221).
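Illustrative only: the class and per-class image counts boil down to something like the snippet below, assuming an ImageNet-style folder layout (path is a placeholder, and this is not necessarily the code in the repo):

from collections import Counter
from fastai.vision.all import get_image_files, parent_label

files = get_image_files(path)                       # path: root folder of the dataset
counts = Counter(parent_label(f) for f in files)    # class name = parent folder name
print(f'{len(counts)} classes, {len(files)} images')
for cls, n in counts.most_common(): print(cls, n)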

[fastGUI screenshots]

10 Likes

40% faster training with a scikit-learn-like API for numpy arrays

If you use numpy arrays as input to your model, you may be interested in a new package I’ve created to speed up training.

The API looks like this:

dsets = NumpyDatasets(X, y=None, tfms=tfms, splits=splits, inplace=True)
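As a rough usage sketch, splits can be built with fastai's RandomSplitter, assuming the usual (train_idxs, valid_idxs) convention (see the link below for full details and the exact API):

from fastai.data.all import RandomSplitter

splits = RandomSplitter(valid_pct=0.2, seed=42)(range(len(X)))   # (train_idxs, valid_idxs)
dsets = NumpyDatasets(X, y, tfms=tfms, splits=splits, inplace=True)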

You can find more details at this link:

6 Likes

The experience was great and seamless, but why can't you make the application public?

The research questions at the end of chapter 4 of the fastai book include one where you have to reproduce the notebook using the full MNIST dataset instead. I tried my hand at it and I'm happy to share my results:
GitHub repository:

Gist:
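For anyone who wants to try the same exercise, this is roughly how the full MNIST dataset can be loaded in fastai2 (a sketch, not necessarily identical to the notebook in my repo):

from fastai.vision.all import *

path = untar_data(URLs.MNIST)            # full MNIST: 'training'/'testing' folders, one subfolder per digit
dblock = DataBlock(
    blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
    get_items=get_image_files,
    splitter=GrandparentSplitter(train_name='training', valid_name='testing'),
    get_y=parent_label)
dls = dblock.dataloaders(path)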

2 Likes

I'm not able to keep up with the course, but it's great to see a lot of inspiring projects. For my part, as a project for this course, I wanted to work on model interpretability, but I couldn't start until a few hours ago.

I still have to do some cleaning/commenting on the notebook (hopefully soon) but here is an example based on the pet notebook:

  • on the left the original image (in b/w)
  • on the right the pixels of the original image where the model seems to focus

[example image]
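As a rough idea of the general approach before I publish the cleaned-up notebook, an input-gradient saliency map looks something like this (a generic sketch rather than my exact code; model and x are placeholders):

import torch

x = x.clone().requires_grad_(True)          # x: one normalized image tensor, shape (3, H, W)
model.eval()
out = model(x[None])
out[0, out.argmax()].backward()             # gradient of the top-class score w.r.t. the input
saliency = x.grad.abs().max(dim=0).values   # (H, W) map of per-pixel influence, e.g. plt.imshow(saliency.cpu())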

4 Likes

Hi all,

I have written a new post about my exploratory use of fastai2 on the PEER Hub ImageNet (PHI) dataset that is of interest in the built environment (an obvious area of focus for my company Arup), getting an accuracy of 93.4% in <10 epochs of training – could not quite beat the 95% achieved by the winners in 2018! =P

I modified the ImageClassifierCleaner actions to get better traceability of the deletions/corrections that I made to the data labels – fastai2 made it super easy to do these things!

import shutil

for idx in cleaner.delete():
    # rename instead of unlinking (the default action), so deletions stay traceable
    delname = "%s/%s.deleted" % (str(cleaner.fns[idx].parent), cleaner.fns[idx].name[:-4])   # strip the 4-char extension, e.g. '.jpg'
    shutil.move(str(cleaner.fns[idx]), delname)
for idx, cat in cleaner.change():
    shutil.move(str(cleaner.fns[idx]), path/cat)   # move re-labelled images into their new class folder

Credits to PEER for collecting, curating, and making available their Φ-Net dataset, and designing the PHI Challenge 2018 tasks for machine learning. Special thanks to @muellerzr for his very informative posts on fastai2 API (e.g. on DataBlock).

Thanks.

Yijin

1 Like

Hi Yijin, nice post on your blog. Very readable. I was lucky enough to get 1st in 4 of the 8 tasks in this challenge (2nd overall). There are a couple of ‘competition tricks’ involved beyond network training. Perform image hashing to identify dupes, then 1) use it to remove duplicates from training, and 2) use it to obtain ground truths where test images appear in the train set. You can then use inter-task findings: a model trained to classify damage levels may also be useful when used in concert with a model trained specifically to check whether there was any damage at all, and similarly, a model trained to detect collapsed structures may also be useful for classifying moderate or heavy damage.

The results communication I have from PeerHub said “The dataset used in this challenge is a Beta version, which contains some wrong labels and duplications. It will be further cleaned and released in the mid of April [2019].” So this may explain why you were not able to reach the previous results. I’ll message you the hyperparameters used for task 1, which you blogged about.
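To illustrate the hashing step: exact duplicates can be found with plain file hashes (a perceptual hash, e.g. imagehash, also catches near-duplicates); the folder names here are placeholders:

import hashlib
from pathlib import Path

def md5(fn): return hashlib.md5(Path(fn).read_bytes()).hexdigest()

train_hashes = {md5(f): f for f in Path('train').rglob('*.jpg')}               # placeholder folders
dupes = [f for f in Path('test').rglob('*.jpg') if md5(f) in train_hashes]     # test images seen in training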

2 Likes

Thanks for your comment : )

That’s amazing! Well done!

Those are indeed very useful tips and tricks. In my quick runs and data cleaning, I did notice some duplicates that I removed, but did not go any further than simple checks by eye (using the fastai2 ImageClassifierCleaner). And I did not think about checking file hashes – will keep this tip in mind for future data checks!

; )

Did you combine any of the tasks into multi-label classification? I was wondering whether that will add further useful info/data for model training.

I only found out about their dataset release after I had done my fastai2 quick-explore. Might circle back to look at the cleaned and released dataset, and the rest of the eight tasks, if/when I have some time again.

Thanks.

Yijin

Does anyone know the former Swedish car manufacturer “Saab”?

I have implemented a classifier for these models:
9-3, 9-5, 9000, 900

and finally managed to run it on mybinder via voila:

If you are interested in the repository:
voila code: https://github.com/we-make-ai/saab-model-classifier-voila
notebook for creating the model: https://github.com/we-make-ai/saab-model-classifier

1 Like

Hello, Czech study group here. So far, we have deployed the following models:

  1. Karol Pal trained a “plastic bag or jellyfish?” classifier: https://plastic-jelly.westeurope.cloudapp.azure.com/
  2. Vlasta Martinek trained a “Cyberpunk or Steampunk?” classifier (web, GitHub)
  3. Petr Simecek (me) trained a Steve Jobs vs Rambo classifier (GitHub)
  4. Zdenek Hruby trained a classifier of architectural styles (web, GitHub)

I personally really enjoyed using the app from my cell phone (i.e. taking a photo instead of uploading one) and making faces at my Jobs vs. Rambo classifier. I also tested other kinds of pictures (see below). Next, I am going to use the Google Photos API and classify all my photos there.

[example photos]

2 Likes

I used the bear classifier code nearly one-to-one to create a wild garlic vs lily-of-the-valley classifier (getting Binder to run the app was not easy but the forums were helpful).

Both plants are very common in Germany.
Wild garlic (German: Bärlauch) is a plant whose leaves are used for making tasty soups and pestos, while lily-of-the-valley (German: Maiglöckchen) is a very similar-looking plant that is highly poisonous!
Enthusiastic visitors to Germany who want to try wild garlic recipes need to be very careful while picking these leaves! This app can help :wink:

Jokes aside, I got 86% accuracy when I used the defaults (150 images of each class, standard transforms, just like the bear classifier). Over the next couple of days, I might look more into improving the model.

2 Likes

This is very cool. Do you have a notebook for how you set it up? I would love to test it out on basketball jumps.

I wrote a callback for W&B (Weights & Biases) to compare and monitor models (it has taken me several months already!).
The goal is for it to be easy to use: just add WandbCallback to your learner and it will log all metrics, the parameters you used in every function, upload the model, monitor computer resources, etc.
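Typical usage looks like this (dls is assumed to be an existing DataLoaders, and the project name is a placeholder):

import wandb
from fastai.callback.wandb import WandbCallback
from fastai.vision.all import cnn_learner, resnet34, error_rate

wandb.init(project='my-project')                 # placeholder project name
learn = cnn_learner(dls, resnet34, metrics=error_rate, cbs=WandbCallback())
learn.fine_tune(3)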

This was a side project as I was playing with fastai2 to try to make my own colorizer but it became much larger than intended and I still see room for improvement.

You can find more details in my full post.

9 Likes

Hello Everyone,

I have written an article on how to do data augmentation on audio files in Python with the help of the librosa library.
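A quick taste of the kind of augmentations covered (the file name and parameter values here are just illustrative):

import numpy as np
import librosa

y, sr = librosa.load('example.wav')                           # placeholder file
stretched = librosa.effects.time_stretch(y, rate=0.8)         # slow the clip down
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)    # shift up by 2 semitones
noisy = y + 0.005 * np.random.randn(len(y))                   # inject white noise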

Please let me know your feedback, and please do share.

Thanks

3 Likes

My first use of fastai v2 was to participate in an ICLR 2020 (International Conference on Learning Representations) conference challenge on Computer Vision for Agriculture (https://www.cv4gc.org/cv4a2020/#wheat), classifying types of crop disease.
I placed 3rd out of 300+ using fastai v2 pretty much straight out of the box, except for hooking in an external senet154 model from Ross Wightman's excellent model zoo. It was interesting in that the best-performing image size was very large, at 570px.
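For anyone curious, plugging a timm model into a fastai v2 Learner looks roughly like this (a sketch; dls, the metric and the schedule are placeholders rather than my competition settings):

import timm
from fastai.vision.all import Learner, accuracy

model = timm.create_model('senet154', pretrained=True, num_classes=dls.c)   # dls: an existing DataLoaders
learn = Learner(dls, model, metrics=accuracy)
learn.fit_one_cycle(5, 1e-3)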

15 Likes