Share your work here ✅

As of 07/15/20, I was able to use these instructions for the Jetson Nano, but with several adjustments: I installed JetPack 4.4, torch 1.4.0, and torchvision 0.5.0. Use the instructions from NVIDIA for torch & torchvision (link).

Thanks!
Could you please share your code again? The last link does not work.

Thanks again !

Yes, I killed that link when I realised it was telling people my email address when they ran it. It’s on GitHub now, and there’s a little tiny button right at the top of it you can click to load it up on Colab. Here.

1 Like

https://kirankamath.netlify.app/blog/fastai-multilabel-classification-using-kfold-cross-validation

I’ve just released an update to fastinference, and two new libraries:

fastinference_pytorch and fastinference_onnx

The goal of these two is to be lightweight modules for running your fastai models with a familiar API. Currently it just supports tabular models, but vision and NLP are on the way! See below for a numpy example (there is zero fastai code being used :wink:)

Not to mention there is a speed boost here as well :slight_smile:

For more, the documentation is available:


Note: To use this you must export your model from fastinference with learn.to_fastinference
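
To give a feel for the numpy workflow, here is a hypothetical sketch; the `TabularLearner` class name, the export path, and the `predict` call are illustrative assumptions rather than the library’s confirmed API, so check the docs above for the real interface.

```python
import numpy as np
# Assumed import path and class name, purely for illustration
from fastinference_pytorch import TabularLearner

# Load the artifact produced earlier by fastinference's learn.to_fastinference()
learn = TabularLearner('models/export')  # hypothetical export location

# One raw input row as plain numpy; no fastai code is involved anywhere
row = np.array([[49, 'Private', 101320, 'Assoc-acdm', 12.0]], dtype=object)
print(learn.predict(row))  # assumed to return the decoded class and probabilities
```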

5 Likes

Just sharing another project I did. You can read the medium article here. It’s a bit long.

To sum it up and share some pictures, I did Kuzushiji character recognition from scratch. Kuzushiji characters look like this:

I learned how to write a simple transform (which honestly didn’t have to be a transform, but I did it for the knowledge), so they could look like this:

Then I put them through a custom CNN, writing all the layers first in PyTorch’s API and then in fastai’s API, so I could learn everything in between. Eventually I got a model to about 97% accuracy (there were sampling issues, so take that with a grain of salt). And, as a final exercise, I recreated a heatmap the way Jeremy did with his cat, so I could visualize the outputs of my convolutions. They looked something like this:


Learned some things about fiddling with my batch size, handling tensor shapes, watching GPU usage, etc. Was good fun, though all my training attempts were a little tiring.
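
For anyone who wants to try the same visualization, here is a minimal sketch in plain PyTorch (the custom CNN isn’t shown, so a torchvision resnet34 stands in): register a forward hook on the last conv block, average its activations over channels, and upsample the result to image size.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in model; swap in your own CNN and pick its last conv block
model = models.resnet34(pretrained=True).eval()
activations = {}

def hook(module, inp, out):
    # Stash the feature map produced during the forward pass
    activations['feat'] = out.detach()

# layer4 is the last conv block for resnets
handle = model.layer4.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224)   # stand-in for a normalized input image
with torch.no_grad():
    model(img)
handle.remove()

# Average the feature map over channels, then upsample to the input size
heatmap = activations['feat'].mean(dim=1, keepdim=True)
heatmap = F.interpolate(heatmap, size=img.shape[-2:], mode='bilinear',
                        align_corners=False).squeeze()
# heatmap is now (224, 224); overlay it on the image with matplotlib's imshow
```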

3 Likes

Hi @everyone

Please, I would like some guidelines. I am trying to perform object extraction from satellite imagery, but I don’t know where to start. Specifically, I would like to extract information about objects detected in the imagery. I am very new to the domain.

Thanks

Hi joell001,

Have you seen some of the course video material? It sounds like you should take a look at lesson 3, where Jeremy talks about multi-label classification. You can scroll through the lesson notes here: https://github.com/hiromis/notes/blob/master/Lesson3.md and see if it’s what you’re looking for. If it is, I would recommend taking the time to go through lessons 1 and 2 first.

Best,
Bjarke

1 Like

Great Job!

mrfabulous1 :smiley: :smiley:

1 Like

Hi all! Just published 2 web apps following lesson 2:

1- Web app identifying whether a person is wearing their mask properly or improperly (9% error rate)


2- Web app classifying endangered waterbird species present throughout the Prek Toal Reserve in Cambodia. This classifier has the potential to bring support to NGOs such as Osmose which ensure the protection of waterbird colonies throughout the reproductive cycle (15% error rate due to many mislabelled images – will need to seek expertise from the NGO)


Code for both models is up on GH!

5 Likes

Such a great and fantastic way to get your hands dirty early on, well done!

1 Like

I wrote a module for encoding tabular data into images and got top 9% on the kaggle titanic leaderboard with a CNN. :rofl:
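
As a rough idea of what encoding tabular rows into images can look like, here is a generic sketch of the idea (not necessarily the module’s exact approach): min-max scale each feature to a pixel intensity, then tile each row into a small square grid a CNN can consume.

```python
import numpy as np

def rows_to_images(X, side=8):
    """Min-max scale each feature to [0, 255], then tile each row into a
    side x side grayscale image, zero-padding any leftover pixels."""
    X = np.asarray(X, dtype=np.float32)
    lo, hi = X.min(axis=0), X.max(axis=0)
    scaled = (X - lo) / np.where(hi > lo, hi - lo, 1.0) * 255
    imgs = np.zeros((len(X), side * side), dtype=np.uint8)
    imgs[:, :X.shape[1]] = scaled.astype(np.uint8)
    return imgs.reshape(-1, side, side)

# e.g. a few Titanic-style numeric features (age, fare, class, sex) per row
demo = rows_to_images([[22, 7.25, 3, 0], [38, 71.28, 1, 1]])
print(demo.shape)  # (2, 8, 8) -- ready for a CNN dataloader
```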


16 Likes

Hi!

Lots of people confuse Olympic wrestling and pro wrestling (think WWE).

To make the distinction easier, I deployed a wrestling classifier on render to help out :wink:.

Here is a picture of a match from high school (in the green).

This is the first web application I have deployed, and this is also my first post.

Happy to learn with all of you and to join such a wonderful community!

wrestling-classifier.onrender.com

2 Likes

I wrote a little program using resnet34 to try to differentiate between cellos and violins!

As you can see in the first two images of the third row, it has some hiccups (perhaps because the Google Images “dataset” I compiled was kind of janky), but I’m pretty proud of it for a first try :).

2 Likes

I trained a collaborative filtering model on the Jester dataset to create a joke recommender system. For a new (non-existent) user, it finds the nearest real users by Euclidean distance, which works quite well; I’m rather happy with it.

github. heroku demo.
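
A minimal sketch of that cold-start trick, with illustrative variable names: take the new user’s few ratings, find the k closest real users by Euclidean distance on those jokes, and rank the remaining jokes by the neighbours’ average ratings.

```python
import numpy as np

def recommend_for_new_user(ratings, new_ratings, k=10, n_recs=5):
    """ratings: (n_users, n_jokes) matrix of known ratings.
    new_ratings: {joke_id: rating} given by the new user."""
    rated = np.array(sorted(new_ratings))
    target = np.array([new_ratings[j] for j in rated])
    # Euclidean distance to every real user, over the jokes both have rated
    dists = np.linalg.norm(ratings[:, rated] - target, axis=1)
    neighbours = np.argsort(dists)[:k]
    # Average the nearest users' ratings and recommend unseen jokes
    scores = ratings[neighbours].mean(axis=0)
    scores[rated] = -np.inf
    return np.argsort(scores)[::-1][:n_recs]

# Toy example: 5 users x 6 jokes, Jester-style ratings in [-10, 10]
R = np.random.uniform(-10, 10, size=(5, 6))
print(recommend_for_new_user(R, {0: 7.5, 3: -2.0}, k=3, n_recs=3))
```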


6 Likes

Very nice

Hi all,

I just finished making a Nerf Toy Blaster Classifier which classifies what type of toy gun is in an image!

Here are some results!


In this image, it got 2 blasters wrong. I believe the first one (row 1, col 2) was mistaken because its colors are very similar to the colors that Nerf blasters typically have. The second one (row 2, col 3) was very hard to classify, as it seems to have been taken apart and possibly repainted. As a human, I would also classify this as Nerf because of its orange-and-grey color scheme, similar to the one in the Nerf Fortnite franchise.


Here everything worked!

I was using Trax to build a deep-net N-gram model, which is seq-to-seq.

As a result, from less than 100 KB of text from the 2019 Joker script, I can generate a small new section of Joker story text.

- arthur? - my name’s arthur.
- so awful, isn’t it? - mmm-hmm.
but in all seriousness, i mean, these rats are…
no.
it’s 42 degrees at 10:30 on this thursday, october 15th.
is it just me,
he’s a busy man.
i’m pursuing a career in stand-up comedy.
there’s nothing funny about that.
- it is certainly tense. - hmm.
or comes down with typhoid fever.
the audience, all that stuff?
do you people call it miniature golf
oh, okay. well, there’s something special about you,
i live right here in the city
no matter who they are or where they live.

For more detail, please look here: https://github.com/JonathanSum/Tmp_Current_Working/blob/master/2019Joker.txt
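
For reference, the kind of Trax model this describes can be sketched roughly like this (hyperparameter sizes are illustrative): a stack of GRUs trained as a character-level language model, following Trax’s usual deep n-gram recipe.

```python
from trax import layers as tl

def gru_language_model(vocab_size=256, d_model=512, n_layers=2, mode='train'):
    """Character-level GRU language model in Trax (illustrative sizes)."""
    return tl.Serial(
        tl.ShiftRight(mode=mode),            # position t only sees chars < t
        tl.Embedding(vocab_size, d_model),   # characters -> dense vectors
        [tl.GRU(n_units=d_model) for _ in range(n_layers)],
        tl.Dense(vocab_size),                # scores over the next character
        tl.LogSoftmax(),
    )

print(gru_language_model())
```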

Great Job!

mrfabulous1 :smiley: :smiley:

I know a few people are using my ImageScraper notebook for creating datasets.

Thanks to @IegorT, who pointed out how to feed search constraints into DDG, you now have parameters for image size, image type, image layout and image color.

Even if you think you don’t care about these params, you now get square images and only photos by default which means less cleaning for you and better inputs to your model.

Version 2. Now I just need to add pagination to the image cleaner…
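
For the curious, here is roughly what those constraints look like on the wire, sketched against DuckDuckGo’s unofficial i.js endpoint; the endpoint is undocumented, so treat the token handling and filter syntax below as assumptions and use the notebook above as the working reference.

```python
import re
import requests

def ddg_image_urls(term, size='Large', im_type='photo',
                   layout='Square', color=''):
    # DDG issues a per-query 'vqd' token on the HTML search page
    res = requests.post('https://duckduckgo.com/', data={'q': term})
    vqd = re.search(r'vqd=[\'"]?([\d-]+)', res.text).group(1)
    # Constraints are packed into the comma-separated 'f' filter parameter
    f = ','.join(f'{k}:{v}' for k, v in [('size', size), ('type', im_type),
                                         ('layout', layout), ('color', color)] if v)
    params = {'q': term, 'o': 'json', 'vqd': vqd, 'f': f, 'p': '1'}
    r = requests.get('https://duckduckgo.com/i.js', params=params,
                     headers={'User-Agent': 'Mozilla/5.0'})
    return [hit['image'] for hit in r.json()['results']]
```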

5 Likes