Share your work here ✅

Hi joeldick, would this help?

mrfabulous1 :smiley::smiley:

If you are interested in this, then you may find the following article interesting.
Looks like they used an autoencoder trained on a mixture of patches and full paintings.


It’s fantastic reading about all the great ideas and impressive implementations. There’s still a long way for me to go. fast.ai definitely helps even non-software-engineers deploy surprisingly strong solutions :wink:

The dataset I used is the “Fruit recognition from images using deep learning” dataset (downloadable from GitHub), which at the time of writing had 120 classes and over 82,000 images.


After 3 epochs I arrived at a training loss of 0.040127, a validation loss of 0.001555, and an error rate of 0.000417. Training took 5:29 min on a Google Colab GPU runtime. I found these results incredible: only 4 misclassified images in the 9,600-image validation set.

The next step is finding my next project, one I’d also like to deploy as a web app. An awesome trip so far :smiley:


Hi,
I am interested in Reinforcement Learning for games. Before starting the fast.ai course, I programmed a Connect Four self-learning agent. I thought it would be interesting to see to what degree a 2-class classifier can identify winning positions, i.e. 4 noughts or crosses in a row, column or diagonal.
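The target concept here is easy to state exactly in code, which makes it a handy sanity check when labelling generated boards. A minimal pure-Python sketch of the win test (the names are mine for illustration, not from the notebook):

```python
def has_four_in_a_row(board, player):
    """Return True if `player` has four connected pieces on `board`.

    `board` is a list of rows (6 rows x 7 columns for Connect Four),
    where each cell holds a player mark or None.
    """
    rows, cols = len(board), len(board[0])
    # Directions to scan: right, down, down-right diagonal, down-left diagonal.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                if all(
                    0 <= r + i * dr < rows
                    and 0 <= c + i * dc < cols
                    and board[r + i * dr][c + i * dc] == player
                    for i in range(4)
                ):
                    return True
    return False
```

A 2-class image classifier on board screenshots is, in effect, learning to approximate exactly this predicate from pixels.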


The results are a mixed bag: a 90% success rate after providing 1,200 training and 400 validation images. I feel a bit disappointed because I can’t spot any discernible pattern in the top losses; the failure modes seem almost random. But I have learned a lot.

If you are interested in the full journey, feel free to read the notebook.

I am looking forward to your comments and hints on how to improve the accuracy!
Regards, Marius


After watching Jeremy build something in Lesson 2, I started looking for examples to classify and came across a very interesting problem. I wanted to see how a classifier might handle three monuments which look quite similar:

  • India gate
  • Arc de Triomphe
  • Washington square arch

As you can see, they are quite similar in appearance, and only a close look at their architecture differentiates them. The model did pretty well on a set of ~300 images per class.

Although the error rate increased in the last epoch, I let it go and carried on with the rest of the code.


Here are the corresponding LR plot and confusion matrix:

Curious, I looked at the images that were confusing the model, and a lot of them were irrelevant. I wanted to use the widget Jeremy demonstrated, but since I am using Google Colab, I couldn’t.

Overall, I am quite surprised by the results and love how it is going so far!


Hi anujmenta, nice work!

I also use Google Colab.
If I do not have too many images in my dataset, I prune them by hand before I start training; this often helps make the model more accurate.

However, your model appears to be working well!

A wonderful person called muellerzr has created a widget that apparently works on Google Colab, which may help now or in the future: New Widget! Class Confusion - With Google Colab Support!

Have a fun day!

mrfabulous1 :smiley::smiley:


Hi everybody,
once I saw the possibility of implementing something real, I tried to make a metal corrosion/rust detector. In 2013 we tried to do this at my company and didn’t finish it at the time. The goal was to inspect communication transmission towers and scale the job of an inspector, as we used to have one person checking 400 towers. If we could filter that from images, maybe we could improve his job.

It was a little hard to find a good number of good images for the training set, because Google gave me too much garbage. In the end I worked with around 600 images, most of them with corrosion on some metal part; around a hundred were without corrosion. So my model was supposed to say whether an image contains rusted metal or not. My results were quite poor compared with those shown in Lesson 2: the error rate was around 14%. But still, I created a working model online on my own, without much knowledge, something we didn’t manage to do 6 years ago. I should say that at the beginning, the project here only aimed to define the workflow for taking the tower pictures and sending them to the inspectors, so we built up knowledge about images, photography, and drones; only after a while did we see that deep learning could help us.

Anyway, my model is online and I’d love it if people could test it: https://doc.cartola.org/. You can send images of rusted or clean metal and tell me whether it worked. The site is in Portuguese, my native language, as I wanted my colleagues to use it. If you send it an image or URL, the result “Resultado: com ferrugem” means it detected rust in the image, and “sem ferrugem” means no rust (com = with, sem = without).

I basically used the project from Natalie Downe, who created a cougar-or-not web application over a weekend and won the Science Hack Day award in San Francisco (as mentioned in Lesson 2). Her project was my starting point, and I basically adapted it for my case without upgrading it.

Thanks.

Practical DL for Time Series
For those of you interested in time series data, I’ve just uploaded a GitHub repo (more info here) called timeseriesAI, where I’ve shared fastai time series code, some state-of-the-art PyTorch models, and a notebook demoing how to integrate everything. You’ll see you can achieve great results in a few minutes by leveraging fastai.


Hey, awesome-fastai is live. I think I can add a projects section there; suggestions welcome. Lots of refactoring needed.

https://twitter.com/iamShashank/status/1179100146178523138


Hi all,

I just wanted to share my little test project:

Adapting some code from Lesson 1, I was able to look at images of African Grey parrots and identify the species, “Timneh” or “CAG”, with 82% accuracy.

They look fairly similar, so I’m pleased with this!

I have more exciting ideas following from this which I’ll share if/when they’re coded up!

Cheers,
Lloyd


In case it is of interest, this cloud detection thing ended up being weirdly useful.
As my goal is to create practical models for each fast.ai application, I ended up negotiating with the Cloud Appreciation Society (https://cloudappreciationsociety.org/), training the model on their cloud dataset (over 150,000 cloud images, probably the largest in the world), and we used the trained model to add a cloud detector to their Cloud-A-Day app (https://play.google.com/store/apps/details?id=com.cloudappreciationsociety.cloudaday&hl=en).

The model (a single-label classifier with 11 categories [10 main cloud types + not-a-cloud], ResNet-50) is now used daily in production by their members, “over 46,000 members worldwide from 120 different countries, as of January 2019”, which, as a lover of clouds of the physical kind, I find pretty cool.

Thank you @jeremy I really appreciate the weird and wonderful things you and the fast.ai community are doing.


Thank you for sharing @vedran.grcic! :slight_smile: Congrats on the fantastic outcome.

Hi Fast AI Community!

I created Estimate Body Fat - https://www.estimatebodyfat.com/, an AI body fat calculator using ResNet-50 from fast.ai’s Lesson 2 and my own Haar cascades (for upper-body detection).

The reason for creating my own Haar cascades was to be able to distinguish between the sexes. I now truly understand firsthand how AI can be ethically challenging, as my first few Haar cascades were unable to recognize people of certain ethnicities. Yikes! But not to worry, I was able to solve this issue.
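For readers unfamiliar with Haar cascades: each stage of a cascade thresholds sums of simple rectangle-difference (“Haar-like”) features, computed in constant time from an integral image. A toy pure-Python sketch of one such feature (illustrative only; this is not the OpenCV training pipeline, which is normally driven by the `opencv_traincascade` tool):

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img over rows < r and cols < c."""
    rows, cols = len(img), len(img[0])
    ii = [[0.0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            ii[r + 1][c + 1] = img[r][c] + ii[r][c + 1] + ii[r + 1][c] - ii[r][c]
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h x w rectangle with top-left corner (r, c), in O(1)."""
    return ii[r + h][c + w] - ii[r][c + w] - ii[r + h][c] + ii[r][c]

def two_rect_feature(ii, r, c, h, w):
    """Haar-like edge feature: left w-wide half minus right w-wide half."""
    return rect_sum(ii, r, c, h, w) - rect_sum(ii, r, c + w, h, w)
```

A trained cascade combines many such features, selected by boosting, into a fast sliding-window detector, which is why it can run cheaply before the ResNet sees the crop.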

To get your body fat percentage, all you need to do is upload a picture as the instructions describe. After getting your body fat percentage, you also get tips on how to lose the fat you don’t need, and a lot more useful information on diet and lifestyle.

I personally began my fat loss journey this year and created this application as a way to keep track of my progress and motivate myself. Give it a try and let me know if you have any questions.

You can find me on twitter at bruce_rebello


As for the dataset, I did create my own. I hope to keep updating it in the future, to produce results that are far more accurate than what I have right now!

Thanks @jeremy and @rachel for these amazing courses!

Hi brebel, hope you’re having a marvelous day!

This app made me laugh no end; it’s the funniest app I have seen on “Share Your Work Here” so far, and it probably has health benefits as well!

Good Job.

mrfabulous1 :smiley::smiley:


“Funny how, like I’m a funny guy? What about it makes it so funny?” hehe

Thanks @mrfabulous1 !

I appreciate you trying it out and liking it.

Hi brebel, hope you’re well!

I have a funny sense of humor as well.
Reading AI research papers and trying to understand some of the concepts, though enjoyable, doesn’t always bring a smile to my face. What actually made me laugh was this: I go on holiday with my college mates, and since we left college, 14 of them have put on between 20 and 100 lbs, so now I have a tool to help them. When I explained it to them, they all thought it was funny.

P.S. I run 50 km a week, so your app gives me a good result.

mrfabulous1 :smiley::smiley:


Hello :slight_smile: I did a thing :wink: after the first lesson. Nothing too impressive, but small and potentially useful if polished. A problem I encountered some time ago in game development (another hobby of mine) is organizing and sorting through a vast library of assets. The problem has multiple levels: e.g., you might be searching for a 3D model of a car, a model of a tree, or a good texture of some type. For the Lesson 1 challenge I chose to focus on the sub-problem of texture classification. E.g., when designing a level one has to find good textures for grass, hills, roads, etc., and it would be helpful if a machine did the tedious job of browsing through my library of images and returned only the ones I’m searching for.

I have quite a big texture library myself (just the part I’ve already manually classified is more than 16 GB of images), but I started “from scratch”, i.e., by creating a new library of textures from Google Images :smiley:. That task was tedious, to say the least. The resulting dataset has 20 classes and 4,723 images. The classes are: bark, cliff, cobblestone, cracked-soil, forest-leaves, forest-needles, grass-dead, grass-green, ice, moss, mud, path, pebbles, plank, red-bricks, roof, sand, snow, white-bricks, yellow-bricks. I cleaned it up a bit, but the dataset is still quite noisy. Here are some example images:

Limiting myself more or less to what was shown in the first lecture, I got an error rate of 0.157295, using the vgg19_bn architecture (after unfreezing the weights). For a first attempt at a 20-class classification problem I think that’s a pretty nice result. The errors I get show the limitations of this approach and highlight some problems in the dataset:

With the most confused classes being:

[('snow', 'ice', 16),
 ('ice', 'snow', 12),
 ('mud', 'sand', 10),
 ('moss', 'grass-green', 9),
 ('bark', 'plank', 6),
 ('forest-leaves', 'moss', 6),
 ('mud', 'cracked-soil', 6),
 ('pebbles', 'forest-leaves', 5)]
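For reference, that report is what fastai’s `ClassificationInterpretation.most_confused` produces: counts of off-diagonal cells of the confusion matrix, sorted by frequency. A minimal pure-Python sketch of the same computation (the function name and signature here are illustrative, not fastai’s actual implementation):

```python
from collections import Counter

def most_confused(actuals, preds, min_val=1):
    """Count mismatched (actual, predicted) label pairs, most frequent first.

    Returns a list of (actual_class, predicted_class, count) tuples,
    mirroring the shape of fastai's most_confused output.
    """
    counts = Counter((a, p) for a, p in zip(actuals, preds) if a != p)
    return [
        (a, p, n)
        for (a, p), n in sorted(counts.items(), key=lambda kv: -kv[1])
        if n >= min_val
    ]
```

Reading the pairs in both directions (snow→ice and ice→snow both near the top) is a quick way to tell a genuinely ambiguous class pair from a one-sided labelling problem.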

Was fun playing around with this :slight_smile:

Edit: I managed to reduce the error rate to 0.11 after applying some of the things I learned in Lesson 2 :smile:


I was able to use the fastai library to build a language model that helped me do two things:

  1. Predict operational failures before the operation actually fails, by learning the trends followed by the “pass” use cases and the “fail” use cases.

  2. Generate artificial log files, by building language models on just the pass use case and just the fail use case and then calling <language_model>.predict(…, n_words=x).
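The generation in point 2 comes down to repeatedly sampling a next token given the text so far (fastai’s `predict` does this with the trained language model, typically an AWD-LSTM). As a toy illustration of that sampling loop only, here is a word-level bigram sampler over log lines; it is a stand-in, not the model used above:

```python
import random
from collections import defaultdict

def train_bigrams(lines):
    """Build next-word counts from training log lines."""
    follows = defaultdict(list)
    for line in lines:
        words = line.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev].append(nxt)
    return follows

def generate(follows, seed, n_words, rng_seed=0):
    """Sample up to `n_words` tokens after `seed`, like predict(seed, n_words=x)."""
    rng = random.Random(rng_seed)
    out = [seed]
    for _ in range(n_words):
        choices = follows.get(out[-1])
        if not choices:  # dead end: no continuation ever observed
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

Training one sampler on pass logs and another on fail logs gives two generators whose outputs drift apart in exactly the places the classifier can exploit.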

Here is a Medium blog post I just wrote, with the code snippets that I could share (there are some pieces I could not share, as the data contains work-specific information :frowning:).

Thank you fast.ai for being awesome! :slight_smile:
