Share your work here ✅

Thank you for sharing @vedran.grcic! :slight_smile: Congrats on the fantastic outcome.

Hi Fast AI Community!

I created Estimate Body Fat - https://www.estimatebodyfat.com/, an AI body fat calculator built with the ResNet-50 approach from fast.ai’s Lesson 2 and my own Haar cascades (for upper-body identification).

The reason for creating my own Haar cascades was to be able to distinguish between the different sexes. I now truly understand firsthand how AI can be ethically challenging, as my first few Haar cascades were unable to recognize people of certain ethnicities. Yikes!!! But not to worry, I was able to solve this issue.
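For anyone curious how the pieces might fit together, here is a minimal sketch of combining a custom Haar cascade with a fastai learner. The file names (upper_body_cascade.xml, export.pkl), the detection parameters, and the crop-then-classify flow are my assumptions for illustration, not the actual code behind the site.

import cv2
import numpy as np
from fastai.vision import *   # load_learner, Image, pil2tensor

# Assumed artifacts: a trained cascade file and an exported fastai learner
cascade = cv2.CascadeClassifier('upper_body_cascade.xml')
learn = load_learner('.', 'export.pkl')

def estimate_from_photo(img_path):
    bgr = cv2.imread(img_path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Detect candidate upper-body regions with the custom cascade
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]                              # use the first detection
    crop = cv2.cvtColor(bgr[y:y+h, x:x+w], cv2.COLOR_BGR2RGB)
    img = Image(pil2tensor(crop, np.float32) / 255.)   # wrap the crop as a fastai image
    return learn.predict(img)                          # (category, index, probabilities)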

To get your body fat percentage, all you need to do is upload a picture as described in the instructions. After getting your body fat percentage, you also get tips on how to lose the fat you don’t need and a lot more useful information on diet and lifestyle.

I personally began my fat loss journey this year and created this application as a way to keep track of my progress and motivate myself. Give it a try and let me know if you have any questions.

You can find me on Twitter at bruce_rebello.

6 Likes

As for the dataset, I did create my own. I hope to keep updating it in the future to be able to produce results that are far more accurate than what I have right now!

Thanks @jeremy and @rachel for these amazing courses!

Hi brebel, hope you’re having a marvelous day!

This app made me laugh no end; it’s the funniest app I have seen on “Share Your Work Here” so far, and it’s probably got health benefits as well!

Good Job.

mrfabulous1 :smiley::smiley:

1 Like

“Funny how, like I’m a funny guy? What about it makes it so funny?” hehe

Thanks @mrfabulous1 !

I do appreciate you trying it out and liking it.

Hi brebel, hope you’re well!

I have a funny sense of humor as well.
Reading AI research papers and trying to understand some of the concepts, though enjoyable, doesn’t always bring a smile to my face. What actually made me laugh was this: I go on holiday with my college mates, and since we left college 14 of them have put on between 20 and 100 lbs, so now I have a tool to help them. When I explained it to them they all thought it was funny.

P.S. I run 50 km a week, so your app gives me a good result.

mrfabulous1 :smiley::smiley:

1 Like

Hello :slight_smile: I did a thing :wink: after the first lesson. Nothing too impressive, but small and potentially useful if polished. A problem I encountered some time ago in game development (another hobby of mine) is organizing and sorting through a vast library of assets. There are multiple levels to the problem, e.g., you can be searching for a 3D model of a car, a model of a tree, or a good texture of some type. For the Lesson 1 challenge I chose to focus on the sub-problem of texture classification. E.g., when designing a level one has to find good textures for grass, hills, roads, etc., and it would be helpful if a machine did the tedious job of browsing through my library of images and returned only the ones I’m searching for.

I have quite a big texture library myself (just the part that I have already manually classified is more than 16 GB of images), but I started “from scratch”, i.e., by creating a new library of textures from Google Images :smiley:. That task was tedious, to say the least. The resulting dataset has 20 classes and 4723 images. The classes are: bark, cliff, cobblestone, cracked-soil, forest-leaves, forest-needles, grass-dead, grass-green, ice, moss, mud, path, pebbles, plank, red-bricks, roof, sand, snow, white-bricks, yellow-bricks. I cleaned it up a bit, but the dataset is still quite noisy. Here are some example images:

Limiting myself to more or less what was shown in the first lecture, I got an error rate of 0.157295. This is using the vgg19_bn model architecture (after unfreezing the weights). For a first attempt at a 20-class classification problem I think that’s a pretty nice result. The errors I get show the limitations of such an approach and highlight some problems in the dataset:

With the most confused classes being:

[('snow', 'ice', 16),
 ('ice', 'snow', 12),
 ('mud', 'sand', 10),
 ('moss', 'grass-green', 9),
 ('bark', 'plank', 6),
 ('forest-leaves', 'moss', 6),
 ('mud', 'cracked-soil', 6),
 ('pebbles', 'forest-leaves', 5)]
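For anyone who wants to reproduce output like the list above, here is a minimal fastai v1 sketch of the workflow described, not my exact notebook; the folder path and the training schedule are assumptions.

from fastai.vision import *

# Assumes textures are organised one folder per class, e.g. textures/grass-green/...
data = ImageDataBunch.from_folder('textures', train='.', valid_pct=0.2, size=224,
                                  ds_tfms=get_transforms()).normalize(imagenet_stats)

learn = cnn_learner(data, models.vgg19_bn, metrics=error_rate)
learn.fit_one_cycle(4)                              # train the head first
learn.unfreeze()                                    # then fine-tune the whole network
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-4))

# Inspect where the model struggles; this produces a list like the one above
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=5)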

Was fun playing around with this :slight_smile:

Edit: managed to reduce error rate to 0.11 after applying some of the stuff I learned in lesson 2 :smile:

5 Likes

I was able to use the fast.ai libraries to build a language model that helped me do two things:

  1. Predict operational failures before the operation actually fails, by learning the trends followed by the “pass” use cases and the “fail” use cases.

  2. I was able to generate artificial log files by building language models for just the pass use case and just the fail use case and then calling <language_model>.predict(text, n_words=x) (a rough sketch is below).
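To make point 2 concrete, here is a rough fastai v1 sketch of that pattern; the file names, column name, and hyperparameters are placeholders rather than the values from my actual project.

import pandas as pd
from fastai.text import *

# Placeholder dataframes: one log file per row, its text in a 'log_text' column
train_df = pd.read_csv('pass_logs_train.csv')
valid_df = pd.read_csv('pass_logs_valid.csv')

data_lm = TextLMDataBunch.from_df('.', train_df, valid_df, text_cols='log_text')
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.fit_one_cycle(4, 1e-2)

# Generate an artificial "pass" log continuation from a seed string
print(learn.predict('operation started', n_words=200, temperature=0.75))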

Here is a Medium blog post that I just wrote up with the code snippets I could share (there are some pieces I could not share, as the data has some work-specific information :frowning: )

Thank you fast.ai for being awesome! :slight_smile:

3 Likes

People sometimes take my little son for a girl :baby: As a fun task within lesson 2 I created a model that can distinguish between boys and girls https://boyorgirl.artoby.me.

In general, I did the following:

  • Downloaded 800 baby images from Google Images (cool idea by the way, thanks @jeremy!)
  • Manually cleaned the dataset (removed images of clothes without babies, adults, etc.)
  • Trained the resnet50 model in Google Colab (one iteration takes just a few minutes)
  • Experimented with learning parameters - accuracy on the validation set was ~86-88%, which I considered cool :slight_smile:
  • Exported model to .pkl
  • Created a web app for recognition using Starlette and React
  • For 9 random photos of my son, the model said 8 of them are of a boy :slight_smile: People on the street have about the same accuracy.

The source code of the app (notebook, backend and client) is available here. Feel free to use it for your own deployments :slight_smile:
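For anyone who wants the gist without digging through the repo, here is a hedged sketch of the export-and-serve flow with fastai v1 and Starlette; the file name, route, and form field are my own placeholders and not necessarily what the linked app uses.

import io
import uvicorn
from fastai.vision import *          # load_learner, open_image
from starlette.applications import Starlette
from starlette.responses import JSONResponse

learn = load_learner('.', 'export.pkl')      # produced earlier by learn.export()

app = Starlette()

@app.route('/predict', methods=['POST'])
async def predict(request):
    form = await request.form()
    img_bytes = await form['file'].read()            # uploaded photo
    img = open_image(io.BytesIO(img_bytes))
    pred_class, pred_idx, probs = learn.predict(img)
    return JSONResponse({'class': str(pred_class), 'prob': float(probs[pred_idx])})

if __name__ == '__main__':
    uvicorn.run(app, host='0.0.0.0', port=8000)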

Thanks fast.ai for the course! I enjoy it a lot :slight_smile:

9 Likes

Hey, nice project! By the way, do you have any opinion on FastAPI? It’s based on Starlette. I haven’t seen many projects using FastAPI. It seems good.

https://fastapi.tiangolo.com/

Hi vinayrao thanks for sharing a concise and informative post.

mrfabulous1 :smiley::smiley:

1 Like

:+1: @mrfabulous1

1 Like

I did a similar thing on softball and baseball and got an accuracy of around 92%

1 Like

I created a model for differentiating between the logos of different automobile brands. I had 600 training images and 130 validation images across 8 different brands. ResNet34 reached an accuracy of 96.3% with just a few iterations. Then I tried it with ResNet50, which surprisingly, even after 100 iterations, has an accuracy of just 70%. Any clue?
[image]
Training after unfreezing and after 100 iterations:
[image]

Hi. I just came across your post while looking around this super informative thread. Well done on the awesome results! Do you still have the dataset for this challenge? Is it available to share for further research? Thanks.

Hello everyone! I started to solidify my learning by studying papers extensively and writing posts on them. Here is my first attempt -

Please leave a clap if you like it, thanks :smile:

3 Likes

Hi everyone, after lesson 2 I created my own image classifier to differentiate between 10 different medications: Allopurinol, Atenolol, Ciprofloxacin, Levothyroxine, Metformin, Olanzapine, Omeprazole, Oxybutynin, Prednisone, Rosuvastatin. The specific strength of each medication is noted in the notebook.

As a former pharmacist turned software developer, I thought it would be interesting to see how an ML model would perform.

I sourced images from the US National Library of Medicine’s Pillbox and Google Images. As you can tell, Google Images included quite a few junk images.
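In case it helps anyone building a similar dataset, the download-and-clean step can look roughly like this in fastai v1; the folder layout and the per-class URL files are assumptions, not my exact notebook.

from fastai.vision import *

classes = ['allopurinol', 'atenolol', 'ciprofloxacin', 'levothyroxine', 'metformin',
           'olanzapine', 'omeprazole', 'oxybutynin', 'prednisone', 'rosuvastatin']
path = Path('data/pills')

# One 'urls_<class>.txt' file per class, scraped from Google Images beforehand
for c in classes:
    dest = path/c
    dest.mkdir(parents=True, exist_ok=True)
    download_images(path/f'urls_{c}.txt', dest, max_pics=200)

# Drop images that fail to open; the remaining junk still needs manual review
for c in classes:
    verify_images(path/c, delete=True, max_size=500)

data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2, size=224,
                                  ds_tfms=get_transforms()).normalize(imagenet_stats)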

After cleaning up the data and experimenting with epochs and learning rates, I trained resnet34 on the final dataset.

from fastai.vision import *   # provides cnn_learner, models, error_rate
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(5, max_lr=slice(1e-3,1e-2))   # 5 epochs with discriminative learning rates

The model had an accuracy of 63%. Here’s the notebook on GitHub

Steps for improvement include getting more images and discarding more junk images.

7 Likes

I built a simple language model based on WhatsApp chat data for part 1. I decided to go back, make some improvements, and write a Medium post about it all.

I’ve now completed part 2, so I wanted to see if I could go back and use what I’ve learned to add a custom rule to the tokenizer, as well as clearly explain what’s going on with the language_model_learner. With a bit of digging through the documentation and source code I was able to do it!
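As a rough illustration, adding a custom pre-processing rule to the fastai v1 tokenizer looks something like the sketch below; the rule itself (collapsing phone numbers into a single token) is a made-up example, not the one from my post.

import re
import pandas as pd
from fastai.text import *

# Hypothetical custom pre-rule for chat text: collapse phone numbers into one token
def replace_phone_numbers(t: str) -> str:
    return re.sub(r'\+?\d[\d\s-]{7,}\d', ' xxphone ', t)

df = pd.read_csv('whatsapp_chat.csv')   # placeholder: one message per row in a 'text' column

# Prepend the custom rule to fastai's default pre-processing rules
tok = Tokenizer(pre_rules=[replace_phone_numbers] + defaults.text_pre_rules)
processor = [TokenizeProcessor(tokenizer=tok), NumericalizeProcessor()]

data_lm = (TextList.from_df(df, path='.', cols='text', processor=processor)
           .split_by_rand_pct(0.1)
           .label_for_lm()
           .databunch(bs=64))

learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)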

1 Like

Here’s a write-up I did for a competition I participated in on the Zindi competitive data science platform. The object of the contest was to use remote sensing imagery from different time points to classify what types of crops were growing in fields with given boundaries. I used a U-Net-style approach, and I found a really nice library called eo-learn that helped with the processing pipeline for the remote sensing data.
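For context on the model side, a U-Net-style setup in fastai v1 looks roughly like the sketch below. The tile/mask layout and the crop classes here are invented placeholders, and none of the eo-learn preprocessing is shown, so treat it as an outline rather than the competition code.

from fastai.vision import *

# Placeholder paths: fixed-size image tiles and per-pixel crop-type masks with matching names
path_img, path_lbl = Path('tiles/images'), Path('tiles/masks')
codes = ['background', 'maize', 'wheat', 'beans']         # hypothetical crop classes

get_mask = lambda x: path_lbl/f'{x.stem}_mask{x.suffix}'  # image tile -> its mask

def seg_accuracy(input, target):
    # Per-pixel accuracy for multi-class segmentation
    target = target.squeeze(1)
    return (input.argmax(dim=1) == target).float().mean()

data = (SegmentationItemList.from_folder(path_img)
        .split_by_rand_pct(0.2)
        .label_from_func(get_mask, classes=codes)
        .databunch(bs=8)
        .normalize(imagenet_stats))

# U-Net with a ResNet-34 encoder
learn = unet_learner(data, models.resnet34, metrics=seg_accuracy)
learn.fit_one_cycle(10, 1e-3)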

6 Likes