Share your V2 projects here

Hi jackharding hope all is well!

Great post,

After solving the beard issue, I realised the same problem could crop up with different ethnicities, as most of the image search results were white men. I did not initially consider this when starting, and it's something I should account for in future computer vision projects. I'd recommend FastAI's lesson on data ethics for an expansion on what I just mentioned.

I am impressed with your use of the ethics chapter and how you observed possible natural biases in the model.

Yours is the first post in this thread that has explicitly mentioned considering ethics.

An example we can all follow.

It is likely that the failure of some facial recognition software to be made ethically, or its use in unethical ways, has led to decisions such as this: Massachusetts Lawmakers Vote To Pass a Statewide Police Ban On Facial Recognition.

Cheers mrfabulous1 :smiley: :smiley: :clap:

1 Like

My second blog post is online!

I just finished Chapter 5 from the book and decided to use what I learned about Image Classification on a problem from my research field. Can we actually train models to predict bioactivity from nothing but a molecule image?

6 Likes

Thanks for sharing your blog post. I think it's really interesting. Keep up the great work!

1 Like

Once again, a new competition was announced on Kaggle, and I made a fastai starter notebook :slightly_smiling_face:

https://www.kaggle.com/tanlikesmath/ranzcr-clip-a-simple-eda-and-fastai-starter

6 Likes

I wanted to take the lesson on deployment a step further and ended up creating a fully featured web app (with user accounts etc.) for yoga pose recognition and evaluation. It's still a work in progress (the model was trained for less than 10 minutes, with no fine-tuning), but feel free to check it out at https://yogapose.app.

2 Likes

For those of you interested in time series / sequential data, I've just added a new tutorial notebook on how to apply TSBERT, a self-supervised, BERT-like approach for time series. I've tested it on a few datasets, and the results are pretty good.
You can get more details here:

9 Likes

Thank you for the post and the great notebook for the competition.

I was wondering about the different metrics for evaluating models and why one might be chosen over others, particularly in this competition, where many of the categories are mutually exclusive. For example, for the ETT categories, an x-ray could be reported as ETT – Abnormal OR ETT – Borderline OR ETT – Normal OR none (if no ETT), but not more than one of these. It confused me that the categories seem to be treated as independent labels when assessing how good the model is overall. Hoping someone with more data science or statistical background can provide an explanation.
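For concreteness, here is a minimal sketch of the per-label, column-wise AUC style of evaluation I mean (scikit-learn here; the toy numbers, and the assumption that this is exactly the competition metric, are mine):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy example for the three ETT labels (Abnormal, Borderline, Normal).
# Each row is one x-ray; at most one of the three labels is 1.
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])          # last row: no ETT present
y_pred = np.array([[0.9, 0.1, 0.2],
                   [0.2, 0.7, 0.3],
                   [0.1, 0.2, 0.8],
                   [0.1, 0.1, 0.1]])    # predicted probabilities per label

# Score each column as an independent binary problem, then average
per_label_auc = [roc_auc_score(y_true[:, i], y_pred[:, i]) for i in range(y_true.shape[1])]
print("column-wise AUCs:", per_label_auc, "macro mean:", np.mean(per_label_auc))
```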

So, using fastai and Flutter, I’ve now got an Android app on the Google Play store!

The app is called “yogapose”. It identifies and gives a score for your yoga posture. Feel free to try it out here: https://play.google.com/store/apps/details?id=app.yogapose

Suggestions and ideas are most welcome.

7 Likes

This is very nice! How did you deploy your fastai model on Flutter? You should write a tutorial/blog post on that!

2 Likes

Thanks @ilovescience!
I basically used the strategy that Jeremy suggested in the course. The fastai model lives on the server and is used at inference time only.
I will try to get the app in the Apple store as well. Perhaps after that I might write a tutorial / blog post.
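In case it is useful to anyone, here is a minimal sketch of what that kind of server-side inference endpoint can look like (Flask is just one option; the route name, the 'export.pkl' file name, and the response fields are assumptions, not the app's actual code):

```python
# Minimal sketch of a server-side fastai inference endpoint a mobile app could call.
from io import BytesIO
from flask import Flask, request, jsonify
from fastai.vision.all import load_learner, PILImage

app = Flask(__name__)
learn = load_learner('export.pkl')  # exported fastai Learner (assumed path)

@app.route('/predict', methods=['POST'])
def predict():
    # The client posts an image file; we run it through the learner
    img = PILImage.create(BytesIO(request.files['file'].read()))
    pred, pred_idx, probs = learn.predict(img)
    return jsonify({'prediction': str(pred), 'confidence': float(probs[pred_idx])})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```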

5 Likes

I used fastai to train a model that can recognize Sign Language, then went ahead and created an app using OpenCV that runs inference on the video from my webcam (a rough sketch of the loop follows the links below).

sample-output

Here are the relevant links:

Training the Model: https://jimmiemunyi.github.io/blog/tutorial/2021/01/20/Sign-Language-Classification-with-Deep-Learning.html

Creating the opencv inference app: https://jimmiemunyi.github.io/blog/tutorial/2021/01/21/Sign-Language-Inference-with-WebCam.html

Github: https://github.com/jimmiemunyi/Sign-Language-App

Youtube Output Video: https://youtu.be/-nggi8EwfOA
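A minimal sketch of what such an OpenCV webcam inference loop can look like (the 'export.pkl' path, window name, and key binding are assumptions; the linked posts have the actual code):

```python
# Minimal sketch: run a fastai model on webcam frames with OpenCV.
import cv2
from fastai.vision.all import load_learner, PILImage

learn = load_learner('export.pkl')   # assumed path to the exported model
cap = cv2.VideoCapture(0)            # default webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # OpenCV delivers BGR; convert to RGB before handing the frame to fastai
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    pred, _, probs = learn.predict(PILImage.create(rgb))
    cv2.putText(frame, f'{pred} ({probs.max().item():.2f})', (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('Sign Language', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```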

:smiley: :smiley:

8 Likes

Toon-Me, a fun project to toonify portrait pictures. Please have a look at it.

Imgur

8 Likes

:movie_camera::dart: FlightVision

tracker_shot_lq

A realtime solution to track dart points with fastai and unets, using segmentation masks rather than classical regression.

:clipboard: x, y Coordinate Tracking
  1. The input frame passes through the unet model for segmentation mask inference.
  2. OpenCV blob detection identifies the centroid location (sketched after this list).
  3. An OpenCV overlay is applied to show the results.
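A minimal sketch of step 2, recovering the centroid from a predicted mask with OpenCV (contour moments stand in for the blob detection here; the thresholding and mask handling are assumptions, not the exact code):

```python
import cv2
import numpy as np

def mask_to_centroid(mask: np.ndarray):
    """Return (x, y) of the largest blob in a binary mask, or None if empty."""
    mask_u8 = (mask > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return None
    return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])

# Example: a blank mask with a small blob
demo = np.zeros((100, 100), dtype=np.uint8)
demo[40:45, 60:65] = 1
print(mask_to_centroid(demo))   # -> roughly (62, 42)
```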

Full write-up on GitHub: FlightVision

Background

Flight Club is the name of a franchise in London that modernised the classic pub sport. Its success comes from the gamified, automated scoring system. It has a 3D vision system comprised of live cameras, and also includes a well-polished interface which runs on a screen above the board. This opens the traditional game up for improved entertainment with the inclusion of minigames like team elimination, or a snakes-and-ladders adaptation, where your dart throw determines how far forward you move on the board.

As of 2020 it has expanded into 3 locations in the UK and one in Chicago, USA.

![](upload://iE1b9cwpU1hEPMHFf9Jae2Pc7v4.jpeg)

How it works

Powered by a “highly sophisticated 3D vision system”, the software brute-forces the solution using calibrated cameras positioned in a frame above the dart board.

A normal dart impact on the board triggers Flight Club Darts’ specially developed 3D fitting algorithms to identify, recognise and measure the precise position, pose and score of the dart to within a fraction of a millimetre. The software manipulates three virtual darts through millions of different orientations and angles until it finds what matches where the dart landed on the board. Using multiple cameras reduces obscuration effects.

A deep learning approach was attempted with a challenge to PhD students as part of a country-wide university competition around 2014. At that time no solution was found, and so more conventional computer vision techniques were used.

Bootleg Flightclub using fastai v2

Take your own pictures and find the x, y coordinates of the dart point using a unet segmentation mask.

Development

1. What Didn’t Work

Lesson 3 of the fastai deep learning course explains how regression can be used within computer vision. Unlike classification tasks, which are used to make categorical predictions (e.g. cat or dog), regression is used to find continuous numerical values. This fits our example, because we are trying to determine the x, y pixel location of the dart point.
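Roughly, the regression setup looks like this (a minimal sketch only; get_dart_point and path_to_images are hypothetical placeholders, not the actual code):

```python
from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, PointBlock),            # image in, (x, y) point out
    get_items=get_image_files,
    get_y=get_dart_point,                       # hypothetical: returns the labelled (x, y) for an image
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(),
)
dls = dblock.dataloaders(path_to_images)        # hypothetical dataset path
learn = cnn_learner(dls, resnet34, y_range=(-1, 1))   # point coords are scaled to [-1, 1]
learn.fine_tune(5)
```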

![](upload://8cq7pIX1LsdSwEmMFFs3btdoKhm.jpeg)

I spent a fair amount of time trying to get this regression-based cnn_learner working – but it simply failed to converge during training. I was surprised that the model couldn’t accurately predict the validation data given the simplicity of some of the samples. For example, the third image down was a picture of a dart, by itself, on a plain background – and it still couldn’t make a reasonable prediction.

![](upload://505xE2jdpv2vQChXZf79eEY8VFH.jpeg)

I knew something must be wrong either with the training process or architecture, or there was a limitation with the dataset I had created. In order to isolate the problem, I generated a new dataset.

2. Synthetic Dataset with Blender

Using Blender I simulated a simplified version of the problem. With this I could determine whether my training dataset was wrong, or whether my ML/DL approach was wrong. I downloaded a 3D model from GrabCAD and created a Python script to automate rendering.

Blender allows for scripting with Python. It is quick to learn the scripting API, because when you perform an action in the UI, the equivalent Python command appears in the console, which you can then paste directly into a script. Then with Python you can add a loop around it, specify the inputs/outputs, and the whole process is automated. My Blender script randomised the rotation, zoom, light position, and centroid x, y location. With these in place I generated 15,000 renders in about half an hour.
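The script was roughly of this shape (a minimal sketch of bpy automation; the object names, paths, and value ranges are assumptions, and the pixel-space centroid calculation is omitted for brevity):

```python
# Minimal sketch: randomise the scene, render, and log a label per frame.
import bpy, csv, random

dart = bpy.data.objects['Dart']      # hypothetical object name
light = bpy.data.objects['Light']    # hypothetical light name
scene = bpy.context.scene

with open('/tmp/labels.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for i in range(15000):
        # Randomise pose and lighting for this render
        dart.rotation_euler = [random.uniform(0, 6.2832) for _ in range(3)]
        dart.location.x = random.uniform(-0.2, 0.2)
        dart.location.y = random.uniform(-0.2, 0.2)
        light.location = (random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(1, 3))

        scene.render.filepath = f'/tmp/renders/{i:05d}.png'
        bpy.ops.render.render(write_still=True)
        # Simplified label: world x, y (the real project needs the projected pixel centroid)
        writer.writerow([f'{i:05d}.png', dart.location.x, dart.location.y])
```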

:clipboard: Results

What I found was that even with this much simplified computer vision problem, the cnn_learner with a regression head still couldn’t converge on (or even overfit) the training dataset. With that I was convinced the problem was with the architecture, so I moved on to try something else.

3. Learning with Unets

Unets are a segmentation method, where the output of the model is a per-pixel classification mask of the original image. These are commonly used in examples of self-driving car solutions online. My intuition here was that instead of using multiple classes of scenery objects (like tree, road, traffic_light, etc.), the target segmentation mask could simply be 0 or 1, where 0 is a pixel where the centroid isn’t, and 1 is a pixel where the centroid is.
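In fastai terms, that looks roughly like this (a minimal sketch; get_mask_fn and path_to_images are hypothetical placeholders for the actual data loading):

```python
from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, MaskBlock(codes=['background', 'dart_point'])),  # two-class mask
    get_items=get_image_files,
    get_y=get_mask_fn,                   # hypothetical: returns the mask file for an image
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224),
)
dls = dblock.dataloaders(path_to_images)  # hypothetical dataset path
learn = unet_learner(dls, resnet34)
learn.fine_tune(5)
```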

![](upload://1DHswgLl0bKp7UrfbOVf1nt06fd.jpeg)

17 Likes

Hi lukew hope all is well!
Great application of fastai!
:smiley: :smiley:

1 Like

And here it is on the App Store for the iOS peeps: https://apps.apple.com/de/app/yogapose/id1549738502?l=en

4 Likes

I have always had a problem in recognizing big cats, so I thought of building a classifier using FastAI as part of my learning from the FastAI book. ( I was surprised to see @dway8 has worked on a similar project)

An image classifier that predicts big cat species (classes include Cheetah, Cougar, Jaguar, Leopard, Lion, Snow Leopard, and Tiger). Trained using the fastai v2 API and deployed on Streamlit.

Github Repo : https://github.com/dnaveenr/big_cat_classifier
Live Demo : https://share.streamlit.io/dnaveenr/big_cat_classifier/main/src/streamlit_deploy.py

Thank you.

4 Likes

That looks great - how easy was it to use Flutter to make the app? Any good resources you could link to?

Cheers

Clive

1 Like

Glad you like it!
It was quite easy to use Flutter. I liked the Flutter tutorials by The Net Ninja on YouTube.

2 Likes

PredictionDynamics

Hi all,

I’d like to share with you something I have been using lately that I find pretty useful. It’s a callback that allows you to visualize predictions during training. The plot is actually updated with every epoch. This is the type of output you get:

The main idea is based on a blog post by Andrej Karpathy I read some time ago. One of the things he recommended was to:

“visualize prediction dynamics: I like to visualize model predictions on a fixed test batch during the course of training. The ‘dynamics’ of how these predictions move will give you incredibly good intuition for how the training progresses. Many times it is possible to feel the network ‘struggle’ to fit your data if it wiggles too much in some way, revealing instabilities. Very low or very high learning rates are also easily noticeable in the amount of jitter.” – A. Karpathy

It’s pretty interesting to see how the model trains: the impact of the architecture, the loss function, and different initialization schemes, which classes the model struggles with, etc.

A notebook with a few examples is available in this gist.

As you’ll see it’s very easy to use. The only thing you need to do is upload the callback code and add PredictionDynamics() to your callbacks in your learner.
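Something along these lines (a minimal sketch; the toy dataset and architecture are just placeholders, and PredictionDynamics is assumed to be defined by running the callback code from the gist):

```python
from fastai.vision.all import *

# Toy data just to show the wiring; replace with your own DataLoaders.
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path, valid='valid')

# PredictionDynamics comes from the callback code in the gist and is passed
# like any other fastai callback.
learn = cnn_learner(dls, resnet18, metrics=accuracy,
                    cbs=[PredictionDynamics()])
learn.fit_one_cycle(2)   # the prediction plot refreshes as training runs
```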

22 Likes

Great work, can’t wait to test this out!