Share your V2 projects here

I can’t tell the difference between them at all. It would be interesting to find out how the ResNet model is learning to distinguish them, and whether it is based on unintended background features.

2 Likes

Somebody asked a couple of weeks ago if I could turn my DuckDuckGo image scraper notebook into an installable library, and I’ve been meaning to play with nbdev anyway (which is awesome btw, thanks to Jeremy and Sylvain), so that’s what I did.

docs:

Now you can:

!pip install -q jmd_imagescraper

from pathlib import Path
root = Path().cwd()/"images"   # downloads land in subfolders of this root

from jmd_imagescraper.core import *

# duckduckgo_search(root, subfolder_name, search_term, ...)
duckduckgo_search(root, "Teddy", "teddy bears", max_results=100)
duckduckgo_search(root, "Black", "black bears", max_results=100)
duckduckgo_search(root, "Brown", "brown bears", max_results=100)

and also:

from jmd_imagescraper.imagecleaner import *
display_image_cleaner(root)   # interactive widget to review/delete the downloaded images
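
And if you want to go straight from the scraped folders into fastai v2, something like this should work, since each search above saves into its own subfolder under root (a minimal sketch; adjust the transforms to taste):

from fastai.vision.all import *

# each subfolder name (Teddy/Black/Brown) becomes a class label
dls = ImageDataLoaders.from_folder(root, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))
dls.show_batch(max_n=9)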

Those of you having issues with Bing keys, trials, etc. can drop this into your notebook instead.

19 Likes

I made an app to classify images of sea fish. For the project I trained a ResNet-34 on a dataset of 44,134 images labelled across 44 fish categories (for the moment; as I manage to grow my database I will add more fish species).
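
For anyone curious, the fastai v2 training side of a project like this boils down to very few lines (a sketch under assumed names, not the actual project code):

from fastai.vision.all import *

# hypothetical layout: one subfolder per species under fish_images/
path = Path("fish_images")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(4)
learn.export("fish_classifier.pkl")   # the file the app loads at inference time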

The project was deployed with mybinder. The difference from the app shown in session 3 is that instead of using a Jupyter notebook and Voila, I used Streamlit. The link to the project can be found here and the GitHub repo can be found here.
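
In case it’s useful to others, the Streamlit side of such an app can be as small as this (my own rough sketch, not the author’s actual code; the file and label names are assumptions):

import streamlit as st
from fastai.vision.all import *

# "fish_classifier.pkl" is a placeholder for an exported fastai Learner
learn = load_learner("fish_classifier.pkl")

st.title("Sea fish classifier")
uploaded = st.file_uploader("Upload a fish photo", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    img = PILImage.create(uploaded)
    st.image(img.to_thumb(256))
    pred, _, probs = learn.predict(img)
    st.write(f"Prediction: {pred} ({probs.max():.2%} confident)")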

2 Likes

The aim of the project is to enhance under-exposed images. Hope you guys like it!


Example Images

14 Likes


I just published my project on Towards Data Science. I used fastai v2 and experimented with the Chest X-ray dataset on Kaggle, and I got 100 percent accuracy on the provided validation set (the val set for this dataset is rather small, but it’s what other notebooks used), which is the best result on this dataset compared to other notebooks on Kaggle.
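
For anyone wanting to reproduce the comparison, the important detail is keeping the dataset’s own (tiny) val folder instead of re-splitting; in fastai v2 that looks roughly like this (a sketch assuming the usual Kaggle chest_xray train/val/test layout; epochs and transforms are placeholders):

from fastai.vision.all import *

# Kaggle's chest_xray data ships as train/val/test folders,
# each with NORMAL and PNEUMONIA subfolders; keep its own val split
path = Path("chest_xray")
dls = ImageDataLoaders.from_folder(path, train="train", valid="val",
                                   item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(5)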
Click here to see the post.
I would be really happy if you let me know your comments about it :slight_smile: :slight_smile:

12 Likes

A new library:

I used this library to segment brain tumours:

6 Likes

Wow, it took a lot longer to deploy this than to train it.
First, I tried to deploy on Binder, but it seems that it doesn’t work. I checked some of the deployments that should work, but none of them does anymore. It seems broken.

However, behold the “What game are you playing” classifier:
a little classifier that can detect what game you are playing from a screenshot.

:dizzy_face: On Heroku, the slug size is 935 MB with the requirements from the course page and only the bear classifier example with a 46 MB .pkl file.

Finally: What game are you playing (on SeeMe.AI)

1 Like

Hi ringoo, hope you’re having a fun day!
Well done for persevering. Like you, I think the most difficult bit about creating any classifier model is deploying it online easily at little or no cost, especially if one is still waiting to make their millions :money_with_wings: :moneybag: from it (pity the link makes you have to log in to see the model :question:).

Cheers mrfabulous1 :smiley: :smiley:

Not if you’re using CPU torch wheels. Were you using the instructions here?

2 Likes

@joedockrill Thank you for your comment. Yes, I used the instructions. The PyTorch version in the Procfile is definitely the CPU version; I just copied it from the instructions.
It seems that I included the wrong repository on Heroku, and it used the myBinder requirements. Now that I have repeated all the steps, it seems to work.
I have created a separate repository for the Heroku deployment: https://github.com/mesw/whatgame3.git, in case you want to have a look at the files. Now the slug is only 368 MB. Nice!
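
In case it helps anyone else fighting the slug limit: the CPU-only wheels go in requirements.txt, along these lines (a sketch of the usual pattern from the course-era instructions; the version numbers below are placeholders and need to match whatever you exported your model with):

# requirements.txt -- CPU-only wheels keep the Heroku slug small
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
torchvision==0.7.0+cpu
fastai>=2.0.0
voila
ipywidgets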

If you would like to take this little experiment for a spin, you can find it here:


Too bad, there is a Voila error (the same as with myBinder) and it does not work. :unamused:

Guess the Doodle App
I’ve created this app based on the knowledge from Lessons 1 & 2, completely in Jupyter notebooks, giving the app a Material UI touch.
I got inspired by Google Draw and built one of my own using fastai v2.
It currently supports 5 doodles (bird, cup, dog, face, fish).
Please do give it a try.

8 Likes

Hi hitchhicker30, hope all is well!
A great little app, with an original front end very different from the most common starter-code classifier apps.
Good work.
Cheers mrfabulous1 :smiley: :smiley:

1 Like

Thank you!! @mrfabulous1

Hi everyone, I released my article on end-to-end image classification last week.


I have tried to cover the entire spectrum, right from gathering data and cleaning it, to training a model and creating an app, using the example of a guitar classifier which discriminates between acoustic, classical and electric guitars.
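
One step from that spectrum worth showing here: fastai v2 has a built-in widget for the data-cleaning part, so after a first training pass you can weed out bad images right in the notebook (the standard fastai pattern, sketched; learn and path are assumed to be your trained Learner and dataset root):

import shutil
from fastai.vision.widgets import ImageClassifierCleaner

cleaner = ImageClassifierCleaner(learn)   # `learn` = your trained Learner
cleaner                                   # renders the review widget in the notebook

# once you've made your choices in the widget, apply them:
for idx in cleaner.delete():
    cleaner.fns[idx].unlink()                         # delete mislabeled/junk images
for idx, cat in cleaner.change():
    shutil.move(str(cleaner.fns[idx]), path/cat)      # re-file relabeled images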
Let me know what you think about it :slight_smile:

4 Likes

Hi,

I have written a blog post and developed a COVID-19 chest X-ray image classifier based on the first 3 lessons. Here are the links:



https://www.kaggle.com/krrai77/fast-ai-lesson-3-covid19-x-ray-classifier-app

Thanks
Raji

3 Likes

I made something that lets you convert a screenshot of a chess diagram into a FEN (a text notation for chess positions) that can be imported into chess apps like lichess. Don’t know if there are any other chess fans in here, but I have a few text message chains with friends where we share puzzles as screenshots, and this makes it easier to save/analyze them.

7 Likes

Hey @GiantSquid, can you share the code/dataset for this?
:smiley: chess fan here

1 Like

Definitely! I will have to find some time to organize my files, but I’ll share something on github soon.

2 Likes

Hi GiantSquid, hope all is well!

Great app!
I played with your app for a bit.
[image: a chess diagram]
This was one of the images I tried.
I was wondering the following things:

  1. Does your app work by classification, segmentation or something else?
  2. How large was your dataset?
  3. Does your app prefer particular colors of images?

Great to see another interesting use of fastai!

Cheers mrfabulous1 :smiley: :smiley:

Hey, glad you like it! For your image, the model expects it to be from white’s perspective (white pieces at the bottom); I should have mentioned that. I should really add a user input for black/white perspective, but that would require collecting some data from black’s perspective.

By the way, this might explain something odd about your diagram: the position is clearly from black’s perspective but the notation on the edge of the board is as though it were white’s. Did you recognize the image and then load the FEN back into chess.com? If so, I guess the model worked, but you’re going to have some unexpected behavior… your pawns will be moving backwards :slight_smile:

In answer to your questions…

  1. Segmenting then classifying. I segment in the simplest, most obvious way possible: divide the vertical and horizontal dimensions by 8 and slice into 64 squares (see the sketch after this list). This is why the image needs to be cropped exactly to the board: otherwise the squares will be off. Of course, it’d be better to be able to find the board in a larger image, but I haven’t implemented this. Then I run a standard image classifier on the sliced square images.

  2. I believe I labeled 20 boards from each of lichess, chess.com, and chessbase (1 board -> 64 squares). I also set up a pipeline to generate synthetic data using python-chess, which looks to use the same piece set as lichess. Can’t remember if I used this in the latest model… Stay tuned for the GitHub repo/blog post :slight_smile:

  3. My training set was color images of whatever the default is for lichess, chess.com, and chessbase, so it should work best with those.
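
Here’s roughly what that slice-then-classify step looks like in code (my own sketch of the approach described above, not GiantSquid’s actual code; the model file, the label names, and the LABEL_TO_FEN mapping are all assumptions):

from fastai.vision.all import *
import numpy as np
from PIL import Image

learn = load_learner("square_classifier.pkl")   # hypothetical model trained on single-square crops

# hypothetical label scheme for 13 classes: 6 white pieces, 6 black pieces, "empty"
LABEL_TO_FEN = {
    "wK": "K", "wQ": "Q", "wR": "R", "wB": "B", "wN": "N", "wP": "P",
    "bK": "k", "bQ": "q", "bR": "r", "bB": "b", "bN": "n", "bP": "p",
}

def board_to_fen(img_path):
    "Slice a tightly cropped board screenshot into 64 squares and classify each one."
    board = Image.open(img_path).convert("RGB")
    w, h = board.size
    sq_w, sq_h = w / 8, h / 8
    fen_rows = []
    for row in range(8):                      # top row first = rank 8 (white's perspective)
        fen_row, empties = "", 0
        for col in range(8):
            box = (int(col * sq_w), int(row * sq_h),
                   int((col + 1) * sq_w), int((row + 1) * sq_h))
            square = PILImage.create(np.asarray(board.crop(box)))
            pred, _, _ = learn.predict(square)   # one square at a time for clarity;
                                                 # a batched test_dl would be faster
            if str(pred) == "empty":
                empties += 1
            else:
                if empties:
                    fen_row += str(empties)
                    empties = 0
                fen_row += LABEL_TO_FEN[str(pred)]
        if empties:
            fen_row += str(empties)
        fen_rows.append(fen_row)
    return "/".join(fen_rows) + " w - - 0 1"  # assume it's white to move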

2 Likes