Share your work here ✅

Was looking at this competition the other day. Thanks for sharing your work.

1 Like

I wrote a fun little notebook that creates an image classifier just by specifying keywords. It uses Google image search to find images for those keywords and trains a classifier on them:

Classify images with Keywords

I tried to advance the emerging field of classifying Hot Dogs by specifically adding “Corn Dogs” to the portfolio.

Maybe it’s time for an ImageDataBunch.from_google method :joy:
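Until that method exists, the recipe might look something like this (a minimal sketch, assuming you’ve already scraped one file of image URLs per keyword into urls/<keyword>.txt; the keywords and paths below are placeholders):

from fastai.vision import *

classes = ['hot dog', 'corn dog']  # placeholder keywords
path = Path('data/keywords')
for c in classes:
    dest = path/c
    dest.mkdir(parents=True, exist_ok=True)
    # download_images expects a file with one image URL per line
    download_images(Path('urls')/f'{c}.txt', dest, max_pics=200)
    verify_images(dest, delete=True)  # drop anything that fails to open

data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(),
                                  size=224).normalize(imagenet_stats)
learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)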

39 Likes

Hey, how did you add the dataset?

In line 3, how did you add the food data? In the Lesson 1 notebook, Jeremy defined a URL constant for the pet dataset.

I had downloaded the data separately, but I’ll update the notebook with the constant.

Edit: Done
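For reference, the lesson 1 pattern is untar_data with a URL constant (a minimal sketch; URLs.FOOD is my assumption for the constant’s name, mirroring URLs.PETS from the pets notebook):

from fastai.vision import *

# download and unpack the dataset via a URL constant; URLs.FOOD is an
# assumed name here, the pattern matches untar_data(URLs.PETS) from lesson 1
path = untar_data(URLs.FOOD)
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2, size=224)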

Have you put the data in a separate folder and just passed the path of that folder in line 3?

@radek Thank you for sharing your work. It is very helpful.

1 Like

That folder is where I had downloaded it anyway, so the result will be the same, if my reading of the code is right.

I really needed the script for removing corrupted images (pynoob :smile:). Thanks!

2 Likes

You’re welcome :grin:

2 Likes

Btw, do you know of any shorter method?

You can certainly make the code shorter and nicer, but I think actually opening each image is necessary!
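For what it’s worth, a shorter version might look like this (a minimal sketch; the data/images path and the .jpg glob are placeholder assumptions):

from pathlib import Path
from PIL import Image

# walk the dataset and delete anything PIL can't open;
# the path and extension below are placeholders
for fn in Path('data/images').rglob('*.jpg'):
    try:
        Image.open(fn).verify()  # opening/verifying is what catches corruption
    except Exception:
        fn.unlink()              # remove the corrupted file

fastai also has verify_images(path, delete=True), which does essentially this for a whole folder.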

1 Like

Edit: Just realised that this is the ‘Share Your Work’ thread. :stuck_out_tongue: Let’s discuss elsewhere:

2 Likes

Note that the sample we used in lesson 1 is only "3"s and "7"s.

5 Likes

To get started I’ve created a small dataset with indoor and outdoor images from a Dutch real estate website. Only one wrongly classified image, in the top-left corner (interp.plot_top_losses). Can’t wait to learn on what grounds the network makes its decisions.

Buiten = outside, binnen = inside
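For anyone following along, the inspection step is roughly this (a minimal sketch, assuming learn is the trained Learner):

from fastai.vision import *

# build the interpretation object and show the images the model
# was most confidently wrong about
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(12, 12))
interp.plot_confusion_matrix()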

29 Likes

Cool stuff! Coincidentally, I have started on feature interpretation.

By applying PCA to the last layer before the predictions, I get some really cool features; I interpreted the top two as ‘naked/hairy’ and ‘dog/cat’. Now I can find the hairiest dogs and the most naked cats:


I’ve shared my notebook here (only accessible by URL).
Next I’ll train a linear classifier on these features to learn what features matter most for what breed.
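In case it’s useful to others, the core of the PCA step might look something like this (a minimal sketch; acts is an assumed (n_images, n_features) matrix of activations from the layer before the predictions):

import numpy as np
from sklearn.decomposition import PCA

# `acts` is assumed to be an (n_images, n_features) activation matrix
pca = PCA(n_components=2)
coords = pca.fit_transform(acts)

# sort images along one component to find its extremes, e.g. the
# 'naked/hairy' axis: one end of the ordering vs the other
order = np.argsort(coords[:, 0])
most_naked, hairiest = order[:9], order[-9:]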

Tips on how to improve the code are welcome, by the way. Once I have a better grasp of the library, I’ll rewrite this into a proper blog post :slight_smile:

68 Likes

Wow, great work! Maybe you can explain your PCA setup in more detail?

1 Like

Talking about Quick Draw (or any other Kaggle competition), it could be useful to check the kernels:

Usually, you can find various scripts, notebooks, etc.

2 Likes

Great work! I love the results you found!

I think there are a few bits you can refactor, like:

  • fastai_v1 has a version of save features and an easier way to add hooks. Check out hook_output; it will give you the output of the layer you pass (you’ll just have to append the outputs together). Once you’ve called the model on your input (see below), you’ll find the features in hook.stored.
  • You can just call Image.predict instead of copying the code of _predict. It will run the image through the model and call the hooks, giving you the activations.
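A minimal sketch of those two suggestions combined, assuming learn is a trained cnn Learner; the choice of hooked layer and the image path are assumptions, and I’m using learn.predict to run the image through the model:

from fastai.vision import *

# hook a layer in the head; which layer to hook is an assumption and
# depends on which features you want
hook = hook_output(learn.model[1][-3])
img = open_image('data/some_cat.jpg')  # hypothetical image path
learn.predict(img)                     # runs the model, which fires the hook
feats = hook.stored                    # activations of the hooked layer
hook.remove()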
8 Likes

Nice! You can probably just use our existing callback for saving activations, or at least simplify your code using our HookCallback:

http://docs.fast.ai/callbacks.hooks.html

Untested, but something like:

from fastai.callbacks.hooks import HookCallback

class StoreHook(HookCallback):
    def on_train_begin(self, **kwargs):
        super().on_train_begin(**kwargs)
        self.acts = []
    # return o so the hook stores each module's output
    def hook(self, m, i, o): return o
    # stash the stored activations after every batch
    def on_batch_end(self, train, **kwargs): self.acts.append(self.hooks.stored)

You can pass a list of modules to the ctor to hook whatever layers you like.
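Usage might look like this (a minimal sketch; selecting modules with flatten_model is just illustrative):

# hypothetical usage: hook the last three modules of the model
sh = StoreHook(learn, modules=flatten_model(learn.model)[-3:])
learn.callbacks.append(sh)
learn.fit_one_cycle(1)
# sh.acts now holds, per batch, the stored outputs of each hooked module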

18 Likes