Share your V2 projects here

That is awesome, would love to check it out when you are ready to share it :slight_smile:

1 Like

For plotting the validation results, you could peek at show_results. Otherwise, yes, there's the old lecture (v2 of the course), and there's the DeViSE notebook, which may be less of a headache to port over.

Are there any resources for finding available datasets? I am thinking of working on a bee classifier (similar to lesson 1 dog and cat classifier) and was wondering how/where I can find a dataset for bees?

Kaggle would be my first go-to. There are tons of datasets on there. (First result after a quick search: https://www.kaggle.com/jenny18/honey-bee-annotated-images)

1 Like

Thanks!

Hi everyone, I did my best to rewrite in fastai2 the DeViSE implementation that Jeremy did in fastai v0.7 for v2 of the course; here you can see my implementation. I used the Tiny ImageNet dataset from Stanford, a subset of ImageNet containing 200 classes of 64x64 images. I used this dataset because ImageNet is huge and I can't work with it on Colab. I didn't train from scratch: since it's a subset of ImageNet, replacing the classifier and training just that part should be enough. If anyone finds a bug, feel free to ping me; I'm still figuring out fastai2. I used the higher-level API here because I couldn't get the DataBlock to work (I'll switch if I figure out how).
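
For anyone curious what the DeViSE idea looks like in fastai2's high-level API, here is a minimal sketch. It assumes a `dls` that yields (image, word-vector) pairs and 300-d embeddings; both are assumptions for illustration, not the notebook's exact code:

from fastai.vision.all import *

def cosine_loss(pred, targ):
    # DeViSE trains the image model to point at the word vector of its class,
    # so the loss is 1 - cosine similarity, averaged over the batch
    return (1 - F.cosine_similarity(pred, targ, dim=1)).mean()

# n_out=300 replaces the usual classification head with a 300-d regression head
learn = cnn_learner(dls, resnet34, n_out=300, loss_func=cosine_loss)
learn.fine_tune(3)  # trains the new head first, then unfreezes the pretrained body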

5 Likes

I finally finished building an image segmentation pipeline for the Kaggle TGS Salt Identification challenge. The solution should be able to get you into the top 1-5%.

The solution is an update to my old repo, which is based on fastai 0.7.

Key features of the notebook:

  • Creating DataBlock (Dataset, Dataloader)
  • Model
    • Create FastAI unet learner
    • Create a custom unet model demonstrating features like
      • Deep Supervision
      • Classifier branch
      • Hyper columns
  • Train on K-Fold
  • Ensemble by averaging
  • Loss function (a combined-loss sketch follows this list)
    • Classifier loss
    • Loss for handling Deep supervision
    • Segmentation loss
  • TTA - Horizontal Flip
  • Create a Submission file.
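
As a rough illustration of how the three loss terms above could combine, here is a hedged sketch; the output structure, weights, and class name are my assumptions, not the repo's exact code:

import torch.nn.functional as F
from torch import nn

class SaltLoss(nn.Module):
    "Hypothetical combined loss: segmentation + deep supervision + classifier branch."
    def __init__(self, ds_weight=0.5, cls_weight=0.05):
        super().__init__()
        self.ds_weight, self.cls_weight = ds_weight, cls_weight
    def forward(self, preds, mask):
        final, aux_list, cls_logit = preds  # assumed model output structure
        # main segmentation loss on the full-resolution head
        loss = F.binary_cross_entropy_with_logits(final, mask)
        # deep supervision: the same mask target applied to intermediate heads
        for aux in aux_list:
            aux = F.interpolate(aux, size=mask.shape[-2:], mode='bilinear', align_corners=False)
            loss += self.ds_weight * F.binary_cross_entropy_with_logits(aux, mask)
        # classifier branch: does the image contain any salt at all?
        has_salt = (mask.flatten(1).sum(1) > 0).float()
        loss += self.cls_weight * F.binary_cross_entropy_with_logits(cls_logit.squeeze(1), has_salt)
        return loss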

I plan to record a code walkthrough and post it here soon; posting this now so I can't back out of doing it. I'm also planning to pick up another competition, probably QuickDraw, and build a complete pipeline. If anyone wants to join me on the journey, please let me know.

5 Likes

Posting some homework from Zach M's class, which implemented Jeremy's lecture (part 2, lesson 13, 2018) on style transfer (Gatys et al., 2015) in fastai v2.

See repository for full write-up: https://github.com/sutt/fastai2-dev/tree/master/style-transfer-hw

I trained most of these models around Feb 10th with the work-in-progress v2 library. I went back and duplicated the work for one of the models today: the API is still in place and everything works, but the results got a lot better. Perhaps just a lucky seed, but it's exciting to see improvements emerge when you haven't done anything :slight_smile:

23 Likes

After learning about fine_tune and trying to explore it further, I found a paper on coral identification in which they used Keras and a ResNet-50 for ~300 epochs and got an accuracy of 83%. Using some of the techniques from the first lesson, chapter 6, and chapter 7 (progressive resizing, test-time augmentation, and presizing), I was able to get 88% accuracy in just 9 epochs! Read about it here

Edit: sorry, it was a 404 for a moment; I briefly rearranged things on the site and it broke the link.
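
In case it helps anyone reproduce the approach, here is a hedged sketch of those three techniques in fastai v2; the dataset path and image sizes are assumptions, not the exact code from the post:

from fastai.vision.all import *

path = Path('corals')  # hypothetical folder of labelled coral images

def get_dls(size):
    # presizing: resize large per item on CPU, then crop/augment to `size` on GPU
    return ImageDataLoaders.from_folder(
        path, valid_pct=0.2, item_tfms=Resize(460),
        batch_tfms=aug_transforms(size=size, min_scale=0.75))

learn = cnn_learner(get_dls(128), resnet50, metrics=accuracy)
learn.fine_tune(4)                     # train quickly at low resolution

learn.dls = get_dls(224)               # progressive resizing: swap in larger images
learn.fine_tune(5)

preds, targs = learn.tta()             # test-time augmentation on the validation set
print(f"TTA accuracy: {accuracy(preds, targs).item():.3f}")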

5 Likes

I worked on getting U-GAT-IT running with fp16. It takes in a picture of a person and maps it to an anime image (CycleGAN training in fp16).
Everything is currently a work in progress, but here are the results and a WIP blog (btw, I'm looking for a job).
Yes, all of this was done in fastai2. I have been working on it since October.
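
For anyone wanting to try mixed precision in their own fastai2 work, the basic switch is a one-liner; U-GAT-IT's GAN training loop needs more adaptation than this sketch shows:

# Any fastai Learner can be flipped to mixed precision and back:
learn = learn.to_fp16()   # wraps training in fp16 with loss scaling
learn.fit_one_cycle(1)
learn = learn.to_fp32()   # restore full precision, e.g. before export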


Outputs: [images]

Inputs: [images]

Edit: forgot to normalize the images; I've also uploaded an input example.

32 Likes

Hey! :wave: I happened to be learning about autoencoders when the invitation for this V2 course came in, so I implemented three experiments in v2: https://github.com/jerbly/fastai2_projects. This was a good way to learn how to implement simple PyTorch models in v2 (small enough to run on CPU), and it includes a custom batch transform class that adds random noise for the denoising autoencoder. :slight_smile:
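
A minimal sketch of what such a noise transform can look like in fastai v2; the class name and noise level are my assumptions, not the repo's exact code:

from fastai.vision.all import *

class AddNoise(Transform):
    "Hypothetical batch transform adding Gaussian noise for a denoising autoencoder."
    def __init__(self, std=0.1): self.std = std
    def encodes(self, x:TensorImage):
        # dispatches on TensorImage, so whether the target also gets noise
        # depends on the tensor types your DataBlock produces
        return x + self.std * torch.randn_like(x)

# used as a batch transform, e.g. batch_tfms=[AddNoise(0.1), *aug_transforms()]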

2 Likes

This is amazing! But the link to Medium is broken right now.
Is this based on a particular lesson in the series?

Nope, completely my own project. I think the link doesn't work unless you're logged into a Medium account.

These are amazing results! So cool

Larger-than-memory datasets

I've created a notebook showing how you can use NumPy arrays larger than memory. It's based on np.memmap. I've used it to train on a 20 GB dataset with 8 GB of RAM.
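
The core trick is just a few lines; here is a hedged sketch (filename, dtype, and shape are assumptions):

import numpy as np

# The array lives on disk; slices are paged in lazily as you index them,
# so RAM usage tracks the batch size, not the dataset size.
shape = (1_000_000, 64, 64, 3)
x = np.memmap('train_images.dat', dtype='uint8', mode='r', shape=shape)
batch = np.asarray(x[:32])   # only this slice is actually read from disk
print(batch.shape, batch.mean())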

12 Likes

Does fastai have libraries for unsupervised / semi-supervised learning? Any thoughts about doing a project in those areas?

1 Like

Look at https://forums.fast.ai/t/fastai2-blog-posts-projects-and-tutorials/65827 where @Epoching shared his blog post explaining fastai2 for semi-supervised problems.

1 Like

Yes! I’m open to any questions and I’m always willing to help :slight_smile:

Hi everyone, I don't know whether this belongs here, but I'm sharing it anyway.
I made a NotifierCallback that notifies you at the end of each epoch through almost any instant messaging (Telegram, Slack, and more), SMS, email, or push notification service.
Callback Code:

class NotifierCallback(Callback):
    "Notify you of the losses and the metrics at the end of each epoch"
    def __init__(self, service_addrs):
        self.service_addrs = L(service_addrs)
        try:
            import apprise
        except ImportError:
            raise ModuleNotFoundError("Apprise module not found. Install it to use this callback.")
        self.apobj = apprise.Apprise()
        # register every notification service URL with Apprise
        for addrs in self.service_addrs:
            self.apobj.add(addrs)

    def begin_fit(self):
        "Replace the default logger with `_notify`"
        self.old_logger, self.learn.logger = self.logger, self._notify

    def _notify(self, log):
        "Notify all services, then call the old logger"
        msg_body = ""
        for name, value in zip(self.recorder.metric_names, log):
            msg_body += f"{name}: {value}\n"
        self.apobj.notify(title="", body=msg_body)
        self.old_logger(log)

    def after_fit(self):
        "Restore the old logger"
        self.learn.logger = self.old_logger

This is made possible by the awesome Apprise library.

Example:

telegram_addrs = f"tgram://{bot_token}/{chat_id}"  # Telegram notification
windows = "windows://"  # Windows desktop notification
service_addrs = [telegram_addrs, windows]
learn.fit_one_cycle(25, lr_max=1e-3, cbs=[NotifierCallback(service_addrs)])

There is a huge list of services supported by Apprise, which you can view on the Apprise GitHub page.

This is useful if you're training a model that takes a long time and you want to do other work instead of frequently checking how it's performing.

23 Likes

@vijayabhaskar Thanks for sharing!

Have you looked at knockknock by Hugging Face? How do the two frameworks compare?

1 Like