Share your V2 projects here

This is amazing! But the link to Medium is broken right now.
Is this based on a particular lesson in the series?

Nope, completely my own project. I think the link doesn't work unless you are logged into a Medium account.

These are amazing results! So cool

Larger than memory datasets

I've created a notebook to show how you can use np.arrays that are larger than memory. It's based on np.memmap. I've used it to train on a 20GB dataset with only 8GB of RAM.
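For anyone curious, here is a minimal sketch of the np.memmap pattern (not the actual notebook; the shape, file name, and batch size below are made up):

import numpy as np

# Hypothetical shape, dtype, and file name; adjust to your dataset.
shape, path = (20_000_000, 256), "big_dataset.dat"

# Write the array to disk once; it never has to fit in RAM.
data = np.memmap(path, dtype="float32", mode="w+", shape=shape)

# Later, reopen it read-only; only the slices you index get paged into RAM.
data = np.memmap(path, dtype="float32", mode="r", shape=shape)
for start in range(0, shape[0], 4096):
    batch = np.asarray(data[start:start + 4096])  # materialise one batch
    # ...feed `batch` to your training loop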

12 Likes

Does FastAI have libraries for unsupervised / semi-supervised learning? Any thoughts about doing a project in those areas?

1 Like

Look at https://forums.fast.ai/t/fastai2-blog-posts-projects-and-tutorials/65827 where @Epoching shared his blog post explaining fastai2 for semi-supervised problems.

1 Like

Yes! I’m open to any questions and I’m always willing to help :slight_smile:

Hi everyone, I don’t know whether this belongs here, but sharing it anyway.
I made a NotifierCallback that notifies you through almost any instant messaging (Telegram, Slack, and more), SMS, email, or push notification service at the end of each epoch.
Callback Code:

class NotifierCallback(Callback):
    "Notifies you of the losses and metrics at the end of each epoch"
    def __init__(self, service_addrs):
        self.service_addrs = L(service_addrs)
        try:
            import apprise
        except ImportError:
            raise ModuleNotFoundError("The apprise module was not found. Install it to use this callback.")
        self.apobj = apprise.Apprise()
        # Register every notification service URL with Apprise
        for addrs in self.service_addrs:
            self.apobj.add(addrs)

    def begin_fit(self):
        "Replace the default logger with _notify"
        self.old_logger,self.learn.logger = self.logger,self._notify

    def _notify(self, log):
        "Notify all services, then call the old logger."
        msg_body = ""
        for name,value in zip(self.recorder.metric_names, log):
            msg_body += f"{name}: {value}\n"
        self.apobj.notify(title="", body=msg_body)
        self.old_logger(log)

    def after_fit(self):
        "Restore the old logger"
        self.learn.logger = self.old_logger

This is made possible by the awesome Apprise library.

Example:

telegram_addrs = f"tgram://{bot_token}/{chat_id}"  # Telegram notification (bot_token and chat_id come from your Telegram bot)
windows = "windows://"  # Windows desktop notification
service_addrs = [telegram_addrs, windows]
learn.fit_one_cycle(25, lr_max=1e-3, cbs=[NotifierCallback(service_addrs)])

There is a huge list of services supported by Apprise, which you can view on the Apprise GitHub page.

This is useful if you're training a model that takes a long time and you want to get other work done instead of frequently checking how your model is performing.

23 Likes

@vijayabhaskar Thanks for sharing!

Have you looked at knock-knock by Hugging Face? How do the two frameworks compare?

1 Like

Yes Sanyam, I'm familiar with Knock Knock, but Apprise supports a huge list of services (50+), while Knock Knock only supports a few (12). You can view all the notification services they support here.

1 Like

This is very cool, I wanted to try something like that myself. However, from your blog I understand you were using batches of three 256-pixel images and still running out of memory on a 2070 GPU?
In that case, as someone just renting a Paperspace machine, I guess I'd better wait for more efficient algorithms or better GPUs to become available.

It depends on the Paperspace machine you get. From what I can see, the smallest Paperspace GPU still has 8GB of GPU RAM, which is the same as the 2070; the other machines have more.

Good point @morgan, I wonder about the training time as well. I might spend a few bucks to see if I can run the code from the original paper on a P5000 and get it to finish.

[Edit]

One NVIDIA V100 32GB GPU.
It takes 4 days.

Never mind then :slight_smile:

1 Like

I used two 2070 Supers, so I had 16GB in total. It took me about 7 days to train, even though I used a lot of memory-saving techniques. I personally do not know of a way to get it to train in a "reasonable" amount of time on an 8GB GPU. I think model architecture changes would have to be made to get it to work on smaller GPUs.

2 Likes

I stumbled across this project – might be a little advanced, but I wonder if a model could be built to automate / streamline this data labeling: https://www.zooniverse.org/projects/edh/rainfall-rescue

Awesome Molly!!

Excellent intuition! This is very similar to what the new ELECTRA algorithm is doing! And they achieve state-of-the-art results!!

I'll try to implement ELECTRA in fastai in the following week, maybe we can collaborate =)

3 Likes

Ah! Didn’t know about that! Looks really cool

1 Like

Been working on 2 projects.

Pupil & Visible Eye Segmentation

This work is a collaboration with Verily to help quickly diagnose dry eye syndrome using pupil and visible-eye-area segmentation. It is also useful for a lot of other similar use cases they have in mind (including replacing the peripheral vision test).

I cannot share the code / snippets yet, but the results look very promising using fastai2.

Bokehlicious Selfies

I was experimenting with PyTorch's pre-trained models and wrote a quick and dirty pipeline for adding bokeh to selfies. The results look pretty good. If you have ideas on how to improve this further, let's talk :slight_smile:

http://rahulrav.com/blog/bokehlicious.html
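In case it's useful for discussion, here is a rough guess at the general recipe, i.e. segment the person with a pretrained torchvision model and blur the rest. This is a minimal sketch, not the pipeline from the blog post; the use of DeepLabV3, the helper name, and the blur radius are all my own assumptions:

import numpy as np, torch
from PIL import Image, ImageFilter
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet101

model = deeplabv3_resnet101(pretrained=True).eval()
normalize = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def add_bokeh(path, blur_radius=15):  # hypothetical helper, not from the blog post
    "Keep the person sharp and blur everything else."
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        out = model(normalize(img).unsqueeze(0))["out"][0]
    person = (out.argmax(0) == 15).numpy()  # class 15 is "person" in the VOC label set
    blurred = np.array(img.filter(ImageFilter.GaussianBlur(blur_radius)))
    result = np.where(person[..., None], np.array(img), blurred)
    return Image.fromarray(result.astype(np.uint8))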

11 Likes

Following the advice of @sgugger, I've reproduced lesson 2 on another dataset. I used Bing image search to get images of the two main types of masks that are being used to protect against COVID-19, i.e. N95 respirators and surgical masks. I built and deployed the model on SeeMe.ai. The model achieves an error rate of 0.02.

Gist is at: https://gist.github.com/asiedubrempong/7d56514519af3d531dc6038d4329e6f4
Github repository: https://github.com/asiedubrempong/fastai-experiments#covid-19-masks

I thought of homemade masks, but I couldn't get data on that.
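The actual code is in the gist above; below is just a rough outline of the lesson-2 style workflow in fastai2, with hypothetical folder and file names:

from fastai2.vision.all import *

path = Path("masks")  # hypothetical layout: masks/n95/ and masks/surgical/
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)
learn.export("mask_classifier.pkl")  # the exported model can then be deployed, e.g. to SeeMe.ai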

7 Likes