Share your V2 projects here

I have written a two-part blog series, part-1 and part-2, about how to build an end-to-end pipeline for an image segmentation problem.

It builds a solution for the TGS Salt Identification Challenge competition on Kaggle.

By following the blog, you should be able to rank in the top 4% of the Kaggle LB. The posts have step-by-step explanations of key concepts like:

  • DataBlocks
  • Transformations
  • Using unet_learner
  • Building a custom U-Net architecture based on se_resnext50
  • Adding hypercolumns
  • Adding custom TTA
  • Doing K-Fold predictions
  • Creating a Kaggle submission file

At the end of the blog, I have also added a few tricks which you can try to improve the solution further.

16 Likes

One small suggestion: instead of converting the mask values from 255 to 1 in change_image, you can try using IntToFloatTensor(div_mask=255) in batch_tfms.

1 Like

What will that do exactly? This is the first time I’ve heard of that :slight_smile: I.e., would this help in a case of, say, 0, 1, 255?

By default the mask tensor gets divided by 1, but for binary segmentation we may want it divided by 255 so the values in the tensor range from 0 to 1.
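In code, the suggestion looks something like this; it’s only a sketch, and the codes and the mask-lookup function here are made up, not from the original post:

from fastai2.vision.all import *

# Hedged sketch: divide {0, 255} masks by 255 at batch time instead of
# rewriting the mask files. Only IntToFloatTensor(div_mask=255) comes from
# the suggestion above; get_msk and the codes are hypothetical.
def get_msk(o): return o.parent.parent/'masks'/o.name   # hypothetical layout

dblock = DataBlock(
    blocks=(ImageBlock, MaskBlock(codes=['background', 'salt'])),
    get_items=get_image_files,
    get_y=get_msk,
    batch_tfms=[IntToFloatTensor(div_mask=255)])  # mask values become 0 and 1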

2 Likes

I’m working on fastcook, a collection of useful recipes for our fastai2-driven workflow.

The idea is to provide examples of how to use the “not so standard” bits of the library. It should be helpful to users of all levels, and while the focus is on DL, it’s not limited to it at all! For example, type dispatch will make your life better in any Python project :grin:
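To give a taste of type dispatch, here’s a tiny sketch using fastcore’s typedispatch; the describe function is made up for illustration:

from fastcore.dispatch import typedispatch

# typedispatch picks the implementation based on the annotated argument type
@typedispatch
def describe(x: int): return f"an int: {x}"

@typedispatch
def describe(x: str): return f"a string: {x!r}"

print(describe(3))     # an int: 3
print(describe("hi"))  # a string: 'hi'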

I’m slowly and steadily adding recipes as I work on other projects. Most of the examples are in the vision domain, since that’s currently my main area of study.

I would like to invite all fastai chefs to add new recipes to the cookbook. It’s all built with Jupyter notebooks using nbdev, so it should be really easy to add new stuff :grinning:

13 Likes

Thank you @lgvaz for sharing. It’s very well done: concise and informative. A couple of weeks ago, I started this thread: Fastai v2 Recipes (Tips and Tricks) - Wiki. Please feel free to add some of your recipes there if you feel like it. I’m presently working on time series classification and forecasting, and I will try to add some recipes to your fastcook repo whenever I have some time.

2 Likes

That’s amazing! I had no idea about this thread; it’s really, really useful. I already see items that could be added to fastcook from there!

I would really appreciate that. If you have any questions on how to add stuff to fastcook, please feel free to DM me :smile:

Sure! I’ll add some of the quicker recipes there :grinning:

3 Likes

I’m exploring the fastai2 library by writing about various NLP methods in a blog (thanks to fastpages!). The first post is simply about finding a dataset in a language other than English (I ended up with Norwegian).

The plan is to test various NLP methods with this dataset, and maybe also MultiFiT at some point. I’m not aware of any pretrained models for MultiFiT (monolingual) in Norwegian, so this would involve training a language model from scratch (I have no idea how to yet). It would be fun to cooperate if anyone has similar interests!
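For anyone curious, a rough sketch of what training a language model from scratch could look like in fastai2; the path and hyperparameters are placeholders, and MultiFiT itself uses a different architecture than the AWD_LSTM shown here:

from fastai2.text.all import *

# Hedged sketch: train an AWD-LSTM language model from scratch
# (pretrained=False) on a folder of Norwegian text files.
path = Path('norwegian_corpus')   # placeholder path
dls = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
learn = language_model_learner(dls, AWD_LSTM, pretrained=False,
                               metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(10, 3e-3)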

My ambitious plans have been derailed by quarantine so far, so I have switched to just adopting fastai2 for my old projects :frowning:

I have implemented metric learning (HardNet) in fastai2 here

6 Likes

As it is Japanese Castle Day today, I made a castle classifier.

Binder: Japanese Castle Classifier

Today is Castle Day in Japan. Castle is read as shiro (城). Today is the 6th day of the 4th month; 4 can be pronounced shi, and 6 as roku, or ro, so today’s date reads shi-ro.

1 Like

For my first ever DL project, I decided to keep it simple and make a Pokemon Gen 1 classifier using a dataset I found on Kaggle.

I was targeting 90% accuracy, and fastai2 got me to 93% after < 90 seconds of training. :joy: I know that’s not a particularly impressive number by today’s standards, but the dataset contained various kinds of images, from the official pictures to drawings by kids, game screenshots, pixel art, etc.

On clear images, it’s performing well…


For a more obscure test I thought I’d try it on one of my plushies, and it got it!

I took Jeremy and Rachel’s advice and blogged about my experience here. You can try the app for yourself on Binder.

It’s been a really interesting couple of weeks. I feel like I’ve learned a bunch already, not only about DL but also Git, fastpages, general Bash, etc. It’s all starting to click.

Thanks to all the people who both post and answer various questions on the forum; I’ve found answers to almost every problem I’ve been having along the way.

Looking forward to @radek’s Git intro later. This community rocks :partying_face:

11 Likes

Thanks, that helped. I’ve updated the code to use the DataBlock API now.

Quick observation:
The training images I used for segmentation were of size 512 x 512, and the test image was of size 1000 x 1000.

What I noticed was that if I set the size during training, say explicitly specifying 224, the prediction on the test image was also 224 x 224.

However, if no size was specified during training, the predicted mask was the same size as the test image.
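In sketch form, the two cases look like this; the DataBlock details are placeholders rather than my actual code:

from fastai2.vision.all import *

# Hedged sketch: with an explicit Resize, test predictions come back at that
# size; without it, prediction keeps the test image's native size.
def get_msk(o): return o.parent/'masks'/o.name   # hypothetical mask lookup

dblock = DataBlock(
    blocks=(ImageBlock, MaskBlock()),
    get_items=get_image_files,
    get_y=get_msk,
    item_tfms=Resize(224))            # predicted masks: 224 x 224
# Dropping item_tfms=Resize(224) leaves the predicted mask at the
# test image's own size (e.g. 1000 x 1000 here).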

When we look at data, it’s helpful to use the tree command in the terminal; we can use it as !tree in a notebook.
I thought it would be a good idea to write a small util that does the same.
It’s not a fastai2 project, but I’ll share it.
It only uses the standard library, and thanks to nbdev it’s available through pip: pip install dir_tree


It can be more informative than tree, and it can be used in the terminal too.
In a notebook, call tree(path), or ‘patch’ Path and use path.tree().
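A quick usage sketch; the import path is an assumption based on the pip package name:

from pathlib import Path
from dir_tree import tree   # import path assumed from the package name

tree(Path('data'))          # print the directory tree of ./data, like !tree

# or, as described above, 'patch' Path so every path object gets .tree();
# how the patching is exposed is an assumption here
Path('data').tree()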
A couple of screenshots:

5 Likes

@jsa169 I have been trying (and loving) deoldify on several family pictures. The results make my grandparents very happy during these COVID-19 times and bring us closer despite the distance :smiling_face_with_three_hearts:

The most amazing results I’ve got so far came from using it on old lab pictures from my pharmaceutical company, F. Hoffmann-La Roche AG, and I would like to share them with you all. With love from Switzerland!

I find it super impressive :heart_eyes:

Trick: to use the Colab notebook from the GitHub repo with Google Drive images, first get a shareable link and then change ‘open’ to ‘uc’, e.g.
https://drive.google.com/open?id=0BwOhk6mEvQCTVFV should be https://drive.google.com/uc?id=0BwOhk6mEvQCTVFV

12 Likes

I’m very excited to share with you a package called timeseries for fastai v2 that I have been working on part-time for the last couple of months. It’s still a work in progress.

This extension mimics the unified fastai v2 APIs used for vision, text, and tabular. Those who have already used fastai2.vision will feel at home with the timeseries APIs. It uses Datasets, DataBlock, and the new TSDataLoaders and TensorTS classes. It has the following mapping:

TensorImage  <---> TensorTS
Conv2D       <---> Conv1D

The package includes 128 univariate and 30 multivariate time series datasets. Using the URLs_TS class (similar to the fastai URLs class), you can play with any of those 158 datasets.

We can train any time series dataset end-to-end with 4 lines of code (here is an example using TSDataLoaders and a multivariate dataset called NATOPS):

path = unzip_data(URLs_TS.NATOPS)
dls = TSDataLoaders.from_files(bs=64, fnames=[path/'NATOPS_TRAIN.arff', path/'NATOPS_TEST.arff'])
learn = ts_learner(dls)
learn.fit_one_cycle(25, lr_max=1e-3) 

Using the InceptionTime architecture and just the fastai2 default settings, we can achieve around 98.5% accuracy in only 20 epochs. The following figure shows some of the prediction results (predicted/true classes):

[figure: sample predictions showing predicted/true classes]

The package also features Class Activation Maps for time series. It offers both CAM and Grad-CAM, as well as user-defined CAMs. At the center of this feature there is a single method called show_cam(), and it is highly configurable. You can even plug in your own CAM function, and the rest is taken care of by show_cam().
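As a rough illustration of what plugging in a CAM function might look like; the show_cam() keyword name below is a guess, so check the timeseries docs for the real signature:

# Hedged sketch only: see the timeseries documentation for show_cam()'s
# actual arguments.
learn.show_cam()   # default CAM at the layer before the FC head

# A user-defined CAM is conceptually a function of layer activations and
# gradients; the keyword 'func_cam' is hypothetical.
def grad_weighted_cam(acts, grads):
    weights = grads.mean(dim=-1, keepdim=True)   # channel-wise gradient weights
    return (weights * acts).sum(dim=1)           # weighted sum over channels

learn.show_cam(func_cam=grad_weighted_cam)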

Below is a simple example from the univariate ECG200 classification task (Normal Heartbeat vs. Myocardial Infarction). As in vision, the colors represent the activation values at a given layer, in this example the one just before the final FC layer. Notice how the Myocardial Infarction plots (2nd, 3rd, and 4th) share similar activation zones that are quite different from those of the Normal Heartbeat plots (1st and 5th). This eases the interpretation of the results obtained with this InceptionTime model.

Please give the timeseries package a try and share your feedback. You might also check out its documentation. If you find it interesting, please share it.

NB: I read some posts here where some newcomers said it is intimidating to see full-blown projects that are indeed impressive (I’m not counting mine among them) in comparison to theirs. To those I would say: what you are presenting here is both great and unheard of in other courses. Considering that your projects were built after attending just a couple of lessons, they are quite impressive. Some of the other projects are implemented by veterans who already have more experience with both fastai and deep learning in general, which explains the differences between projects. Good luck on your journey!

31 Likes

Wow well thank you!

1 Like

That isn’t too far from what I would expect it to look like. UGATIT is a research proof of concept. If you want to get better results, you will probably have to combine the work from other research papers and study this in depth.

Base UGATIT also has very little image augmentation, so you might get better results if you increase the amount of augmentation you are using.
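For instance, a hedged sketch of a heavier augmentation stack in a PyTorch pipeline like UGATIT’s; the specific transforms and values are illustrative, not taken from the UGATIT repo:

from torchvision import transforms

# Illustrative augmentation stack for 256 x 256 image-to-image training;
# tune the values for your own data.
train_tfms = transforms.Compose([
    transforms.Resize(286),                  # upscale slightly...
    transforms.RandomCrop(256),              # ...then random-crop back down
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1),
    transforms.ToTensor(),
])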

Pay very close attention to what the human images you are passing into the network look like, and try to copy the lighting, pose, and crop very closely. Crop is very important: not cropping in a similar way to the selfie portion of the dataset pretty much makes it not work at all.

Hey everyone,

I used last week’s assignment to identify defects in the battery production process. Unfortunately I am unable to share any of the classes or actual images I used, but I have pulled some images from the internet to describe the basics of the task.

Defects often occur at the entrance of the metal casing that holds the battery electrode.

These defects can cause many issues down the line, and the sooner they are identified, the less waste there is in the production process.

I trained a pre-trained resnet34 on around 1,200 images with 15 classes. After around 10 epochs I would save the model, apply a new set of transformations, and train again. Using this technique I reached around 93% accuracy. The application was deployed in a Docker container and has been running successfully.
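In rough code, the train/save/re-augment loop might look like this; everything here, from the folder layout to the transform values, is illustrative rather than my actual setup:

from fastai2.vision.all import *

# Hedged sketch of the technique described above: train, save, rebuild the
# DataLoaders with different augmentations, and keep training.
path = Path('battery_images')   # hypothetical folder-per-class layout
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224),
                                   batch_tfms=aug_transforms())
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fit_one_cycle(10)
learn.save('stage-1')

# swap in a different set of augmentations and continue training
learn.dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224),
                                         batch_tfms=aug_transforms(max_rotate=20., max_zoom=1.3))
learn.fit_one_cycle(10)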

4 Likes

Very cool! Why not resize on the mobile side before sending to the server?