Share your V2 projects here

Do you mind sharing your code snippet and the full error traceback?

I ran the following with no issues:

FastAI Training

config = unet_config(self_attention=True, act_cls=Mish)
learn = unet_learner(dls, resnet50, metrics=acc_camvid, config=config)
learn.fine_tune(20)

PyTorch Model Definition

class DynamicUnetDIY(SequentialEx):
    "Create a U-Net from a given architecture."

    def __init__(
        self,
        arch=resnet50,
        n_classes=32,
        img_size=(96, 128),
        blur=False,
        blur_final=True,
        y_range=None,
        last_cross=True,
        bottle=False,
        init=nn.init.kaiming_normal_,
        norm_type=None,
        self_attention=True, # Here
        act_cls=Mish, # Here
        n_in=3,
        cut=None,
        **kwargs
    ):
...

TorchServe

>>> time http POST http://127.0.0.1:8080/predictions/fastunet_attention @sample/street_view_of_a_small_neighborhood.png

HTTP/1.1 200
Cache-Control: no-cache; no-store, must-revalidate, private
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Pragma: no-cache
connection: keep-alive
content-length: 16413
x-request-id: 35600873-8657-4998-b822-26340bf2bd1a

{
  "base64_prediction": "GhoaGhoaGhoaGh...RERERERERERERHh4eHh4eHh4eHh4e"
}

real    0m0.386s
user    0m0.309s
sys     0m0.030s

Hey, thanks for your answer and the awesome tutorial.
My issue appears to be solved by setting:

torch.backends.cudnn.benchmark = False

I have another question: why do you reconstruct the U-Net in your inference code instead of just using torch.load? In my fastai code I do:

torch.save(learn.model, 'model.pt')

and then in plain PyTorch just do:

model = torch.load('model.pt')

With this, the U-Net architecture comes along in the file.
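As an aside, the approach PyTorch's own serialization docs recommend is saving the state_dict rather than the pickled model object, since the pickle ties the file to the exact class and module layout at save time. A minimal sketch with a stand-in model (the layer sizes here are illustrative, not from the posts above):

```python
import torch
import torch.nn as nn

# Stand-in for the real model; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Save only the learned parameters, not the pickled class.
torch.save(model.state_dict(), "model_weights.pt")

# At load time, rebuild the architecture first,
# then restore the weights into it.
model2 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model2.load_state_dict(torch.load("model_weights.pt"))
model2.eval()
```

The trade-off is exactly what the thread discusses: you need the class definition at load time, but the saved file no longer breaks when the surrounding code moves or is refactored.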

1 Like

Thanks! Great tips that will be very helpful to anyone who runs into the same issue.

As to the reconstruction, the reasons are:

  1. TorchServe eager mode requires an explicit model definition:
--model-file MODEL_FILE
Path to python file containing model architecture. This parameter is mandatory for eager mode models. The model architecture file must contain only one class definition extended from torch.nn.modules.
  2. According to the answer below:

This way is still not bullet proof and since pytorch is still undergoing a lot of changes, I wouldn’t recommend it.

Good to know. I do think that answer is pretty old, though.

  1. I was able to run TorchServe without passing the model definition, only passing the saved model as I showed you above. As you say, it's probably not the right way, so I converted the U-Net to TorchScript.
  2. Do you have any tricks for making inference work in FP16?
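The TorchScript conversion mentioned above can be sketched as follows (TinyNet is a stand-in module, not the actual U-Net; tracing a real U-Net works the same way as long as the forward pass is trace-friendly):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # stand-in for the real U-Net
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()
example = torch.randn(1, 3, 96, 128)

# Trace with an example input; the resulting ScriptModule can be
# loaded later without the Python class definition being available.
scripted = torch.jit.trace(model, example)
scripted.save("model_scripted.pt")

loaded = torch.jit.load("model_scripted.pt")
```

This is why TorchScript sidesteps the --model-file requirement: the architecture travels inside the saved file.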

I am curious how the autonomous-driving folks do segmentation at 30 FPS on FHD images. These U-Net-type models are heavy as hell; the forward pass (eval mode) for the resnet34 U-Net on one FHD image takes 15 GB of VRAM!!
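On the FP16 question, one common approach (my suggestion, not from the thread; torch.autocast in this form needs a reasonably recent PyTorch) is mixed-precision inference with autocast, which roughly halves activation memory on GPU:

```python
import torch
import torch.nn as nn

# Stand-in for the segmentation model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()
x = torch.randn(1, 3, 96, 128)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
x = x.to(device)

# autocast runs eligible ops (convolutions included) in reduced
# precision: fp16 on GPU, bfloat16 on CPU.
dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.no_grad(), torch.autocast(device_type=device, dtype=dtype):
    out = model(x)
```

Another option worth trying is simply `model.half()` with half-precision inputs, though autocast is usually safer because numerically sensitive ops stay in float32.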

Hi Raphael,

I enjoyed reading your article https://tapesoftware.net/fastai-onnx/ and the code works great with your dataset. Unfortunately, I'm having a problem with inference (under Windows) when using my own dataset (jpg images). The model gives wrong/different predictions on Windows compared to the same image on Colab.
On Colab, I got an accuracy of ~90%.
Maybe some parameters need to be changed, perhaps something below:
var imageInfo = new SKImageInfo( …

Thanks in advance
Sam

Hi all,

I’d like to share with those of you interested in time series tasks that I’ve just finished a major update to the tsai (timeseriesAI) library and added lots of new functionality, models and tutorial notebooks. It now works well with fastai v2 and PyTorch 1.7.

If you are interested you can find more details in this blog post or in the repo.

9 Likes

Hi all,

This is my first fastai app. It’s called black bird detector and was trained to distinguish between blackbirds, ravens and crows. I used 100 images of each bird species to train a transfer learning model with a resnet18. After adding image augmentation, the accuracy is roughly 88%.

The app is rendered with Voila and runs in a Binder backend. You can try it out here: black_bird_detector


Cheers!
Andre

6 Likes

I am using fastai in this training software to drive this hardware.

Captured images predict throttle and steering to navigate my basement track. I am doing the training on an NVIDIA Jetson Xavier. Training data is collected by controlling the car with a Bluetooth Xbox game controller. An onboard NVIDIA Jetson Nano does the inference when the car is driving itself.

14 Likes

Hey guys,
I’ve done an in-depth tutorial on Image Colorization using U-Net and conditional GAN and published my project on TowardsDataScience.
You don’t need fancy GPUs or huge datasets to train this model. I’ve developed a strategy which allows you to train the whole model in less than three hours on only 8000 images and still get great results!

Here’s the link: https://towardsdatascience.com/colorizing-black-white-images-with-u-net-and-conditional-gan-a-tutorial-81b2df111cd8
I’ve also uploaded the project on my GitHub repo: https://github.com/moein-shariatnia/Deep-Learning/tree/main/Image%20Colorization%20Tutorial

You can directly open it on Colab with this: https://colab.research.google.com/github/moein-shariatnia/Deep-Learning/blob/main/Image%20Colorization%20Tutorial/Image%20Colorization%20with%20U-Net%20and%20GAN%20Tutorial.ipynb

I hope you enjoy it.

26 Likes

I’m still reading the blog post, but wow, this looks really great! Thank you for sharing!

1 Like

Thank you! Happy to hear that

Hi Moein, hope you’re having a wonderful day.
Wonderful work: well written, clear, precise and enjoyable!
Cheers mrfabulous1 :smiley: :smiley:

2 Likes

Hey Victor,

Thanks for sharing this. It’s very helpful to me.

1 Like

Thank you for your kind words! I’m happy you liked it. Cheers! :smiley:

These look awesome!! Well done – somehow I’d missed the memo of how good GANs are now :sweat_smile:

Yesterday, a new competition was launched on Kaggle: an image classification competition!
I wrote a quick fastai starter that does quite well right now. Hope it is helpful!

https://www.kaggle.com/tanlikesmath/cassava-classification-eda-fastai-starter

8 Likes

Hi ilovescience hope all is well!
As usual great work.

Cheers mrfabuluous1 :smiley: :smiley:

1 Like

Here is my attempt. I had to iterate several times before I managed to get it into production. Thanks to everyone on the forums who helped out with the issues.

Here is my GitHub repo. And my app looks like this:

Does anyone have any suggestions on how I can get rid of the “None” at the bottom? It’s there because I am rendering plt.show()

Thanks,
Sam

@ilovescience Thanks for sharing this notebook. Really interesting to look through.
Regarding the Kaggle competition rule of not allowing internet access in this particular competition: I assume that means you couldn’t simply use ‘resnet18’, for example, like we do in the fastai lessons, because the pretrained weights can’t be downloaded?

You can use pretrained models; you just have to add the model weights as Kaggle datasets and load them from there.

1 Like