from fastai.vision.all import *  # assumed import: provides SequentialEx, resnet50, Mish, nn

class DynamicUnetDIY(SequentialEx):
    "Create a U-Net from a given architecture."

    def __init__(
        self,
        arch=resnet50,
        n_classes=32,
        img_size=(96, 128),
        blur=False,
        blur_final=True,
        y_range=None,
        last_cross=True,
        bottle=False,
        init=nn.init.kaiming_normal_,
        norm_type=None,
        self_attention=True,  # Here
        act_cls=Mish,  # Here
        n_in=3,
        cut=None,
        **kwargs
    ):
        ...
TorchServe
>>> time http POST http://127.0.0.1:8080/predictions/fastunet_attention @sample/street_view_of_a_small_neighborhood.png
HTTP/1.1 200
Cache-Control: no-cache; no-store, must-revalidate, private
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Pragma: no-cache
connection: keep-alive
content-length: 16413
x-request-id: 35600873-8657-4998-b822-26340bf2bd1a
{
"base64_prediction": "GhoaGhoaGhoaGh...RERERERERERERHh4eHh4eHh4eHh4e"
}
real 0m0.386s
user 0m0.309s
sys 0m0.030s
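For reference, the `base64_prediction` field in that response can be decoded client-side with just the standard library. A minimal sketch — the payload bytes below are made up for illustration, not the actual mask returned above:

```python
import base64
import json

# Simulated TorchServe response body: the real payload is the per-pixel
# class-index mask serialized as raw bytes, then base64-encoded.
response_body = json.dumps(
    {"base64_prediction": base64.b64encode(bytes([26, 26, 26, 17, 17, 30])).decode()}
)

payload = json.loads(response_body)
mask_bytes = base64.b64decode(payload["base64_prediction"])
print(list(mask_bytes))  # [26, 26, 26, 17, 17, 30]
```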
Thanks! Great tips that will be very helpful to whoever runs into the same issue.
As to the reconstruction, the reasons are:
TorchServe eager mode requires an explicit model definition, passed via:
--model-file MODEL_FILE
Path to python file containing model architecture. This parameter is mandatory for eager mode models. The model architecture file must contain only one class definition extended from torch.nn.modules.
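That save/load contract can be sketched with a stand-in module (illustrative names only, plain PyTorch — not the actual fastai U-Net):

```python
import torch
import torch.nn as nn

# Stand-in for the DynamicUnetDIY class: TorchServe eager mode imports
# exactly one nn.Module subclass from --model-file and pairs it with the
# serialized state dict.
class UnetStandIn(nn.Module):
    def __init__(self, n_in=3, n_classes=32):
        super().__init__()
        self.head = nn.Conv2d(n_in, n_classes, kernel_size=3, padding=1)

    def forward(self, x):
        return self.head(x)

model = UnetStandIn()
torch.save(model.state_dict(), "fastunet_weights.pth")

# What TorchServe does at load time: instantiate the class, then load weights.
served = UnetStandIn()
served.load_state_dict(torch.load("fastunet_weights.pth"))
served.eval()
print(served(torch.randn(1, 3, 96, 128)).shape)  # torch.Size([1, 32, 96, 128])
```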
According to the answers below:
This way is still not bullet proof and since pytorch is still undergoing a lot of changes, I wouldn’t recommend it.
Good to know. I do think that answer is pretty old, though.
I was able to run TorchServe without passing the model definition, only the saved model, as I showed above. As you say, it is probably not the right way, so I converted the U-Net to TorchScript.
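A minimal sketch of such a TorchScript conversion, using a stand-in model rather than the actual U-Net (assumption: tracing rather than scripting, which needs a trace-friendly forward pass):

```python
import torch
import torch.nn as nn

# Illustrative stand-in; torch.jit.trace works the same way on the real
# model as long as its forward pass has no data-dependent control flow.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU())
model.eval()

example = torch.randn(1, 3, 96, 128)
traced = torch.jit.trace(model, example)
traced.save("unet_traced.pt")

# The traced module reloads without any Python class definition, which is
# what lets TorchServe skip --model-file entirely.
reloaded = torch.jit.load("unet_traced.pt")
print(torch.allclose(reloaded(example), model(example)))  # True
```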
Do you have any tricks for making inference work in FP16?
I am curious how the autonomous-driving people do segmentation at 30 FPS on FHD images. These U-Net-type models are heavy as hell; the forward pass (eval mode) for the resnet34 U-Net on a single FHD image takes 15 GB of VRAM!
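Not from the post, but one common approach to FP16 inference is PyTorch's autocast. A sketch with a stand-in model (the real U-Net would be wrapped the same way):

```python
import torch
import torch.nn as nn

# Illustrative model; the same pattern applies to the full U-Net.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU())
model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
# CUDA autocast runs convolutions in float16; CPU autocast uses bfloat16.
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16
model = model.to(device)
x = torch.randn(1, 3, 1080, 1920, device=device)  # one FHD frame

# Autocast keeps numerically sensitive ops in FP32 while running the
# convolutions in half precision, roughly halving activation memory.
with torch.no_grad(), torch.autocast(device_type=device, dtype=amp_dtype):
    out = model(x)
print(out.shape)
```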
I enjoyed reading your article https://tapesoftware.net/fastai-onnx/ and the code works great with your dataset. Unfortunately, I'm having a problem with inference (under Windows) when using my own dataset (JPEG images). The model gives wrong/different predictions on Windows compared to the same image on Colab.
On Colab, I get an accuracy of ~90%.
Maybe some parameters need to be changed, perhaps something below:
var imageInfo = new SKImageInfo( …
…
I’d like to share with those of you interested in time series tasks that I’ve just finished a major update to the tsai (timeseriesAI) library and added lots of new functionality, models and tutorial notebooks. It now works well with fastai v2 and PyTorch 1.7.
If you are interested you can find more details in this blog post or in the repo.
This is my first fastai app. It’s called black bird detector and was trained to distinguish between blackbirds, ravens and crows. I used 100 images of each bird species to train the transfer-learning model with a ResNet18. After adding image augmentation, the accuracy is roughly 88%.
Captured images predict throttle and steering to navigate my basement track. I do the training on an NVIDIA Jetson Xavier. Training data is collected by controlling the car with a Bluetooth Xbox game controller. An onboard NVIDIA Jetson Nano does the inference when the car is driving itself.
Hey guys,
I’ve done an in-depth tutorial on Image Colorization using U-Net and conditional GAN and published my project on TowardsDataScience.
You don’t need fancy GPUs or huge datasets to train this model. I’ve developed a strategy which allows you to train the whole model in less than three hours on only 8000 images and still get great results!
Yesterday, a new competition was launched on Kaggle. An image classification competition!
I wrote a quick fastai starter that does quite well right now. Hope it is helpful!
Here is my attempt. I had to iterate several times before I managed to get it onto production. Thanks for everyone on the forums to help out with the issues.
Here is my github repo. And my app looks like this:
@ilovescience Thanks for sharing this notebook. Really interesting to look through.
Regarding this competition’s rule of not allowing internet access: I assume that means you couldn’t simply use ‘resnet18’, for example, the way we do in the fastai lessons, because the pretrained weights can’t be downloaded?
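One common workaround — an assumption on my part, not something confirmed in this thread — is to upload the pretrained weights as a Kaggle dataset and load them from a local path. Sketched with a stand-in module:

```python
import torch
import torch.nn as nn

# Stand-in module; with torchvision you would build
# torchvision.models.resnet18(pretrained=False) and load the weight file
# you uploaded as a Kaggle dataset in exactly the same way.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "resnet18_local.pth")  # simulate the uploaded file

offline = nn.Linear(4, 2)  # no download happens here
offline.load_state_dict(torch.load("resnet18_local.pth"))
print(torch.equal(offline.weight, model.weight))  # True
```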