Best way to use my model (Binary Segmentation) on my .NET solution?

Hello,
I created a Binary Segmentation model (like this GitHub repo: link, thanks to @muellerzr), and I get very good, fast results: about 0.5 s per image.

Now I want to use this model (keeping execution under 1 s per image) in my WPF .NET 4.8 application, but I don’t know the best way to do it. The application will run only on a local computer, without an internet connection.

I use:
Python 3.8.5
fastai 2.1.10

There are several possible solutions, such as:

  • ONNX
  • executing a Python process

Thanks for your help.

For simple applications, using fastai’s inference functions should suffice. Have a look at this tutorial.

@muellerzr has also created a nice add-on to fastai for inference:

All of this should work without an internet connection and can run on your local computer’s CPU.

3 Likes

I have zero experience with .NET, so take this with a pinch of salt, but I believe onnxruntime is well supported on .NET because of the Microsoft link. You might have already seen this, but if not, there is a video discussing deploying a model within .NET using onnxruntime. I have done a little work with ONNX recently, and the main things to watch out for are:

  • the conversion of the model from PyTorch/fastai must be correct. I’ve sometimes had a model convert successfully but produce results that are off; the main cause of this is layers not supported by ONNX.
  • you need to mimic the transformations from the training loop, in particular the transformations applied to the validation set. This can be a bit of a pain, and I guess this is where things might have to be done more by hand than you’d like.
  • there are quite a few options you can tune within onnxruntime to get better performance. To be honest, I find the onnxruntime docs a little sparse at times, but you can often find an answer on their GitHub. My suggestion would be to ignore these tuning parameters until you’re confident that your exported ONNX model gives you the same (or very similar) results as your fastai model.

Hopefully that helps a bit. There have been a few other threads in the past about ONNX/.NET which might also have useful information.

1 Like

I’ve been meaning to have a look at TorchSharp myself, as it aims to provide direct PyTorch bindings. It seems there’s currently no JIT support, though; digging through the repo history, it appears to have existed at some point but stopped working with recent PyTorch versions and was removed. That means you can’t just export a model and run it from C# without extra work: you have to rewrite the model in C# and then load the trained weights somehow… kind of a pain for sure, and the reason I didn’t get far with it.
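If you do go the rewrite-in-C# route, one simple way to carry the trained weights over is to dump the `state_dict` to a neutral format such as `.npz`, which is straightforward to parse from other runtimes. A sketch with a stand-in model (the real call would use `learn.model.state_dict()`):

```python
import numpy as np
import torch
import torch.nn as nn

# Stand-in for your trained model.
model = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))

# One named array per parameter, e.g. "0.weight", "0.bias".
state = {k: v.numpy() for k, v in model.state_dict().items()}
np.savez("weights.npz", **state)
```

The C# side then only has to read the arrays back by name and assign them to the corresponding layers of the re-declared model — tedious, but at least the file format is transparent.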

I do have PyTorch models running in Xamarin.Android and Xamarin.iOS, which ironically is less of a hassle (though also a hassle, of a different kind).

1 Like