Using Fastai with C# and Microsoft.ML


I switched from TensorFlow to fastai for my C# project and now I am facing a small problem.
I used the ResNet34 model for training in Python, with ImageNet normalization applied to my DataBunch.

data = ImageDataBunch.from_folder(path, train=".", bs = bs, size=256, num_workers=6, valid_pct=0.2, device=torch.device('cuda'))
learn = cnn_learner(data, models.resnet34, metrics=error_rate)

which I convert to the ONNX format after training:

x = torch.randn(1,3,256,256, requires_grad=False).cuda()
torch_out = torch.onnx._export(learn.model, x, "stage-5-5-256.onnx", export_params=True)

But when I use it in my C# code, I get negative "probabilities" for my two classes (-0.9087, 1.1822), which I think comes from wrong image preparation.
The code for loading the ONNX model:

string inputName = "input_1";
string outputName = "dense_1";

var onnxPipeline = mLContext.Transforms.ResizeImages(
        resizing: ImageResizingEstimator.ResizingKind.Fill,
        outputColumnName: inputName,
        imageWidth: ImageSettings.imageWidth,
        imageHeight: ImageSettings.imageHeight,
        inputColumnName: nameof(ImageInputData.Image))
    .Append(mLContext.Transforms.ExtractPixels(
        outputColumnName: inputName,
        interleavePixelColors: true,
        scaleImage: 1 / 255f))
    .Append(mLContext.Transforms.ApplyOnnxModel(
        outputColumnName: outputName,
        inputColumnName: inputName,
        modelFile: onnxModelPath));

var emptyData = mLContext.Data.LoadFromEnumerable(new List<ImageInputData>());
var onnxModel = onnxPipeline.Fit(emptyData);
return onnxModel;

For the TensorFlow model this code works fine, and my suspicion is that there is a problem with the ".ExtractPixels" part. I think there are some transformations to the image I have to add there, but I don't know which.
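One thing worth checking on the preprocessing side: fastai's `normalize(imagenet_stats)` does more than scale pixels to 0-1; it also subtracts a per-channel mean and divides by a per-channel standard deviation, while the `ExtractPixels` call above only does the `1 / 255f` scaling. Here is a minimal Python sketch of the per-pixel transform fastai applies (the mean/std values are the standard ImageNet statistics that `imagenet_stats` contains), which would need to be reproduced on the C# side as well:

```python
# Standard ImageNet per-channel statistics (RGB, computed on 0-1 scaled pixels);
# these are the values fastai's imagenet_stats provides.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Scale an 8-bit RGB pixel to 0-1, then apply ImageNet normalization."""
    return [((c / 255.0) - m) / s
            for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)]

# A pixel close to the ImageNet mean color lands near zero after normalization;
# pure white maps to positive values, pure black to negative ones.
print(normalize_pixel((124, 116, 104)))
print(normalize_pixel((255, 255, 255)))
print(normalize_pixel((0, 0, 0)))
```

Whether this is the cause of your specific numbers is a separate question, but skipping the mean/std step will shift the model's inputs noticeably.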

Does anyone have experience with the ML.NET framework, or any clues about what I am missing?
Any help would be appreciated!

I hope the code is readable, and I am sorry for any grammatical errors in this topic.

I have not used the .NET ML framework, but you might be affected by the fact that PyTorch by convention puts the softmax in the loss function (while other frameworks tend to put it in the last layer of the model).

Thus the numbers you are seeing are probably raw logits (unnormalized, and thus potentially negative or above one). The easiest way to confirm is to apply a softmax on top of them and see if you get the same accuracy as you had in fastai.
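To illustrate: applying a softmax to the two raw outputs you posted turns them into valid probabilities. A self-contained sketch (the logit values are the ones from your post):

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# The raw outputs from the ONNX model above
probs = softmax([-0.9087, 1.1822])
print(probs)  # both values now lie in (0, 1) and sum to 1
```

So those outputs would correspond to roughly an 11% / 89% split between your two classes, which looks perfectly plausible for a binary classifier.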


That was exactly my problem 🙂