I have just trained a model on the FER dataset (facial emotion recognition) and want to apply it to real-life images.
My goal is to do something simple like:

```python
img = open_image(testimg_loc)
pred_class, pred_idx, outputs = learn.predict(img)
```
However, the FER dataset consists of 48x48-pixel grayscale images, so when I load a test image I imagine I need to apply the same preprocessing.
How do I go about doing that? I figure there is probably some easy way to do so but I haven’t been able to find it yet…
See here in the section Using the Model for Inference.
You just need to pass the image path to Learner.predict.
The Learner will apply the same preprocessing steps used for the validation set.
So that method technically works (albeit with an incorrect classification), but I think it's the wrong way to go about it. The original dataset is 48x48 grayscale, while my new input image is much larger and RGB, so it certainly won't go through the same transformations. I need to downscale and grayscale my input image before I can do anything.
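A minimal sketch of that preprocessing step using Pillow, assuming the model expects 48x48 single-channel input like the FER training data (the function name `preprocess_for_fer` is just illustrative, not part of any library):

```python
from PIL import Image

def preprocess_for_fer(img, size=48):
    """Convert an image to the FER input format: 48x48 grayscale.

    `img` is a PIL Image (e.g. from Image.open(path)). The name of
    this helper is illustrative, not a fastai or Pillow API.
    """
    img = img.convert("L")                          # RGB -> grayscale
    img = img.resize((size, size), Image.LANCZOS)   # downscale to 48x48
    return img

# Example: shrink a larger RGB image down to the training format
big = Image.new("RGB", (200, 150), (120, 60, 30))
small = preprocess_for_fer(big)
print(small.size, small.mode)   # (48, 48) L
```

You could then save the result (or convert it to the tensor/image type your inference call expects) before passing it to the model; whether this reproduces the exact training-time transforms depends on how the original DataBunch was built.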