If anyone has a code snippet on how to predict on a full directory of images please post it here.
I have a learner that is trained and I want to predict on a large set of images.
I’m looking for code snippets to do this.
I’ll work on it in the meantime as well starting from the posts on predicting on a single image.
At the same time I'm wondering why we need to apply transforms at all to images we want to predict on? I could understand if we were doing TTA, but why blindly apply transforms otherwise?
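My current understanding (a minimal NumPy sketch, with hypothetical mean/std values): the validation transforms aren't augmentations, they're deterministic preprocessing (resize, center crop, normalize), and the model expects inputs normalized with the same statistics it was trained on:

```python
import numpy as np

# Hypothetical per-channel stats the model was trained with (ImageNet-like).
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """Apply the same deterministic normalization used at training time."""
    return (img - mean) / std

raw = np.full((2, 2, 3), 0.5)  # a toy HxWxC image with values in [0, 1]
prepped = normalize(raw)

# Without this step the inputs sit in a different numeric range than the
# activations the network learned on, so predictions degrade.
print(prepped.shape)
```

So even without TTA, the deterministic subset of the transforms still has to be applied.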
Got this mostly working:
import glob
import os
import numpy as np

def load_images(directory):
    image_names = glob.glob(os.path.join(directory, "*.jpg"))
    images = []
    for filename in image_names:
        # open_image is fastai's loader (returns a float numpy array in RGB);
        # any loader that returns a float array should work here
        images.append(open_image(filename))
    # note: this assumes all images have the same dimensions
    return np.array(images)
def predict_directory(learn, input_directory, val_tfms):
    learn.precompute = False
    images = load_images(input_directory)
    image_names = glob.glob(os.path.join(input_directory, "*.jpg"))
    for index, input_image in enumerate(images):
        transformed_image = val_tfms(input_image)
        # the model outputs log-probabilities, so np.exp recovers probabilities
        pred = to_np(learn.models.model(V(T(transformed_image[None]).cuda())))
        print(pred, np.exp(pred), image_names[index])
        if np.exp(pred) > 0.5:
            pass  # handle a positive prediction here
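One thing I noticed while debugging: since the final layer is a log_softmax, the raw outputs are log-probabilities over all classes. A small sketch (plain NumPy, hypothetical outputs) of how `np.exp` and `argmax` recover probabilities and the predicted class:

```python
import numpy as np

# Hypothetical raw model output for one image: log-probabilities from a
# log_softmax layer over two classes.
log_probs = np.log(np.array([[0.2, 0.8]]))

probs = np.exp(log_probs)          # back to ordinary probabilities
predicted_class = probs.argmax(1)  # index of the most likely class

print(probs, predicted_class)
```

Note that for a two-class output, `np.exp(pred) > 0.5` compares both class probabilities against the threshold at once; taking the argmax, or indexing just the positive class (`np.exp(pred)[:, 1] > 0.5`), is probably what's intended.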
Not working yet: my validation set images don't get predicted correctly this way, even though the learner reports 91% accuracy…