Hi fastai community,
The code below displays as a black image with only values of 1:

```python
im = imread(fnames)  # skimage.io.imread
im = torch.from_numpy(im).float().unsqueeze(dim=0)
im = Image(im)
tfms = get_transforms()
im = im.apply_tfms(tfms, do_resolve=True)
im
```

The code below displays the true image:

```python
im = open_image(fnames)  # fastai open_image
tfms = get_transforms()
im = im.apply_tfms(tfms, do_resolve=True)
im
```
Going directly from a NumPy array to a fastai `Image` doesn't work for me.
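My current guess (an assumption, happy to be corrected): `skimage.io.imread` returns an H×W×C `uint8` array with values 0-255, while fastai's `Image` expects a C×H×W float tensor scaled to 0-1, so `.float()` alone leaves values far above 1 and the display clamps everything. A minimal sketch of the conversion using only NumPy/torch (the final `Image(...)` wrap, left as a comment, is the fastai part):

```python
import numpy as np
import torch

# Simulate what skimage.io.imread returns: H x W x C, uint8, values 0-255
im_np = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)

# Scale to [0, 1] and reorder HWC -> CHW, which is what fastai's Image expects
im_t = torch.from_numpy(im_np).float().div_(255)  # values now in [0, 1]
im_t = im_t.permute(2, 0, 1)                      # (64, 64, 3) -> (3, 64, 64)

assert im_t.shape == (3, 64, 64)
assert 0.0 <= float(im_t.min()) and float(im_t.max()) <= 1.0
# im = Image(im_t)  # fastai v1 wrapper; apply_tfms should then behave
```

Note the `unsqueeze(dim=0)` in my original snippet adds a batch dimension rather than fixing the channel layout, which may be part of the problem.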
Generally, I'm finding it a bit frustrating to move between `torch.Tensor`, `np.ndarray`, and `Image` when interacting between the libraries. Any idea what I am doing wrong?
Ideally, I would like a way to manually apply transforms in an `on_batch_begin` callback, after other modifications involving NumPy arrays.
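To show what I mean, here is a toy sketch of the kind of per-batch modification I have in mind, written with plain torch (the `augment_batch` function is hypothetical; in fastai v1, my understanding is you would call something like it from a `Callback`'s `on_batch_begin` and return the updated `last_input`):

```python
import torch

def augment_batch(xb: torch.Tensor, p_flip: float = 0.5) -> torch.Tensor:
    """Toy stand-in for a transform: random horizontal flip per batch item.

    xb: batch tensor of shape (N, C, H, W), values in [0, 1].
    """
    flip_mask = torch.rand(xb.size(0)) < p_flip
    xb = xb.clone()                           # avoid mutating the input batch
    xb[flip_mask] = xb[flip_mask].flip(-1)    # flip along the width axis
    return xb

xb = torch.rand(4, 3, 8, 8)
out = augment_batch(xb)
assert out.shape == xb.shape
```

The real version would run actual fastai transforms instead of a flip, which is exactly the part I can't get working from raw arrays.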