@JumpyJason
I tried that, but it doesn't seem to work ):
imgs = test_df['Image'].map(string_to_img)
preds = learn.get_preds(dl=learn.dls.test_dl(imgs))
decoded_preds = learn.get_preds(dl=learn.dls.test_dl(imgs), with_decoded=True)
print(preds)
print(decoded_preds)
output:
(tensor([[ 0.3652, -0.2001, -0.4073, ..., 0.4606, 0.0130, 0.7156],
[ 0.3540, -0.2094, -0.3962, ..., 0.4514, 0.0024, 0.7101],
[ 0.3500, -0.3029, -0.3497, ..., 0.4191, -0.0114, 0.7321],
...,
[ 0.3605, -0.2154, -0.3960, ..., 0.4887, 0.0117, 0.7206],
[ 0.3479, -0.2149, -0.3969, ..., 0.5086, 0.0184, 0.7256],
[ 0.3425, -0.2235, -0.3874, ..., 0.4936, 0.0244, 0.7236]]), None)
(tensor([[ 0.3652, -0.2001, -0.4073, ..., 0.4606, 0.0130, 0.7156],
[ 0.3540, -0.2094, -0.3962, ..., 0.4514, 0.0024, 0.7101],
[ 0.3500, -0.3029, -0.3497, ..., 0.4191, -0.0114, 0.7321],
...,
[ 0.3605, -0.2154, -0.3960, ..., 0.4887, 0.0117, 0.7206],
[ 0.3479, -0.2149, -0.3969, ..., 0.5086, 0.0184, 0.7256],
[ 0.3425, -0.2235, -0.3874, ..., 0.4936, 0.0244, 0.7236]]), None, tensor([[ 0.3652, -0.2001, -0.4073, ..., 0.4606, 0.0130, 0.7156],
[ 0.3540, -0.2094, -0.3962, ..., 0.4514, 0.0024, 0.7101],
[ 0.3500, -0.3029, -0.3497, ..., 0.4191, -0.0114, 0.7321],
...,
[ 0.3605, -0.2154, -0.3960, ..., 0.4887, 0.0117, 0.7206],
[ 0.3479, -0.2149, -0.3969, ..., 0.5086, 0.0184, 0.7256],
[ 0.3425, -0.2235, -0.3874, ..., 0.4936, 0.0244, 0.7236]]))
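For reference, here's a single-item check I want to compare against (a minimal sketch, assuming the same learn and imgs as above); learn.predict is supposed to run the dataloaders' decoding, so I'd expect pixel coordinates from its first return value:
# single-item check (my sketch, untested): learn.predict should apply the
# dataloaders' decode transforms, unlike get_preds(with_decoded=True)
full_dec, dec, raw = learn.predict(imgs.iloc[0])   # fully decoded, loss-decoded, raw activations
print(full_dec)                                    # expecting a TensorPoint in pixel coordinates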
In addition, I found a topic about how to decode with PointScaler:
sclr = PointScaler()
sclr(img)                                       # records the image size that decode needs
dp = sclr.decode(TensorPoint.create(pred[1]))   # maps points from [-1, 1] back to pixel coordinates
This works, but I'm wondering why with_decoded=True doesn't work as expected.
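For completeness, here's a rough batch version of that decode (my own sketch, untested; it assumes string_to_img returns a PILImage and that each raw prediction is a flat x, y, x, y, … vector, which may need adjusting):
# batch version of the PointScaler decode (sketch, untested)
sclr = PointScaler()
decoded = []
for img, p in zip(imgs, preds[0]):
    sclr(img)                                                   # record this image's size for the decode
    decoded.append(sclr.decode(TensorPoint.create(p.view(-1, 2))))
print(decoded[0])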