Bug in ImageList.from_csv and ImageList.from_df API

Hi,

I have noticed some surprising behaviour while using the ImageList API to create DataBunch objects.

[Working on Google Colab]

Here is what I am doing.

I am working with around 1264 images in the base_dir/'Data/ImageData/ModelData/Sl/Train' folder.

  • Try creating a DataBunch object using the data block API from a CSV file with filename and label columns:
    data = (ImageList.from_csv(base_dir, 'Data/Config/sl_labels.csv', folder='Data/ImageData/ModelData/Sl/Train')
            .split_by_rand_pct()
            .label_from_df(label_delim=',')
            .transform(tfms, size=224)
            .databunch())
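For reference, the CSV is assumed to look something like the sketch below (hypothetical filenames and labels; the real sl_labels.csv is not shown). Note that with label_delim=',' fastai splits each label cell on commas, so the labels are treated as multi-category even when every cell holds a single label:

```python
import csv, io

# Hypothetical contents of sl_labels.csv (placeholder filenames/labels)
csv_text = """filename,label
img_0001.jpg,CAT1
img_0002.jpg,CAT2
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
print(len(rows))            # 2
print(rows[0]["label"])     # CAT1

# label_delim=',' splits each cell on commas, so even a single-category
# cell becomes a one-element label *list* (i.e. multi-label targets)
labels = [r["label"].split(",") for r in rows]
print(labels)               # [['CAT1'], ['CAT2']]
```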

Here is what the DataBunch looks like:
ImageDataBunch;

Train: LabelList (1012 items)
x: ImageList
Image (3, 224, 224),Image (3, 224, 224),Image (3, 224, 224),Image (3, 224, 224),Image (3, 224, 224)
y: CategoryList
CAT1, CAT2, CAT3, CAT4, CAT5
Path: gdrive/My Drive/Project1/Data/ImageData/ModelData/Sleeve/Train;

Valid: LabelList (252 items)
x: ImageList
Image (3, 224, 224),Image (3, 224, 224),Image (3, 224, 224),Image (3, 224, 224),Image (3, 224, 224)
y: CategoryList
CAT1, CAT2, CAT3, CAT4, CAT5
Path: gdrive/My Drive/Project1/Data/ImageData/ModelData/Sl/Train;

Test: None


Train the Model
bs = 64
gc.collect()
learn = cnn_learner(data, models.resnet34, metrics=[error_rate, accuracy], bn_final=True)
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-4,1e-2))
interp = learn.interpret()
interp.plot_top_losses(4)
The plot_top_losses call throws an out of bounds error.

Checking the sizes returned by get_preds:
preds, y, losses = learn.get_preds(ds_type=DatasetType.Train, with_loss=True)
preds.size(), y.size(), losses.size()
(torch.Size([960, 5]), torch.Size([960, 5]), torch.Size([4800]))
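The numbers above line up with an element-wise loss rather than a per-item one (a plain arithmetic check; that 960 comes from truncating 1012 train items to whole batches of 64 is my assumption, based on drop_last behaviour on the train dataloader):

```python
n_train, bs, n_classes = 1012, 64, 5

# 1012 items truncated to whole batches of 64 (assuming drop_last) -> 960
n_items = (n_train // bs) * bs
print(n_items)              # 960

# 4800 losses = one value per (item, class) element, not one per item
print(n_items * n_classes)  # 4800
```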

HOWEVER, when I repeat the same steps with a DataBunch created via ImageList.from_folder(), everything works like a charm.

data = (ImageList.from_folder(image_main/'Sl/Train')
        .split_by_rand_pct()
        .label_from_folder()
        .transform(size=224)
        .databunch(bs=64)
        )

Checking the sizes of the same objects shows the difference:
preds, y, losses = learn.get_preds(ds_type=DatasetType.Train, with_loss=True)
preds.size(), y.size(), losses.size()
(torch.Size([960, 5]), torch.Size([960]), torch.Size([960]))
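The mismatch above suggests why plot_top_losses fails. A plain-Python sketch of the indexing problem (illustrative only, not fastai's actual implementation): the top-k positions are taken over the flat losses vector and then used to index the dataset, so positions beyond the dataset length go out of bounds:

```python
n_items, n_classes = 960, 5
losses = list(range(n_items * n_classes))   # 4800 per-element loss values
# pick the 4 largest losses, as plot_top_losses(4) conceptually does
top_idx = sorted(range(len(losses)), key=lambda i: -losses[i])[:4]
print(top_idx)                              # [4799, 4798, 4797, 4796]
print(any(i >= n_items for i in top_idx))   # True: these overflow the dataset
```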

The size of the returned losses differs between the two cases. In the from_csv case, y comes back one-hot with shape [960, 5] — presumably because label_delim=',' makes the labels multi-category — and losses has 960 × 5 = 4800 entries, one per (item, class) element instead of one per item. It seems plot_top_losses then indexes the dataset with positions taken from this flat tensor, which is what causes the out of bounds error.
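Until this is fixed, a possible workaround (a sketch, under the assumption that the 4800 values are laid out item-major, i.e. five consecutive per-class losses for each item; shown in plain Python with integer placeholders rather than real tensors) is to collapse the flat losses back to one value per item:

```python
def per_item_losses(flat_losses, n_classes):
    """Sum each consecutive group of n_classes element losses into one
    per-item loss (assumes item-major layout of the flat vector)."""
    assert len(flat_losses) % n_classes == 0
    return [sum(flat_losses[i:i + n_classes])
            for i in range(0, len(flat_losses), n_classes)]

# Two items, five class losses each (integer placeholders)
flat = [1, 2, 3, 4, 5, 10, 20, 30, 40, 50]
print(per_item_losses(flat, 5))   # [15, 150]
```

With tensors, the equivalent would be losses.view(-1, n_classes).sum(dim=1), again assuming that layout.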

Please look into this. Thanks!