Peculiar problem with read_dir?


I found a peculiar issue while loading a test data set with the ImageClassifierData.from_csv() function. It was loading one fewer test file than available, and which file went missing was random. I started debugging and found that the read_dir function was generating one fewer file name. I suspect this happens because we use the iglob function to create a generator and then call any() on it to check whether there are any test files. While performing that check, any() also consumes one item from the generator, so one fewer test file is loaded into the dataloader. Could you please confirm if my assessment is correct?
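Here is a minimal sketch of the suspected behaviour, independent of fastai: calling any() on a generator pulls (at least) one element to answer the truthiness question, and that element is then gone from any later iteration. The file names below are made up for illustration; they stand in for whatever glob.iglob() would yield.

```python
# A generator of 4 fake file names, standing in for glob.iglob(...)
filenames = (f"img_{i}.jpg" for i in range(4))

# any() pulls the first element to decide truthiness (non-empty strings
# are truthy), returns True, and that element is consumed.
print(any(filenames))   # True

# Only 3 of the 4 names remain for the dataloader to see.
remaining = list(filenames)
print(len(remaining))   # 3
print(remaining)        # ['img_1.jpg', 'img_2.jpg', 'img_3.jpg']
```

The same code with glob.glob() (which returns a list) would not lose an element, because any() on a list does not consume anything.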


Huh that sounds odd. Do you want to try running a little experiment (i.e. create a minimal test case) to check?

@jeremy I have faced the same issue with 2 different datasets/problems that I am working on: one fewer image is loaded from the test folder. For example, the test set for the iceberg challenge has 8424 images.

@jeremy I am experiencing the same issue, i.e., a missing test file. I created a small test2 directory with 4 files; only 3 were loaded.

%time data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv', test_name='test2', val_idxs=val_idxs, suffix='.jpg', tfms=tfms, bs=bs)
print(len(data.test_ds)) # how many images were given in the test set, and how many loaded?
for f in range(len(data.test_ds)): print(data.test_ds.fnames[f])
!ls "data/dogsbreeds/test2/" | head

CPU times: user 140 ms, sys: 4 ms, total: 144 ms
Wall time: 142 ms

I got exactly the same problem while doing the dog breeds challenge from lesson 2.

If you run the body of read_dir directly in the notebook (the code of the function rather than calling the function itself), there is no problem.

The documentation of iglob says this:

Return an iterator which yields the same values as glob() without actually storing them all simultaneously

I tried replacing iglob with glob (in the read_dir function of fastai/) and it seems to solve the problem. I didn't get any memory problems with this dataset; that could maybe happen with larger ones.
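For illustration, here is a hedged sketch of what such a fix could look like. This is not the actual fastai source, just a simplified read_dir-style function showing why glob() is safe where iglob() is not: glob() materializes the full list up front, so checking it for emptiness consumes nothing.

```python
import glob
import os

def read_dir(path, folder):
    # Simplified sketch (not the real fastai implementation): list all
    # files in path/folder and return their names relative to path.
    full_path = os.path.join(path, folder)
    # glob.glob() returns a list, so the emptiness check below does not
    # consume any file names (unlike any() on an iglob() generator).
    fnames = glob.glob(f"{full_path}/*.*")
    if fnames:
        return [os.path.relpath(f, path) for f in fnames]
    raise FileNotFoundError(f"{full_path} folder doesn't exist or is empty")
```

With this version, a directory of 4 files yields all 4 names, at the cost of holding the whole list in memory, which is only a concern for very large directories.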


That was fixed in fastai a while back - sounds like you might need to git pull. We don't use iglob any more.

Oh right, my bad, I hadn't pulled for some time. Thanks!