I am sure this has been asked many times before, but I can't seem to get the existing solutions to work in my situation. I have managed to create a data bunch and train on it successfully:
```python
data = (ImageList.from_df(df, path=PATH, folder=TRAIN, suffix='.png',
                          cols='image_id', convert_mode='RGB')
        .split_by_idx(range(fold*len(df)//nfolds, (fold+1)*len(df)//nfolds))
        .label_from_df(cols=['grapheme_root','vowel_diacritic','consonant_diacritic'])
        .transform(get_transforms(do_flip=False, max_warp=0.1), size=sz, padding_mode='zeros')
        .databunch(bs=bs)).normalize(imagenet_stats)
```
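(As an aside, the `split_by_idx(...)` call is just contiguous k-fold slicing. A pure-Python sketch of that arithmetic, with hypothetical values `n=10`, `nfolds=4` standing in for `len(df)` and `nfolds`:)

```python
def fold_indices(n, nfolds, fold):
    # Contiguous validation slice for one fold, matching
    # range(fold*len(df)//nfolds, (fold+1)*len(df)//nfolds) above
    return range(fold * n // nfolds, (fold + 1) * n // nfolds)

# With n=10 and nfolds=4 the folds tile [0, 10) without gaps or overlap:
folds = [list(fold_indices(10, 4, f)) for f in range(4)]
```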
My test images, which are already of size `sz`, are in the folder `./test`. The question is: how do I create a dataloader so that I can do something like:
```python
preds = []
for batch in test_dl:
    preds.append(learner.model(batch))
```
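(For context, fastai v1 does ship a built-in route for this, so a hand-rolled loop may not be needed. A sketch, assuming `learn` is the trained `Learner` built from the data bunch above; I haven't verified it against this exact setup:)

```python
from fastai.vision import ImageList, DatasetType

# Attach the test images to the existing data bunch; fastai applies the
# validation-set transforms (including the resize to sz) and the
# imagenet normalization to the test set automatically.
learn.data.add_test(ImageList.from_folder('./test'))

# get_preds handles eval mode, no_grad, and device placement for you.
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
```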
My main concern with creating a manual PyTorch `Dataset` and `DataLoader` is that it might read in the RGB channels differently from fastai, and I'm also not sure how to resize the images to size `sz`. That aside, I feel like fastai ought to have this functionality built in already.
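(On the RGB/normalization worry: fastai v1 reads images via PIL in RGB order, scales uint8 pixels to [0, 1], then applies `normalize(imagenet_stats)` channel-wise. The stats are fixed constants, so the per-pixel math is easy to replicate; a pure-Python sketch:)

```python
# imagenet_stats as shipped with fastai: per-channel (mean, std) in RGB order
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_rgb(pixel):
    # pixel: (r, g, b) uint8 values; fastai first scales to [0, 1],
    # then subtracts the mean and divides by the std per channel
    scaled = [c / 255 for c in pixel]
    return [(c - m) / s for c, m, s in zip(scaled, IMAGENET_MEAN, IMAGENET_STD)]

# A mid-grey pixel near the dataset mean lands close to zero in every channel:
normalize_rgb((124, 116, 104))
```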
I would really appreciate some feedback. I've been banging my head on this for the past few hours, reading through forums and trying random solutions without much luck. Sorry if this is simply a dumb question.
Things I’ve Tried:
This is just something I tried; it doesn't have to be part of the answer.
```python
test_dl = (ImageList.from_folder('./test', convert_mode='RGB')
           .split_none())
```
but this doesn't give back a databunch. Also, in the situation where my images weren't already transformed to size `sz`, how could I ask it to transform them?
P.S. Is this easier/different in fastai v2?
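(For what it's worth, in fastai v2 this looks like a one-liner, assuming a trained `Learner` named `learn`; again a sketch I haven't run against this data:)

```python
from fastai.vision.all import get_image_files

# test_dl reuses the training pipeline's item transforms (resize) and
# batch transforms (normalize) to build an unlabeled DataLoader.
test_items = get_image_files('./test')
test_dl = learn.dls.test_dl(test_items)
preds, _ = learn.get_preds(dl=test_dl)
```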