I’m also having this issue. Restarting the runtime didn’t solve it, but I did find a workaround.
I’m running fastai version 2.6.3 and setting up DataLoaders in almost exactly the same way as notebook #04, but I’m still getting the exact same missing-attribute error as the OP:
/usr/local/lib/python3.7/dist-packages/fastai/vision/learner.py in _add_norm(dls, meta, pretrained)
187 stats = meta.get('stats')
188 if stats is None: return
--> 189 if not dls.after_batch.fs.filter(risinstance(Normalize)):
190 dls.add_tfms([Normalize.from_stats(*stats)],'after_batch')
191
AttributeError: 'function' object has no attribute 'fs'
I solved the issue by passing a normalize=False argument to vision_learner, as suggested in this thread: Get a DataLoaders from training and validation DataLoader - #5 by originof
But I have no idea why this works, since I’m still just dipping my toes into ML here.
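For reference, the only change that made the error go away was adding that keyword argument to my learner call (everything else stays the same as the full code at the end of this post):

learn = vision_learner(dls, models.resnet18,
                       loss_func=CrossEntropyLossFlat(),
                       normalize=False)  # skip the pretrained-stats normalization step that was failing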
Looking at the code for the fastai function, this really seems like the kind of thing that shouldn’t throw an error when nothing is specified. Shouldn’t the default be not to normalize the data if there isn’t enough information (i.e. no after_batch.fs) to do so?
I’d like to understand what’s going on here if anyone can explain.
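To make the question concrete, this is roughly the behaviour I was expecting. It’s purely a hypothetical sketch reconstructed from the traceback above, not the actual fastai source, and the extra guard is my own assumption:

# Hypothetical rewrite of _add_norm with the guard I was expecting -- not the real fastai code
from fastai.vision.all import *          # Normalize, Normalize.from_stats, etc.
from fastcore.basics import risinstance  # the curried isinstance seen in the traceback

def _add_norm(dls, meta, pretrained):
    stats = meta.get('stats')
    if stats is None: return
    after_batch = getattr(dls, 'after_batch', None)
    # Plain DataLoaders have no transform pipeline here, so just skip normalization
    if not hasattr(after_batch, 'fs'): return
    if not after_batch.fs.filter(risinstance(Normalize)):
        dls.add_tfms([Normalize.from_stats(*stats)], 'after_batch')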
Here’s my code:
from fastai.vision.all import *  # tensor, Image, DataLoader, DataLoaders, vision_learner, ...

train_xs = [None] * 10
train_ys = [None] * 10
for i in range(len(train_xs)):  # one folder per digit under path/'training'
    filenames = (path/'training'/str(i)).ls().sorted()
    tensors = [tensor(Image.open(o)) for o in filenames]
    train_xs[i] = torch.stack(tensors).float()/255
    train_ys[i] = tensor([i] * len(filenames)).unsqueeze(1)
train_xs = torch.cat(train_xs).view(-1, 28*28)
train_ys = torch.cat(train_ys)
train_dset = list(zip(train_xs, train_ys))
# (similar code to create valid_dset)
train_dl = DataLoader(train_dset, batch_size=4, shuffle=True)
valid_dl = DataLoader(valid_dset, batch_size=4, shuffle=True)
dls = DataLoaders(train_dl, valid_dl)
dls.c = 10  # define number of categories to avoid a runtime error
learn = vision_learner(dls, models.resnet18, loss_func=CrossEntropyLossFlat())
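In case it helps with an explanation: if I inspect the loaders directly, after_batch is just a plain function here rather than a transform pipeline, which I’m guessing is exactly what the .fs lookup in _add_norm trips over:

print(type(dls.after_batch))  # a plain function/method on my DataLoaders, not a Pipeline --
                              # which matches the "'function' object has no attribute 'fs'" error above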