AttributeError: 'list' object has no attribute 'data'

In pascal.ipynb under “Single object detection”, I came across this error. I haven’t had time to dig into it yet, but I wondered if it’s something obvious to somebody who could point me in the right direction.

Thank you!

Hey, I’m also getting the same error when running pascal, and I’m guessing it’s a bug in the validate method of the model - but I’m not sure if this applies to other notebooks as well or just pascal - I haven’t had the time to check.

Changing the validate method here: https://github.com/fastai/fastai/blob/master/fastai/model.py#L204

from

def validate(stepper, dl, metrics):
    batch_cnts,loss,res = [],[],[]
    stepper.reset(False)
    with no_grad_context():
        for (*x,y) in iter(dl):
            y = VV(y)
            preds,l = stepper.evaluate(VV(x), y)
            if isinstance(x,list): batch_cnts.append(len(x[0]))
            else: batch_cnts.append(len(x))
            loss.append(to_np(l))
            res.append([f(preds.data,y.data) for f in metrics])
    return [np.average(loss, 0, weights=batch_cnts)] + list(np.average(np.stack(res), 0, weights=batch_cnts))

to

def validate(stepper, dl, metrics):
    batch_cnts,loss,res = [],[],[]
    stepper.reset(False)
    with no_grad_context():
        for (*x,y) in iter(dl):
            preds,l = stepper.evaluate(VV(x), VV(y))
            if isinstance(x,list): batch_cnts.append(len(x[0]))
            else: batch_cnts.append(len(x))
            loss.append(to_np(l))
            res.append([f(preds.data,y) for f in metrics])
    return [np.average(loss, 0, weights=batch_cnts)] + list(np.average(np.stack(res), 0, weights=batch_cnts))

got things working for me in the Single object detection section.

If someone else can confirm that this validate version works for the other notebooks as well without any issues, I’ll gladly create a PR.

3 Likes

There have been a lot, I mean A LOT, of changes to model.py, but it looks like something went wrong somewhere around this commit:

Just for the heck of it, I upgraded PyTorch to 0.4 and still get the same error, so it’s not a version mismatch. I think your code change is clean and definitely worth creating a PR for.

Thanks for the quick response :slight_smile:

2 Likes

No problem. Created a PR for the same, let’s see how it goes. :slight_smile:

2 Likes

I just stumbled on this issue and took a look in the debugger. So, y here (looking at §4 of the pascal notebook) is a tuple of a bounding-box tensor and a classes tensor. Each one has a .data attribute, but of course the tuple itself doesn’t.
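
To make that concrete, here’s a minimal sketch of the failure mode (the shapes and names below are made up, not the notebook’s actual batch):

import torch
from torch.autograd import Variable

# Illustrative stand-ins for what the dataloader yields as y in this section
bbox = Variable(torch.zeros(64, 4))      # bounding-box targets for a batch
clas = Variable(torch.zeros(64).long())  # class targets for the same batch
y = (bbox, clas)                         # the container itself is a plain tuple/list

bbox.data  # fine: each Variable exposes its underlying tensor via .data
# y.data   # AttributeError: the container has no attribute 'data'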

Yeah, I guess that could’ve been a typo in the commit; it looks like an easy one to make, since Jeremy’s intention in Lesson 9 was to return a tuple of the two y’s.

I wonder if there’s any practical difference in passing y to the metrics instead of y.data.
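
For what it’s worth, here’s a tiny sketch of what .data does on a Variable (illustrative, PyTorch 0.3-style semantics, which is roughly what VV() produces):

import torch
from torch.autograd import Variable

v = Variable(torch.ones(3))
t = v.data         # the same underlying storage, just without the autograd wrapper

print(v.sum())     # value-wise the two behave the same for a metric,
print(t.sum())     # so the difference looks like bookkeeping rather than numbers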

Oh… this has already been merged… 7 days ago. Wow, that’s what ya get for not updating fastai daily :smiley:.

2 Likes

Was this ever fixed? I just pulled the latest and I still crash at the same spot, on this line:

learn.fit(lr, 1, cycle_len=3, use_clr=(32,5))

In the Single object detection section.

Okay…I like this course, but fixing bugs like this is annoying. I know we aren’t supposed to just shift-enter, but I like to know that what I’m working with actually works before I start spending time dissecting it. I do the same thing if I’m looking at code for my job. If it doesn’t run, I’m not going to look at it that much.

Anyways, after a fair amount of debugging, the problem is this:
y is a Python list of length 2 for the Lesson 8 pascal.ipynb notebook.

So the above change to validate is on the right track, but I’m guessing model.py changed out from under this solution. As of July 18, 2018, changing validate from

def validate(stepper, dl, metrics, seq_first=False):
    batch_cnts,loss,res = [],[],[]
    stepper.reset(False)
    with no_grad_context():
        for (*x,y) in iter(dl):
            y = VV(y)
            preds, l = stepper.evaluate(VV(x), y)
            batch_cnts.append(batch_sz(x, seq_first=seq_first))
            loss.append(to_np(l))
            res.append([f(preds.data, y.data) for f in metrics])
    return [np.average(loss, 0, weights=batch_cnts)] + list(np.average(np.stack(res), 0, weights=batch_cnts))

to this

def validate(stepper, dl, metrics, seq_first=False):
    batch_cnts,loss,res = [],[],[]
    stepper.reset(False)
    with no_grad_context():
        for (*x,y) in iter(dl):
            y = VV(y)
            preds, l = stepper.evaluate(VV(x), y)
            batch_cnts.append(batch_sz(x, seq_first=seq_first))
            loss.append(to_np(l))
            if is_listy(y):
                res.append([f(preds.data, y) for f in metrics])
            else:
                res.append([f(preds.data, y.data) for f in metrics])
    return [np.average(loss, 0, weights=batch_cnts)] + list(np.average(np.stack(res), 0, weights=batch_cnts))

The part to focus on above (since everything else is the same) is this addition:

            if is_listy(y):
                res.append([f(preds.data, y) for f in metrics])
            else:

This doesn’t try to get “data” off of y, because y is a list. It now assumes your metric function knows how to handle it… which it sort of does… but not really.

Back in your pascal.ipynb, you have this nice little metrics function:

def detn_acc(input, target):
    _,c_t = target
    c_i = input[:, 4:]
    return accuracy_np(c_i, c_t)

which is probably okay… if you are running on the CPU. But you probably aren’t. See the “_np” at the end of accuracy_np? That means “numpy”, which means you should pull your tensors off the GPU and onto the CPU, so it needs to change like this:

def detn_acc(input, target):
    _,c_t = target
    c_i = input[:, 4:]
    return accuracy_np(c_i.data.cpu(), c_t.data.cpu())

Whew, that’s it… whoa, nope, now I get this error:

mean is not implemented for type torch.ByteTensor

grrrr
So now we open up metrics.py and see that here,

def accuracy_np(preds, targs):
    preds = np.argmax(preds, 1)
    return (preds==targs).mean()

if we pass in a tensor, PyTorch is going to have that == produce a ByteTensor, and it can’t take the mean of that. So I take a cue from what accuracy() is doing and just convert it to a float before taking the mean.

def accuracy_np(preds, targs):
    preds = np.argmax(preds, 1)
    return (preds==targs).float().mean()
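
Just to sanity-check that change in isolation (the values here are made up):

import torch

preds = torch.LongTensor([1, 0, 2])
targs = torch.LongTensor([1, 1, 2])
eq = (preds == targs)      # the comparison yields a ByteTensor of 0s and 1s
# eq.mean()                # raises: mean is not implemented for type torch.ByteTensor
print(eq.float().mean())   # works once converted to float (0.666…)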

That got me past this bug, but only after a fair amount of digging, and digging into stuff that didn’t have much to do with the lesson at hand.

1 Like

Hi all,

Has anyone solved the problem?

None of the suggested solutions helped me… After SpaceCowboy850’s changes I get an error in accuracy_np(c_i.data.cpu(), c_t.data.cpu()):

RuntimeError: cannot call .data on a torch.Tensor: did you intend to use autograd.Variable?
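
As far as I can tell, that happens because on PyTorch 0.3 only Variables have .data, plain tensors don’t, so if c_i / c_t are already tensors the .data call blows up. A tiny illustration:

import torch
from torch.autograd import Variable

t = torch.zeros(3)
v = Variable(t)

v.data    # fine: unwraps the Variable back to its underlying tensor
# t.data  # RuntimeError on 0.3: cannot call .data on a torch.Tensor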

I did a git pull of fastai and that solved it.

Hi ksenyakor,

My first error went away when I did “git pull”, but then I got a similar error to the one you mentioned.

But I could get rid of that error when I changed “accuracy_np(c_i, c_t)” to “accuracy(c_i, c_t)” in the “def detn_acc(input, target):” function.
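
In other words, something like this (the rest of the function is unchanged from the notebook):

def detn_acc(input, target):
    _, c_t = target            # target is the [bounding boxes, classes] pair
    c_i = input[:, 4:]         # class activations come after the 4 bbox outputs
    return accuracy(c_i, c_t)  # fastai's accuracy() works on tensors/Variables directly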

I hope this helps.

1 Like