Target and input sizes are different

Here is my snippet of code:

x,y = dls.one_batch()
x.shape,y.shape
# (torch.Size([2, 3, 1024, 1024]), torch.Size([2, 1024, 1024, 3]))
learn.freeze()
learn.loss_func = custom_loss_func

# finally this line gives me error:
learn.fit(1, 1e-4, wd=1e-5)

# /opt/conda/lib/python3.7/site-packages/torch/_tensor.py:1051: UserWarning: Using a target size 
# (torch.Size([2, 1024, 1024, 3])) that is different to the input size (torch.Size([2, 4, 1024, 1024])). 
# This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
#   ret = func(*args, **kwargs)

# RuntimeError: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3

Why is the input size changed to torch.Size([2, 4, 1024, 1024])?

Hey there,

This is a late reply, but the shape of yb (the target) comes from how you set up your labels, so it is under your control as the user.

In this case, the target is an RGB image of 1024x1024 with 3 channels, stored channels-last as 1024x1024x3, because that is how the label/target was set up. The "input" in the warning is not x itself but the model's prediction, which is channels-first; it shows 4 channels (torch.Size([2, 4, 1024, 1024])) because that is how many channels the model's head outputs.
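For instance, if the idea is to compare the prediction with that channels-last target element-wise, one way to make the shapes line up is to permute the target inside the loss. This is only a minimal sketch (it assumes the model is changed to output 3 channels so the channel counts match, and uses MSE purely as an example):

import torch.nn.functional as F

def custom_loss_func(pred, targ):
    # pred: (batch, C, H, W) from the model; targ: (batch, H, W, C) as the labels were stored
    targ = targ.permute(0, 3, 1, 2).float()   # move channels to dim 1
    # element-wise losses need identical shapes, so the model must output
    # the same number of channels as the target (3 here, not 4)
    return F.mse_loss(pred, targ)

You can also confirm where the 4 comes from by checking the shape of the predictions directly, for example preds, _ = learn.get_preds() and then preds.shape.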

Another example is the regression problem in chapter 6, where the goal is to find the coordinates of the face centre (e.g. [1, 2]).

So xb, the input, is (64, 3, 224, 224): 64 images, each with 3 RGB channels of 224x224,
and yb, the target label, is (64, 1, 2) because the final value is designed to be the [x, y] coordinates of the centre, so each target has size (1, 2).
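A minimal sketch of that kind of setup with the DataBlock API is below; path and get_ctr are placeholders standing in for your image folder and a labelling function that returns the centre point, not the book's exact code:

from fastai.vision.all import *

def get_ctr(fname):
    # hypothetical labelling function: a real dataset would read the face-centre
    # coordinates from an annotation file; this just returns a fixed point as a stand-in
    return tensor([512., 512.])

dblock = DataBlock(
    blocks=(ImageBlock, PointBlock),   # PointBlock makes the target a single (x, y) point
    get_items=get_image_files,
    get_y=get_ctr,
    splitter=RandomSplitter(),
    item_tfms=Resize(224),
)
dls = dblock.dataloaders(path, bs=64)  # `path` is the image folder

xb, yb = dls.one_batch()
xb.shape, yb.shape
# (torch.Size([64, 3, 224, 224]), torch.Size([64, 1, 2]))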