I have the following error:
I get an invalid type error on (torch.cuda.FloatTensor) while running learn.fit.
What did I do:
downloaded the Style Color images and csv file from Kaggle
adapted the csv file a little so the first column is the Id
used the resnet34 architecture (is it the only one directly available on Paperspace?)
Everything works fine until I run learn.fit and hit the above-mentioned error.
Where to look to solve this? Thanks for your help.
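As a side note, the csv adaptation described above (moving the Id into the first column) can be sketched with Python's stdlib csv module. The file content and the column names ("image_name", "label") here are hypothetical, not the actual Kaggle headers:

```python
import csv, io

# Hypothetical input: a labels csv where the image id is NOT the first
# column. The file content and column names are made up for illustration.
raw = io.StringIO("label,image_name\nnail polish,img_001.jpg\nlipstick,img_002.jpg\n")

rows = list(csv.DictReader(raw))

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["image_name", "label"])            # id column first
for row in rows:
    writer.writerow([row["image_name"], row["label"]])

lines = out.getvalue().splitlines()
print(lines[1])                                     # -> img_001.jpg,nail polish
```

The same reordering could also be done in a spreadsheet; the point is only that from_csv-style loaders appear to assume the id comes first.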
Sharing the error message and the code you are using, either copied and pasted as text or as a screenshot, would help others better help you resolve this issue.
I added the notebook so you can follow my steps.
ForumQuestion.pdf (133.6 KB)
Finally it gives the error:
TypeError: eq received an invalid combination of arguments - got (torch.cuda.FloatTensor), but expected one of:
 * (torch.cuda.ByteTensor other)
   didn’t match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
   didn’t match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
@RogerCulemborg thanks for sharing your code; the error message in the PDF seems to be cut off at the end of page 6.
In the Jupyter notebook toolbar there should be a “Create/edit Gist of notebook” button; please look here, as I had to learn the same thing.
@amritv, thanks for sharing your code. Next time, you can press the Gist share button (the yellow highlighted one). It will generate a link to your Jupyter notebook, so Jeremy can replicate the problem quickly.
Try sharing it that way and I will be better able to help.
@RogerCulemborg Just in case you can’t find Gist on the toolbar: you need to install the Jupyter Notebook Extensions.
Thanks for being so helpful, please find my gist below:
I changed some inputs, especially the sz value, to 224, because that value is used by resnet (I found this while reading your annotated notebook). However, now I get an “Expected more than 1 value per channel” error.
With bs=4 I get the torch.cuda.FloatTensor error back.
Expected more than 1 value per channel
Can’t say that I see any issues with your code. It may be a PyTorch issue. What version of PyTorch are you running?
Check this post, right at the end:
I stopped using AWS and moved to my personal laptop, since I am only doing tests for understanding the code.
I struggled with this error for a while (“Expected more than 1 value per channel when training, got input size [1, 1024]”) and tried to figure out what I was doing wrong. I saw that the error is raised by the forward function, which calls F.batch_norm, and I interpret it as meaning that it expects an image of shape [1, 3, 224, 224], for example, or [1, 256, 14, 14], where 256 is the number of…
hope that helps
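To make the quoted explanation concrete: one plausible trigger (an assumption based on this thread, not a confirmed diagnosis) is a final batch containing a single image, since BatchNorm cannot compute a variance over one sample. A quick stdlib check, with a made-up dataset size:

```python
# Sketch: a batch of size 1 reaching a BatchNorm layer cannot be
# normalized (no batch variance over a single sample), which is one
# plausible trigger for "Expected more than 1 value per channel".
def last_batch_size(n_images, bs):
    rem = n_images % bs
    return rem if rem else bs

n_images = 101                        # hypothetical training-set size
for bs in (4, 8, 16):
    lb = last_batch_size(n_images, bs)
    print(bs, lb, "risky" if lb == 1 else "ok")   # bs=4 leaves a lone sample
```

If this is indeed the cause, picking a batch size that doesn’t leave a remainder of 1 (or dropping the last incomplete batch) avoids the singleton batch.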
Seems to be the latest version…
@RogerCulemborg any luck yet?
I checked my version of torch and it’s 0.2.0_4. If it’s not resolved, do you want to try downgrading to this version and see if it works?
downgrading --> no success
another architecture --> no success
I used another dataset (“invasive-species-monitoring”) --> works fine, also with different architectures
Conclusion: something is wrong in the dataset
I studied dataset.py and found that labels are split by spaces
I reviewed the csv file and found a space in one label
Replaced “nail polish” with “nail_polish” --> and everything works fine!
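The bug can be reproduced with plain Python. The helper below is a simplified stand-in for the space-splitting the poster found in dataset.py, not fastai’s actual source:

```python
# Sketch of space-separated multi-label parsing: a single label that
# contains a space is silently split into two labels.
def parse_labels(label_field):
    return label_field.split(" ")

print(parse_labels("nail polish"))   # -> ['nail', 'polish']  (two bogus classes)
print(parse_labels("nail_polish"))   # -> ['nail_polish']     (one class, as intended)
```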
Suggestion: please add some help text to ImageClassifierData.from_csv about the format that is expected from the csv file, because I see quite different formats on Kaggle.
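For reference, the format that seems to be expected, judging from this thread rather than official docs: id/filename in the first column, then one or more labels separated by spaces, so a label name itself must not contain spaces. A hypothetical example:

```python
import csv, io

# Hypothetical csv in the shape from_csv appears to expect (an
# assumption based on this thread, not official documentation).
good_csv = io.StringIO(
    "image_name,tags\n"
    "img_001.jpg,nail_polish\n"
    "img_002.jpg,lipstick eyeliner\n"   # two labels for one image
)
reader = csv.reader(good_csv)
next(reader)                            # skip header
parsed = {image_id: tags.split(" ") for image_id, tags in reader}
print(parsed["img_002.jpg"])            # -> ['lipstick', 'eyeliner']
```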
Thanks for your help.