Sign Language MNIST using Fastai V1 (2019)

I have gone through Lesson 1 of course v3. I am trying to do the “Hello World” exercise using the Sign Language MNIST dataset. The problem I am facing is that the image data itself is provided in a CSV file (in the form of grayscale pixel values).

I don’t understand how to use this data with an ImageDataBunch object. I don’t have much exposure to the fastai library. Reading the docs, I gathered that the ImageDataBunch object wraps up most of the data handling so I can refer to it later on. Any input on using this data in the Lesson 1 flow would be great!

Skip this for now and use a dataset that you already know how to work with. Later on, Jeremy does one example with this kind of dataset, although he implements it in pure PyTorch.
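
For reference, here is a minimal sketch of what that pure-PyTorch approach might look like (this is not Jeremy's actual code). It assumes the Sign Language MNIST CSV layout, i.e. a label column followed by 784 grayscale pixel values per 28x28 image; the file name is illustrative.

```python
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class SignMNISTDataset(Dataset):
    """Reads a CSV where each row is a label followed by 784 grayscale pixel values."""
    def __init__(self, csv_path):
        df = pd.read_csv(csv_path)
        # First column is the label; the remaining columns are pixels in [0, 255].
        self.labels = torch.tensor(df.iloc[:, 0].values, dtype=torch.long)
        pixels = df.iloc[:, 1:].values.astype('float32') / 255.0
        # Reshape each flat row into a 1x28x28 image tensor (channel first).
        self.images = torch.from_numpy(pixels).view(-1, 1, 28, 28)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

# Usage: wrap in a DataLoader and train as usual (path is a placeholder).
train_ds = SignMNISTDataset('sign_mnist_train.csv')
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
```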

If you really want to see how to do it, there is a script online that converts this type of data into image files, and there are also plenty of kernels on Kaggle (if you got the dataset from Kaggle; if not, check out the standard MNIST challenge on Kaggle, which uses the same type of data). Look under the competition kernels for ones by fastai users and you will see how they did it.
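
As a rough sketch of what those conversion scripts do (again assuming the label-plus-784-pixels layout; the paths and folder structure here are just examples, not any particular kernel's code), you could write each row out as a PNG into per-class folders and then use the normal Lesson 1 `from_folder` flow:

```python
from pathlib import Path
import numpy as np
import pandas as pd
from PIL import Image

def csv_to_images(csv_path, out_dir):
    """Writes each CSV row as a PNG under out_dir/<label>/ so that
    ImageDataBunch.from_folder can infer the classes from folder names."""
    df = pd.read_csv(csv_path)
    for i, row in df.iterrows():
        label, pixels = row.iloc[0], row.iloc[1:].values
        # Rebuild the 28x28 grayscale image from the flat pixel row.
        img = Image.fromarray(pixels.reshape(28, 28).astype(np.uint8))
        dest = Path(out_dir) / str(label)
        dest.mkdir(parents=True, exist_ok=True)
        img.save(dest / f'{i}.png')

csv_to_images('sign_mnist_train.csv', 'data/train')
# Then the usual Lesson 1 flow should work, e.g.:
# data = ImageDataBunch.from_folder('data', train='train', valid_pct=0.2, size=28)
```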

Okay! I think I will skip this for now then and use some other dataset. Thanks for confirming that this type of data is not handled directly by the built-in methods.