I was trying to replicate lesson 5 on my own dataset and build a model from scratch. In the lesson, they use the MNIST dataset and don’t use ImageDataBunch; instead, they load the images as tensors and then create a DataLoader using DataBunch.create().
If I understand correctly, to do that you need the images in vector format (i.e. each 28x28 image becomes a vector of size 784). My question is: how can I get all the images in my dataset into that format? When I load them their shape is 3 x 224 x 224 (they are color images). I tried reshaping each image with a for loop over data.train_ds, applying reshape to each image tensor.
For resizing each image, the DataBlock API can help you with that. Look into the item_tfms argument. For example, this is how you’d resize each image to 28 x 28: item_tfms=Resize(28). (Note that Resize takes the target side length, so Resize(784) would give you 784 x 784 images.) If you are using PIL, you’ll need to use the library’s .resize method.
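To illustrate the PIL route, here’s a minimal sketch. The 224x224 image here is a synthetic stand-in (in practice you’d load your own file with Image.open):

```python
from PIL import Image

# Synthetic 224x224 RGB image standing in for one of your dataset images
img = Image.new("RGB", (224, 224), color=(120, 60, 30))

# Resize to 28x28 using PIL's .resize method
small = img.resize((28, 28))
print(small.size)  # (28, 28)
```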
For more information I recommend looking at the notebook 05_pet_breeds.ipynb in the repo. As for flattening, you can take the converted image array without the color channel (28 x 28) and use the torch.flatten() method.
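As a quick sketch of that flattening step (using a dummy 28x28 array in place of a real converted image):

```python
import numpy as np
import torch

# Dummy single-channel 28x28 array; in practice this would come from
# np.array() on a PIL image converted with .convert("L")
gray = np.zeros((28, 28), dtype=np.float32)

t = torch.from_numpy(gray)  # shape: (28, 28)
flat = torch.flatten(t)     # shape: (784,) -- one vector per image
print(flat.shape)
```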
I hope I have answered your question.
Thank you very much @radikubwa for your very quick response. One extra question, how can I take the image array without the channel?
If you have a PIL object (i.e. something you loaded with PIL), you can just use nameofimage.convert("L"). Then you can load it into a NumPy or torch array to see how the image channels change, like this: nameofimagearray = np.array(nameofimage).
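Putting that together, here’s a small sketch showing how the channel dimension disappears after the conversion (again with a synthetic image in place of one loaded via Image.open):

```python
import numpy as np
from PIL import Image

# Synthetic color image; replace with Image.open("yourfile.jpg") in practice
color = Image.new("RGB", (224, 224))
print(np.array(color).shape)  # (224, 224, 3) -- three color channels

gray = color.convert("L")     # single luminance channel
print(np.array(gray).shape)   # (224, 224) -- channel dimension is gone
```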
Thanks very much again for your help!
Hi again @radikubwa. So just to clarify: data_block is different from DataBlock? The first one is from the data_block API mentioned in the course, at least up to lesson 5 (https://docs.fast.ai/data_block.html), and DataBlock (https://dev.fast.ai/data.block) is from a newer version of the fastai library that is still under development and is somewhat for more advanced use of the library?
They are the same idea; I’d say the newer one has more refined arguments that do more, for instance presizing. But the way you use the data blocks is the same. The difference comes in when handling different types of problems. Check out other problems in the docs at docs.fast.ai.