Hi muellerzr, hope you're well.
I was trying to fix the Colab/torchvision issue until 03:00 GMT. I had run the lesson one notebook earlier in the day to make my own classifier, and I couldn't understand why it wasn't working.
Hi @mrfabulous1! I saw the numerous issues, so first I checked the three fastai repositories to see if something had been pushed. It hadn't. That told me it was (probably) PyTorch, since the errors were coming from the DataLoaders. From there I checked whether torch or torchvision had any updates within four hours of the post; torchvision was the culprit, having just released 0.5.0.
Hi NandoBr, great post!
Your meetup page looks good.
Do you have an English translation of the Heroku tutorial? Or should I use Google Translate?
I have a Heroku account that I haven't used in ages, so I will host my next classifier on it.
Importing basics means that to use anything from the basics library we need to write basics.myFunc, whereas importing everything from basics lets us just write myFunc.
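To make the distinction concrete, here is a quick sketch using Python's standard `math` module as a stand-in for `basics` (since `basics` itself comes from fastai, which you'd need installed):

```python
# Importing the module keeps its names namespaced:
import math
math.sqrt(16)  # must qualify the name with the module

# Importing names *from* the module puts them directly in our namespace:
from math import sqrt
sqrt(16)  # no prefix needed
```

The wildcard form `from basics import *` works the same way, pulling every public name into the current namespace at once.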
First of all, I would like to thank you for all the efforts and the nice course!
I was wondering if it's possible to input more than two blocks (x and y) in the DataBlock. For example, I would like to input both an image and text together to infer a label. Or, more specific to vision, an image and a mask to get a prediction.
Yes! Eventually we will cover this with bounding boxes, but it is possible. In your DataBlock you can pass N blocks to `blocks`, then set `n_inp=2` to have the first two be your inputs.
i.e.:

```python
DataBlock(blocks=(ImageBlock, ImageBlock, CategoryBlock), n_inp=2)
```

(plus the other DataBlock information)
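The convention behind `n_inp` can be sketched in plain Python without fastai installed: given the tuple of blocks, the first `n_inp` become the model's inputs and the rest become the targets. `split_blocks` is a hypothetical helper for illustration, not a fastai function:

```python
# Hypothetical sketch of the n_inp convention: with N blocks, the first
# n_inp are treated as inputs and the remainder as targets.
def split_blocks(blocks, n_inp):
    """Return (inputs, targets) the way DataBlock interprets n_inp."""
    return blocks[:n_inp], blocks[n_inp:]

blocks = ("ImageBlock", "ImageBlock", "CategoryBlock")
inputs, targets = split_blocks(blocks, n_inp=2)
# inputs  -> ("ImageBlock", "ImageBlock")
# targets -> ("CategoryBlock",)
```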
Regarding the transforms: yes, I was wondering whether you can apply different transforms to the different inputs, i.e. transforms for images, transforms for text, and so on, and, if so, how you specify that in the code.
I am still getting familiar with the transforms myself, so I haven't looked into that. Perhaps post in another thread or the v2 chat? That way it can help others when you get an answer (and they'll find it more easily than buried in this thread).
Sorry @mrfabulous1, but I don't have the deployment tutorial in English yet. It's important to point out that the tutorial is for fastai v1, not fastai2. I believe it will need to be updated for the new library's requirements.
I've updated the schedule above. This week, along with what was originally posted, we will go over what deployment looks like using a Render template (which in reality is just Starlette) for most any scenario (images, text, and tabular), running it on your local machine, and exploring and navigating the source code. I'll release a link tomorrow morning!
Also, next week (week 3) we will go over K-Fold validation.
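For anyone who wants a preview, the core of K-Fold validation is just rotating which slice of the data serves as the validation set. Here's a minimal standard-library sketch of the index splitting (in practice you'd likely use something like scikit-learn's `KFold` instead):

```python
# Minimal K-Fold index splitter: each of the k folds takes a turn as
# the validation set while the remaining indices form the training set.
def k_fold_indices(n_items, k):
    """Yield (train_idx, valid_idx) pairs for k folds over n_items examples."""
    indices = list(range(n_items))
    fold_size = n_items // k
    for fold in range(k):
        start = fold * fold_size
        # the last fold absorbs any remainder when n_items % k != 0
        end = n_items if fold == k - 1 else start + fold_size
        valid_idx = indices[start:end]
        train_idx = indices[:start] + indices[end:]
        yield train_idx, valid_idx

for train_idx, valid_idx in k_fold_indices(10, k=5):
    print(len(train_idx), len(valid_idx))  # prints "8 2" for each fold
```

You would then train one model per fold and average the validation metrics across folds.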