Side remark: the formatting of your text is a bit off. The triple backticks ``` at the end of your first sentence should be at the beginning of the next line.
Hi, yes, I started from scratch following the setup instructions. But I had already created the fast_template so I thought that may have caused an issue. Thanks for the links – I will take a look at them.
We often talk to people who overestimate both the constraints, and the capabilities of deep learning. Both of these can be problems: underestimating the capabilities means that you might not even try things which could be very beneficial; underestimating the constraints might mean that you fail to consider and react to important issues.
In a very humble tone: looking at the continuity of the text, should the opening line be "We often talk to people who underestimate both the constraints" instead of "We often talk to people who overestimate both the constraints"?
You can look at the requirements.txt I used to deploy a fastai2 model to Azure Functions (which is similar to AWS Lambda). You can remove the Azure-specific package when you deploy to Heroku. I do use the CPU versions of PyTorch and Torchvision, but for some reason I also needed ipykernel. I'm not sure whether I need to narrow my imports (currently I do a `from fastai2.vision.all import *` in the deployed model service; I wasn't sure of the minimal set of imports). Anyway, my zipped package came to about 300 MB, even with ipykernel and the host of dependencies it pulled in. It would be good if fastai2 decoupled the Jupyter-related dependencies for deployment. Also, you can pass the `--no-cache-dir` option to pip install to reduce the size, since you may have pip caches taking up space.
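As a rough illustration (not the actual file from that deployment), a CPU-only requirements.txt might look something like this; the exact version pins are assumptions you would adapt to your setup:

```
# Hypothetical requirements.txt sketch for a CPU-only fastai2 deployment.
# Version pins are assumptions -- pick the pairing that matches your install.
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.4.0+cpu
torchvision==0.5.0+cpu
fastai2
# needed in my case even though the service never runs a notebook
ipykernel
```

The `-f` (`--find-links`) line points pip at the PyTorch wheel index so the `+cpu` builds can be resolved; without it, pip pulls the much larger CUDA-enabled wheels from PyPI.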
Here is the question from the study group:
When creating a new DataBlock using batch_tfms, are the transformations applied at random to each batch as it is passed to our CNN?
And when using the item_tfms argument, is the selected transform applied only once to each item in the whole dataset, or does it get re-applied each epoch?
It gets re-applied each time you need to get that item. So it’s applied each epoch.
That depends on the transformation. Normalize will be applied identically to every batch, while Rotate will be applied randomly.
Being an item_tfm or a batch_tfm does not determine whether the transformation is random: you can have a random or a deterministic item_tfm or batch_tfm.
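A framework-free sketch of that distinction (toy functions, not the fastai API): transforms run every time an item is fetched, so a deterministic transform like Normalize gives the same result each epoch, while a random one like Rotate draws a fresh parameter each time.

```python
import random

# Toy illustration (names made up, not fastai code): transforms are
# re-applied every time an item is fetched, i.e. once per epoch.

def deterministic_tfm(x):
    # e.g. Normalize: same output for the same input, every epoch
    return x / 255.0

def random_tfm(x, rng):
    # e.g. Rotate: a fresh random parameter is drawn on every application
    return x + rng.uniform(-1.0, 1.0)

def fetch_item(raw, rng):
    # both transforms run each time the item is requested
    return random_tfm(deterministic_tfm(raw), rng)

rng = random.Random(42)
epoch1 = fetch_item(200.0, rng)
epoch2 = fetch_item(200.0, rng)
# the deterministic part is identical both times; the random part differs
```

Whether such a transform is registered as an item_tfm (per image, on the CPU) or a batch_tfm (per batch, typically on the GPU) is independent of whether it is random.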
There is even a trick Jeremy describes (progressive resizing) to train your network on the same images but at different sizes, as a form of data augmentation…
Pseudo steps:
1. create data_224 (size 224)
2. create network with data_224
3. train network
4. save network as "some_name_224"
5. create data_299 (size 299)
6. create network with data_299
7. load saved network "some_name_224"
8. train network
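The save/load steps above work because a CNN with adaptive pooling produces a head input of fixed shape regardless of image size, so the saved weights fit both networks. Here is a framework-free toy of that idea (all names made up, not fastai code): a model whose head only sees a pooled, size-independent feature can keep training from its saved weights after the "images" grow.

```python
# Toy sketch of progressive resizing: train on small images, save the
# weights, then continue training the same weights on larger images.
# Works because avg_pool yields a size-independent feature, analogous to
# adaptive pooling before a CNN's head.

def avg_pool(image):
    # collapse an image (list of rows of floats) to one mean value
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def predict(image, w, b):
    return w * avg_pool(image) + b

def train_step(images, labels, w, b, lr=0.1):
    # one pass of plain gradient descent on squared error
    for img, y in zip(images, labels):
        err = predict(img, w, b) - y
        w -= lr * err * avg_pool(img)
        b -= lr * err
    return w, b

# "train" on 2x2 images and save the weights...
small = [[[0.0, 0.0], [0.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]]
labels = [0.0, 1.0]
w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(small, labels, w, b)
saved = (w, b)

# ...then resume from the saved weights on 3x3 images
large = [[[0.0] * 3] * 3, [[1.0] * 3] * 3]
w, b = saved
for _ in range(200):
    w, b = train_step(large, labels, w, b)
```

In real fastai code the analogue of "create data_299 / load saved network" is rebuilding your data at the larger size and loading the previously saved model into the new learner before training again.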