That `size` parameter is not the batch size as I think you intend; it is the image size, so you are actually resizing to 64 (the longest dimension, as fastai takes single-dimension sizes). I'd have to dig to confirm, but I think this will make it upsize with the default transforms rather than just not crop. The batch size (the `bs` parameter) is being left at its default, which is also 64, hence this was hard to spot.
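To make the distinction concrete, here is a minimal sketch of the one-call form with both parameters spelled out (the `path`, 224 and 32 are placeholder values, not from your code):

```python
from fastai.vision import *

# Hypothetical folder of images arranged in train/valid class subfolders.
path = 'data/my_images'

data = ImageDataBunch.from_folder(
    path,
    ds_tfms=get_transforms(),
    size=224,  # image size: images are resized so 224 is the longest side
    bs=32,     # batch size: this is the parameter you were after
).normalize(imagenet_stats)
```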
As you seem pretty capable in TF, I would note that `ImageDataBunch.from_*` is mainly intended for very new learners. Using the separate data block methods is generally the best way to go, as it keeps the steps separate and helps avoid issues like this (given the myriad different things you're trying to provide parameters for in a single function). Looking at the source for `ImageDataBunch.from_folder` you can see it's doing:
```python
src = (ImageList.from_folder(...)
       .split_by_folder(...)
       .label_from_folder(...))
```
The next bit isn’t so clear from the source, but it’s like:
```python
data = (src.transform(get_transforms(), size=...)
        .databunch(bs=...)
        .normalize(imagenet_stats))
```
(showing where that `size` was going and where the batch size would go).
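Putting the two halves together with explicit values (again a sketch; the path and the numbers are placeholders you'd swap for your own):

```python
from fastai.vision import *

# Hypothetical layout: data/my_images/train/<class>/... and .../valid/<class>/...
path = 'data/my_images'

src = (ImageList.from_folder(path)
       .split_by_folder(train='train', valid='valid')
       .label_from_folder())

data = (src.transform(get_transforms(), size=224)  # image size goes here
        .databunch(bs=64)                          # batch size goes here
        .normalize(imagenet_stats))
```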
There's an `init` parameter to `cnn_learner` that takes an init function.
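So if you want a different weight initialisation you can just pass one in, e.g. (sketch; `resnet34` is only an example architecture, and `kaiming_normal_` is in fact the default):

```python
from fastai.vision import *
import torch.nn as nn

# `data` is an ImageDataBunch built as above.
learn = cnn_learner(data, models.resnet34, metrics=accuracy,
                    init=nn.init.kaiming_normal_)
```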
On performance, if you haven't already, you might want to have a poke through `fastai.layers`; some of the magic is in well-defined units in there.