Seems like https://arxiv.org/abs/1804.07612 claims that small minibatch sizes are actually better in some cases. From the abstract:
“The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.”
The nightly version is 1.0.0.dev20190403 py3.6_cuda10.0.130_cudnn7.4.2.0 pytorch.
I had a situation where the PyTorch release version was also installed. It happened when I did
conda update -c fastai fastai
I removed the release version, but now notebooks 07 and 06 fail on
import torch.nn.functional
I removed fastai=1.0.51=1 (build 1) and replaced it with fastai=1.0.50.post1.
That clears the
import torch.nn.functional
issue, but not the missing ScriptModule. The with Hooks cell also still fails on
v = x.var((0,2,3), keepdim=True)
with:
TypeError: var(): argument ‘dim’ (position 1) must be int, not tuple
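For anyone hitting the same TypeError: as far as I can tell, the release build’s Tensor.var only accepts a single int for dim, while the nightly accepts a tuple (which is why these notebooks want the nightly). A minimal sketch of an equivalent computation that also works on the release build (the shapes here are made up):

import torch

x = torch.randn(8, 3, 4, 4)  # hypothetical activations: (batch, channels, h, w)
nc = x.shape[1]
# on the nightly: v = x.var((0,2,3), keepdim=True)
# on the release, put channels first, flatten the rest, and reduce over dim 1:
v = x.transpose(0, 1).contiguous().view(nc, -1).var(1).view(1, nc, 1, 1)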
There is a cell:
from torch.jit import ScriptModule, script_method, script
from typing import *
which was missing from the version I pulled from git at 18:30 PDT.
Now that I have changed back to the previous version of fastai and added the missing cell to 07_batchnorm.ipynb, the notebook is running, though it has paused at cell 19 (get_learn_run).
6.33am: I am going back to bed. Thanks for your help.
RE ScriptModule: replace it with nn.Module, as per the later post from jph00.
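Concretely, a hypothetical sketch of that substitution (I don’t have the original cell in front of me, so the class name and decorator are illustrative): swap the base class and drop the jit decorators, after which the torch.jit imports are no longer needed.

from torch import nn

# before: class BatchNorm(ScriptModule):  # needed torch.jit and @script_method
class BatchNorm(nn.Module):               # plain module, no jit imports required
    def forward(self, x):
        return x  # actual body unchanged, apart from removing @script_method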
No, the purpose of mom (and eps) is to make training more stable, rather than to decrease the loss for a particular batch. So their gradients don’t help with the task they’re there for!
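For concreteness, here is a sketch of why that matters in a hand-rolled BatchNorm (written from memory to resemble the lesson’s version, so details may differ): mults and adds are nn.Parameters and receive gradients, while mom and eps are plain floats and the running stats are registered buffers, so the optimizer never touches any of them.

import torch
from torch import nn

class BatchNorm(nn.Module):
    def __init__(self, nf, mom=0.1, eps=1e-5):
        super().__init__()
        self.mom, self.eps = mom, eps  # plain attributes: no gradients
        self.mults = nn.Parameter(torch.ones(nf, 1, 1))   # learnable scale
        self.adds  = nn.Parameter(torch.zeros(nf, 1, 1))  # learnable shift
        # running statistics: saved with the model, but never trained
        self.register_buffer('vars',  torch.ones(1, nf, 1, 1))
        self.register_buffer('means', torch.zeros(1, nf, 1, 1))

    def update_stats(self, x):
        m = x.mean((0, 2, 3), keepdim=True)
        v = x.var((0, 2, 3), keepdim=True)
        self.means.lerp_(m, self.mom)  # exponential moving average
        self.vars.lerp_(v, self.mom)
        return m, v

    def forward(self, x):
        if self.training:
            with torch.no_grad(): m, v = self.update_stats(x)
        else:
            m, v = self.means, self.vars
        x = (x - m) / (v + self.eps).sqrt()
        return x * self.mults + self.adds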
It’s still just the filter dimension. Remember that all layers of a neural net have a number of “channels” or “filters” - it doesn’t matter what type of data was in the input.
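For example (shapes made up), the per-channel reduction looks the same whether the activations came from images or from 1D sequences; the channel axis is dim 1 either way:

import torch

imgs = torch.randn(8, 16, 28, 28)  # (batch, channels, height, width)
seqs = torch.randn(8, 16, 100)     # (batch, channels, sequence length)

img_means = imgs.mean((0, 2, 3))   # one mean per channel: shape (16,)
seq_means = seqs.mean((0, 2))      # one mean per channel: shape (16,)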