Lesson 1 In-Class Discussion

Create smaller mini-batches, or get more GPUs/memory.

Where do you set the mini-batch size?

You can see it further down in the notebook; 64 is the default.

1 Like
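Under the hood, the `bs` argument ends up as the batch size of a data loader, which just slices the dataset into fixed-size chunks. A toy pure-Python sketch of that chunking (not the fastai implementation):

```python
def minibatches(data, bs=64):
    """Yield successive mini-batches of size bs (the last one may be smaller)."""
    for i in range(0, len(data), bs):
        yield data[i:i + bs]

# 256 samples with bs=64 gives 4 mini-batches per epoch.
batches = list(minibatches(list(range(256)), bs=64))
```

In the course notebooks the same knob is typically the `bs` keyword passed when building the data object.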

Why does the learning rate finder work? Does it pick 30 or N mini-batches, train on them, and record the loss? Isn’t that just a local error per mini-batch? How is the learning rate it finds optimal for the global optimization?

2 Likes
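The finder implements Leslie Smith’s LR range test: it grows the learning rate geometrically across mini-batches and records the loss after each step. Each individual loss is indeed noisy and local, but the *trend* of loss vs. learning rate is informative enough to pick a good order of magnitude; it isn’t a globally optimal rate. A toy sketch of the idea (one made-up parameter, not the fastai implementation):

```python
def lr_range_test(step_fn, n_batches=30, lr_min=1e-5, lr_max=1.0):
    """Grow the lr geometrically each mini-batch, recording the loss after
    each step. One then plots loss vs. lr and picks a rate where the loss
    is still dropping steeply, not the minimum of the curve."""
    mult = (lr_max / lr_min) ** (1 / (n_batches - 1))
    lr, lrs, losses = lr_min, [], []
    for _ in range(n_batches):
        loss = step_fn(lr)  # one training step at this learning rate
        lrs.append(lr)
        losses.append(loss)
        lr *= mult
    return lrs, losses

# Toy "model": minimise f(w) = w^2 by gradient descent on one parameter.
w = [5.0]
def step(lr):
    grad = 2 * w[0]
    w[0] -= lr * grad
    return w[0] ** 2

lrs, losses = lr_range_test(step)
```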

You may still find it useful: you can see which “types” of inputs are giving you trouble by looking at the values of the features.

I tried to install via conda, but conda can’t find it.

You need to downgrade to Python 3.5 and run `pip3 install opencv-python`.

1 Like

So, calling `lr_find()` is a better approach than manually setting the learning rate?

Thanks! I’ll try it!

Yes! It is.

1 Like

I installed via `pip install opencv-python`, but I also see `conda install opencv`.

I would not do this.

2 Likes

I think I’ll install Anaconda with Python 3.6.

1 Like

Why did you pick a spot on the learning rate graph where the loss was still improving, and not the lowest point?

9 Likes

Is there any logic behind choosing the batch size?
As in, if I have x GiB of VRAM, should I set the batch size to y?

1 Like

I believe this course requires Python 3.6.

1 Like

It depends on the number of parameters in your neural net.

1 Like

The simplest advice is just to try (-:.

1 Like
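For a rough feel for why VRAM caps the batch size: stored activations scale linearly with it. A back-of-envelope sketch (the layer shapes below are made up, and real usage also includes parameters, gradients, optimizer state and framework workspace, so in practice you still just try and back off on out-of-memory errors):

```python
def activations_bytes(batch_size, shapes, bytes_per_elem=4):
    """Rough fp32 memory for the activations stored during one forward pass.

    shapes: list of (channels, height, width) feature-map shapes.
    """
    total_elems = sum(c * h * w for (c, h, w) in shapes)
    return batch_size * total_elems * bytes_per_elem

# A few hypothetical conv feature maps for a 224x224 input:
shapes = [(64, 112, 112), (128, 56, 56), (256, 28, 28)]
mb = activations_bytes(64, shapes) / 2**20  # about 343 MiB at batch size 64
```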

You will have a network with 3 outputs in this case (one per class). Each output is labelled - that’s how you know which probability belongs to which class (even in the cat/dog example, the two outputs are labelled).

They will each spit out a probability, and the probabilities add up to 1. You pick the one with the highest probability.

E.g., if you have trained a model to detect dog, cat and mouse: given an image, the network might spit out 0.8, 0.1 and 0.1 - this suggests it thinks the image is that of a dog.

4 Likes

I’m getting the following error when trying this command. Is anybody else seeing this, or do I need to reinstall CUDA or something?

learn = ConvLearner.pretrained(resnet34, data, precompute=True)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-20-faabcdaf100f> in <module>()
----> 1 learn = ConvLearner.pretrained(resnet34, data, precompute=True)

~/fastai/part1v2/fastai/courses/dl1/fastai/conv_learner.py in pretrained(self, f, data, ps, xtra_fc, xtra_cut, **kwargs)
     87     def pretrained(self, f, data, ps=None, xtra_fc=None, xtra_cut=0, **kwargs):
     88         models = ConvnetBuilder(f, data.c, data.is_multi, data.is_reg, ps=ps, xtra_fc=xtra_fc, xtra_cut=xtra_cut)
---> 89         return self(data, models, **kwargs)
     90 
     91     @property

~/fastai/part1v2/fastai/courses/dl1/fastai/conv_learner.py in __init__(self, data, models, precompute, **kwargs)
     80         elif self.metrics is None:
     81             self.metrics = [accuracy_multi] if self.data.is_multi else [accuracy]
---> 82         if precompute: self.save_fc1()
     83         self.freeze()
     84         self.precompute = precompute

~/fastai/part1v2/fastai/courses/dl1/fastai/conv_learner.py in save_fc1(self)
    121         if len(self.activations[0])==0:
    122             m=self.models.top_model
--> 123             predict_to_bcolz(m, self.data.fix_dl, act)
    124             predict_to_bcolz(m, self.data.val_dl, val_act)
    125             if self.data.test_dl: predict_to_bcolz(m, self.data.test_dl, test_act)

~/fastai/part1v2/fastai/courses/dl1/fastai/model.py in predict_to_bcolz(m, gen, arr, workers)
     10     m.eval()
     11     for x,*_ in tqdm(gen):
---> 12         y = to_np(m(VV(x)).data)
     13         with lock:
     14             arr.append(y)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
     65     def forward(self, input):
     66         for module in self._modules.values():
---> 67             input = module(input)
     68         return input
     69 

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    252     def forward(self, input):
    253         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 254                         self.padding, self.dilation, self.groups)
    255 
    256 

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/functional.py in conv2d(input, weight, bias, stride, padding, dilation, groups)
     50     f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
     51                _pair(0), groups, torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
---> 52     return f(input, weight, bias)
     53 
     54 

RuntimeError: CUDNN_STATUS_INTERNAL_ERROR