Wiki: Lesson 1

(Nithin Reddy) #315

Hi, when we create an array of 3 learning rates, are all the layers divided evenly into groups and then the rates applied to those groups?
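For what it's worth, here is how I picture it: the three rates map to three layer *groups* rather than being spread evenly over every layer. A minimal sketch of the idea (the layer names and group boundaries below are made up, not fastai's actual split):

```python
# Sketch: an array of 3 learning rates maps onto 3 layer *groups*,
# not onto individual layers evenly. Names/boundaries are hypothetical.
layers = ["conv1", "conv2", "conv3", "conv4", "conv5", "fc1", "fc2"]
groups = [layers[:3], layers[3:5], layers[5:]]  # early convs, later convs, head
lrs = [1e-4, 1e-3, 1e-2]                        # smallest rate for earliest layers

layer_lr = {layer: lr for group, lr in zip(groups, lrs) for layer in group}
print(layer_lr["conv1"])  # 0.0001 -- pretrained early layers barely move
print(layer_lr["fc2"])    # 0.01   -- the new head trains fastest
```

The intuition being that the earliest pretrained layers need the least adjustment, so they get the smallest rate.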

(Semihcan) #316

Interesting result: for the 'homework' for week 1 I tried to classify stock images of women vs. Barbie doll images (it was my wife's idea). I was surprised that I only got an accuracy of 0.5, like a coin flip. Does anyone know why resnet would not perform well here? The results vary from iteration to iteration, but the accuracy hovers around 50%. In this run, for example, you can see that most of the predictions are Barbies (<0.5).

edit: I think the reason may be that I only used 20 images in the train and validation sets. In lesson 1, Jeremy said using 10 images per class would be OK, but as I listen to lesson 2, I hear him suggest 'a couple hundred'. So maybe my problem is that I did not download hundreds of images, although I thought that might not be necessary since the model was pre-trained.
I am guessing that I will gain more understanding as the course progresses.

(Mutlucan Tokat) #317

Is augmented data stored on the hard drive, or is it only kept in memory?


I was wondering the same thing.
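From what I understand, augmented images are generated on the fly for each batch and live only in memory; nothing extra is written to disk. A toy sketch of the pattern (the flip transform below is a stand-in for fastai's real transforms):

```python
import random

def flip_horizontal(img):
    # Toy stand-in for a real augmentation: reverse each row of the "image".
    return [row[::-1] for row in img]

def augmented_stream(images, epochs):
    # Re-generate random variants every epoch; nothing is saved to disk.
    for _ in range(epochs):
        for img in images:
            yield flip_horizontal(img) if random.random() < 0.5 else img

images = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
variants = list(augmented_stream(images, epochs=3))
print(len(variants))  # 6 -- one augmented copy per image per epoch, all in memory
```

Because the variants are recomputed each epoch, the model sees fresh augmentations without the dataset growing on disk.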

(Todd Richard Johnson) #319

This seemed like an interesting problem that might have real world implications, such as determining whether a passenger in a car pool lane is real or a dummy. I made up a larger set of images (only around 100 of each type) and tried the lesson 1 approach. Although this set is still too small, I could easily get around 90% accuracy on the validation set. Sometimes I could get around 96% with very low validation loss. However, the validation set is quite small, so it doesn’t take much to drop accuracy.
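To put a number on that sensitivity: assuming (hypothetically) ~100 images per class and a 20% validation split, the validation set holds about 40 images, so each misclassified image moves accuracy by a couple of points:

```python
# Hypothetical: ~40 validation images (20% of ~200 total).
val_size = 40
per_image = 100 / val_size
print(f"each image is worth {per_image:.1f}% accuracy")   # 2.5%
print(f"three extra mistakes cost {3 * per_image:.1f}%")  # 7.5%
```

So a swing from 96% down to around 90% can be just two or three unlucky images.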

The dataset, instructions on creating it, and my notebook exploring the data are available here:

(Benjamin Shaw) #322

I followed this to fix it: Jupyter notebook KeyError: 'allow_remote_access'

Updated the key c.NotebookApp.ip to 'localhost' in the file $HOME/.jupyter/


Hey guys,

Here is a minor update to the function plot_val_with_title so it works with multiclass image classification:

def plot_val_with_title(idxs, title):
    # Load the validation images at the given indices.
    imgs = [load_img_id(data.val_ds, x) for x in idxs]
    # Look up the predicted class name for each image.
    pred_labels = np.array(data.classes)[preds[idxs]]
    # Highest predicted probability per image (log_preds holds log-probabilities).
    pred_labels_prob = np.exp(np.max(log_preds[idxs], axis=1))
    title_probs = ["{}:{:.2f}".format(lab, prob)
                   for lab, prob in zip(pred_labels, pred_labels_prob)]
    return plots(imgs, rows=1, titles=title_probs, figsize=(16, 8)) \
        if len(imgs) > 0 else print('Not found.')
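In case a usage example helps, here is how the indices might be computed before calling it. The arrays below are dummies standing in for the notebook's `log_preds`, `preds` and `data.val_y`:

```python
import numpy as np

# Dummy stand-ins: log-probabilities for 6 validation images over 3 classes.
log_preds = np.log(np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.8, 0.1],
                             [0.2, 0.2, 0.6],
                             [0.5, 0.3, 0.2],
                             [0.1, 0.1, 0.8],
                             [0.3, 0.4, 0.3]]))
val_y = np.array([0, 1, 2, 1, 2, 0])  # true labels
preds = np.argmax(log_preds, axis=1)  # predicted class per image

correct = np.where(preds == val_y)[0]  # indices of correct predictions
print(correct)  # [0 1 2 4]; these could be passed to plot_val_with_title
```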

(joshua) #324

I'm currently working through fast.ai lesson 1 on a Paperspace machine. I have pulled the latest fastai repo and have updated the Python/Anaconda libraries as shown here

In the section Our first model: quick start

I am running into the following error when running this code snippet.

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)

Error Below:
0%| | 0/360 [00:00<?, ?it/s]

TypeError                                 Traceback (most recent call last)
      1 arch = resnet34
      2 data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
----> 3 learn = ConvLearner.pretrained(arch, data, precompute=True)
      4 learn.fit(0.01, 3)

~/fastai/courses/dl1/fastai/ in pretrained(cls, f, data, ps, xtra_fc, xtra_cut, custom_head, precompute, pretrained, **kwargs)
    112         models = ConvnetBuilder(f, data.c, data.is_multi, data.is_reg,
    113             ps=ps, xtra_fc=xtra_fc, xtra_cut=xtra_cut, custom_head=custom_head, pretrained=pretrained)
--> 114         return cls(data, models, precompute, **kwargs)
    116     @classmethod

~/fastai/courses/dl1/fastai/ in __init__(self, data, models, precompute, **kwargs)
     98         if hasattr(data, 'is_multi') and not data.is_reg and self.metrics is None:
     99             self.metrics = [accuracy_thresh(0.5)] if else [accuracy]
--> 100        if precompute: self.save_fc1()
    101        self.freeze()
    102        self.precompute = precompute

~/fastai/courses/dl1/fastai/ in save_fc1(self)
    177         m = self.models.top_model
    178         if len(self.activations[0])!=len(
--> 179            predict_to_bcolz(m,, act)
    180         if len(self.activations[1])!=len(
    181            predict_to_bcolz(m,, val_act)

~/fastai/courses/dl1/fastai/ in predict_to_bcolz(m, gen, arr, workers)
     16     m.eval()
     17     for x,*_ in tqdm(gen):
---> 18         y = to_np(m(VV(x)).data)
     19         with lock:
     20             arr.append(y)

~/fastai/courses/dl1/fastai/ in VV(x)
     67 def VV(x):
     68     '''creates a single or a list of pytorch tensors, depending on input x. '''
---> 69     return map_over(x, VV_)
     71 def to_np(v):

~/fastai/courses/dl1/fastai/ in map_over(x, f)
      6 def is_listy(x): return isinstance(x, (list,tuple))
      7 def is_iter(x): return isinstance(x, collections.Iterable)
----> 8 def map_over(x, f): return [f(o) for o in x] if is_listy(x) else f(x)
      9 def map_none(x, f): return None if x is None else f(x)
     10 def delistify(x): return x[0] if is_listy(x) else x

~/fastai/courses/dl1/fastai/ in VV_(x)
     63 def VV_(x):
     64     '''creates a volatile tensor, which does not require gradients. '''
---> 65     return create_variable(x, True)
     67 def VV(x):

~/fastai/courses/dl1/fastai/ in create_variable(x, volatile, requires_grad)
     51     if type(x) != Variable:
     52         if IS_TORCH_04: x = Variable(T(x), requires_grad=requires_grad)
---> 53         else: x = Variable(T(x), requires_grad=requires_grad, volatile=volatile)
     54     return x

~/fastai/courses/dl1/fastai/ in T(a, half, cuda)
     39         a = to_half(a) if half else torch.FloatTensor(a)
     40     else: raise NotImplementedError(a.dtype)
---> 41     if cuda: a = to_gpu(a, non_blocking=True)
     42     return a

~/fastai/courses/dl1/fastai/ in to_gpu(x, *args, **kwargs)
     88 def to_gpu(x, *args, **kwargs):
     89     '''puts pytorch variable to gpu, if cuda is available and USE_GPU is set to true. '''
---> 90     return x.cuda(*args, **kwargs) if USE_GPU else x
     92 def noop(*args, **kwargs): return

TypeError: _cuda() got an unexpected keyword argument 'non_blocking'
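That error usually means the installed PyTorch predates 0.4: `.cuda()` only accepts `non_blocking` from 0.4 on (older versions called the argument `async`), so updating PyTorch, or pulling a fastai version matched to your PyTorch, should clear it. The shape of the compatibility shim involved, sketched here without torch (`old_style_cuda` is a stand-in, not the real call):

```python
def call_with_fallback(fn, *args):
    # Try the post-0.4 keyword first, then fall back to the pre-0.4 name.
    try:
        return fn(*args, non_blocking=True)
    except TypeError:
        # `async` later became a reserved word, so pass it via a dict.
        return fn(*args, **{"async": True})

def old_style_cuda(x, **kwargs):
    # Simulates a pre-0.4 .cuda(): it only knows the old keyword.
    if "non_blocking" in kwargs:
        raise TypeError("_cuda() got an unexpected keyword argument 'non_blocking'")
    return x

print(call_with_fallback(old_style_cuda, 42))  # 42 -- reached via the fallback path
```

The fastai source already tries to branch on the version (the `IS_TORCH_04` check in the traceback), which is why keeping the repo and PyTorch in sync matters.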

(Bikash Gyawali) #325

Hello Rachel,

I am following the online MOOC and running into library problems. Could you please tell me which versions of pytorch, spacy, fastai and torchtext need to be installed?
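In the meantime, a quick way to report what is currently installed (it runs even if some of the packages are missing):

```python
import importlib

def installed_versions(names):
    # Return "<name> <version>" per library, or note that it is missing.
    report = []
    for name in names:
        try:
            module = importlib.import_module(name)
            report.append(f"{name} {getattr(module, '__version__', 'unknown')}")
        except ImportError:
            report.append(f"{name} not installed")
    return report

for line in installed_versions(["torch", "spacy", "fastai", "torchtext"]):
    print(line)
```

Posting that output alongside the question usually makes it much easier for others to spot a mismatch.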