Part 2 Lesson 8 wiki

Should we change from `['bbox']` to `['bbox_new']` as below? Otherwise, `bbox_str` won't capture the new orientations.

largest_bbox['bbox_str'] = largest_bbox['bbox_new'].apply(lambda x: ' '.join(str(y) for y in x))
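For reference, a standalone sketch of what that `.apply` does: it joins a bbox's coordinates into a single space-separated string (here with plain Python lists; in the lesson the bboxes live in a pandas column):

```python
# Join a bounding box's coordinates into one space-separated string,
# the same transform the lambda above applies to each row.
def bbox_to_str(bbox):
    return ' '.join(str(coord) for coord in bbox)

print(bbox_to_str([96, 155, 269, 350]))  # -> 96 155 269 350
```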

3 Likes

Oh yes. You’re right. I’ll update the gist in a while.

Thank you.

Edit: Done.

1 Like

The argument to the `open_image` function has to be cast to a string, so use it like this: `open_image(str(IMG_PATH/trn_fns[i]))`

I have an error in `F.softmax(predict_batch(learn.model, x), -1)`:

x,y=next(iter(md.val_dl))
probs=F.softmax(predict_batch(learn.model,x),-1)
x,preds=to_np(x),to_np(probs)
preds=np.argmax(preds,-1)

NameError: name 'predict_batch' is not defined

I have searched for `predict_batch` in the whole notebook but it isn't there. Maybe it was defined in a past lecture. Please help me.
Thanks in advance.
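For anyone else hitting this: `predict_batch` was defined inside the fastai 0.7 library itself (in fastai/model.py), not in the notebook. A hedged sketch of roughly what it does, reconstructed from memory, so check your own fastai source rather than trusting the exact signature:

```python
import torch
import torch.nn as nn

def predict_batch(m, x):
    """Rough reconstruction of fastai 0.7's predict_batch."""
    m.eval()                            # disable dropout, use running BN stats
    if hasattr(m, 'reset'):
        m.reset()                       # clear hidden state on stateful (RNN) models
    with torch.no_grad():               # newer PyTorch idiom; 0.3-era code used Variables
        return m(x)

# Tiny usage example with a throwaway model:
model = nn.Sequential(nn.Linear(4, 3), nn.Dropout(0.5))
out = predict_batch(model, torch.randn(2, 4))
```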

1 Like

thank you

Just found this really great intro to matplotlib, focusing on the OO API: https://realpython.com/python-matplotlib-guide/
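A minimal sketch of the OO style the guide teaches (explicit Figure/Axes objects instead of the implicit pyplot state machine); the Agg backend line is just so it runs headless, and the output filename is arbitrary:

```python
import matplotlib
matplotlib.use('Agg')            # non-interactive backend, works without a display
import matplotlib.pyplot as plt
import numpy as np

# OO API: create the Figure and Axes explicitly, then call methods on them,
# rather than using module-level plt.plot / plt.title calls.
fig, ax = plt.subplots(figsize=(6, 4))
xs = np.linspace(0, 2 * np.pi, 100)
ax.plot(xs, np.sin(xs), label='sin(x)')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('sine, OO-style')
ax.legend()
fig.savefig('sine.png')          # arbitrary example path
```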

15 Likes

Same issue here. I've installed ctags but cannot find symbols.

Shift+Command+F works for me as well.

Hi, after installing ctags, please add

`"python.workspaceSymbols.ctagsPath": "ctags"`

to your user settings.json file in Visual Studio Code. Symbols now work for me.
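For context, that line goes inside the top-level JSON object of settings.json. The `enabled` key below is an assumption on my part (it existed in the classic Python extension) and may not be needed on your setup:

```json
{
    "python.workspaceSymbols.ctagsPath": "ctags",
    "python.workspaceSymbols.enabled": true
}
```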

These predictions come from a fully trained model as in the notebook:

I just wanted to say: wow. This is so impressive that a neural net can do this! And that is building on top of resnet that was designed to do something quite different.

This is amazing

5 Likes

To come up with the number 25088, you can remove the `nn.Linear()` part and simply check the size of the flattened final layer.

head_reg4 = nn.Sequential(Flatten())
learn = ConvLearner.pretrained(f_model, md, custom_head=head_reg4)
learn.opt_fn = optim.Adam
learn.crit = nn.L1Loss()
learn.summary()

It will show you

('Flatten-123',
 OrderedDict([('input_shape', [-1, 512, 7, 7]),
              ('output_shape', [-1, 25088]),
              ('nb_params', 0)]))

at the end of the output.
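In other words, 25088 isn't magic; it's just the product of the final feature map's dimensions (512 channels on a 7x7 grid, which is what the ResNet backbone produces for 224x224 input):

```python
# Flatten turns a (batch, 512, 7, 7) activation into (batch, 512*7*7),
# which is the in_features the following nn.Linear needs.
channels, height, width = 512, 7, 7
in_features = channels * height * width
print(in_features)  # -> 25088
```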

2 Likes

Hi @yggg, quitting pdb properly (with `q`/`quit`) prevents your Jupyter kernel from hanging: https://github.com/ipython/ipython/issues/10516.

1 Like

I found a few things.

Moving in Atom: a handy navigation manual.

atom-ide-ui. Among other things, it lets you find all references to a function (either this or ide-python gives you hover documentation like VSC). It's sort of a base package for IDE functionality.

ide-python builds atop that and allows you to search symbols / function declarations in the current project, not just the current file. However, it requires the Python language server to work, which is maintained by Palantir, so I don't know how shady/safe that is. It also lets you hover over functions/classes for documentation, even for out-of-project imports, and lets you CMD-click on a function to go straight to its declaration, even out-of-project. I haven't seen this work in all cases (it worked for scipy.ndimage imports but not sklearn.metrics), but I've been able to CMD-click directly into the NumPy source code with it.

atom-ctags enables the built-in Atom search features. It builds a ctags file of recognized symbols, per project (I think VSC does this automatically behind the scenes), and it lets you use CMD-Shift-R (Mac) to search symbols in a project (the "opim" search Jeremy did).


The functionality does come with a price. On my MacBook, enabling atom-ide-ui adds a solid half-second to Atom's start time; enabling it with ide-python makes that almost a full second, and it feels longer.

Having played with it a bit, I think if I want to keep Atom's speed and minimalism, I'd stick with atom-ctags to search symbols or go to definitions (CMD-Shift-Down, and CMD-Shift-Up to come back), although it doesn't always work: I'm not sure when symbols/ctags do or don't get generated.

I may check out VSC for Mac (or just Visual Studio?) if I find I need the functionality, but that’s what I’ve found so far.

4 Likes

I was wondering why we use the `predict_batch` function for making predictions in the case of the largest-item classifier but don't use it in other cases. As far as I can see, this code:

x,y = next(iter(md.val_dl))
predict_batch(learn.model, x)

and this code:

x,y = next(iter(md.val_dl))
learn.model(VV(x))

give the same output. Why do we use `predict_batch` then?

Just because it ensures that eval and reset are called first. I don’t use it in some of the lessons since I want to teach how to do it manually.

1 Like

All the videos are "Unlisted" on YouTube. Now that Part 2 is officially launched, is this intended?

The links in the timeline are broken.

Many thanks - fixed now.

Re: learning Greek letters: my comprehension skyrocketed when I picked up *Mathematical Notation: A Guide for Engineers and Scientists* and could finally search for symbols online, vocalize them, etc. Highly recommend it.

1 Like