Thank you very much Jeremy for your detailed explanation.
According to your suggestion, I will certainly test the results of the model by changing the corner points and will update in case of performance difference.
Slight clarification, it remembers it for the life of the shell. If you e.g. shut down your machine and start it up again, you’ll need to activate your conda environment again.
We may be discussing different things. I was referring to the selection of env/interpreter in vscode. This is remembered across sessions.
Please don’t feel bad. I am learning this myself
So, I may have been too quick last time. You actually want to look at this:
('Conv2d-100',
OrderedDict([('input_shape', [-1, 256, 14, 14]),
('output_shape', [-1, 512, 7, 7]),
('trainable', False),
('nb_params', 1179648)])),
This corresponds to the start of the following block:
(7): Sequential(
(0): BasicBlock(
(conv1): Conv2d (256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
<removed>
So, if you put all the info together:
This gives you an output of 512 x 7 x 7
Best,
A
…because the stride is 2x2
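The shape arithmetic above is easy to verify directly in PyTorch; a minimal sketch, with the layer sizes taken from the summary output above:

```python
import torch
import torch.nn as nn

# The conv from the block above: 256 -> 512 channels, 3x3 kernel,
# stride 2, padding 1. With a 14x14 input the output side is
# floor((14 + 2*1 - 3) / 2) + 1 = 7, i.e. the stride of 2 halves it.
conv = nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1, bias=False)

x = torch.randn(1, 256, 14, 14)  # a batch of one 256 x 14 x 14 feature map
y = conv(x)
print(y.shape)  # torch.Size([1, 512, 7, 7])
```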
While I found it slightly difficult to wrap my head around collections.defaultdict
and the ton of dictionaries used to create the final bounding-box dataset (for the largest-item classifier), I’ve put up a gist on how to get this done quickly using pandas
and a couple of apply calls.
Jeremy suggested improving the apply part (since it’s serial) and I’ll do that soon.
Ganesh from the SF weekend study group also shared this idea. Thank you.
Edit: Updated the link of the gist with the minor changes to the source code.
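The gist itself has the details, but the idea can be sketched roughly like this (the column names here are made up for illustration and are not necessarily the gist’s actual ones):

```python
import pandas as pd

# Toy annotations: one row per (image, object) pair; bbox as [x, y, w, h].
df = pd.DataFrame({
    'image': ['a.jpg', 'a.jpg', 'b.jpg'],
    'cls':   ['cat', 'dog', 'car'],
    'bbox':  [[0, 0, 10, 10], [5, 5, 30, 20], [1, 1, 8, 4]],
})

# Area of each box, then keep only the largest box per image.
df['area'] = df['bbox'].apply(lambda b: b[2] * b[3])
largest = df.loc[df.groupby('image')['area'].idxmax()].copy()

# Space-separated bbox string, the format the classifier CSV expects.
largest['bbox_str'] = largest['bbox'].apply(lambda b: ' '.join(str(v) for v in b))
print(largest[['image', 'cls', 'bbox_str']])
```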
Yeah I like this a lot better than how I did it.
I am a Mac user too and vouch for PyCharm.
Hmm, thanks, I think I understand this part well.
But do you have any idea why the output of block 107:
('ReLU-107',
OrderedDict([('input_shape', [-1, 512, 7, 7]),
('output_shape', [-1, 512, 7, 7]),
does not match the input of block 108:
('BasicBlock-108',
OrderedDict([('input_shape', [-1, 256, 14, 14]),
('output_shape', [-1, 512, 7, 7]),
How can this “work” without any in-between operation?
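If it helps, this is likely just how the summary hooks report nested modules: ReLU-107 is the *last layer inside* BasicBlock-108, and the parent block’s hook fires after its children, reporting the input to the whole block (before its internal downsampling). A small sketch with made-up module names showing that ordering:

```python
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Downsamples 256 x 14 x 14 -> 512 x 7 x 7, like the BasicBlock above."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(256, 512, 3, stride=2, padding=1, bias=False)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

block = ToyBlock()
shapes = []

def hook(name):
    def fn(module, inp, out):
        shapes.append((name, list(inp[0].shape), list(out.shape)))
    return fn

block.relu.register_forward_hook(hook('ReLU'))
block.register_forward_hook(hook('ToyBlock'))

block(torch.randn(1, 256, 14, 14))
for s in shapes:
    print(s)
# The ReLU hook fires first (input/output 512 x 7 x 7); the ToyBlock hook
# fires last and reports the ORIGINAL 256 x 14 x 14 input. So the "mismatch"
# in the summary is parent-vs-child reporting, not a missing operation.
```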
Install CTAGs…
Should we change from ['bbox'] to ['bbox_new'] as below? Otherwise, the bbox_str
won’t capture the new orientations.
largest_bbox['bbox_str'] = largest_bbox['bbox_new'].apply(lambda x: ' '.join(str(y) for y in x))
Oh yes. You’re right. I’ll update the gist in a while.
Thank you.
Edit: Done.
The argument to the open_image function has to be cast to a string, so use it like this: open_image(str(IMG_PATH/trn_fns[i]))
I have an error in F.softmax(predict_batch(learn.model,x),-1):
x,y=next(iter(md.val_dl))
probs=F.softmax(predict_batch(learn.model,x),-1)
x,preds=to_np(x),to_np(probs)
preds=np.argmax(preds,-1)
NameError: name 'predict_batch' is not defined
I have searched for predict_batch in the whole notebook but it isn’t there. Maybe it was used in past lectures. Please help me.
Thanks in advance.
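predict_batch comes from the old fastai (0.7) library rather than the notebook itself, so if it isn’t defined in your environment you can use a minimal stand-in. This is a sketch under the assumption that all it needs to do here is run the model on one batch in eval mode without tracking gradients:

```python
import torch

def predict_batch(model, x):
    """Minimal stand-in for fastai 0.7's predict_batch: run the model
    on a single batch in eval mode, with gradient tracking disabled."""
    model.eval()
    with torch.no_grad():
        return model(x)

# Usage, as in the notebook:
# probs = F.softmax(predict_batch(learn.model, x), -1)
```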
Thank you.
Just found this really great intro to matplotlib, focusing on the OO API https://realpython.com/python-matplotlib-guide/
Same issue here. I’ve installed CTAGs but cannot find symbols.
Shift+Command+F works for me as well.
Hi, after installing ctags please add
"python.workspaceSymbols.ctagsPath": "ctags"
to your user settings.json file in Visual Studio Code. The symbols now work for me.