Is it possible to constrain the bbox coordinates in some way? Say don’t let it exceed the image dimensions. Is there any evidence of this improving performance?
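I don't think there's a built-in constraint, but you can clamp or squash the outputs yourself. A quick sketch (my own code, not from the lesson; `clamp_bbox`/`squash_bbox` are made-up names):

```python
import torch

def clamp_bbox(preds, img_size=224):
    # Hard-clamp raw bbox predictions into [0, img_size] so boxes
    # can never fall outside the image.
    return preds.clamp(0, img_size)

def squash_bbox(raw, img_size=224):
    # Alternative: squash activations through a sigmoid and scale,
    # so the constraint is smooth and differentiable everywhere.
    return torch.sigmoid(raw) * img_size

raw = torch.tensor([-10.0, 50.0, 300.0, 120.0])
print(clamp_bbox(raw))   # clamps to 0, 50, 224, 120
```

The sigmoid version is what the `sigmoid * 224` trick mentioned later in this thread does.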
It’s common practice to use the L1 error for bbox regression, since the L2 error penalises large errors (outliers) much more heavily.
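A quick illustration of the difference (toy numbers, my own example):

```python
import torch
import torch.nn.functional as F

pred   = torch.tensor([10.0, 10.0, 10.0, 100.0])  # one large error
target = torch.tensor([10.0, 10.0, 10.0,  10.0])

l1 = F.l1_loss(pred, target)   # mean |err|   -> (0+0+0+90)/4   = 22.5
l2 = F.mse_loss(pred, target)  # mean err**2  -> (0+0+0+8100)/4 = 2025.0
print(l1.item(), l2.item())
```

One outlier of 90 pixels dominates the L2 loss, so the optimiser fixates on it.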
Are those custom metrics (detn_loss and detn_accuracy) on the training set or the validation set? Is there a way to specify?
I tried L2 and the boxes were pretty bad =X
I must have missed the “custom head” memo. Is there a section of a notebook somebody can link me to? Or a TL;DR that can be thrown in here?
learn = ConvLearner.pretrained(f_model, md, custom_head=head_reg4)
Basically you retain everything in the CNN except the last layers, which you swap out for your custom Sequential head.
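Something like this, if it helps — a hypothetical `head_reg4` assuming a ResNet-style body that outputs 512×7×7 conv features (the lesson's actual head may differ):

```python
import torch
from torch import nn

# Hypothetical custom head: flattens the conv features from the
# pretrained body and regresses 4 bbox coordinates.
head_reg4 = nn.Sequential(
    nn.Flatten(),
    nn.Linear(512 * 7 * 7, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, 4),  # x_min, y_min, x_max, y_max
)

feats = torch.randn(2, 512, 7, 7)   # fake conv features, batch of 2
print(head_reg4(feats).shape)       # torch.Size([2, 4])
```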
We’re actually just using one dataloader to load the images; the other one only gives us the Y.
So we’re basically re-building a network, but keeping the last few layers of a pre-trained network?
we’re throwing away the last few layers.
we’re keeping the convolution and filters.
Where’s the better way of doing this with pandas? Anyone have the link handy?
Going back to dropout rate:
Shall we play around with it? Is there any utility class/function in PyTorch that automatically adjusts the weights when the dropout rate of a layer is changed?
Or that handles adding/deleting a dropout layer along with the weight adjustment?
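As far as I know no weight adjustment is needed: PyTorch's `nn.Dropout` uses inverted dropout, scaling the surviving activations by 1/(1-p) at training time, so the weights are untouched and eval mode is a no-op even if you change p. Quick check:

```python
import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

drop.train()
y = drop(x)
# surviving units are scaled by 1/(1-0.5) = 2, dropped units are 0,
# so the values are (almost surely) [0.0, 2.0]
print(sorted(y.unique().tolist()))

drop.eval()
print(drop(x).equal(x))  # True: identity at eval time
```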
What was the loss function used for multi-class classification?
in my experience, dropout rate is just something you have to try different values for until you find something that works well for you
It’s here.
you can apply some kind of hyperparameter optimization algorithm to it if you want to be systematic
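e.g. a simple grid search sketch (`val_score` here is a stand-in for actually training and validating a model at each rate; the numbers are made up):

```python
# Hypothetical: assume validation accuracy peaks around p_drop = 0.4
# for this model. In practice, val_score would train the model with
# nn.Dropout(p_drop) and return the validation metric.
def val_score(p_drop):
    return 0.80 - 0.25 * abs(p_drop - 0.4)

candidates = [0.1, 0.25, 0.4, 0.5, 0.7]
scores = {p: val_score(p) for p in candidates}
best_p = max(scores, key=scores.get)
print(best_p)  # 0.4
```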
Can Jeremy explain the use_clr=(32,5) argument in learn.fit?
Any reason why, in the bbox-only model, you did not use sigmoid * 224 to bound the bbox prediction output, but you do use it in the combined bbox + category loss function?