How is your data actually labeled in the folder? (i.e., what are the folder names?) And the ?? will pull up the source code as a pop-up in your notebook.
Thank you for replying! My folder structure looks like this:
I tried to look at the source to see if I could change a few things to get mine to work with ??. I'm not the best pythonista though lol.
?? isn't actually used for the code, it's used to see the source code. (It's a Jupyter Notebook functionality.)
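For example, in a notebook cell, appending ?? to a name pops up its source:

```python
from fastai2.data.transforms import parent_label

# In Jupyter/IPython, ?? opens the function's source code in a pop-up pane
parent_label??
```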
Ahhh wait, the issue is you're using parent_label. We specifically don't use parent_label alone, as it won't work. Otherwise, we would have simply used RegexLabeller as well (which doesn't work alone either). MultiCategoryBlock expects a list of labels, so with this in mind you need a custom get_y similar to what we did in that notebook, where we wrapped our get_y with a RegexLabeller and a lambda function that converted our label to a list of one (see the sketch below).
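A minimal sketch of that notebook pattern (the regex here is a hypothetical placeholder, not the notebook's actual one):

```python
from fastai2.vision.all import *

# Hypothetical filename pattern, e.g. '.../beagle_001.jpg' -> 'beagle'
pat = r'([^/]+)_\d+\.jpg$'

# RegexLabeller extracts the label string from the filename; the lambda
# then wraps it in a one-element list, which MultiCategoryBlock expects
get_y = Pipeline([RegexLabeller(pat), lambda label: [label]])
```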
Oh ok. I think I'm beginning to understand the method. Pipeline composed those two functions to finally spit out a single label for each image according to, in the case of dog breeds, a regex expression. So for a parent_label case I will need to make a function to get the parent folder of the image and then use a lambda similarly to the unknown notebook, I think.
Thank you so much for your quick reply! And thank you for the ?? clarification. I come from a JS background and a lot of this is overwhelming/cryptic.
Yes exactly. We'd simply replace our RegexLabeller with your parent_label (and that's it).
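A sketch of what that could look like (the DataBlock arguments beyond get_y are assumptions, not from this thread):

```python
from fastai2.vision.all import *

# parent_label reads the label from the image's folder name; the lambda
# turns it into a list of one, as MultiCategoryBlock requires
dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),
    get_items=get_image_files,
    get_y=Pipeline([parent_label, lambda label: [label]]),
    splitter=RandomSplitter(),
    item_tfms=Resize(224))
```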
Beautiful!
Thank you so much for your time and effort!
I'm still a bit hazy on how Pipeline([parent_label, lambda label: [label]]) works though. Is this saying: grab the label from the parent folder, and then pass that to the label argument of the lambda, which returns a list whose single element is the parent label?
Exactly that, they read Left → Right.
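A quick illustration (the path and folder name are made up):

```python
from fastai2.vision.all import *

pipe = Pipeline([parent_label, lambda label: [label]])
# parent_label runs first and returns 'beagle'; the lambda then wraps
# that string in a single-element list
pipe(Path('train/beagle/001.jpg'))  # -> ['beagle']
```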
Excellent! Thank you so much again! Your examples and videos have really helped me out!
@muellerzr
I started with the fastai ML 2018 course along with the v3 Deep Learning course and completed about 2-3 lessons in each. Do I continue with v3 or start with your fastai2 course?
Also, is the ML course necessary, or can I make do with v3 or your course?
I also read that v4 is coming soon. That also added to my doubts.
V4 merges the two, so I'd recommend v4. In the interim, if you want to know about the v2 API, study my notebooks (you don't necessarily have to watch my videos, but I do always appreciate it). The new course uses v2, and once it's officially released v2 will replace fastai v1 (v1 will live in its own repo).
Did you ever have much success with creating an EfficientNet-backbone U-Net?
Hello! I've been playing with object detection using fastai2, and BBox scaling when resizing images stopped working properly. I'm using Colab. I believe it was working properly until yesterday, so I'm wondering whether something is wrong with my setup or code, or whether it's a common problem. Does Zachary's object detection notebook run correctly on Colab as of now?
Scaling with PointBlock seems to work fine, btw.
Ok, the problem is probably not with scaling but with decoding, so BBoxes aren't displayed properly by show_batch().
@arampacha we can do very little without your code. However, are you adjusting Resize's method? It defaults to Crop, which can have side effects. But otherwise it works just fine, as it should.
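For reference, a sketch of changing the method (using Squish here just as an example):

```python
from fastai2.vision.all import *

# Resize defaults to ResizeMethod.Crop; cropping can cut away regions
# that bounding boxes refer to, so Squish (or Pad) may behave differently
item_tfms = Resize(224, method=ResizeMethod.Squish)
```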
@muellerzr Thanks for the reply. I've just run your object detection notebook exactly as-is (only changing to !pip install fastai2==0.0.20), and I get this:
```python
img, bb, c = dls.valid.one_batch()
decoded = dls.valid.decode_batch([img, bb, c], max_n=1)
print(bb[:1])         # raw bboxes as they come out of the batch
print(decoded[0][1])  # decoded bboxes, expected in pixel coordinates
```
```
tensor([[[-0.1261, -0.0147,  0.7830,  1.0000],
         [-0.1261,  0.1261,  0.3490,  0.9003],
         [ 0.9296,  0.3079,  1.0000,  0.8123],
         [ 0.7067,  0.3021,  0.8886,  0.6012],
         [ 1.0000,  0.3490,  1.0000,  0.4252],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000]]], device='cuda:0')
TensorBBox([[218.4751, 168.0000, 445.7478, 341.0000],
            [218.4751, 192.0000, 337.2434, 324.0000],
            [482.4047, 223.0000, 500.0000, 309.0000],
            [426.6862, 222.0000, 472.1407, 273.0000],
            [500.0000, 230.0000, 500.0000, 243.0000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000]])
```
Here are the raw and decoded versions of the bboxes for one image. The image size is 224, so the decoded values are supposed to be in [0, 224], right?
I can't seem to figure this out: how does fastai v2 specify an optimizer before specifying the learning rate?
For example, when I create the Learner object, I can pass opt_func=ranger as one of the arguments. But if I pass the underlying function, Lookahead(RAdam(p, lr=lr, mom=0.95, wd=0.01, eps=1e-5)), it asks for the p and lr variables. I've figured out that p is "params" and lr is obviously the learning rate…
But the thing I'm confused about is that the learning rate is only entered during the learn.fit step, which is after the construction of the Learner object.
So how do I pass an optimizer function when creating the Learner object when the lr is only given later?
You need to pass them in as partials, i.e. something that doesn't use all the parameters. I want to say it's as simple as Lookahead(partial(RAdam, mom=0.95, wd=0.01, eps=1e-5)), but I'm not 100% sure. Instead, you should use partial(ranger) and just pass in those parameters you want to use.
For a general overview of how fastai2 optimizers work have a read here: https://github.com/fastai/fastbook/blob/master/16_accel_sgd.ipynb
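A minimal sketch of that suggestion (the hyperparameter values are just the ones from this thread, and cnn_learner/resnet34 are stand-ins for whatever you're training):

```python
from functools import partial
from fastai2.vision.all import *

# Bind the hyperparameters now; the Learner supplies params and lr
# later, when learn.fit is actually called
opt_func = partial(ranger, mom=0.95, wd=0.01, eps=1e-5)

learn = cnn_learner(dls, resnet34, opt_func=opt_func)
```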
Thanks, got to learn this partial function thingy… it appears a lot in fastai.