A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

How is your data actually labeled in the folder? (i.e. what are the folder names?) And ?? will pull up the source code as a pop-up in your notebook.

Thank you for replying! My folder structure looks like this:

I tried to look at the source to see if I could change a few things to get mine to work with ??. I’m not the best pythonista though lol.

?? isn’t actually used for the code itself; it’s used to see the source code (it’s Jupyter Notebook functionality).
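For example, in a notebook cell (parent_label here is just an arbitrary pick of something importable):

from fastai2.vision.all import parent_label

# One ? shows the signature and docstring; ?? opens the full source in a pop-up pane
parent_label?
parent_label??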

Ahhh wait, the issue is that you’re using parent_label. We specifically don’t use parent_label alone because it won’t work; otherwise we would simply have used RegexLabeller on its own as well (which doesn’t work either). MultiCategoryBlock expects a list of labels, so with this in mind you need a custom get_y similar to what we did in that notebook, where we wrapped our get_y with a RegexLabeller and a lambda function that converted our label to a list of one.
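Roughly what that get_y looked like there (the regex pattern below is just an illustrative placeholder, not necessarily the exact one from the notebook):

from fastai2.vision.all import *

# MultiCategoryBlock wants a *list* of labels per item, so the single string
# the labeller returns is wrapped into a one-element list by the lambda
get_y = Pipeline([RegexLabeller(pat=r'(.+)_\d+\.jpg$'), lambda o: [o]])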

Oh ok. I think I’m beginning to understand the method. Pipeline composed those two functions to finally spit out a single label for each image according to, in the case of dog breeds, a regular expression. So for a parent_label case I will need to make a function to get the parent folder of the image and then use a lambda, similarly to the unknown notebook. I think :slight_smile:

Thank you so much for your quick reply! And thank you for the ?? clarification. I come from a JS background and a lot of this is overwhelming/cryptic.

Yes exactly. We’d simply replace our RegexLabeller with your parent_label (and that’s it)
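A sketch of what the full block could look like (the path, splitter, and sizes are placeholders, assuming one class folder per image):

dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
                   get_items=get_image_files,
                   splitter=RandomSplitter(seed=42),
                   # parent_label grabs the folder name; the lambda turns it into a one-element list
                   get_y=Pipeline([parent_label, lambda label: [label]]),
                   item_tfms=Resize(224))
dls = dblock.dataloaders(path)  # path points at your image folder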

Beautiful!
Thank you so much for your time and effort!

I’m still a bit hazy on how Pipeline([parent_label, lambda label: [label]]) works though.

Is this saying: grab the label from the parent folder, then pass that to the label argument of the lambda, which returns a list with that parent label as its single element?

Exactly that, they read Left → Right
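A quick way to see that ordering for yourself (the path below is made up):

from fastai2.vision.all import *
from pathlib import Path

get_y = Pipeline([parent_label, lambda label: [label]])
# parent_label runs first and returns the folder name, then the lambda wraps it in a list
get_y(Path('train/golden_retriever/img_001.jpg'))  # -> ['golden_retriever']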

Excellent! Thank you so much again! Your examples and videos have really helped me out!

@muellerzr
I started with the fastai ML 2018 course along with the v3 Deep Learning course and completed about 2-3 lessons in each. Do I continue with v3 or start with your fastai2 course?

Also, is the ML course necessary, or can I make do with v3 or your course?

I also read that v4 is coming soon. That also added to my doubts.

V4 merges the two, so I’d recommend v4. In the interim, if you want to know about the v2 API, study my notebooks (you don’t necessarily have to watch my videos, but I do always appreciate it :slight_smile: ). The new course uses v2, and once it’s officially released v2 will replace fastai v1 (v1 will live in its own repo).

Did you ever have much success with creating an EfficientNet-backbone U-Net?

Hello! I’ve been playing with object detection using fastai2, and BBox scaling when resizing images stopped working properly. I’m using Colab. I believe it was working properly until yesterday, so I’m wondering if something is wrong with my setup or code, or whether it’s a common problem. Does Zachary’s object detection notebook run correctly on Colab as of now?
Scaling with PointBlock seems to work fine, btw.

OK, the problem is probably not with scaling but with decoding, so the BBoxes aren’t displayed properly by show_batch().

@arampacha we can do very little without your code. However, are you adjusting Resize’s method? It defaults to Crop, which can have side effects. Otherwise it works just as it should.
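If crop is the culprit, something along these lines switches to squishing instead (the size is arbitrary, and the usual from fastai2.vision.all import * from the notebook is assumed):

# Crop (the default) can cut objects out of the frame; Squish or Pad keeps the whole
# image visible, which is usually safer when the targets are bounding boxes
item_tfms = [Resize(224, method=ResizeMethod.Squish)]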

@muellerzr Thanks for the reply. I’ve just run your object detection notebook exactly as-is (only changing to !pip install fastai2==0.0.20).
And I get this:

img, bb, c = dls.valid.one_batch()
decoded = dls.valid.decode_batch([img, bb, c], max_n=1)
print(bb[:1])
print(decoded[0][1])
tensor([[[-0.1261, -0.0147,  0.7830,  1.0000],
         [-0.1261,  0.1261,  0.3490,  0.9003],
         [ 0.9296,  0.3079,  1.0000,  0.8123],
         [ 0.7067,  0.3021,  0.8886,  0.6012],
         [ 1.0000,  0.3490,  1.0000,  0.4252],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000],
         [ 0.0000,  0.0000,  0.0000,  0.0000]]], device='cuda:0')
TensorBBox([[218.4751, 168.0000, 445.7478, 341.0000],
            [218.4751, 192.0000, 337.2434, 324.0000],
            [482.4047, 223.0000, 500.0000, 309.0000],
            [426.6862, 222.0000, 472.1407, 273.0000],
            [500.0000, 230.0000, 500.0000, 243.0000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000],
            [250.0000, 170.5000, 250.0000, 170.5000]])

Here are the raw and decoded versions of the bboxes for one image. The image size is 224, so the decoded values are supposed to be in [0, 224], right?

I can’t seem to figure this out: how does fastai v2 specify an optimizer before specifying the learning rate?

For example, when I create the learner object, I can pass opt_func=ranger as one of the arguments. But if I pass the underlying function, Lookahead(RAdam(p, lr=lr, mom=0.95, wd=0.01, eps=1e-5)), it asks for the p and lr variables. I’ve figured out that p is “params” and lr is obviously the learning rate…

But the thing I’m confused about is that the learning rate is only entered during the learn.fit step, which comes after constructing the learner object.

So how do I specify the optimizer function when creating the learner object if the lr is only given later?

You need to pass them in as partials, i.e. functions that don’t have all of their parameters filled in yet.
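If partial is new to you, it just bakes some arguments in ahead of time; a toy example:

from functools import partial

def scale(x, factor): return x * factor

double = partial(scale, factor=2)  # factor is fixed now, x gets supplied later
double(21)  # -> 42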

I want to say it’s as simple as Lookahead(partial(RAdam, mom=0.95, wd=0.01, eps=1e-5)), but I’m not 100% sure. You should instead use partial(ranger), though, and just pass in the parameters you want to use.
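A minimal sketch of what that could look like, assuming a dls built earlier and a resnet34 body (those specifics aren’t from this thread):

from functools import partial
from fastai2.vision.all import *

# The optimizer goes in as a partial: params and lr get filled in by the Learner
# later on, when fit is actually called
opt_func = partial(ranger, mom=0.95, wd=0.01, eps=1e-5)

learn = cnn_learner(dls, resnet34, opt_func=opt_func, metrics=accuracy)
learn.fit_flat_cos(5, lr=1e-3)  # the learning rate only shows up here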

For a general overview of how fastai2 optimizers work, have a read here: https://github.com/fastai/fastbook/blob/master/16_accel_sgd.ipynb

Thanks, got to learn this partial function thingy… it appears a lot in fastai.