Lambda Layer

Hi.

I was running some experiments on feature extraction with v1. Have a look at this screenshot:

This is the last block, the one fastai adds as the tail of a pretrained model.
It seems that v1 adds a lambda layer alongside the usual stuff. AFAIK, this is undocumented.

Assuming that such a lambda is the same as Keras’ lambdas (a custom anonymous layer with no trainable weights), I’d like to know what this specific lambda is intended for.

Thanks!

I’ve moved this to the advanced category.


Thanks, I didn’t post it here since it didn’t seem so advanced.

But, er…, since we are here, you know, you’d be the perfect person to answer my question :slight_smile:

Anything not covered in lesson 1 is advanced.

Sorry, I’m busy prepping the lesson now.


I’m on mobile now so I cannot verify for sure, but I remember that in dev notebook 001 there is a lambda layer used to flatten a layer :smiley:. Can you check it in fastai_doc?


Yes, it’s there to flatten a layer.

When creating the classification head, a Flatten layer is used:

From fastai/layers.py:

# Imports needed to run this snippet standalone
# (in the actual fastai source they come in via `fastai.torch_core`):
from typing import Callable
from torch import Tensor, nn

LambdaFunc = Callable[[Tensor], Tensor]

class Lambda(nn.Module):
    "An easy way to create a pytorch layer for a simple `func`."
    def __init__(self, func:LambdaFunc):
        "create a layer that simply calls `func` with `x`"
        super().__init__()
        self.func = func

    def forward(self, x): return self.func(x)

def Flatten()->Tensor:
    "Flattens `x` to a single dimension, often used at the end of a model."
    # Note: despite the `Tensor` annotation, this returns a Lambda module
    # that reshapes its input to (batch_size, -1).
    return Lambda(lambda x: x.view((x.size(0), -1)))
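
If you want to convince yourself, here is a quick sanity check (shapes chosen arbitrarily) using the Flatten defined above: it collapses everything except the batch dimension and carries no trainable weights, just like a Keras Lambda.

import torch

flat = Flatten()
x = torch.randn(4, 512, 1, 1)      # e.g. the output of an adaptive pooling layer
print(flat(x).shape)               # torch.Size([4, 512])
print(list(flat.parameters()))     # [] -> no trainable parameters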

Thanks, guys. As they used to say back in the old days: if it looks like a Flatten and behaves like a Flatten, then it’s definitely a Flatten :slight_smile:

For more info, see the source code for create_head (invoked within create_cnn when no custom head is supplied), where a Flatten layer is added right after AdaptiveConcatPool2d() to flatten its output.
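
Roughly, the generated head starts like this. This is only an illustrative sketch in plain PyTorch, not the actual create_head code: ConcatPool2d is a hypothetical stand-in for AdaptiveConcatPool2d, nf/nc are placeholder sizes, and Flatten is the Lambda-based layer quoted above.

import torch
from torch import nn

class ConcatPool2d(nn.Module):
    "Stand-in for AdaptiveConcatPool2d: concat adaptive max- and avg-pooling."
    def forward(self, x):
        return torch.cat([nn.functional.adaptive_max_pool2d(x, 1),
                          nn.functional.adaptive_avg_pool2d(x, 1)], dim=1)

nf, nc = 512, 10                    # placeholder: backbone features / classes
head = nn.Sequential(
    ConcatPool2d(),                 # doubles the channel count -> (batch, 2*nf, 1, 1)
    Flatten(),                      # the Lambda layer discussed above -> (batch, 2*nf)
    nn.Linear(2 * nf, nc),          # the real head also adds BatchNorm/Dropout/ReLU
)
x = torch.randn(2, nf, 7, 7)        # fake backbone output
print(head(x).shape)                # torch.Size([2, 10])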
