EfficientNet

Notebook 05_EfficientNet_and_Custom_Weights.ipynb fails on Google Colab at the line

learn.summary()

with error

RuntimeError: running_mean should contain 3072 elements not 6144

I think if you have the latest versions of timm, fastai, and walkwithfastai this shouldn't be a problem.
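If you're on Colab, upgrading in a notebook cell should be enough (package names are my assumption here; as far as I know the walkwithfastai library is published on PyPI as wwf):

```python
# Run in a Colab cell, then restart the runtime.
# Package names assumed: the walkwithfastai library is published on PyPI as `wwf`.
!pip install -U fastai timm wwf
```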


Actually, I think this is an active bug. This is different from the timm notebook (it's on my list to tackle next week).


Hi @muellerzr,

I think fastai has updated the create_head code. We don't have to explicitly multiply nf by 2 when concat_pool is True; create_head already does it.

So in the create_timm_model function we don't need the extra multiplication in this line → nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1)

Because of this, nf ends up multiplied by 4 instead of 2.
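For reference, here is a rough sketch of what the corrected function would look like (an illustration, not the exact notebook code; create_timm_body is the helper defined earlier in the notebook):

```python
from fastai.vision.all import *

def create_timm_model(arch, n_out, cut=None, pretrained=True, n_in=3,
                      init=nn.init.kaiming_normal_, custom_head=None,
                      concat_pool=True, **kwargs):
    "Sketch: build a timm body and attach a fastai head with the corrected nf"
    body = create_timm_body(arch, pretrained, cut, n_in)
    if custom_head is None:
        # create_head already doubles nf internally when concat_pool=True,
        # so we pass the raw feature count instead of multiplying it ourselves
        nf = num_features_model(nn.Sequential(*body.children()))
        head = create_head(nf, n_out, concat_pool=concat_pool, **kwargs)
    else:
        head = custom_head
    model = nn.Sequential(body, head)
    if init is not None:
        apply_init(model[1], init)
    return model
```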

You are right! The timm tutorial has it the right way, but the EfficientNet one does not! Will update today 🙂

Timm tutorial: Utilizing the `timm` Library Inside of `fastai` (Intermediate) | walkwithfastai

@ne1s0n and @RadhikaBansal this has now been updated 🙂


Thanks for the effort! Unfortunately, it still encounters problems. I've opened an issue on GitHub. Basically, Google Colab defaults to newer library versions. In particular, fastai now masks the internal _update_first_layer function. If I add the old definition, the notebook runs.


Thanks @ne1s0n! I've added the import. Not sure if this was a miss on my part or what, but it seems I forgot an import along the way!

For those wanting the direct fix, it's: from fastai.vision.learner import _update_first_layer
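For context, this is roughly how that import slots into the body-building helper (a sketch that assumes the current fastai internals _update_first_layer and has_pool_type, plus timm.create_model):

```python
from fastai.vision.learner import _update_first_layer, has_pool_type
from torch import nn
import timm

def create_timm_body(arch: str, pretrained=True, cut=None, n_in=3):
    "Sketch: cut off the head of a (typically pretrained) timm model"
    model = timm.create_model(arch, pretrained=pretrained, num_classes=0, global_pool='')
    _update_first_layer(model, n_in, pretrained)  # adapt the first conv to n_in channels
    if cut is None:
        # cut just before the first pooling layer, as fastai's create_body does
        ll = list(enumerate(model.children()))
        cut = next(i for i, o in reversed(ll) if has_pool_type(o))
    if isinstance(cut, int):
        return nn.Sequential(*list(model.children())[:cut])
    if callable(cut):
        return cut(model)
    raise NameError("cut must be either integer or function")
```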

Has anyone faced mismatched keys with the state_dict? A timm model's state dict and a fastai-trained timm model's state dict have slightly different keys. I've tested by retraining with current versions and am not sure if there is a simple workaround to this problem other than renaming the keys to match (sketched below). All other model information appears to match perfectly. Thanks for any help.
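A rough sketch of the renaming I mean (it assumes the fastai model is an nn.Sequential(body, head), so the body's keys gain a "0." prefix compared to a plain timm model, and the architecture name is just an example):

```python
import timm

# state dict of the fastai-trained model (the Learner is assumed to exist as `learn`)
trained_sd = learn.model.state_dict()

# keep only the body's weights and strip the "0." prefix added by nn.Sequential
body_sd = {k[2:]: v for k, v in trained_sd.items() if k.startswith('0.')}

# load into a plain timm model of the same architecture (example arch name)
timm_model = timm.create_model('efficientnet_b3', pretrained=False,
                               num_classes=0, global_pool='')
missing, unexpected = timm_model.load_state_dict(body_sd, strict=False)
print('missing:', missing, '\nunexpected:', unexpected)
```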

Dan 🙂