Building Dynamic Unets

Hey folks,

Could someone shed light on how Dynamic Unets are constructed? I'm specifically looking for answers to the following questions:

  • Is the pretrained encoder changed while building the Unet?
  • Is self attention given only to the decoder?

Thanks!

  • Is the pretrained encoder changed? - If you unfreeze the model and train, then yes, the pretrained encoder gets trained as well (see the freeze/unfreeze sketch at the end of this post).

  • Is self attention given only to the decoder? - Try printing the architecture and looking for SelfAttention layers. You can do something like this (there is a fuller sketch below as well):

learn.model[4:]
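
To make the self-attention point concrete, here is a minimal sketch, assuming the current fastai v2 API and using the tiny CamVid dataset purely as placeholder data: it builds a DynamicUnet with self_attention=True and prints every SelfAttention module together with where it sits in the model. If I remember the source correctly it only shows up inside one of the decoder UnetBlocks, but the printout will confirm.

from fastai.vision.all import *

# Placeholder segmentation data, just so a learner can be instantiated.
path = untar_data(URLs.CAMVID_TINY)
dls = SegmentationDataLoaders.from_label_func(
    path, bs=4,
    fnames=get_image_files(path/'images'),
    label_func=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',
    codes=np.loadtxt(path/'codes.txt', dtype=str),
)

# DynamicUnet with a pretrained resnet34 encoder and self-attention enabled.
learn = unet_learner(dls, resnet34, self_attention=True)

# List every SelfAttention layer and its location in the model.
for name, module in learn.model.named_modules():
    if type(module).__name__ == 'SelfAttention':
        print(name, '->', module)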
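
And for the encoder question, a small sketch reusing the same learn as in the self-attention snippet (the printed counts are only illustrative): the pretrained encoder only starts receiving gradient updates once you unfreeze.

def n_trainable(model):
    # Count parameter tensors that will currently receive gradient updates.
    return sum(p.requires_grad for p in model.parameters())

learn.freeze()    # default state after unet_learner: the encoder groups are frozen
print(n_trainable(learn.model))   # mostly decoder/head (plus BatchNorm) params

learn.unfreeze()  # now the pretrained encoder is trained as well
print(n_trainable(learn.model))   # encoder params are now included

Note that fine_tune already handles this pattern for you: it trains the frozen model first and then unfreezes for the remaining epochs.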