unet_learner: self-attention and decoder shape

Hi all,
I am trying to use U-Net for semantic segmentation as explained in: https://walkwithfastai.com/Segmentation

I would like to know two things about unet_learner():

  1. Does the decoder have the same architecture as the encoder? For example: if I use resnet34 as the encoder, does the decoder automatically use the same kind of layers (mirrored resnet34 blocks)?

  2. Where is self-attention applied in the U-Net when self_attention=True? Only at one level of the decoder, or at every level?
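For context on question 2: from a quick skim of fastai's DynamicUnet source, the self_attention flag seems to be passed to only a single UnetBlock (via a condition like `sa = self_attention and (i == len(sfs_idxs) - 3)`), not to every decoder level. Here is a pure-Python sketch of my reading of that loop; `decoder_blocks` and `n_levels` are my own illustrative names, not fastai API, so please correct me if this is wrong:

```python
# Sketch (no fastai required) of where DynamicUnet appears to place
# self-attention: the flag is honoured by one decoder block only.
# Based on my reading of the condition in fastai's DynamicUnet
# constructor: sa = self_attention and (i == len(sfs_idxs) - 3)

def decoder_blocks(n_levels, self_attention=True):
    """Return, per decoder level, whether that UnetBlock would get
    self-attention under the condition above (illustrative helper)."""
    return [self_attention and (i == n_levels - 3) for i in range(n_levels)]

# With a resnet-style encoder there are typically 4 cross-connection
# levels, so only one block ends up with self-attention:
print(decoder_blocks(4))  # → [False, True, False, False]
```

If that reading is right, the answer to my own question would be "one level, near the bottom of the decoder", but I'd appreciate confirmation from someone who knows the codebase.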