Tiramisu structure

According to the DenseNet paper and video, the final DenseNet version, called "DenseNet-B", includes bottleneck layers: 1×1 convolutions that reduce the current feature maps to 4×k channels (where k is the growth rate).
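For reference, a 1×1 convolution is just a learned linear map applied independently at every spatial position, so the bottleneck can be sketched in plain NumPy (the function name and weight initialization here are hypothetical, not from either paper or Jeremy's code):

```python
import numpy as np

def bottleneck_1x1(x, k=12, rng=np.random.default_rng(0)):
    """Sketch of a DenseNet-B bottleneck: a 1x1 conv reducing
    the channel count of x (shape (C, H, W)) to 4*k channels."""
    c = x.shape[0]
    # 1x1 conv weights: one (4k x C) matrix shared across all pixels
    w = rng.standard_normal((4 * k, c)) * 0.01
    # per-pixel channel mixing, equivalent to a 1x1 convolution
    return np.einsum('oc,chw->ohw', w, x)

x = np.zeros((64, 8, 8))       # e.g. 64 accumulated dense-block features
y = bottleneck_1x1(x, k=12)
print(y.shape)                 # (48, 8, 8): channels reduced to 4*k
```

In the real network this would be followed by batch norm, ReLU, and the 3×3 convolution that produces the k new feature maps.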

I did not see such bottlenecks in the Tiramisu article, nor in Jeremy's implementation.
Is there a specific reason? Would it be a good idea to include such a layer in the Tiramisu?