NNs in their simplest form are just functions inside functions. Each layer takes what the previous layer gives it, does some computation on it, and hands the result to the next layer.
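To make the "functions inside functions" picture concrete, here is a toy sketch in plain Python. The layer bodies are made up purely for illustration; the point is only that each "layer" is a function whose output feeds the next one:

```python
def layer1(x):
    # Pretend computation of the first layer.
    return 2 * x + 1

def layer2(h):
    # Pretend computation of the next layer (a ReLU-like clamp).
    return max(0.0, h)

# The network is just the composition: layer2(layer1(x)).
output = layer2(layer1(-3.0))
```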
Most layers that do interesting things not only take the data from the layer below, but also contain trainable parameters specific to that layer. Still, we cannot simply remove a layer just because we don't want to train it - the layers further up the chain depend on it doing its work and performing its calculations. So by freezing, we keep the earlier layers in place and have them do their computations, but we don't train them - we do not alter their trainable parameters as a result of seeing data.
So to sum up: all layers perform their calculations, but only the non-frozen layers update their parameters based on the data our network sees.
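Here is a minimal sketch of what freezing can look like in practice. The document doesn't name a framework, so this assumes PyTorch, where setting `requires_grad = False` on a layer's parameters keeps the layer in the forward pass but stops the optimizer from updating it:

```python
import torch
import torch.nn as nn

# A small toy network: we freeze the first layer, train the second.
model = nn.Sequential(
    nn.Linear(4, 8),   # earlier layer - will be frozen
    nn.ReLU(),
    nn.Linear(8, 2),   # later layer - stays trainable
)

# Freezing: this layer still runs its computation in the forward pass,
# but autograd no longer tracks gradients for its parameters,
# so they are never altered by the data we see.
for param in model[0].parameters():
    param.requires_grad = False

# Hand only the still-trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)

# One training step on made-up data: all layers compute,
# but only the non-frozen layer's parameters get updated.
x = torch.randn(16, 4)
y = torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```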