I’m working on a classification problem, and I intend to have a branched model. By that I mean using part of an existing network, then adding a few more layers to it to obtain a different output. It’s like transfer learning, except I don’t have pretrained weights, so the weights will be learned while training the network. I have a doubt regarding backprop: how will backprop behave if there is a subnetwork branching off the main network?
I imagine you are referring to using a ‘custom_head’ in fastai. The backpropagation will happen just fine; PyTorch computes the gradients for you. You can define your head with the layers you want, set it as the new head of an existing network, and train it.
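A minimal sketch of the idea (all layer sizes and names here are made up for illustration; in practice the body would be the early layers of your existing network). The point is that calling `.backward()` on a loss computed from the branch output makes autograd propagate gradients through the branch head and on into the shared body, with no extra work on your part:

```python
import torch
import torch.nn as nn

class BranchedNet(nn.Module):
    """Shared body with a branch head hanging off an intermediate point."""
    def __init__(self):
        super().__init__()
        # Shared body (stand-in for the reused part of the network)
        self.body = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        # Original output head
        self.main_head = nn.Linear(32, 5)
        # New branch: a few extra layers, randomly initialized and learned in training
        self.branch_head = nn.Sequential(
            nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 3)
        )

    def forward(self, x):
        feats = self.body(x)
        return self.main_head(feats), self.branch_head(feats)

model = BranchedNet()
x = torch.randn(4, 16)                      # batch of 4 dummy inputs
out_main, out_branch = model(x)

# Loss on the branch output: backward() flows through the branch head
# and continues into the shared body automatically.
loss = nn.CrossEntropyLoss()(out_branch, torch.tensor([0, 1, 2, 1]))
loss.backward()

print(all(p.grad is not None for p in model.branch_head.parameters()))  # branch got gradients
print(all(p.grad is not None for p in model.body.parameters()))         # so did the shared body
print(all(p.grad is None for p in model.main_head.parameters()))        # unused head: no gradients
```

Note that the unused head gets no gradients for this loss, because it does not appear in the computation graph that produced `loss`.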
Hope it helps,
Sure, this helps! Thank you.