At the NeurIPS conference last year there was an important paper: Neural Ordinary Differential Equations.
It is easy to implement in fastai: instead of the ResNet block we had in lesson 7, we can swap in an ODE block built on odeint:
```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # reference implementation from the paper


class ODEBlock(nn.Module):
    def __init__(self, odefunc, tol=1e-3):
        super(ODEBlock, self).__init__()
        self.odefunc = odefunc
        self.integration_time = torch.tensor([0, 1]).float()
        self.tol = tol  # in the original example this came from argparse (args.tol)

    def forward(self, x):
        self.integration_time = self.integration_time.type_as(x)
        out = odeint(self.odefunc, x, self.integration_time,
                     rtol=self.tol, atol=self.tol)
        return out[1]  # state at t=1, i.e. the block's output

    @property
    def nfe(self):
        return self.odefunc.nfe

    @nfe.setter
    def nfe(self, value):
        self.odefunc.nfe = value
```
In this case we track the number of function evaluations (NFE) instead of the number of layers.
This approach gets good results with a smaller memory footprint, and the model can be made arbitrarily expressive through a continuous-depth function instead of the discrete layers in the current approach.
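To see the connection between the two views, here is a minimal sketch in pure Python (no torch; all names are illustrative, not from any library): a chain of residual blocks is exactly the Euler discretization of an ODE, and as the stack gets deeper it converges to the continuous-depth solution.

```python
import math

def residual_stack(f, y0, num_blocks):
    """A stack of residual blocks y_{n+1} = y_n + h * f(y_n) with h = 1/num_blocks.
    This is the Euler discretization of dy/dt = f(y) from t=0 to t=1."""
    y = y0
    h = 1.0 / num_blocks
    for _ in range(num_blocks):
        y = y + h * f(y)  # one "residual block": identity plus a scaled update
    return y

# Linear dynamics dy/dt = y: the exact continuous solution is y(1) = e * y(0).
f = lambda y: y
for n in (1, 10, 1000):
    print(n, residual_stack(f, 1.0, n))  # values approach math.e as n grows
```

Deepening the discrete stack approaches the continuous limit, which is what the ODE solver computes directly.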
Is it possible to add this to fast.ai, please? Or I can help with adding it, if I am allowed to.
You will need to import the solver library (the paper's reference implementation is torchdiffeq), whose ODE solver is treated as a black box:
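To illustrate what that black box does, here is a toy adaptive solver in pure Python (it is not the torchdiffeq API; all names are illustrative). It shows why NFE replaces layer count: tightening rtol/atol makes the solver evaluate the dynamics function more often, i.e. the effective "depth" adapts to the requested accuracy.

```python
import math

def adaptive_odeint(func, y0, t0, t1, rtol=1e-3, atol=1e-3):
    """Toy black-box adaptive solver (Heun/Euler pair). Returns (y(t1), nfe)."""
    t, y, h, nfe = t0, y0, (t1 - t0) / 10.0, 0
    while t < t1:
        h = min(h, t1 - t)                 # don't overshoot the end time
        k1 = func(t, y)
        k2 = func(t + h, y + h * k1)
        nfe += 2                           # two dynamics evaluations per attempt
        err = abs(h * (k2 - k1) / 2.0)     # local error estimate (Euler vs Heun)
        tol = atol + rtol * abs(y)
        if err <= tol:                     # accept the step
            y = y + h * (k1 + k2) / 2.0
            t = t + h
        # grow or shrink the step size from the error estimate
        ratio = (tol / err) ** 0.5 if err > 0 else 2.0
        h = h * min(2.0, max(0.2, 0.9 * ratio))
    return y, nfe

# dy/dt = y from t=0 to 1; exact answer is e. A tighter tolerance costs more NFE.
f = lambda t, y: y
y_loose, nfe_loose = adaptive_odeint(f, 1.0, 0.0, 1.0, rtol=1e-2, atol=1e-2)
y_tight, nfe_tight = adaptive_odeint(f, 1.0, 0.0, 1.0, rtol=1e-6, atol=1e-6)
print(nfe_loose, nfe_tight)  # the tight tolerance uses many more evaluations
```

The real library exposes the same knobs (rtol/atol) and the same NFE trade-off, which is what the `nfe` property on the block above reports.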