Network deconvolution - Train neural networks faster

Going through ICLR 2020 submissions, I found this interesting paper - Network Deconvolution.

It speeds up network training by simply adding a layer before each convolution. I integrated it with the fastai2 repo and ran some experiments on Imagenette.

Although I couldn’t beat the benchmarks, it improved accuracy substantially (up to 10%) for many networks. I have shared the results of some head-to-head comparisons with/without deconv in this Colab notebook.
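For anyone curious what the added layer actually does: the core idea of the paper is to decorrelate (whiten) the inputs before each convolution, so the output covariance is approximately the identity. Here is a minimal NumPy sketch of that whitening step on flat feature vectors — the function name `deconv_whiten` is my own, and the paper applies this to im2col patches inside the network rather than to raw features like this:

```python
import numpy as np

def deconv_whiten(x, eps=1e-5):
    """Whiten x (n_samples, n_features) so the output covariance is
    approximately the identity (ZCA-style whitening) -- a sketch of
    the decorrelation a network-deconvolution layer applies before
    a convolution. Illustrative only, not the paper's exact code."""
    x = x - x.mean(axis=0)              # center each feature
    cov = x.T @ x / len(x)              # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)    # symmetric eigendecomposition
    # inverse square root of the covariance, with eps for stability
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return x @ inv_sqrt                 # decorrelated features
```

After this transform, even strongly correlated input features come out with near-identity covariance, which is what lets the subsequent convolution train faster.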


Hi Deepak. This is certainly an interesting technique.

Did you know there is already an active forum topic on this paper?

@Pomo it’s been discussed before, but it’s on the new course forums, which aren’t available to everyone right now :slight_smile: (they will be when course-v4 is released)