Going through ICLR 2020 submissions, I found this interesting paper: Network Deconvolution.
It speeds up network training by simply adding a layer before each convolution. I integrated it with the fastai2 repo and ran some experiments on Imagenette.
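The core idea, as I understand the paper, is to whiten (decorrelate) the input features before each convolution instead of just normalizing them. Here is a minimal NumPy sketch of that whitening step on a flattened feature matrix; the function name, shapes, and `eps` value are my own for illustration, not the paper's actual code:

```python
import numpy as np

def whiten(x, eps=1e-6):
    """Decorrelate features. x has shape (n_samples, n_features).
    Multiplies the centered data by the inverse square root of its covariance,
    so the output features have (approximately) identity covariance."""
    x = x - x.mean(axis=0)                                   # center each feature
    cov = (x.T @ x) / x.shape[0] + eps * np.eye(x.shape[1])  # regularized covariance
    vals, vecs = np.linalg.eigh(cov)                         # cov = V diag(vals) V^T
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T         # cov^(-1/2)
    return x @ inv_sqrt

# quick check on some correlated synthetic features
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))     # mix to correlate
w = whiten(x)
cov_w = (w.T @ w) / w.shape[0]                               # should be ≈ identity
```

In the paper this operation is applied to the im2col patches feeding each conv layer, with the covariance estimated from the batch, which is what makes it a drop-in layer before convolution.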
Although I couldn’t beat the benchmarks, it improved accuracy substantially (up to 10%) for many networks. I have shared the results of some head-to-head comparisons with/without deconv in this colab notebook.