[Paper] 31× speedup in DL computations using hashing

  • A novel hashing-based technique that drastically reduces the amount of computation needed to train and test deep networks.
  • The algorithm reduces the overall computational cost of forward and back-propagation by operating on a significantly smaller (sparse) set of nodes.
  • As a consequence, it uses only 5% of the total multiplications while staying, on average, within 1% of the accuracy of the original model.
  • We demonstrate the scalability and sustainability (energy efficiency) of our proposed algorithm via rigorous experimental evaluations on several real datasets.
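The core idea behind such hashing-based sparsification is to index each neuron's weight vector in locality-sensitive hash (LSH) tables, then, for a given input, compute only the neurons that collide with it in some table. Below is a minimal sketch of that idea using signed random projections (SimHash); this is an illustration of the general technique, not the paper's actual implementation, and all names and parameter choices (`K`, `L`, `simhash`, etc.) are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

D, N = 64, 1000        # input dimension, neurons in the layer
K, L = 8, 4            # hash bits per table, number of tables (illustrative)

W = rng.standard_normal((N, D))          # neuron weight vectors
planes = rng.standard_normal((L, K, D))  # random hyperplanes for SimHash

def simhash(v, table):
    """K-bit signed-random-projection hash of vector v in one table."""
    bits = (planes[table] @ v) > 0
    return int(bits.dot(1 << np.arange(K)))

# Preprocessing: index every neuron into each table by hashing its weights.
tables = [dict() for _ in range(L)]
for t in range(L):
    for n in range(N):
        tables[t].setdefault(simhash(W[n], t), []).append(n)

def active_set(x):
    """Neurons whose weight vectors collide with input x in any table."""
    cand = set()
    for t in range(L):
        cand.update(tables[t].get(simhash(x, t), []))
    return cand

# Sparse forward pass: only the retrieved neurons are computed;
# all others are treated as inactive (output zero).
x = rng.standard_normal(D)
active = sorted(active_set(x))
out = np.zeros(N)
out[active] = np.maximum(W[active] @ x, 0.0)   # ReLU on active neurons only
print(f"computed {len(active)}/{N} neurons")
```

Because SimHash collision probability grows with the angle similarity between the input and a neuron's weights, the retrieved set concentrates on the neurons that would have had the largest pre-activations, which is what lets the full multiplication count drop so sharply.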

