Interesting: Accelerating Deep Learning by Focusing on the Biggest Losers

This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration. Selective-Backprop uses the output of a training example's forward pass to decide whether to use that example to compute gradients and update parameters, or to skip immediately to the next example. By reducing the number of computationally expensive backpropagation steps performed, Selective-Backprop accelerates training. Evaluation on CIFAR10, CIFAR100, and SVHN, across a variety of modern image models, shows that Selective-Backprop converges to target error rates up to 3.5x faster than standard SGD and between 1.02–1.8x faster than a state-of-the-art importance sampling approach. A further 26% acceleration can be achieved by using stale forward-pass results for selection, thus also skipping forward passes of low-priority examples.

Paper: http://arxiv.org/pdf/1910.00762v1
Code: https://anonymous.4open.science/r/c6d4060d-bdac-4d31-839e-8579650255b3/
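For anyone curious what the mechanism looks like in practice, here's a minimal PyTorch sketch of the idea as I read it. This is not the authors' code: `selective_backprop_step` and `keep_frac` are names I made up, and the paper actually selects examples probabilistically based on their loss percentile rather than a hard top-k, but the shape of the trick is the same.

```python
import torch
import torch.nn.functional as F

def selective_backprop_step(model, optimizer, images, labels, keep_frac=0.25):
    """One training step in the spirit of Selective-Backprop (my sketch):
    run a cheap forward pass, keep only the highest-loss examples,
    and backprop through just that subset."""
    model.eval()
    with torch.no_grad():  # selection pass: no gradients needed, so it's cheap
        per_example_loss = F.cross_entropy(model(images), labels, reduction="none")

    # Keep the "biggest losers": the examples with the highest loss.
    # keep_frac is a hypothetical knob; the paper uses probabilistic selection.
    k = max(1, int(keep_frac * images.size(0)))
    _, idx = torch.topk(per_example_loss, k)

    model.train()
    optimizer.zero_grad()
    # Second forward pass on the selected subset, this time building the graph,
    # followed by backprop and a parameter update only over those examples.
    loss = F.cross_entropy(model(images[idx]), labels[idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```

The backward pass is what dominates the cost, so shrinking the set of examples that get gradients is where the speedup comes from; the stale-forward-pass variant mentioned in the abstract goes further and reuses earlier losses so even the selection forward pass can be skipped for low-priority examples.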

@LessW2020 @jeremy You should take a look at this! If the results are as good as the claims, this is interesting: it would give you the wall-clock speed of Lamb without the massive batches, which you sometimes can't use because of memory limits.
