We, the MLCommons Algorithms working group, have recently launched the AlgoPerf competition for neural network training methods. Since the members of this forum might know a thing or two about how to train deep networks, we want to invite everyone to submit contenders based on either new or existing techniques.
To sweeten the deal, there's a total of $50,000 in prizes up for grabs. But beyond the cash rewards, we believe the bragging rights to the best neural network training method (and perhaps a share of Adam's citation count, which being crowned a winner may well bring) are even more valuable.
Selecting the right training method for neural networks, be it the optimizer, the learning rate, or the learning rate schedule, has long plagued deep learning practitioners. It is also a frequent and popular topic on this forum, e.g. How do we decide the optimizer used for training, Meet Ranger, or What's your go-to optimizer in 2021. Despite decades of effort by the ML community, training a neural network remains far from an enjoyable user experience, due to the trial and error involved in making these critical choices. A big reason for this chaos is the lack of a standardized evaluation protocol for training methods.
Should you train with "AdamW with a one-cycle policy", "Ranger with a cosine schedule", or one of the (literally) hundreds of other training methods proposed, mostly in the last 10 years (e.g. AdaBelief, Lion, VeLO, Sophia, Prodigy, …)? Now these methods (finally) have a proper stage on which to compete.
Whether you have a favorite way of training neural networks and want to see how it stacks up against others, you've developed a novel training method and want to showcase its benefits convincingly, or you simply want to help find the most promising way to train neural networks amidst the ever-growing list of optimization methods, this is your chance. Submit your entry to the AlgoPerf competition!
For detailed information on how to participate, check out our Call for Submissions and visit our GitHub repository for the code, technical documentation, and competition rules. If you have any questions about the competition, compute support, or issues with the code, please don't hesitate to reach out to us here, via email, or on GitHub. We're here to help!