I’m trying to figure out an idiomatic way to force the learner/dataloader to train on only a single, fixed batch. I’m doing this following Karpathy’s excellent advice to overfit one batch, from: A Recipe for Training Neural Networks
My current solution is to scoop up a single batch from my existing dataloader (created from a DataBlock) and create a new, synthetic dataloader that wraps that batch:
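Roughly like this (a minimal, untested sketch for fastai v2; `dls`, `model`, and `loss_func` stand in for my real objects):

```python
from torch.utils.data import TensorDataset
from fastai.data.core import DataLoaders
from fastai.data.load import DataLoader
from fastai.learner import Learner

xb, yb = dls.one_batch()                       # grab one fixed (post-transform) batch
fixed_ds = TensorDataset(xb.cpu(), yb.cpu())   # wrap its tensors in a dataset
fixed_dl = DataLoader(fixed_ds, bs=len(xb),    # yields exactly that batch each epoch
                      device=dls.device)

fixed_dls = DataLoaders(fixed_dl, fixed_dl)    # reuse the same batch for train/valid
learn = Learner(fixed_dls, model, loss_func=loss_func)
learn.fit(50)                                  # loss should drive toward ~0
```

One caveat: `dls.one_batch()` returns the batch after the original `after_batch` transforms have already run, so the synthetic loader must not reapply them; that is why this uses the plain fastai `DataLoader` rather than a `TfmdDL`.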
Thanks for the reply and suggestions! Both of them suffer from the same problem: I want one, fixed batch to be fed for training every time. However, looking at partial_dataloaders, I wonder if I can supply get_idxs as an argument to dataloaders? As a last resort, I could roll my own version of partial_dataloaders that keeps the sampled idxs fixed, along the lines of the sketch below.
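For the record, if I do end up rolling my own, I imagine it would look roughly like this, a sketch modeled on fastai’s PartialDL but with the random draw done once and cached (untested; the class name and `fixed_idxs` attribute are my own inventions):

```python
import numpy as np
from fastai.data.core import TfmdDL

class FixedPartialDL(TfmdDL):
    "Like fastai's PartialDL, but the sampled idxs are drawn once and then frozen."
    def __init__(self, dataset=None, bs=None, partial_n=None, **kwargs):
        super().__init__(dataset=dataset, bs=bs, **kwargs)
        self.partial_n = min(partial_n, self.n) if partial_n else None
        self.fixed_idxs = None
    def get_idxs(self):
        if self.partial_n is None: return super().get_idxs()
        if self.fixed_idxs is None:  # draw once, then reuse every epoch
            self.fixed_idxs = list(np.random.choice(self.n, self.partial_n, replace=False))
        return self.fixed_idxs
    def __len__(self):
        if self.partial_n is None: return super().__len__()
        return self.partial_n // self.bs + (0 if self.partial_n % self.bs == 0 else 1)

# e.g. a single fixed batch of 64 training items; note that batch transforms
# from the original dl (e.g. dls.train.after_batch) may need to be passed too:
# fixed_dl = FixedPartialDL(dls.train.dataset, bs=64, partial_n=64)
```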
I’m curious: how did you find out about these classes? I browsed the docs rather carefully and didn’t come across these utilities.