Large datasets and sampling in deep learning

Hi everyone,
How can I get a faster cycle of training and iterating on changes when dealing with a larger dataset?
Is there a clever sampling algorithm in fastai that computes a representative subset of a dataset? (I know I can use pandas' sample method to get a random subset.)
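
Just to be clear, this is the kind of plain random subsampling I mean with pandas (the file name and fraction are just placeholders), as opposed to something smarter that picks a representative subset:

```python
import pandas as pd

df = pd.read_csv("train.csv")                     # placeholder path to the full training data
subset = df.sample(frac=0.1, random_state=42)     # plain random 10% sample, reproducible via the seed
```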

And why is the learn.fit() function "measured in epochs"? I mean, one epoch can already be quite a lot.
Is there a way to just train on a fixed number of minibatches instead, so that with every .fit call the dataset gets explored a bit further? Roughly what I mean is sketched below.
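
Here is a rough sketch of the idea in plain PyTorch (not fastai, and the model/data are dummies I made up just for illustration): each call trains on only n_batches minibatches, and with a shuffled DataLoader repeated calls keep sampling new parts of the dataset.

```python
import itertools
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data and model, only to illustrate the idea
xs = torch.randn(10_000, 20)
ys = torch.randint(0, 2, (10_000,))
dl = DataLoader(TensorDataset(xs, ys), batch_size=64, shuffle=True)

model = torch.nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

def fit_n_batches(n_batches):
    """Train on n_batches minibatches instead of a full epoch."""
    model.train()
    for xb, yb in itertools.islice(dl, n_batches):
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
        opt.zero_grad()

fit_n_batches(100)  # each call sees a fresh random slice of the data
```

Is there something like this built into fastai's fit API, or do people just hand-roll it?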
Sorry if these questions have been asked already.
Thanks