Fit_one_cycle on one big databunch vs on smaller ones

Option 1: `learnerWithLotsOfData.fit_one_cycle()` once, over the full databunch.
Option 2: call `fit_one_cycle()` separately on each smaller databunch until all the data has been covered.

What noteworthy differences are there between Options 1 and 2?
I know that parameter schedules (e.g. the learning rate) run a full cycle on each call, but is anything else scaled by the size of the dataset?
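To make the scheduling difference concrete, here is a rough sketch of a 1cycle-style learning-rate schedule (a simplification, not fastai's exact implementation; `one_cycle_lrs`, the step counts, the warmup fraction, and the start-LR divisor are all made up for illustration):

```python
import math

def one_cycle_lrs(total_steps, lr_max=1e-3, pct_start=0.25):
    """Simplified 1cycle LR schedule: cosine warmup from lr_max/10 up to
    lr_max, then cosine anneal back down. Returns one LR per step."""
    lrs = []
    warm = int(total_steps * pct_start)
    for step in range(total_steps):
        if step < warm:
            # warmup phase: rise from lr_max/10 to lr_max
            t = step / max(warm, 1)
            lr = lr_max / 10 + (lr_max - lr_max / 10) * (1 - math.cos(math.pi * t)) / 2
        else:
            # annealing phase: fall from lr_max toward 0
            t = (step - warm) / max(total_steps - warm, 1)
            lr = lr_max * (1 + math.cos(math.pi * t)) / 2
        lrs.append(lr)
    return lrs

# Option 1: one cycle over all 1000 batches -> the LR peaks once
full = one_cycle_lrs(1000)
# Option 2: four calls over 250 batches each -> the LR peaks four times
parts = [lr for _ in range(4) for lr in one_cycle_lrs(250)]
```

With one call the learning rate warms up and anneals exactly once over the whole run; with several calls each databunch gets its own full warmup/anneal cycle, which behaves like repeated warm restarts rather than one long schedule.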

I'm asking so I can plan training across multiple input resolutions.