Option 1: train on all the data in one go:

    learnerWithLotsOfData.fit_one_cycle()

Option 2: run through all the data in parts:

    learnerWithPartOfData.fit_one_cycle()
    learnerWithPartOfData.data = nextPartOfData
    # repeat for each remaining part
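For concreteness, here is a rough sketch of what I mean by the two options, in fastai v1-style code (the path, the chunk layout, the image sizes, and the cycle lengths are all placeholders of mine, not anything fixed):

```python
from fastai.vision import *  # fastai v1-style API

path = Path('data/mydataset')  # placeholder: expects train/valid subfolders

# Option 1: build one learner over all the data and run a single cycle.
data_all = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data_all, models.resnet34)
learn.fit_one_cycle(4)

# Option 2: the same learner, but swap a chunk of the data in before each
# cycle. Here I assume a hypothetical layout with one train/valid folder
# pair per chunk under part0/, part1/, part2/.
data_chunks = [ImageDataBunch.from_folder(path/f'part{i}',
                                          ds_tfms=get_transforms(), size=224)
               for i in range(3)]
learn = cnn_learner(data_chunks[0], models.resnet34)
for chunk in data_chunks:
    learn.data = chunk          # point the learner at the next chunk
    learn.fit_one_cycle(4)      # each call runs its own full LR cycle
```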
What noteworthy differences are there between Options 1 and 2?
I know about parameter scheduling (e.g. the learning rate), where each call to fit_one_cycle runs its own full cycle, but is anything scaled by the size of the dataset?
I'm asking because I want to plan training on multiple input resolutions.
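Concretely, the plan would be along the lines of progressive resizing: one cycle per resolution, rebuilding the DataBunch each time. A sketch, again in fastai v1 style with a placeholder path and an arbitrary resolution schedule:

```python
from fastai.vision import *  # fastai v1-style API

path = Path('data/mydataset')  # placeholder: expects train/valid subfolders

learn = None
for size in (64, 128, 224):    # arbitrary resolution schedule
    data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=size)
    if learn is None:
        learn = cnn_learner(data, models.resnet34)
    else:
        learn.data = data      # swap in the higher-resolution DataBunch
    learn.fit_one_cycle(4)     # restarts the one-cycle schedule per size
```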