Hi, I see that when I pass the batch size parameter like this:

data = (src.transform((trn_tfms, _), size=512, bs=16, num_workers=1)
        .databunch().normalize(protein_stats))

the number of batches stays the same as with bs=64, and because of this my GPU memory is getting overrun.
With bs=64 there are 417 batches; when I change bs to 16 or 24, the number of batches shown in the progress bar should increase, but it stays the same.
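For context, the batch count should scale roughly inversely with batch size. A quick sanity check of what I'd expect to see in the progress bar (the sample count here is an assumption, inferred from 417 batches at bs=64):

```python
import math

# Assumed dataset size: 417 batches at bs=64 suggests about 417 * 64 samples.
n_samples = 417 * 64  # = 26688, an inferred figure, not a confirmed count

for bs in (64, 24, 16):
    n_batches = math.ceil(n_samples / bs)
    print(f"bs={bs}: {n_batches} batches")
# bs=64: 417 batches
# bs=24: 1112 batches
# bs=16: 1668 batches
```

(For what it's worth, in fastai v1 `bs` and `num_workers` are normally passed to `.databunch()` rather than `.transform()`, so a `bs` keyword given to `transform` may simply have no effect on the DataLoader.)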