Meet Ranger - RAdam + Lookahead optimizer

So after adding transforms, a big difference in performance emerges: 73.6% (V1) vs 69.08% (V2). It's probably the way we've set up the v2 transforms that is driving the difference (maybe a small chance it's a difference in the transforms' underlying implementations, but unlikely I'd guess).

Will try to do an ablation test tomorrow to see if I can narrow down the culprit. Note that I still need to find a proper v2 equivalent of the 3rd transform below ("resize and crop").

Transforms used (V1 naming)

  • flip_lr
  • presize(128, scale=(0.35,1)) (Resize images to size using RandomResizedCrop)
  • size=128 (equivalent to resize and crop; the "no transform" version above used size=(128,128), which is equivalent to squish)
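For reference, scale=(0.35, 1) means each crop covers a random fraction (35–100%) of the source image's area before being resized to 128×128. A minimal stdlib sketch of that sampling, simplified from torchvision's RandomResizedCrop (aspect-ratio jitter omitted, square crops assumed):

```python
import math
import random

def sample_crop(width, height, size=128, scale=(0.35, 1.0), rng=random):
    """Pick a random crop box covering scale[0]..scale[1] of the image
    area; the crop would then be resized to size x size."""
    area = width * height
    target_area = rng.uniform(*scale) * area
    side = int(round(math.sqrt(target_area)))  # square crop for simplicity
    side = min(side, width, height)            # clamp to image bounds
    x = rng.randint(0, width - side)
    y = rng.randint(0, height - side)
    return (x, y, side, side), (size, size)

box, out_size = sample_crop(320, 240)
```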

Fastai V1 Result

(73.6+74.2+73.8+72+74.6)/5 = 73.64%

Databunch code:

img_ls = ImageList.from_folder(src).split_by_folder(train='train', valid='val').label_from_folder()

img_ls = img_ls.transform(([flip_lr(p=0.5)], []), size=(128))

data = img_ls.databunch(bs=64, num_workers=nw).presize(128, scale=(0.35,1)).normalize(imagenet_stats)

Fastai V2 Result

(66.8+71+67.8+71+68.8)/5 = 69.08%
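Checking the arithmetic on both 5-run averages from the numbers above:

```python
v1_runs = [73.6, 74.2, 73.8, 72, 74.6]
v2_runs = [66.8, 71, 67.8, 71, 68.8]

v1_avg = sum(v1_runs) / len(v1_runs)  # 73.64
v2_avg = sum(v2_runs) / len(v2_runs)  # 69.08
gap = v1_avg - v2_avg                 # ~4.56 points
```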

Databunch code:

tfms = [[PILImage.create], [parent_label, lbl_dict.__getitem__, Categorize()]]

item_tfms = [FlipItem(0.5)]

dsrc = DataSource(items, tfms, splits=split_idx)

batch_tfms = [Cuda(), IntToFloatTensor(), Normalize(*imagenet_stats)]

dbch = dsrc.databunch(item_tfms=item_tfms,
                      after_item=[ToTensor(), RandomResizedCrop(128, min_scale=0.35)],
                      after_batch=batch_tfms,
                      bs=64,
                      num_workers=nw)
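The ablation test mentioned above could be structured as a loop that disables one suspect transform at a time and compares against the baseline. A sketch of that harness, where train_and_eval is a hypothetical stub standing in for the real fastai v2 training run:

```python
# Hypothetical ablation harness: train_and_eval is a stub; in the real
# test it would rebuild the DataBunch without the transforms named in
# `disabled`, train, and return validation accuracy.
SUSPECTS = ["FlipItem", "RandomResizedCrop", "Normalize"]

def train_and_eval(disabled=()):
    """Stub standing in for the actual training + evaluation run."""
    return 0.0  # placeholder accuracy

def run_ablation(suspects=SUSPECTS):
    results = {"baseline": train_and_eval(disabled=())}
    for name in suspects:
        # Disable exactly one transform per run to isolate its effect.
        results[f"no_{name}"] = train_and_eval(disabled=(name,))
    return results
```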