Is it possible to use a different lr for each fold when training with K-Fold CV?

The code looks like this:

import random
from fastai.vision.all import *
from sklearn.model_selection import StratifiedKFold

# train_path = training_set
# initialize the Stratified K-Fold
folds = 10

skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=9)

# grab all the filenames and labels from the dataset
fnames = get_image_files(train_path)
random.shuffle(fnames)
labels = [parent_label(fn) for fn in fnames]

for train_idx, val_idx in skf.split(fnames, labels):
    # the fold indices refer to the shuffled fnames list, so hand that same
    # list to the DataBlock; calling get_image_files again could return a
    # different ordering and misalign IndexSplitter
    dblock = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=lambda _: fnames,
        get_y=parent_label,
        splitter=IndexSplitter(val_idx),
        item_tfms=item_tfms,
        batch_tfms=batch_tfms,
    )
    dls = dblock.dataloaders(train_path, bs=64)

    # === create learner ===
    learn = vision_learner(dls, resnet34, pretrained=True, metrics=accuracy)

    # use lr_find to pick a learning rate for this fold
    # (valley is the default suggestion, i.e. lr_find(suggest_funcs=(valley,)))
    s_lr = learn.lr_find().valley

    # === training ===
    learn.fine_tune(epochs=10, base_lr=s_lr)
    _, val_acc = learn.validate()  # returns [val_loss, accuracy]

Is it possible to use lr_find to set the lr for the learner's fine_tune like this when using K-Fold CV, so that in each fold the lr changes with that fold's "different" training and validation sets?

Sure, just call the LR finder inside the loop and use one of the suggestion methods to get the lr, e.g.:

lr = learn.lr_find().valley
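
If you want to compare more than one suggestion per fold, lr_find also takes a suggest_funcs argument in recent fastai versions (2.4+); a minimal sketch:

from fastai.vision.all import *  # exports valley, slide, steep, minimum

# ask for several suggestions at once; the result has one
# attribute per suggestion function passed in
suggs = learn.lr_find(suggest_funcs=(valley, slide))
lr = suggs.valley  # or suggs.slide, whichever looks better on the plot

Since lr_find runs inside the fold loop, each fold gets its own suggestion based on that fold's training split.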

Thank you for the reply. I have a follow-up question:

Assuming I am using learn.fine_tune(epochs=10) within a K-Fold CV where n_folds=10, does that still count as 10 epochs, or does it become 10*10 epochs (since each fold runs fine_tune(10))?

It's 11 epochs per model: fine_tune(n) runs 1 frozen epoch and then n unfrozen epochs, so n+1 in total.

Does that mean there will be just 11 epochs, or 11*10 = 110 epochs? (Sorry for asking such a simple question; I've just started learning DL.)

11*10 = 110: a set of 11 epochs for each of the 10 fold models.
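
As a quick sanity check on the arithmetic (the variable names here are just illustrative):

# fine_tune(n) = 1 frozen epoch + n unfrozen epochs
n_epochs, n_folds = 10, 10
epochs_per_fold = 1 + n_epochs            # 11
total_epochs = epochs_per_fold * n_folds  # 110 across all 10 fold models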
