Implementing Curriculum Learning in fastai for Multi-Label Image Segmentation

Hello everyone,

I’m currently exploring how to implement curriculum learning in fastai for a multi-label image segmentation task. I came across Jeremy’s example of weighted dataloaders in one of his live coding sessions, and it got me thinking about an approach that uses per-item losses to weight the training data.

Here’s my current approach:

  1. Train the model’s head for 12 epochs.
  2. Unfreeze and train the full model for another 12 epochs.
  3. Compute per-item losses and use them to build weighted dataloaders (WDL); a rough sketch of steps 3 and 4 follows this list.
  4. Train the model with the WDL for 12 epochs.
  5. Recompute the per-item losses and update the WDL.
  6. Train the model with the updated WDL for another 12 epochs.
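
To make steps 3 and 4 concrete, here is roughly what I have in mind. This is only a sketch: `dblock`, `source`, and `learn` stand for my existing DataBlock, data source, and Learner, and I’m not certain what shape `get_preds` returns the losses in for segmentation, so the reduction step may need adjusting.

```python
from fastai.vision.all import *
from fastai.callback.data import *  # WeightedDL and Datasets.weighted_dataloaders

# step 3: per-item losses on the training split (get_preds keeps dataset order)
preds, targs, losses = learn.get_preds(ds_idx=0, with_loss=True)

# for segmentation the losses may come back per-pixel, so collapse each
# item to a single scalar before turning it into a sampling weight
item_losses = losses.view(len(learn.dls.train_ds), -1).mean(dim=1)

# higher loss -> sampled more often (normalised to sum to 1)
wgts = (item_losses / item_losses.sum()).numpy()

# `dsets` should be the same Datasets (or an identically split one) that
# learn.dls was built from, so the weights line up with the right items
dsets = dblock.datasets(source)
dls_wdl = dsets.weighted_dataloaders(wgts, bs=16)

# step 4: train with the weighted dataloaders
learn.dls = dls_wdl
learn.fit_one_cycle(12)
```

Steps 5 and 6 would then just repeat the `get_preds` / reweight / train cycle.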

My main question is: Does this strategy make sense for implementing curriculum learning, or should I create different versions of the dataset (easy/hard) to train the model on specific difficulty levels?
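
If the separate easy/hard versions make more sense, the staged setup I picture would sort the items by their loss and train on the easier portion first, something like the following (reusing `item_losses`, `dsets`, and `learn` from the sketch above; the 50/50 split and the zero-weight trick are just placeholders for whatever difficulty split is appropriate).

```python
import numpy as np

# sort training items from easiest (lowest loss) to hardest
order = item_losses.argsort().numpy()
n_easy = len(order) // 2

# stage 1: sample only from the easy half by zeroing the hard items' weights
easy_wgts = np.zeros(len(order))
easy_wgts[order[:n_easy]] = 1.0
learn.dls = dsets.weighted_dataloaders(easy_wgts / easy_wgts.sum(), bs=16)
learn.fit_one_cycle(6)

# stage 2: back to uniform sampling over the full training set
full_wgts = np.ones(len(order)) / len(order)
learn.dls = dsets.weighted_dataloaders(full_wgts, bs=16)
learn.fit_one_cycle(6)
```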

Moreover, I’m wondering if it’s possible to modify dataloaders to automatically select hard examples for training. Is this a viable option within the fastai framework?
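
By “automatically” I mean something like a loop that re-estimates the per-item losses every few epochs and shifts the sampling weights toward whatever is currently hard, roughly as below (same placeholder names as above; the above-median cutoff and the number of rounds are arbitrary choices on my part).

```python
# automate steps 3-6: re-estimate difficulty each round and bias sampling
# toward the currently hard examples
for round_num in range(3):
    _, _, losses = learn.get_preds(ds_idx=0, with_loss=True)
    item_losses = losses.view(len(learn.dls.train_ds), -1).mean(dim=1)

    # keep only above-median (hard) items in this round's sampling pool
    hard = (item_losses > item_losses.median()).float()
    wgts = (hard / hard.sum()).numpy()

    learn.dls = dsets.weighted_dataloaders(wgts, bs=16)
    learn.fit_one_cycle(4)
```

I suppose this could also be wrapped in a Callback so the reweighting happens within a single fit call, but I’m not sure how cleanly calling `get_preds` mid-training would interact with the rest of the training loop.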

I would greatly appreciate any insights, guidance, or experience you can share on implementing curriculum learning in fastai. Your expertise will be immensely valuable to me.

Thank you all for your time and support!

Best regards,
Bilal