I have found the Fast.ai forums to be the best place for receiving critical feedback on my academic research, so I want to share our new work analysing transfer learning under constraints of model and data sparsity, presented in our paper "Reprogramming under constraints: Revisiting efficient and reliable transferability of lottery tickets".
We uncover some interesting behaviour in how model and data sparsity affect transfer when working with lottery tickets and/or few-shot adaptation. Further, we provide concrete evidence that such constraints (model sparsity) are detrimental to the calibration of transferred models, regardless of the transfer learning algorithm studied within the scope of this work (linear probing and visual prompting).
In my opinion, transfer learning evaluation places a lot of focus on accuracy but little on other reliability metrics like calibration, so we hope our work sets a precedent for making reliability metrics standard in evaluation. All code and experiment logs are open and publicly accessible via the repository: GitHub - landskape-ai/Reprogram_LT: [arXiv] "Reprogramming under constraints: Revisiting efficient and reliable transferability of lottery tickets" by Diganta Misra, Agam Goyal, Bharat Runwal, and Pin-Yu Chen
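For readers unfamiliar with calibration as a metric: a common way to quantify it is the Expected Calibration Error (ECE), which bins predictions by confidence and measures the gap between confidence and accuracy in each bin. Below is a minimal NumPy sketch of ECE; this is an illustrative implementation I wrote for this post, not code from our repository, and the function name and binning scheme are my own choices.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    """Expected Calibration Error: the average absolute gap between
    confidence and accuracy, weighted by the fraction of samples per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            avg_conf = confidences[in_bin].mean()   # mean confidence in bin
            avg_acc = correct[in_bin].mean()        # accuracy in bin
            ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Overconfident toy model: 90% confidence but only 50% accuracy -> ECE = 0.4
print(expected_calibration_error([0.9, 0.9], [0, 1], [0, 0]))
```

A well-calibrated model yields an ECE near zero; in the paper we observe this gap widening as model sparsity increases, across both transfer methods.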