I’ve been trying out different sets of images with the fast.ai lesson 1 code, and also varying the learning rate and number of epochs to see how they affect accuracy.
I tried the code on some chest X-ray data I downloaded from Kaggle and could only get the accuracy to 62%, but I suspect it could be improved by training with more images and using differential learning rates. I’ve made some notes on the analyses I’ve done so far, in case anyone else finds them useful.
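For anyone unfamiliar with differential learning rates: the idea is to pass one learning rate per layer group, with smaller rates for the early (more general) layers of the pretrained network. Here is a minimal sketch of that scaling scheme in plain Python, assuming the common three-group, factor-of-3 spacing used in the course (the function name and the factor are my own illustration, not the library’s API):

```python
def lr_groups(base_lr: float, n_groups: int = 3, factor: float = 3.0) -> list:
    """Return one learning rate per layer group, smallest first.

    Early layer groups get the base rate divided by factor**k,
    so with defaults: [base_lr/9, base_lr/3, base_lr].
    """
    return [base_lr / factor ** (n_groups - 1 - i) for i in range(n_groups)]

# Example: base learning rate of 0.01 across three layer groups
print(lr_groups(0.01))  # → [0.001111..., 0.003333..., 0.01]
```

In the lesson 1 notebook the equivalent is passing an array of rates to the fit call instead of a single number; the exact call signature depends on the fastai version you are running.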
My main conclusions were:
A low learning rate combined with too few epochs can reduce accuracy.
When using fewer images, more epochs are needed. In the Horses vs. Cows experiment, with only 47 images per label, a learning rate of 0.01 and 20 epochs were needed to reach 96.5% accuracy; with 87 images per label, 10 epochs at a learning rate of 0.01 were sufficient for 100% accuracy.
With everyday images, 100% accuracy can be achieved even with 4 labels and <100 images. With medical images like chest X-rays, however, it was difficult to get above 63% accuracy even with 4,621 images and >200 epochs.
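One way to reason about the images-vs-epochs trade-off in the second point: the total number of gradient updates is roughly epochs × batches per epoch, so a smaller dataset needs more epochs to give the optimizer a comparable number of updates. A rough back-of-the-envelope sketch (the batch size of 64 is my assumption, and this ignores the extra variety that more images provide):

```python
import math

def total_updates(n_images: int, batch_size: int, epochs: int) -> int:
    """Approximate number of gradient updates over a training run."""
    batches_per_epoch = math.ceil(n_images / batch_size)
    return batches_per_epoch * epochs

# 47 images per label x 2 labels, 20 epochs vs. 87 per label x 2 labels, 10 epochs
small = total_updates(47 * 2, 64, 20)  # fewer images, more epochs
large = total_updates(87 * 2, 64, 10)  # more images, fewer epochs
print(small, large)  # → 40 30: a similar number of updates in both runs
```

So the two runs above end up in the same ballpark of updates, which may be part of why the smaller dataset needed roughly twice the epochs.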
Interpretation_fast_ai_image_recognition_analyses.pdf (249.3 KB)