📝 Deep Learning Lesson 1 Notes

`data` is your `ImageDataBunch` object.
`data.valid_ds` refers to the validation dataset held inside that `ImageDataBunch`.
Similarly, `data.train_ds` refers to the training dataset.


Hi all,
Regarding the “Results” section of lesson 1, where we check the results: I was wondering about the line `len(data.valid_ds)==len(losses)==len(idxs)`. If it comes out True, does it mean all our training data set items are in the top losses? If so, doesn't that mean our training wasn't worthwhile?

No, it’s ensuring that each of these objects has the same length. If they had different lengths, it would mean there was a problem somewhere.


Cheers for the answer. I also found the answer on another forum. For anyone who runs into the same question in the future: as you said, the line checks that the validation dataset, the loss values, and the indexes of those loss values all have the same length. This doesn't mean that every validation item counts as a loss; rather, there should be a loss value/metric associated with each item in the validation dataset.
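To make the semantics of that check concrete, here is a minimal sketch using plain Python lists as stand-ins for the fastai objects (the names and values are made up for illustration):

```python
# Stand-ins for the fastai objects: interp.top_losses() yields one loss and
# one index per validation item, so all three lengths should match.
valid_ds = ["img0", "img1", "img2", "img3"]   # stand-in for data.valid_ds
losses   = [2.31, 1.07, 0.42, 0.05]           # one loss per item, sorted descending
idxs     = [3, 0, 2, 1]                       # dataset index of each item, by loss rank

# Python chains comparisons, so this line is equivalent to
# len(valid_ds) == len(losses) and len(losses) == len(idxs)
print(len(valid_ds) == len(losses) == len(idxs))   # True
```

It is a sanity check that every validation item got a loss, not a verdict on training quality.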

Hi all,

I was trying to train my model based on resnet50, but when I ran the learner the error rate actually became worse! I was wondering if anyone has an idea of the reason. Wasn't it supposed to increase the accuracy? The lesson 1 model itself became much more accurate. Please see the snippet below.


image
Many thanks in advance.

To experiment more and debug this, I changed my ImageDataBunch back to “data” from “dataTwo” and renamed my learner object back to “learn” from “learnFity”. Surprisingly, when I then ran the training with resnet50, the model became much more accurate (as below)! Can anyone help with an explanation?

image

This looks amazing! As @lbt suggested, it would be extremely useful to also see the mindmaps for lessons 2-7 if you have them.

Really love the mind maps!! :heart_eyes: :heart:

Is there a thread showing how to import an arbitrary dataset for image classification? When I try the get_image_files() method on other datasets in URLs, like MNIST or FOOD, I get empty lists. Is there official documentation on how to import datasets from the URLs object and deconstruct POSIX file paths to get the actual images?

Wow, it is wonderful. Great work!

Thank you Leovcld, those mindmaps are very helpful!

Very understandable. Maybe you can make the same for the other lessons? Thanks, DrC

Hey, I read your replies on the forum; you have great experience with fastai.
This blog where you summarised all of the videos is also really great and helpful for beginners. Can you please help me with a doubt? No one is answering doubts on the forum; maybe it's been inactive since 2019.
Can you help me with how to perform evaluation of an object detection model in fastai?
I have already trained the model, and my test data is also ready. It's a RetinaNet object detection model trained on the MIDOG 2021 challenge dataset.
I need various evaluation metrics for my model based on IoU thresholding of the model's predicted bounding boxes against the ground-truth bounding boxes (classic MS COCO-style object-detection-to-classification evaluation).
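The core of any such evaluation is the IoU computation itself, which is independent of fastai. A minimal sketch, assuming boxes in (x1, y1, x2, y2) corner format (the function name is mine, not from any library):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# box_iou((0, 0, 2, 2), (1, 1, 3, 3)) -> 1/7 (overlap area 1, union 7)
```

Note that fastai's object detection utilities may store boxes in (y, x) order or as center/size pairs, so check the convention of your pipeline before applying this.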
This is my sample code:

# build the train/valid databunch
train, valid, test = ObjectItemListSlide(train_images), ObjectItemListSlide(valid_images), ObjectItemListSlide(test_images)
item_list = ItemLists(".", train, valid)
lls = item_list.label_from_func(lambda x: x.y, label_cls=SlideObjectCategoryList)
lls = lls.transform(tfms, tfm_y=True, size=patch_size)
data = lls.databunch(bs=batch_size, collate_fn=bb_pad_collate, num_workers=0).normalize()

learn = Learner(data, model, loss_func=crit,
                callback_fns=[ShowGraph, CSVLogger, partial(GradientClipping, clip=2.0)])
learn.split([model.encoder[6], model.c5top5])
learn.freeze_to(-2)
learn.load('trained_model_bs64_GC', with_opt=True)

# test data: the test set goes in the "valid" slot so it is evaluated
item_list_t = ItemLists(".", train, test)
lls_t = item_list_t.label_from_func(lambda x: x.y, label_cls=SlideObjectCategoryList)
lls_t = lls_t.transform(tfms, tfm_y=True, size=patch_size)
data_t = lls_t.databunch(bs=batch_size, collate_fn=bb_pad_collate, num_workers=0).normalize()

detect_thresh = 0.5   # confidence threshold for keeping predictions
nms_thresh = 0.2      # non-maximum-suppression IoU threshold
image_count = 15

show_results_side_by_side(learn, anchors, detect_thresh=detect_thresh, nms_thresh=nms_thresh, image_count=image_count)

I can see the results after the last function, but it's just predicted boxes with scores over random patches of my data.
I need the precision, recall, accuracy, confusion matrix, ROC-AUC curve, etc., on all the test images. The metric for classification is IoU = 0.5 over the bounding box: if a bounding box predicted by the model has IoU > 0.5 with a positive ground-truth box, it is to be considered a true positive, and vice versa.
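Not a full notebook, but the matching step you describe can be sketched in plain Python, independent of fastai: greedily match score-sorted predictions to ground-truth boxes at IoU > 0.5, each ground truth at most once, then derive TP/FP/FN and from those precision and recall. Boxes are assumed to be (x1, y1, x2, y2) tuples and predictions dicts with "box" and "score" keys; all names here are illustrative, not from a library:

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(preds, gts, iou_thresh=0.5):
    """Greedy matching for one image: predictions taken in descending score
    order, each ground-truth box matched at most once. Returns (tp, fp, fn)."""
    preds = sorted(preds, key=lambda p: p["score"], reverse=True)
    matched = set()
    tp = fp = 0
    for p in preds:
        best, best_iou = None, iou_thresh   # require IoU strictly above threshold
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p["box"], g)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
        else:
            fp += 1
    fn = len(gts) - len(matched)
    return tp, fp, fn

# Accumulate (tp, fp, fn) over all test images, then:
#   precision = tp / (tp + fp)
#   recall    = tp / (tp + fn)
```

For a full MS COCO-style evaluation (AP across IoU thresholds), the pycocotools package implements this matching officially; the sketch above is just the IoU = 0.5 special case you asked about.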
Can you please share a notebook on how I can perform such an evaluation of the model? Any notebooks, resources, or code snippets are welcome.
Thanks to all of you for the great support on this wonderful platform.
You can mail me or message me on this forum; all suggestions are really welcome.
Warm regards,
Harshit
Harshit_joshi@iiitb.ac.in