Mean average precision of RetinaNet model

(mohamed amin houidi) #1

hello guys,
for the past week I've been training a RetinaNet model on Pascal VOC with fastai, by running the pascal.ipynb notebook in the fastai repo. The maximum mAP I managed to get was 17%. I wanted to ask whether any of you have tried it and what mAP you got, because 17% seemed pretty low to me, especially for a state-of-the-art model like RetinaNet.
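For context, mAP numbers hinge on how predicted boxes are matched to ground truth via IoU, so that part of the evaluation is worth double-checking too. A minimal IoU sketch (my own toy helper, not code from the notebook; boxes assumed as [x_min, y_min, x_max, y_max]):

```python
# Toy IoU helper for sanity-checking an evaluation pipeline.
# Boxes are assumed to be [x_min, y_min, x_max, y_max] in pixels.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # half-overlapping boxes -> 1/3
```

A detection usually counts as a true positive only above some IoU threshold (0.5 for Pascal-style mAP), so a bug in box coordinates tanks mAP even when the model itself is fine.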


(jaideep v) #2

hi Mohamed,
may I ask you:
1) how did you build the DataBunch and labels using fastai v1?
2) do you have a GitHub repo you can point me to? I'm working on building a model for the RSNA pneumonia detection Kaggle competition to get hands-on with object detection.
Any help would be appreciated.


(mohamed amin houidi) #3

The notebook I was talking about is this one:

but I don't recommend using it; there is definitely something wrong with the RetinaNet implementation in it.
Instead, I would look for another PyTorch implementation and follow the above notebook to create a fastai learner out of it, like this one:

Also check this one from the second-place winner in the pneumonia challenge:


(jaideep v) #4

yes, I'm referring to the PyTorch one.
Have you also checked this one out? It's currently in the dev phase:

I was planning to mix both of them and build a more elegant one.


(mohamed amin houidi) #5

I think the one you sent me is an older version of the one I sent you, but I haven't tried it. If you ever find out what's wrong with it, please tell me. 17% mAP is just too low; it should be around 90%.


(jaideep v) #6

sure, I'll let you know.
Right now I'm stuck at the DataBunch step; below is what I'm trying to use to create it:

    x = [list(x) for x in zip(train_df.bbox, train_df.Target)]
    img2bbox = dict(zip(train_df.patientId, x))
    get_y_func = lambda o: img2bbox[o[o.rfind('/')+1:]]
    tfms = get_transforms(do_flip=True, flip_vert=False)
    data = (ObjectItemList.from_df(train_df, path=path1/'train')
            .split_from_df(col=4)
            .label_from_func(get_y_func)
            .transform(tfms, tfm_y=True, size=(sz, sz))
            .databunch(bs=bs, collate_fn=bb_pad_collate))


1. I get an error at label_from_func, saying "int is not iterable":

        2 data = (ObjectItemList.from_df(train_df, path=path1/'train')
        3         .split_from_df(col=4)
    ----> 4         .label_from_func(get_y_func)
        5         .transform(tfms, tfm_y=True, size=(sz, sz))
        6         .databunch(bs=bs, collate_fn=bb_pad_collate)

    8 frames
    /usr/local/lib/python3.6/dist-packages/fastai/vision/ in process_one(self, item)
        331     super().process(ds)
    --> 333     def process_one(self, item): return [item[0], [self.c2i.get(o,None) for o in item[1]]]
        335     def generate_classes(self, items):

    TypeError: 'int' object is not iterable
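For what it's worth, that traceback shows the label processor iterating over the class part of the label (`item[1]`) and choking on a bare int: fastai v1 object-detection labels are `[list_of_bboxes, list_of_classes]`, so the class must be a list even when there is only one. A toy sketch of the fix (hypothetical data; `get_y_func` rewritten to wrap the scalar):

```python
# Toy sketch: fastai v1 expects y = [list_of_bboxes, list_of_classes];
# a bare int class triggers "'int' object is not iterable" in process_one.
img2bbox = {'id1': [[[20, 10, 60, 50]], 1]}   # one bbox, bare int class

def get_y_func(o):
    bboxes, target = img2bbox[o[o.rfind('/') + 1:]]
    classes = target if isinstance(target, list) else [target]
    return [bboxes, classes]                  # class wrapped in a list

print(get_y_func('train/id1'))  # [[[20, 10, 60, 50]], [1]]
```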

2. I need to customize the open function of ImageList, since my images are in DICOM format, so I need the pydicom library to read them. But if we use ObjectItemList as the source object, how can we customize the open function?

    class MyImageList(ImageList):
        def open(self, fn:PathOrStr) -> Image:
            return open_dicom(fn)
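One way to sketch an answer: since `ObjectItemList` itself derives from `ImageList` in fastai v1, you can subclass `ObjectItemList` directly and override `open`, then start the data block chain from that subclass. The classes below are pure-Python stand-ins (so the pattern runs without fastai); `open_dicom` is a hypothetical helper that would wrap pydicom:

```python
# Stand-ins for fastai's classes, just to show the override pattern.
class ImageList:                      # stand-in for fastai.vision.ImageList
    def open(self, fn):
        return f"png:{fn}"

class ObjectItemList(ImageList):      # stand-in for fastai's ObjectItemList
    pass

class DicomObjectItemList(ObjectItemList):
    def open(self, fn):
        # a real version would call e.g. pydicom.dcmread(fn) inside an
        # open_dicom() helper and convert the pixel array to an Image
        return f"dicom:{fn}"

print(DicomObjectItemList().open('x.dcm'))  # dicom:x.dcm
```

With the real classes you would then write `DicomObjectItemList.from_df(...)` instead of `ObjectItemList.from_df(...)` and the rest of the chain stays the same.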


(mohamed amin houidi) #7

I see. Sorry, I can't give you any feedback on that, since I haven't seen the pneumonia dataset and don't know how similar it is to COCO and Pascal. But before creating the DataBunch, test whether you're reading the bbox coordinates correctly by plotting them on their respective images, like in the pascal notebook.
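A cheap check to run alongside the plotting (a hypothetical helper of mine, not from the notebook): verify every box actually fits inside its image, which catches swapped axes or wrong-scale coordinates early.

```python
# Hypothetical sanity check: a box [x_min, y_min, x_max, y_max] should sit
# inside its image; failures often mean swapped axes or an unscaled box.
def box_in_bounds(box, img_w, img_h):
    x0, y0, x1, y1 = box
    return 0 <= x0 < x1 <= img_w and 0 <= y0 < y1 <= img_h

print(box_in_bounds([10, 20, 60, 80], 100, 100))   # True
print(box_in_bounds([60, 20, 10, 80], 100, 100))   # False: x_min > x_max
```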


(jaideep v) #8

ok… do you know why the bb_pad_collate function is needed?


(mohamed amin houidi) #9

nope, sorry :stuck_out_tongue:


(jaideep v) #10

ok, I managed to build a DataBunch.
Now the problem is the localisation of the bboxes: when I do show_batch, I feel the bboxes are not being drawn in the right position according to the coordinates… this could be happening in your case too.
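One common cause worth ruling out (an assumption on my side, not confirmed from your code): the RSNA CSV stores boxes as (x, y, width, height), while fastai v1 expects boxes as [y_min, x_min, y_max, x_max] (top, left, bottom, right). Getting that order wrong makes show_batch draw boxes in the wrong place even though the numbers are "correct". A small conversion sketch:

```python
# Sketch of the conversion; assumes RSNA-style (x, y, width, height) input
# and fastai v1's [y_min, x_min, y_max, x_max] box order.
def rsna_to_fastai(x, y, w, h):
    return [y, x, y + h, x + w]

print(rsna_to_fastai(10, 20, 30, 40))  # [20, 10, 60, 40]
```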


(mohamed amin houidi) #11

sorry for the late reply, I didn't see your message.
show_batch works fine for me when following pascal.ipynb. If you're running the coco_tiny notebook, a lot of objects were omitted, so the pictures do seem kinda off, but the bboxes are correct.