Fastai v2 vision

Do we have anything similar to AlexNet-style PCA noise in fastai2?

I found one implementation in the fastai_imagenet repo.

I came to know that the Bag of Tricks paper used this augmentation in its pipeline; if it has been purposefully dropped, is there a specific reason?
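For clarity, by AlexNet-style PCA noise I mean roughly the following (just a sketch of the technique, not the fastai_imagenet code; eigval and eigvec would be the eigenvalues/eigenvectors of the dataset's RGB covariance matrix):

import torch

def pca_lighting(img, eigval, eigvec, alphastd=0.1):
    # img: CxHxW float tensor; eigvec holds the RGB eigenvectors as columns
    alpha = img.new_empty(3).normal_(0, alphastd)   # random per-image weights
    rgb_shift = eigvec @ (alpha * eigval)           # 3-vector added to every pixel
    return img + rgb_shift.view(3, 1, 1)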

I have tried running xresnet50 against resnet50, both models using pretrained weights, but I still see that xresnet is doing worse. Any idea why?

Is there any block that follows the format of the target in instance segmentation?

For example, the Mask R-CNN one!

target = {}
target["boxes"] = boxes
target["labels"] = labels
target["masks"] = masks
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd

That's multiple blocks you'd want to use. I assume it's COCO style.

The boxes would go into a BBoxBlock (with a BBoxLblBlock for the labels), the segmentation masks into a MaskBlock, etc.


During training, the model expects both the input tensors and the targets (a list of dictionaries), containing:

  • boxes ( FloatTensor[N, 4] ): the ground-truth boxes in [x1, y1, x2, y2] format, with values of x between 0 and W and values of y between 0 and H
  • labels ( Int64Tensor[N] ): the class label for each ground-truth box
  • masks ( UInt8Tensor[N, H, W] ): the segmentation binary masks for each instance

The model returns a Dict[Tensor] during training, containing the classification and regression losses for both the RPN and the R-CNN, and the mask loss.

During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows:

  • boxes ( FloatTensor[N, 4] ): the predicted boxes in [x1, y1, x2, y2] format, with values of x between 0 and W and values of y between 0 and H
  • labels ( Int64Tensor[N] ): the predicted labels for each image
  • scores ( Tensor[N] ): the scores for each prediction
  • masks ( UInt8Tensor[N, 1, H, W] ): the predicted masks for each instance, in 0-1 range. In order to obtain the final segmentation masks, the soft masks can be thresholded, generally with a value of 0.5 ( mask >= 0.5 )

https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.detection.maskrcnn_resnet50_fpn
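For reference, a minimal sketch of that API with random data (untrained model, one fake image; all values are placeholders):

import torch, torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False)
images  = [torch.rand(3, 300, 400)]
targets = [{'boxes':  torch.tensor([[10., 20., 150., 200.]]),        # FloatTensor[N, 4]
            'labels': torch.tensor([1], dtype=torch.int64),          # Int64Tensor[N]
            'masks':  torch.zeros(1, 300, 400, dtype=torch.uint8)}]  # UInt8Tensor[N, H, W]

model.train()
loss_dict = model(images, targets)   # dict of RPN / R-CNN / mask losses

model.eval()
preds = model(images)                # List[Dict[Tensor]]: boxes, labels, scores, masks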

So, you suggest that I combine several blocks?

I have been looking into the DataBlocks you mentioned. How could I combine these two blocks? (Rough attempt below, after the block definitions.)

def BBoxLblBlock(vocab=None, add_na=True):
    "A `TransformBlock` for labeled bounding boxes, potentially with `vocab`"
    return TransformBlock(type_tfms=MultiCategorize(vocab=vocab, add_na=add_na), item_tfms=BBoxLabeler)

def MaskBlock(codes=None):
    "A `TransformBlock` for segmentation masks, potentially with `codes`"
    return TransformBlock(type_tfms=PILMask.create, item_tfms=AddMaskCodes(codes=codes), batch_tfms=IntToFloatTensor)
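Something along these lines is what I'm imagining, adding BBoxBlock alongside the two above (untested sketch; get_bboxes, get_bbox_lbls, get_instance_mask and codes are hypothetical, dataset-specific pieces):

dblock = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock, MaskBlock(codes)),
                   get_items=get_image_files,
                   get_y=[get_bboxes, get_bbox_lbls, get_instance_mask],
                   n_inp=1)   # first block is the input, the other three are targets
dls = dblock.dataloaders(path)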

It may be because SaveModelCallback() defaults to with_opt=False, whereas Learner.save() defaults to with_opt=True.

Perhaps try this?

saved_model = SaveModelCallback(fname=exp_name, with_opt=True)
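Or, if the problem shows up on the loading side, you could instead tell the load not to expect the optimizer state (just a guess based on the defaults above, assuming you load via Learner.load):

learn.load(exp_name, with_opt=False)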

I did, and IMO it shouldn't matter, as we don't need the optimizer state for validating results. It seems to be working now and only fails rarely.

Cool, glad it's working for you now, and that the opt state wasn't the culprit (it shouldn't be, as you pointed out).


(some time later…)

I wasn't able to get transforms working at test time. Using the code:

test_items = get_image_files(TEST_DIR)
after_batch = [IntToFloatTensor(), XX]             # XX: where I tried adding a batch transform
after_item = [ToTensor(), IntToFloatTensor(), YY]  # YY: where I tried adding an item transform
test_dl = learn.dls.test_dl(test_items, after_item=after_item, after_batch=after_batch)
test_dl.show_batch()

If I put Dihedral(p=1., draw=5), for example, at XX or YY, it has no effect. Nor does Rotate(45) or Zoom(2.) or Flip(p=1.) at YY. However, Resize(32) does have an effect, which is the mysterious part.

I am simply trying to apply Dihedral (all 8 variants) to each test item. What I am doing as a workaround is patching load_image to do the transform, but it'd be nice to understand how to do it in native fastai2.
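For reference, the workaround looks roughly like this (only a sketch: it assumes load_image in fastai2.vision.core is the loader used by PILImage.create, and it shows just one of the eight dihedral flips):

import PIL
import fastai2.vision.core as vision_core

_orig_load_image = vision_core.load_image

def _load_image_flipped(fn, mode=None, **kwargs):
    im = _orig_load_image(fn, mode=mode, **kwargs)
    return im.transpose(PIL.Image.FLIP_LEFT_RIGHT)   # one dihedral op; repeat for the other 7

vision_core.load_image = _load_image_flipped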

I was experimenting with the ImageWoof dataset and this is my DataBlock definition:

dblock = DataBlock(blocks=(ImageBlock,CategoryBlock),
                   get_items=get_image_files,
                   splitter=GrandparentSplitter(valid_name='val'),
                   get_y=Pipeline(parent_label,lbl_dict.__getitem__),
                   item_tfms=Resize(320),
                   batch_tfms=[*aug_transforms(size=192),Normalize.from_stats(*imagenet_stats)])

where lbl_dict is a dictionary mapping ImageNet labels to human-readable labels.
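(For context, lbl_dict is just a plain dict along these lines; the entries shown are only examples:)

lbl_dict = {'n02087394': 'Rhodesian ridgeback',
            'n02099601': 'golden retriever'}   # ...one entry per ImageWoof class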

  1. For some reason, the lbl_dict mapping is not being added to the actual pipeline. Here's the summary output:
Setting-up type transforms pipelines
Collecting items from /root/.fastai/data/imagewoof2-320
Found 12954 items
2 datasets of sizes 9025,3929
Setting up Pipeline: PILBase.create
Setting up Pipeline: parent_label -> Categorize

Building one sample
  Pipeline: PILBase.create
    starting from
      /root/.fastai/data/imagewoof2-320/train/n02087394/n02087394_5449.JPEG
    applying PILBase.create gives
      PILImage mode=RGB size=329x320
  Pipeline: parent_label -> Categorize
    starting from
      /root/.fastai/data/imagewoof2-320/train/n02087394/n02087394_5449.JPEG
    applying parent_label gives
      n02087394
    applying Categorize gives
      TensorCategory(1)

Final sample: (PILImage mode=RGB size=329x320, TensorCategory(1))


Setting up after_item: Pipeline: Resize -> ToTensor
Setting up before_batch: Pipeline: 
Setting up after_batch: Pipeline: IntToFloatTensor -> AffineCoordTfm -> LightingTfm -> Normalize

Building one batch
Applying item_tfms to the first sample:
  Pipeline: Resize -> ToTensor
    starting from
      (PILImage mode=RGB size=329x320, TensorCategory(1))
    applying Resize gives
      (PILImage mode=RGB size=320x320, TensorCategory(1))
    applying ToTensor gives
      (TensorImage of size 3x320x320, TensorCategory(1))

Adding the next 3 samples

No before_batch transform to apply

Collating items in a batch

Applying batch_tfms to the batch built
  Pipeline: IntToFloatTensor -> AffineCoordTfm -> LightingTfm -> Normalize
    starting from
      (TensorImage of size 4x3x320x320, TensorCategory([1, 1, 1, 1], device='cuda:0'))
    applying IntToFloatTensor gives
      (TensorImage of size 4x3x320x320, TensorCategory([1, 1, 1, 1], device='cuda:0'))
    applying AffineCoordTfm gives
      (TensorImage of size 4x3x192x192, TensorCategory([1, 1, 1, 1], device='cuda:0'))
    applying LightingTfm gives
      (TensorImage of size 4x3x192x192, TensorCategory([1, 1, 1, 1], device='cuda:0'))
    applying Normalize gives
      (TensorImage of size 4x3x192x192, TensorCategory([1, 1, 1, 1], device='cuda:0'))

  2. I'm working with a simple cnn_learner and there's some problem with learn.validate(). Here's my cnn_learner definition:
learn=cnn_learner(dls,xresnet50,metrics=[error_rate],cbs=[save_model],
                  model_dir='/content/models').to_fp16()

I tried passing in metrics and cbs as lists as well as single objects, but in both cases the following error is thrown:

IndexError                                Traceback (most recent call last)

<ipython-input-10-631604a2e07b> in <module>()
----> 1 learn.validate()

14 frames

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in validate(self, ds_idx, dl, cbs)
    211             self(_before_epoch)
    212             self._do_epoch_validate(ds_idx, dl)
--> 213             self(_after_epoch)
    214         return getattr(self, 'final_record', None)
    215 

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in __call__(self, event_name)
    132     def ordered_cbs(self, event): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, event)]
    133 
--> 134     def __call__(self, event_name): L(event_name).map(self._call_one)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    374              else f.format if isinstance(f,str)
    375              else f.__getitem__)
--> 376         return self._new(map(g, self))
    377 
    378     def filter(self, f, negate=False, **kwargs):

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    325     @property
    326     def _xtra(self): return None
--> 327     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    328     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    329     def copy(self): return self._new(self.items.copy())

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     45             return x
     46 
---> 47         res = super().__call__(*((x,) + args), **kwargs)
     48         res._newchk = 0
     49         return res

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    316         if items is None: items = []
    317         if (use_list is not None) or not _is_array(items):
--> 318             items = list(items) if use_list else _listify(items)
    319         if match is not None:
    320             if is_coll(match): match = len(match)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _listify(o)
    252     if isinstance(o, list): return o
    253     if isinstance(o, str) or _is_array(o): return [o]
--> 254     if is_iter(o): return list(o)
    255     return [o]
    256 

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    218             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    219         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 220         return self.fn(*fargs, **kwargs)
    221 
    222 # Cell

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in _call_one(self, event_name)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 
    139     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in <listcomp>(.0)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 
    139     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/usr/local/lib/python3.6/dist-packages/fastai2/callback/core.py in __call__(self, event_name)
     22         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     23                (self.run_valid and not getattr(self, 'training', False)))
---> 24         if self.run and _run: getattr(self, event_name, noop)()
     25         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     26 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/tracker.py in after_epoch(self)
     79         if self.every_epoch: self._save(f'{self.fname}_{self.epoch}')
     80         else: #every improvement
---> 81             super().after_epoch()
     82             if self.new_best: self._save(f'{self.fname}')
     83 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/tracker.py in after_epoch(self)
     37     def after_epoch(self):
     38         "Compare the last value to the best up to know"
---> 39         val = self.recorder.values[-1][self.idx]
     40         if self.comp(val - self.min_delta, self.best): self.best,self.new_best = val,True
     41         else: self.new_best = False

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __getitem__(self, idx)
    326     def _xtra(self): return None
    327     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
--> 328     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    329     def copy(self): return self._new(self.items.copy())
    330 

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _get(self, i)
    330 
    331     def _get(self, i):
--> 332         if is_indexer(i) or isinstance(i,slice): return getattr(self.items,'iloc',self.items)[i]
    333         i = mask2idxs(i)
    334         return (self.items.iloc[list(i)] if hasattr(self.items,'iloc')

IndexError: list index out of range

I understood the reason for the 1st issue: Pipeline accepts a list of functions, so it should be passed as Pipeline([parent_label, lbl_dict.__getitem__]) and not as Pipeline(parent_label, lbl_dict.__getitem__).
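So the get_y in the DataBlock above becomes:

get_y=Pipeline([parent_label, lbl_dict.__getitem__]),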

Since the Pipeline class accepts two parameters, the problem went unnoticed: my second argument was silently being assigned to split_idx.

Is it possible to catch such scenarios with type annotations, or at least by throwing an exception when split_idx receives something other than an int (or None)?

The Resize transform is not working.

Resize is applied to an Image.Image (or PILImage), not to a Tensor; it's a Pillow resize under the hood. You should call it before image2tensor.
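In other words, something like this ordering (a rough sketch; the path is just a placeholder):

img = PILImage.create('images/cat.jpg')   # placeholder path
img = Resize(224)(img)                    # resize while it is still a PIL image
t = image2tensor(img)                     # only then convert to a tensor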


Understood, thank you very much :smiley:

It seems that you've looked at satellite data very closely! Do you have recommendations for importing a single GeoTIFF image as a PyTorch tensor?
Or perhaps for converting GeoTIFF to another format? I've tried ImageMagick and gdal_translate without success so far, because my (Sentinel-1) file contains float values which neither of them likes.
EDIT: I've now created a new topic on this here: Using GeoTIFF images

This error still persists. I'm not sure what part of the code is causing the issue, as I tried some examples with the PETS and MNIST datasets, which worked fine with learn.validate().

Let me know if I should provide any extra information.

However, my current notebook on the CIFAR dataset is having this issue. The trace is essentially the same as the one above; the relevant tail is:

/usr/local/lib/python3.6/dist-packages/fastai2/callback/tracker.py in after_epoch(self)
     79         if self.every_epoch: self._save(f'{self.fname}_{self.epoch}')
     80         else: #every improvement
---> 81             super().after_epoch()
     82             if self.new_best: self._save(f'{self.fname}')
     83 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/tracker.py in after_epoch(self)
     37     def after_epoch(self):
     38         "Compare the last value to the best up to know"
---> 39         val = self.recorder.values[-1][self.idx]
     40         if self.comp(val - self.min_delta, self.best): self.best,self.new_best = val,True
     41         else: self.new_best = False

IndexError: list index out of range

Here's the Colab link to my notebook.

I did some debugging around this error and found that there's an idx assigned to self (the SaveModelCallback) that gets used during the validation process:

 def validate(self, ds_idx=1, dl=None, cbs=None):
        if dl is None: dl = self.dls[ds_idx]
        with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
            self(_before_epoch)
            self._do_epoch_validate(ds_idx, dl)
            self(_after_epoch)
        return getattr(self, 'final_record', None)

and further down the stack trace:

val = self.recorder.values[-1][self.idx]

This line is trying to access the element at self.idx from the last row of values, given:

self.idx=2
self.recorder.values=[(#2) [1.4143638610839844,0.3345000147819519]]

So clearly, it's an IndexError: list index out of range. @sgugger, could you please look into this?
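(A possible workaround in the meantime, untested: since the failure happens inside the callback's after_epoch, removing it before validating should avoid it, assuming the callback is reachable as learn.save_model:)

learn.remove_cb(learn.save_model)
learn.validate()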


I need some way to define order and split_idx through the constructor.

I'm trying to use kornia augmentations as batch transforms, and these are the only arguments I need to modify. The only way I could think of is subclassing Transform and wrapping each and every augmentation with that class, just the way it's done with Albumentations in the pets tutorial.
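Something like this rough sketch is what I mean (assuming the usual fastai2.vision.all import; the order value, the TensorImage re-wrap and RandomHorizontalFlip are just my assumptions/examples):

import kornia

class KorniaBatchTfm(Transform):
    "Wrap a kornia augmentation so it runs as a train-only batch transform"
    split_idx, order = 0, 15            # train only; run after IntToFloatTensor
    def __init__(self, aug): self.aug = aug
    def encodes(self, x: TensorImage): return TensorImage(self.aug(x))

batch_tfms = [IntToFloatTensor(),
              KorniaBatchTfm(kornia.augmentation.RandomHorizontalFlip(p=0.5))]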

Also, to clarify: a Transform with split_idx=0 will be applied to the train set, split_idx=1 to valid/test, and split_idx=None to both. Am I right?

Yes. For more details, you might check out the following post:

split_idx, or how to selectively apply a transform to train and valid (test) datasets?