Fastai v2 vision

Understood, thank you very much :smiley:

It seems that you’ve looked at satellite data very closely! Do you have recommendations for importing a single GeoTIFF image as a PyTorch tensor?
Or perhaps for converting GeoTIFF to another format? I’ve tried ImageMagick and gdal_translate without success so far, because my (Sentinel-1) file contains float values, which neither of them handles well.
EDIT: I’ve now created a new topic on this here: Using GeoTIFF images
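
For reference, the direction I’m exploring is roughly this (a sketch assuming the rasterio package is installed; fn is a placeholder path):

    import rasterio
    import torch

    fn = 'sentinel1_scene.tif'              # placeholder path to the GeoTIFF
    with rasterio.open(fn) as src:
        arr = src.read()                    # numpy array of shape (bands, H, W)
    t = torch.from_numpy(arr.astype('float32'))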

This error still persists. I’m not sure what part of the code is causing the issue, as I tried some examples with the PETS and MNIST datasets, and learn.validate() worked fine on those.

Let me know if I should provide any extra information.

However, my current notebook using the CIFAR dataset is hitting this issue. Resharing the stack trace:

IndexError                                Traceback (most recent call last)

<ipython-input-37-631604a2e07b> in <module>()
----> 1 learn.validate()

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in validate(self, ds_idx, dl, cbs)
    216             self(_before_epoch)
    217             self._do_epoch_validate(ds_idx, dl)
--> 218             self(_after_epoch)
    219         return getattr(self, 'final_record', None)
    220 

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in __call__(self, event_name)
    132     def ordered_cbs(self, event): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, event)]
    133 
--> 134     def __call__(self, event_name): L(event_name).map(self._call_one)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    375              else f.format if isinstance(f,str)
    376              else f.__getitem__)
--> 377         return self._new(map(g, self))
    378 
    379     def filter(self, f, negate=False, **kwargs):

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    325     @property
    326     def _xtra(self): return None
--> 327     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    328     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    329     def copy(self): return self._new(self.items.copy())

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     45             return x
     46 
---> 47         res = super().__call__(*((x,) + args), **kwargs)
     48         res._newchk = 0
     49         return res

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    316         if items is None: items = []
    317         if (use_list is not None) or not _is_array(items):
--> 318             items = list(items) if use_list else _listify(items)
    319         if match is not None:
    320             if is_coll(match): match = len(match)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _listify(o)
    252     if isinstance(o, list): return o
    253     if isinstance(o, str) or _is_array(o): return [o]
--> 254     if is_iter(o): return list(o)
    255     return [o]
    256 

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    218             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    219         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 220         return self.fn(*fargs, **kwargs)
    221 
    222 # Cell

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in _call_one(self, event_name)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 
    139     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in <listcomp>(.0)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name)
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138 
    139     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/usr/local/lib/python3.6/dist-packages/fastai2/callback/core.py in __call__(self, event_name)
     22         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     23                (self.run_valid and not getattr(self, 'training', False)))
---> 24         if self.run and _run: getattr(self, event_name, noop)()
     25         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     26 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/tracker.py in after_epoch(self)
     79         if self.every_epoch: self._save(f'{self.fname}_{self.epoch}')
     80         else: #every improvement
---> 81             super().after_epoch()
     82             if self.new_best: self._save(f'{self.fname}')
     83 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/tracker.py in after_epoch(self)
     37     def after_epoch(self):
     38         "Compare the last value to the best up to know"
---> 39         val = self.recorder.values[-1][self.idx]
     40         if self.comp(val - self.min_delta, self.best): self.best,self.new_best = val,True
     41         else: self.new_best = False

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __getitem__(self, idx)
    326     def _xtra(self): return None
    327     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
--> 328     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    329     def copy(self): return self._new(self.items.copy())
    330 

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _get(self, i)
    330 
    331     def _get(self, i):
--> 332         if is_indexer(i) or isinstance(i,slice): return getattr(self.items,'iloc',self.items)[i]
    333         i = mask2idxs(i)
    334         return (self.items.iloc[list(i)] if hasattr(self.items,'iloc')

IndexError: list index out of range

Here’s the Colab link to my notebook.

I did some debugging of this error and found that an idx is assigned to self (the SaveModelCallback) during the validation process:

    def validate(self, ds_idx=1, dl=None, cbs=None):
        if dl is None: dl = self.dls[ds_idx]
        with self.added_cbs(cbs), self.no_logging(), self.no_mbar():
            self(_before_epoch)
            self._do_epoch_validate(ds_idx, dl)
            self(_after_epoch)
        return getattr(self, 'final_record', None)

and further down in the stacktrace

val = self.recorder.values[-1][self.idx]

This line tries to access element self.idx of the last row of values, given

self.idx=2
self.recorder.values=[(#2) [1.4143638610839844,0.3345000147819519]]

So it’s clearly an IndexError: list index out of range. Presumably, during training each recorder row is [train_loss, valid_loss, metric], putting the monitored metric at index 2, whereas a bare validate() only records [valid_loss, metric], so index 2 falls off the end. @sgugger could you please look into this?

I need some way to define order and split_idx through the constructor.

I’m trying to use kornia augmentations as batch transforms, and these are the only arguments I need to modify. The only way I could think of is subclassing Transform and wrapping each and every augmentation with that class, the way it’s done with Albumentations in the pets tutorial. A sketch of what I mean is below.
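
A minimal, untested sketch of that wrapping idea (the class name and order values are just placeholders):

    import kornia.augmentation as K
    from fastai2.vision.all import *

    class KorniaWrapper(Transform):
        "Wrap a kornia batch augmentation, exposing order/split_idx per instance"
        def __init__(self, tfm, order=10, split_idx=0):
            self.tfm = tfm
            self.order, self.split_idx = order, split_idx
        def encodes(self, x: TensorImage): return TensorImage(self.tfm(x))

    # e.g. a train-only horizontal flip applied on the batch
    flip = KorniaWrapper(K.RandomHorizontalFlip(p=0.5), order=15, split_idx=0)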

Also, to clarify: a Transform with split_idx=0 will be applied to the train set, split_idx=1 to valid/test, and split_idx=None to both. Am I right?

Yes. For more details, you might check out the following post:

split_idx , or how to selectively apply a transform to train and valid (test) datasets?

I can confirm this is happening due to SaveModelCallback. I tried removing that callback and calling learn.validate, which indeed got rid of the error. @sgugger should I open an issue on the fastai2 repo about this?

Code to reproduce the error:

from fastai2.vision.all import *  # assumed import for this snippet (thread is fastai2-era)

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=21,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate, cbs=SaveModelCallback(monitor='error_rate',fname='test'))
learn.fine_tune(1)
learn.load('test')
learn.validate()

You can, but in general, callbacks that are only relevant for training should be passed in your fine_tune/fit/fit_one_cycle call. I wouldn’t say this is a bug in the library per se.
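
For example, something like this, reusing the snippet above:

    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1, cbs=SaveModelCallback(monitor='error_rate', fname='test'))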

Now this makes sense :slightly_smiling_face: but still, if possible, please consider fixing this one. I often call the fit method multiple times instead of calling it once for a large number of epochs, and passing SaveModelCallback to every call seems inefficient.

ClassificationInterpretation.plot_confusion_matrix() doesn’t have the return_fig option anymore, unlike v1. I was wondering if this is a deliberate choice?
I found it quite useful, as it made saving the matrices to disk trivial.
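
For now I’m working around it by grabbing the current matplotlib figure right after plotting; a sketch:

    import matplotlib.pyplot as plt

    interp = ClassificationInterpretation.from_learner(learn)
    interp.plot_confusion_matrix()
    plt.savefig('confusion_matrix.png', bbox_inches='tight')  # saves the active figure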

Using dls.valid.show_batch() on Imagewoof shows images from the same class despite unique=False (the default). How can I get shuffled examples?

I learned a neat little trick for this :wink:

After exploring what’s different between the train and valid DataLoaders, I found shuffle is one of them.

So dls.train.shuffle is True while dls.valid.shuffle is False. If you need a random batch from the validation set, you just set it to True, like so:

dls.valid.shuffle=True
xb,yb = dls.valid.one_batch()
dls.valid.shuffle=False

I’d like to train two models that have the same body, then combine them at the end of training for inference only, i.e. two separate heads trained on two separate datasets with a common body.

Is it possible to alternate fitting: one minibatch from dataset1, then one minibatch from dataset2?

I know how to make this work in PyTorch; sample code is available on their forums: https://discuss.pytorch.org/t/combining-multiple-models-and-datasets/82623/2
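
For reference, the PyTorch version I have in mind looks roughly like this toy sketch (shapes, loaders, and heads are stand-ins):

    import torch
    import torch.nn as nn

    body  = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())  # shared trunk (toy)
    head1 = nn.Linear(128, 10)   # head for dataset 1
    head2 = nn.Linear(128, 5)    # head for dataset 2
    opt = torch.optim.Adam([*body.parameters(), *head1.parameters(), *head2.parameters()])
    loss_fn = nn.CrossEntropyLoss()

    # stand-ins for the two real DataLoaders
    dl1 = [(torch.randn(8, 784), torch.randint(0, 10, (8,))) for _ in range(4)]
    dl2 = [(torch.randn(8, 784), torch.randint(0, 5, (8,))) for _ in range(4)]

    for (x1, y1), (x2, y2) in zip(dl1, dl2):
        loss = loss_fn(head1(body(x1)), y1)          # one minibatch from dataset 1
        loss = loss + loss_fn(head2(body(x2)), y2)   # then one from dataset 2
        opt.zero_grad(); loss.backward(); opt.step()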

What is the easiest way of drawing the segmentation mask predicted by unet_learner over the input image?
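
The approach I’ve been considering is an untested sketch along these lines, using fastai2’s show methods (fn is a placeholder path):

    img = PILImage.create(fn)            # the input image
    pred_mask, _, _ = learn.predict(fn)  # decoded prediction comes back as a TensorMask
    ctx = img.show(figsize=(6, 6))
    pred_mask.show(ctx=ctx)              # masks render semi-transparently over the image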

Is there any way I could decode a batch of images down to filenames? I’m looking for ways to access a particular image from learn.show_results output and try evaluating the same image on some different model architecture to compare the outcomes.

Sure (this should work, I didn’t double-check; it’s just from my ClassConfusion code):


    import re  # needed for the filename extraction below

    img, lbl = x.dataset[idx]
    fn = x.items[idx]
    fn = re.search(r'([^/]\d+.*$)', str(fn)).group(0)  # strip the directory, keep the filename

(I use this in ClassConfusion.) x is a DataLoader. (You don’t have to use the re.search; I just wanted the filename rather than the full path.)

What does idx represent here? Let’s say I got a batch from the dataloader and showed it using:

xb,yb = dl.one_batch()
dl.show_batch((xb, yb))

Now, is it possible to find the filenames of images in this batch?

I don’t believe so, no, because at that point the only thing you can get back is the PIL image. If you really want to keep track of that, you should do something beforehand. For mine it came from top_losses. If you worry about shuffling and dropping, you should disable those so you can grab its index.

Otherwise, you can look at dl.get_idxs(), which returns the list of indices into the Dataset for you to look at and grab. So, for instance, one_batch returns the first n (n=bs) of those indices.

(again, dl is a DataLoader)
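
Putting that together, something like this should get the filenames behind the first batch (again untested, just a sketch; list() guards against get_idxs returning a lazy iterator):

    idxs = list(dl.get_idxs())[:dl.bs]    # indices used to draw the first batch
    fnames = [dl.items[i] for i in idxs]  # map them back to the underlying file paths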

This looks great! I’ll try that out and let you know how it works.

Hi, I thought I’d check here whether this kind of approach is the right one:

I have been trying to train a semantic segmentation model using v2, but the masks are ~2x the size of the actual images.

I thought the natural place to start would be item_tfms, but unfortunately the results are not as expected. I load the data like so:

dls = SegmentationDataLoaders.from_label_func(
    path, bs=1,
    fnames=imgs,
    label_func=label_fcn,
    codes=codes,
    item_tfms=[Resize(564, 1024), RandomResizedCrop(512)],
    batch_tfms=[*aug_transforms(size=(512, 512), flip_vert=True),
                Normalize.from_stats(*imagenet_stats)])

But for some reason, when I run learn.dls.show_batch(figsize=(10,10), max_n=20), the segmentation map is still misaligned. My leading hypothesis is that the Resize does not happen before the RandomResizedCrop. Is that possible? Any other thoughts?
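
To debug, I’m thinking of printing the actual run order of the item pipeline; a rough sketch against the fastai2-era attributes (untested):

    # Pipelines sort their transforms by each Transform's `order` attribute
    for t in dls.train.after_item.fs:
        print(type(t).__name__, t.order)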