Fastai v2 vision

Okay, then how should I store them before batching? An L of PILMasks? And I suppose in that case it won’t be possible to display them with dl.show_batch.

Hmmm. So we are displaying the three masks as one overall mask? (That’s also a three-channel “picture”.) Then do the stack immediately once you have the images. Then have a custom show that undoes them all and shows the three masks (maybe side by side?). Something similar was done with SiameseImage’s show; see here:

class SiameseImage(Tuple):
  def show(self, ctx=None, **kwargs):
    im1, im2, is_same = self
    return show_image(torch.cat([im1,im2], dim=2), title=is_same, ctx=ctx, **kwargs)
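
For the three stacked masks, a minimal sketch along the same lines could be (StackedMasks is a made-up name, and it assumes the three masks were stacked into one 3xHxW tensor):

class StackedMasks(Tuple):
  def show(self, ctx=None, **kwargs):
    masks, = self  # a single stacked 3xHxW tensor, one channel per mask
    # lay the channels out side by side, like SiameseImage does with its two images
    return show_image(torch.cat(list(masks), dim=1), ctx=ctx, **kwargs)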

That seems like a good idea to me. Will try to do that.

How can I write a type annotation for the L class with a specific type, the way we used to write f(o:List[str]) in v1?

I don’t think you can. Also, I’m not sure we dispatch over things like List[str], as it doesn’t work with isinstance.

Correct - we don’t dispatch over generic types.

This is the exact error message I get when I try to do so. I don’t understand what it means, but I was wondering how to deal with an encodes that is supposed to work on a List/tuple/L of a certain type. Should I subclass list itself and use that as the type annotation?

Yes, you’d need to do that. Python doesn’t support run-time inspection of generic types, sadly.
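
For example, a minimal sketch of that workaround (StrL and LowerTfm are made-up names):

from fastcore.transform import Transform

class StrL(list): pass  # subclass list so isinstance (and hence type dispatch) can see it

class LowerTfm(Transform):
  def encodes(self, o:StrL): return StrL(s.lower() for s in o)

t = LowerTfm()
t(StrL(['A','B']))  # -> ['a', 'b']: dispatched on StrL
t(['A','B'])        # a plain list has no matching encodes, so it passes through unchanged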

Okay. Thanks!

I’ve defined custom normalize behavior for one of my blocks as:

@Normalize
def encodes(self, x:TensorContinuous):
    return x / x.get_meta('max').cuda()

@Normalize
def decodes(self, x:TensorContinuous):
    f = to_cpu if x.device.type=='cpu' else noop
    return x * f(x.get_meta('max').cuda())

Now, when I don’t include that block while constructing:

dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
                   get_x=get_x,get_y=get_y,
                  #  getters=getters,
                   splitter=ColSplitter('is_val'),
                   item_tfms=Resize(size,method='squish'),
                   batch_tfms = [*aug_transforms(max_zoom=0, flip_vert=True, max_warp=0)])

the TensorImage has a range of 1:

x,y = dls.one_batch(); x.max() - x.min()

Output: TensorImage(1., device='cuda:0')

But when I include that particular block, the range becomes 5.2:

def ContinuousBlock(labels=None):
  return TransformBlock(type_tfms=[ContinuousSetup(labels)],batch_tfms=[Normalize])

dblock = DataBlock(blocks=(ImageBlock, ContinuousBlock, CategoryBlock),
                   getters=getters,
                   splitter=ColSplitter('is_val'),
                   item_tfms=Resize(size,method='squish'),
                   batch_tfms = [*aug_transforms(max_zoom=0, flip_vert=True, max_warp=0)])

dls = dblock.dataloaders(df,bs=bs)
w,x,y = dls.one_batch(); w.max() - w.min()

Output: TensorImage(5.2235, device='cuda:0')

What could be the reason behind the change in ImageBlock behavior?

Now I’ve managed to push this forward a lot; the current version is here. I noticed that in order to use augmentations like lighting, it’s a lot easier to treat our data as 2-dimensional during most of the augmentation, and then, just before normalization, do:

class ToVolumetric(Transform):
    "Transforms a batch of 2D images to 3D images"
    order = 99

    def __init__(self, split_idx=None, as_item=True):
        super().__init__(split_idx=split_idx, as_item=as_item)

    def encodes(self, o:TensorImage): return o[:,None]
    def decodes(self, o:TensorImage): return o[:,0]
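
With order=99, the transform can simply be appended to the batch transforms and will run after the 2D augmentations. A minimal sketch of the placement (the augmentation arguments are just illustrative):

batch_tfms = [*aug_transforms(flip_vert=True), ToVolumetric()]
# lighting etc. run at their default (lower) orders on 2D batches;
# ToVolumetric (order=99) then restores the 3D shape at the end of the pipeline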

There are still some things I’d like to improve, mostly related to visualizations. Automatically figuring out which of the three possible visualizations to use, based on input channels that must currently be entered manually each time, maybe isn’t the best way. Also, having to manually insert these channels doesn’t really work with segmentation visualizations. I’m still unsure where to put some defaults for the RGB channels or scaling parameters, so if anyone can help with this I’d be grateful.

Anyway, working with this has been a really good tutorial for fastai2.

Normalize is also applied to images. It’s not in your first block, but it is in the second; that’s why you see the difference.

But I suppose that if I include Normalize in the batch_tfms of the DataBlock, it should have worked, right? But it didn’t. And I heard that even if we don’t include Normalize, it is added by default; is that the case?
The only stable behavior I observed is when I normalize with ImageNet stats (which I’m guessing is because of the fixed stats).

One more doubt: when we use ItemTransform, the encodes method returns a tuple. Is it necessary that every subsequent transform on that data also be an ItemTransform, or is there a way to collate the tuple into a single object?

This is done by cnn_learner and unet_learner, outside of the DataLoaders creation. (Don’t know about the rest myself :slight_smile: )
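
If you want the image normalization pinned explicitly regardless of which blocks are present, one option is to pass the stats yourself in batch_tfms (a sketch reusing the augmentation arguments from the example above):

batch_tfms = [*aug_transforms(max_zoom=0, flip_vert=True, max_warp=0),
              Normalize.from_stats(*imagenet_stats)]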

I was trying to implement @muellerzr’s Segmentation example; the Colab notebook is here.

I’ve done nothing unusual; I tried changing the arch back to resnet34, and tried uninstalling fastai2 and reinstalling it with pip install git+https://github.com/fastai/fastai2.git --upgrade.

I’m still getting the following error:

AttributeError                            Traceback (most recent call last)

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    171             try:
--> 172                 self._do_begin_fit(n_epoch)
    173                 for epoch in range(n_epoch):

(26 frames hidden)

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in _do_begin_fit(self, n_epoch)
    143     def _do_begin_fit(self, n_epoch):
--> 144         self.n_epoch,self.loss = n_epoch,tensor(0.);         self('begin_fit')
    145 

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in __call__(self, event_name)
    107 
--> 108     def __call__(self, event_name): L(event_name).map(self._call_one)
    109     def _call_one(self, event_name):

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _listify(o)
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in _call_one(self, event_name)
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in <listcomp>(.0)
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/core.py in __call__(self, event_name)
     22                (self.run_valid and not getattr(self, 'training', False)))
---> 23         if self.run and _run: getattr(self, event_name, noop)()
     24         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit

/usr/local/lib/python3.6/dist-packages/fastai2/callback/progress.py in begin_fit(self)
    100         "Prepare file with metric names."
--> 101         self.path.parent.mkdir(parents=True, exist_ok=True)
    102         self.file = (self.path/self.fname).open('a' if self.append else 'w')

AttributeError: 'str' object has no attribute 'parent'


During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)

<ipython-input-31-d81c6bd29d71> in <module>()
----> 1 learn.lr_find()

/usr/local/lib/python3.6/dist-packages/fastai2/callback/schedule.py in lr_find(self, start_lr, end_lr, num_it, stop_div, show_plot, suggestions)
    217     n_epoch = num_it//len(self.dls.train) + 1
    218     cb=LRFinder(start_lr=start_lr, end_lr=end_lr, num_it=num_it, stop_div=stop_div)
--> 219     with self.no_logging(): self.fit(n_epoch, cbs=cb)
    220     if show_plot: self.recorder.plot_lr_find()
    221     if suggestions:

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    180 
    181             except CancelFitException:             self('after_cancel_fit')
--> 182             finally:                               self('after_fit')
    183 
    184     def validate(self, ds_idx=1, dl=None, cbs=None):

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in __call__(self, event_name)
    106     def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
    107 
--> 108     def __call__(self, event_name): L(event_name).map(self._call_one)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    360              else f.format if isinstance(f,str)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 
    364     def filter(self, f, negate=False, **kwargs):

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    313     @property
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    317     def copy(self): return self._new(self.items.copy())

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    304         if items is None: items = []
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:
    308             if is_coll(match): match = len(match)

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in _listify(o)
    240     if isinstance(o, list): return o
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]
    244 

/usr/local/lib/python3.6/dist-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    206             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 
    210 # Cell

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in _call_one(self, event_name)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/usr/local/lib/python3.6/dist-packages/fastai2/learner.py in <listcomp>(.0)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/usr/local/lib/python3.6/dist-packages/fastai2/callback/core.py in __call__(self, event_name)
     21         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     22                (self.run_valid and not getattr(self, 'training', False)))
---> 23         if self.run and _run: getattr(self, event_name, noop)()
     24         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     25 

/usr/local/lib/python3.6/dist-packages/fastai2/callback/progress.py in after_fit(self)
     37     def after_fit(self):
     38         if getattr(self, 'mbar', False):
---> 39             self.mbar.on_iter_end()
     40             delattr(self, 'mbar')
     41         self.learn.logger = self.old_logger

/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py in on_iter_end(self)
    155             total_time = format_time(time.time() - self.main_bar.start_t)
    156             self.text = f'Total time: {total_time} <p>' + self.text
--> 157         self.out.update(HTML(self.text))
    158 
    159     def add_child(self, child):

AttributeError: 'NBMasterBar' object has no attribute 'out'

Interesting! Have you used Sentinel-1 and Sentinel-2 co-registered data as well? If so, could you point me to how you pre-processed the data? Cheers

Small bug:

Passing specific folders to get_image_files does not seem to exclude the rest of the folders. A current minimal example uses the Pets notebook from the course. I can pass in:

imgs = get_image_files(path, folders=['hello'])

(which obviously is not an existing folder), and len(imgs) is still greater than 0.
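
For reference, a minimal reproducer along these lines (a sketch, assuming the Pets dataset from the course notebook):

from fastai2.vision.all import *

path = untar_data(URLs.PETS)
imgs = get_image_files(path, folders=['hello'])  # 'hello' does not exist under path
assert len(imgs) == 0, f'expected 0 images, got {len(imgs)}'  # fails on the buggy version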

Can you file an issue with it and the reproducer? I may forget about it otherwise.

You got it :slight_smile:

Image segmentation with 2 Image inputs

RuntimeError: CUDA error: device-side assert triggered

Here you can find a notebook that reproduces an issue I’ve been struggling with for some time. The scenario is the following:

  • The URLs.CAMVID_TINY dataset

  • A DataBlock with 2 image inputs: DataBlock(blocks=(ImageBlock, ImageBlock, MaskBlock), …)

  • A custom class that handles 2 image inputs (in this example it simply ignores the second image):

    class CustomSequentialEx(SequentialEx):
        def forward(self, x, x2):
            return super().forward(x)  # e.g. delegate to SequentialEx.forward, ignoring x2

  • A CustomDynamicUnet that is identical to the original DynamicUnet but has CustomSequentialEx as a base class.

  • A Custom_unet_learner that is identical to the original unet_learner but initialises a CustomDynamicUnet instead.

and … it errors with learn.lr_find / learn.fit_one_cycle.

Can someone help me with this one? I’m out of ideas at the moment.

https://colab.research.google.com/drive/1WRcKjBMkQFMZfF6JGSWqOs4qwaIoAS79

The error is the following:

<ipython-input-15-2ea9996b6c00> in <module>
----> 1 learn.fit_one_cycle(10, slice(lr), pct_start=0.9, wd=1e-2)

~/workspace/fastai2/fastai2/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
    110     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
    111               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 112     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
    113 
    114 # Cell

~/workspace/fastai2/fastai2/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
    189                     try:
    190                         self.epoch=epoch;          self('begin_epoch')
--> 191                         self._do_epoch_train()
    192                         self._do_epoch_validate()
    193                     except CancelEpochException:   self('after_cancel_epoch')

~/workspace/fastai2/fastai2/learner.py in _do_epoch_train(self)
    162         try:
    163             self.dl = self.dls.train;                        self('begin_train')
--> 164             self.all_batches()
    165         except CancelTrainException:                         self('after_cancel_train')
    166         finally:                                             self('after_train')

~/workspace/fastai2/fastai2/learner.py in all_batches(self)
    140     def all_batches(self):
    141         self.n_iter = len(self.dl)
--> 142         for o in enumerate(self.dl): self.one_batch(*o)
    143 
    144     def one_batch(self, i, b):

~/workspace/fastai2/fastai2/learner.py in one_batch(self, i, b)
    154             self.opt.zero_grad()
    155         except CancelBatchException:                         self('after_cancel_batch')
--> 156         finally:                                             self('after_batch')
    157 
    158     def _do_begin_fit(self, n_epoch):

~/workspace/fastai2/fastai2/learner.py in __call__(self, event_name)
    121     def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
    122 
--> 123     def __call__(self, event_name): L(event_name).map(self._call_one)
    124     def _call_one(self, event_name):
    125         assert hasattr(event, event_name)

~/workspace/fastcore/fastcore/foundation.py in map(self, f, *args, **kwargs)
    360              else f.format if isinstance(f,str)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 
    364     def filter(self, f, negate=False, **kwargs):

~/workspace/fastcore/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    313     @property
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    317     def copy(self): return self._new(self.items.copy())

~/workspace/fastcore/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

~/workspace/fastcore/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    304         if items is None: items = []
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:
    308             if is_coll(match): match = len(match)

~/workspace/fastcore/fastcore/foundation.py in _listify(o)
    240     if isinstance(o, list): return o
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]
    244 

~/workspace/fastcore/fastcore/foundation.py in __call__(self, *args, **kwargs)
    206             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 
    210 # Cell

~/workspace/fastai2/fastai2/learner.py in _call_one(self, event_name)
    124     def _call_one(self, event_name):
    125         assert hasattr(event, event_name)
--> 126         [cb(event_name) for cb in sort_by_run(self.cbs)]
    127 
    128     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

~/workspace/fastai2/fastai2/learner.py in <listcomp>(.0)
    124     def _call_one(self, event_name):
    125         assert hasattr(event, event_name)
--> 126         [cb(event_name) for cb in sort_by_run(self.cbs)]
    127 
    128     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

~/workspace/fastai2/fastai2/callback/core.py in __call__(self, event_name)
     21         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     22                (self.run_valid and not getattr(self, 'training', False)))
---> 23         if self.run and _run: getattr(self, event_name, noop)()
     24         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     25 

~/workspace/fastai2/fastai2/learner.py in after_batch(self)
    414         if len(self.yb) == 0: return
    415         mets = self._train_mets if self.training else self._valid_mets
--> 416         for met in mets: met.accumulate(self.learn)
    417         if not self.training: return
    418         self.lrs.append(self.opt.hypers[-1]['lr'])

~/workspace/fastai2/fastai2/learner.py in accumulate(self, learn)
    366     def accumulate(self, learn):
    367         self.count += 1
--> 368         self.val = torch.lerp(to_detach(learn.loss.mean(), gather=False), self.val, self.beta)
    369     @property
    370     def value(self): return self.val/(1-self.beta**self.count)