fastai v2 segmentation inference with arbitrary image sizes

In fastai v1, we can run inference (one image at a time) on arbitrarily sized images as long as we set the size to None:

learn.data.single_dl.dataset.tfmargs['size'] = None

Is there any way to achieve the same thing in fastai v2?

Sure, let me try to work out a relatively decent way to do this. I think something like this should work:

learn.dls.valid.after_item = Pipeline([ToTensor()])
# if your batch transforms include a Resize, you can also reset after_batch to just normalization like this:
learn.dls.valid.after_batch = Pipeline([IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)])
dl = learn.dls.test_dl(myfnames, bs=1)

Essentially, what we've done here is get rid of any transform that involves a Resize (aug_transforms and other augmentations don't run on the validation set anyway).
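
(To sanity-check what got dropped, you can print the pipeline before and after; the exact repr depends on how your DataBlock was built, so this is just a rough sketch:)

print(learn.dls.valid.after_item)     # e.g. Pipeline: Resize -- {'size': (224, 224), ...} -> ToTensor
learn.dls.valid.after_item = Pipeline([ToTensor()])
print(learn.dls.valid.after_item)     # Pipeline: ToTensor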

Also, I think just adjusting the after_item should do the trick; let me know what happens, please! :slight_smile:

Thanks! You are right (and your final sentence is true, too): all that’s necessary (based on how I was doing resizing for training) is to modify after_item like you did:

learn.dls.valid.after_item = Pipeline([ToTensor()])

Checking my own understanding: what you're basically doing here is replacing whatever item transformations would normally be applied to the individual items with a generic pipeline that simply turns the images into tensors.
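
(Concretely, with the item transforms reduced to just ToTensor and bs=1, each batch should keep its image's native size; a rough check, reusing myfnames from the post above:)

dl = learn.dls.test_dl(myfnames, bs=1)
xb = dl.one_batch()[0]
print(xb.shape)    # e.g. torch.Size([1, 3, H, W]) with each image's own H and W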

Thank you!

Correct! And 99% of the time those item_tfms should just be a Resize (since everything needs to be the same size to batch), so we can keep just the ToTensor() and we're good to go :slight_smile:

I wonder if someone could write a helper function that wipes out all non-essential transforms for us to use :wink: (i.e. everything but ToTensor, IntToFloatTensor, and Normalize)
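
(A rough sketch of what such a helper could look like; keep_essential_tfms is a made-up name here, not part of fastai:)

def keep_essential_tfms(dl):
    "Keep only ToTensor, IntToFloatTensor and Normalize in a DataLoader's pipelines"
    keep = (ToTensor, IntToFloatTensor, Normalize)
    dl.after_item  = Pipeline([t for t in dl.after_item.fs  if isinstance(t, keep)])
    dl.after_batch = Pipeline([t for t in dl.after_batch.fs if isinstance(t, keep)])

keep_essential_tfms(learn.dls.valid)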

I was also able to use this to confirm that the fastai v2 library is capable of performing semantic segmentation on images with an odd (literally, non-even) number of rows, which was a limitation in fastai v1.
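
(For reference, with the after_item tweak above in place, a quick way to check this on your own data; the file name here is hypothetical:)

img = PILImage.create('odd_sized_frame.png')   # e.g. an image with odd height and width
pred, _, _ = learn.predict(img)
print(pred.shape)                              # the predicted mask keeps the image's own height and width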

I have created a U-Net-based image colorization model, and now I am trying to do inference by loading the image with the PILImageBW.create method. I carried out the steps as follows:

img = PILImageBW.create('test3.jpg')
model.dls.valid.after_item = Pipeline([ToTensor()])
model.dls.valid.after_batch = Pipeline([IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)])
dl = model.dls.test_dl([img])
pred, _, _ = model.predict(img)

And I am running into the following error:


RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
----> 1 pred, _, _ = model.predict(img)

D:\Softwares\Anaconda\lib\site-packages\fastai\learner.py in predict(self, item, rm_type_tfms, with_input)
    246     def predict(self, item, rm_type_tfms=None, with_input=False):
    247         dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
--> 248         inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
    249         i = getattr(self.dls, 'n_inp', -1)
    250         inp = (inp,) if i==1 else tuplify(inp)

D:\Softwares\Anaconda\lib\site-packages\fastai\learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
    233         if with_loss: ctx_mgrs.append(self.loss_not_reduced())
    234         with ContextManagers(ctx_mgrs):
--> 235             self._do_epoch_validate(dl=dl)
    236             if act is None: act = getattr(self.loss_func, 'activation', noop)
    237             res = cb.all_tensors()

D:\Softwares\Anaconda\lib\site-packages\fastai\learner.py in _do_epoch_validate(self, ds_idx, dl)
    186         if dl is None: dl = self.dls[ds_idx]
    187         self.dl = dl
--> 188         with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
    189
    190     def _do_epoch(self):

D:\Softwares\Anaconda\lib\site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    153
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try: self(f'before_{event_type}') ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally: self(f'after_{event_type}') ;final()

D:\Softwares\Anaconda\lib\site-packages\fastai\learner.py in all_batches(self)
    159     def all_batches(self):
    160         self.n_iter = len(self.dl)
--> 161         for o in enumerate(self.dl): self.one_batch(*o)
    162
    163     def _do_one_batch(self):

D:\Softwares\Anaconda\lib\site-packages\fastai\data\load.py in __iter__(self)
    102         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    103             if self.device is not None: b = to_device(b, self.device)
--> 104             yield self.after_batch(b)
    105         self.after_iter()
    106         if hasattr(self, 'it'): del(self.it)

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in __call__(self, o)
    196         self.fs.append(t)
    197
--> 198     def __call__(self, o): return compose_tfms(o, tfms=self.fs, split_idx=self.split_idx)
    199     def __repr__(self): return f"Pipeline: {' -> '.join([f.name for f in self.fs if f.name != 'noop'])}"
    200     def __getitem__(self,i): return self.fs[i]

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in compose_tfms(x, tfms, is_enc, reverse, **kwargs)
    148     for f in tfms:
    149         if not is_enc: f = f.decode
--> 150         x = f(x, **kwargs)
    151     return x
    152

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in __call__(self, x, **kwargs)
     71     @property
     72     def name(self): return getattr(self, '_name', _get_name(self))
---> 73     def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
     74     def decode(self, x, **kwargs): return self._call('decodes', x, **kwargs)
     75     def __repr__(self): return f'{self.name}:\nencodes: {self.encodes}decodes: {self.decodes}'

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in _call(self, fn, x, split_idx, **kwargs)
     81     def _call(self, fn, x, split_idx=None, **kwargs):
     82         if split_idx!=self.split_idx and self.split_idx is not None: return x
---> 83         return self._do_call(getattr(self, fn), x, **kwargs)
     84
     85     def _do_call(self, f, x, **kwargs):

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in _do_call(self, f, x, **kwargs)
     88             ret = f.returns_none(x) if hasattr(f,'returns_none') else None
     89             return retain_type(f(x, **kwargs), x, ret)
---> 90         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     91         return retain_type(res, x)
     92

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in <genexpr>(.0)
     88             ret = f.returns_none(x) if hasattr(f,'returns_none') else None
     89             return retain_type(f(x, **kwargs), x, ret)
---> 90         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     91         return retain_type(res, x)
     92

D:\Softwares\Anaconda\lib\site-packages\fastcore\transform.py in _do_call(self, f, x, **kwargs)
     87         if f is None: return x
     88         ret = f.returns_none(x) if hasattr(f,'returns_none') else None
---> 89         return retain_type(f(x, **kwargs), x, ret)
     90         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     91         return retain_type(res, x)

D:\Softwares\Anaconda\lib\site-packages\fastcore\dispatch.py in __call__(self, *args, **kwargs)
    127         elif self.inst is not None: f = MethodType(f, self.inst)
    128         elif self.owner is not None: f = MethodType(f, self.owner)
--> 129         return f(*args, **kwargs)
    130
    131     def __get__(self, inst, owner):

D:\Softwares\Anaconda\lib\site-packages\fastai\data\transforms.py in encodes(self, x)
    356         self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7
    357
--> 358     def encodes(self, x:TensorImage): return (x-self.mean) / self.std
    359     def decodes(self, x:TensorImage):
    360         f = to_cpu if x.device.type=='cpu' else noop

D:\Softwares\Anaconda\lib\site-packages\fastai\torch_core.py in _f(self, *args, **kwargs)
    329         def _f(self, *args, **kwargs):
    330             cls = self.__class__
--> 331             res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
    332             return retain_type(res, self, as_copy=True)
    333         return _f

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Any help would be highly appreciated, as I have to submit this for my college project.

Can you try not overriding the batch transforms? Or, when you create the Normalize transform, there's a cuda param; set it to False.
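
(If you keep the override, here's a minimal sketch of that second suggestion, leaving the rest of the code from the post above unchanged:)

model.dls.valid.after_item = Pipeline([ToTensor()])
model.dls.valid.after_batch = Pipeline([IntToFloatTensor(), Normalize.from_stats(*imagenet_stats, cuda=False)])
pred, _, _ = model.predict(img)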

Thanks @muellerzr, that worked like a charm.

Sure. Here’s my other solution so far that’s a bit more encompassing:

# assumes `from fastai.vision.all import *` (for `patch`, `Pipeline`, and `DataLoader`)
@patch
def remove(self:Pipeline, t):
    "Remove an instance of `t` from `self` if present"
    for i,o in enumerate(self.fs):
        if isinstance(o, t.__class__): self.fs.pop(i)

@patch
def set_base_transforms(self:DataLoader):
    "Removes all transforms with a `size` parameter"
    for attr in ['after_item', 'after_batch']:
        tfms = getattr(self, attr)
        for o in tfms:
            if hasattr(o, 'size'): tfms.remove(o)
        setattr(self, attr, tfms)

To use:

test_dl = learn.dls.test_dl(fnames[:3], bs=1)
test_dl.set_base_transforms()

I may come up with a better solution, but this is more what I have in mind :slight_smile:
(I’ll also just update this post if I do)

1 Like

One more nice and easy method is to pass only the item tfms you want when building the test set (here item_tfms stands for whatever item transform(s) you still want applied):
test_dl = learn.dls.test_dl(df, after_item=Pipeline([item_tfms, ToTensor()]))

Do fastai v2 segmentation dataloaders take care of applying the same affine transformations to masks as to the corresponding images?