Fastai v2 vision

I’m trying to recreate the heatmap part of the lesson 6 pets-more.ipynb notebook on a test_dl using fastai2. As far as I can tell, there’s no show_heatmap parameter for ClassificationInterpretation, which I believe existed in v1. Could someone help me build those Grad-CAM heatmaps for a single example? If it works fine, I’ll try to patch it in as a method of ClassificationInterpretation.

What is the equivalent of open_image in fastai v2?

If you’re looking for a way to show an image from a filename, load_image will do the job; show_image can be used to display a tensor.
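For instance (a quick sketch; the filename is illustrative):

import numpy as np
from fastai2.vision.all import *

fn = 'images/cat.jpg'          # illustrative path to any image file
img = load_image(fn)           # returns a PIL Image from a filename
show_image(img)                # show_image accepts PIL images...
show_image(np.array(img))     # ...as well as tensors/arrays

# Or, staying at a higher level:
PILImage.create(fn).show()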


We create a heatmap here: https://github.com/fastai/fastbook/blob/master/18_CAM.ipynb
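Roughly, the Grad-CAM part of that notebook goes like this (a sketch rather than the exact cells; it assumes learn is a trained cnn_learner whose convolutional body is learn.model[0], and x is a single preprocessed batch of shape (1, 3, H, W)):

import torch
import matplotlib.pyplot as plt
from fastai2.vision.all import *

class Hook():
    "Store the output of a module during the forward pass."
    def __init__(self, m): self.hook = m.register_forward_hook(self.hook_func)
    def hook_func(self, m, i, o): self.stored = o.detach().clone()
    def __enter__(self, *args): return self
    def __exit__(self, *args): self.hook.remove()

class HookBwd():
    "Store the gradient flowing back through a module."
    def __init__(self, m): self.hook = m.register_backward_hook(self.hook_func)
    def hook_func(self, m, gi, go): self.stored = go[0].detach().clone()
    def __enter__(self, *args): return self
    def __exit__(self, *args): self.hook.remove()

cls = 0                                      # index of the class to explain
with HookBwd(learn.model[0]) as hookg:       # hook the conv body
    with Hook(learn.model[0]) as hook:
        output = learn.model.eval()(x)
        act = hook.stored                    # activations of the body
    output[0, cls].backward()
    grad = hookg.stored                      # gradients w.r.t. those activations

w = grad[0].mean(dim=[1, 2], keepdim=True)   # channel weights from the gradients
cam_map = (w * act[0]).sum(0)                # weighted sum over channels

# Overlay the map on the decoded image (imshow upsamples it via `extent`):
x_dec = TensorImage(learn.dls.train.decode((x,))[0][0])
_, ax = plt.subplots()
x_dec.show(ctx=ax)
ax.imshow(cam_map.detach().cpu(), alpha=0.6,
          extent=(0, x.shape[-1], x.shape[-2], 0),
          interpolation='bilinear', cmap='magma')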


In case you want to do it in fastai v2, you can check it out with this notebook:


I hope you will build your own heat map at the end.



Thanks. Exactly what I was looking for. Big help!

How can I get multiple ctxs while showing a batch? Say I have a DataBlock with 3 ImageBlocks, of which two are input images and the third is the output. How can I display the second image on a different axis rather than just concatenating it to the base image?

I tried to understand how we get two image outputs when working with image-to-image models like GANs or U-Nets, but wasn’t able to trace that down.

There is no way that we know of since matplotlib does not make it easy to have subplots on a subplot.

But where’s the part of the code where you pass ctx to the individual show methods? Also, where’s the code that decides the number of subplots in the first place? I was planning to write my own show_batch method accordingly.

show_batch is the function that passes along the contexts and creates them. You can look at any of the versions written in vision.data/text.data for inspiration.

Is there an easy way to pass some transforms in with a test set? Just a few days into v2 here and on the learning curve, no pun intended. I tried the following to little effect, and couldn’t find an answer in a scour of the docs & source:

learn.get_preds(dl=learn.dls.test_dl(test_items, item_tfms = [RandomCrop(224)] ))

You’d want to adjust the after_item and after_batch of the DataLoader to change this. I’d start by exploring dl.after_item (on your test DataLoader).
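For example (a sketch; it assumes test_items is a list of image files, a fastai2 star import, and that your fastcore version exposes Pipeline.add):

dl = learn.dls.test_dl(test_items)
dl.after_item       # the item-level Pipeline, e.g. (ToTensor)
dl.after_batch      # the batch-level Pipeline, e.g. (IntToFloatTensor, Normalize)

dl.after_item.add(RandomCrop(224))   # append an item transform to the pipeline
preds, _ = learn.get_preds(dl=dl)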


I think you can do it with gridspec.
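For example, nested axes in plain matplotlib would look roughly like this (illustrative sizes; no fastai involved):

import matplotlib.pyplot as plt
from matplotlib import gridspec

fig = plt.figure(figsize=(8, 4))
outer = gridspec.GridSpec(1, 2, figure=fig)   # one outer cell per sample
for i in range(2):
    inner = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=outer[i])
    ax1 = fig.add_subplot(inner[0])           # first input image
    ax2 = fig.add_subplot(inner[1])           # second input image
    # ax1/ax2 could then be passed as the ctxs for each image's show method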

As far as I remember, we tried to use it for the Siamese example in 10_tutorial.pets, then just concatenated the images because we failed.

After some digging into the source code, I understood a few things:

@typedispatch
def show_batch(x:TensorImage, y, samples, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
    if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, figsize=figsize)
    ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)
    return ctxs

@typedispatch
def show_batch(x:TensorImage, y:TensorImage, samples, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
    if ctxs is None: ctxs = get_grid(min(len(samples), max_n), rows=rows, cols=cols, add_vert=1, figsize=figsize, double=True)
    for i in range(2):
        ctxs[i::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[i::2],range(max_n))]
    return ctxs

In the second method, ctxs[i::2] returns every alternate item from the list (starting at i), and the loop runs twice, which ends up showing the two images of each sample side by side.
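A quick illustration of that slicing:

ctxs = ['ax0', 'ax1', 'ax2', 'ax3', 'ax4', 'ax5']   # stand-ins for matplotlib axes
ctxs[0::2]   # ['ax0', 'ax2', 'ax4'] -> first image of each sample pair
ctxs[1::2]   # ['ax1', 'ax3', 'ax5'] -> second image of each sample pair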

While using DataBlock, internally we make use of TfmdDL’s show_batch method, which calls the previous one after decoding the items.

    def _pre_show_batch(self, b, max_n=9):
        "Decode `b` to be ready for `show_batch`"
        b = self.decode(b)
        if hasattr(b, 'show'): return b,None,None
        its = self._decode_batch(b, max_n, full=False)
        if not is_listy(b): b,its = [b],L((o,) for o in its)
        return detuplify(b[:self.n_inp]),detuplify(b[self.n_inp:]),its

    def show_batch(self, b=None, max_n=9, ctxs=None, show=True, **kwargs):
        if b is None: b = self.one_batch()
        if not show: return self._pre_show_batch(b, max_n=max_n)
        show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)

By looking at the return values of _pre_show_batch, I guess b[:self.n_inp] will become x, b[self.n_inp:] will become y, and its will become samples (the decoded ones).
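If that reading is right, then for a hypothetical DataBlock with blocks=(ImageBlock, ImageBlock, CategoryBlock) and n_inp=2, per the quoted source above:

x, y, samples = dls.train.show_batch(show=False)  # returns _pre_show_batch(b)
# x       -> detuplify(b[:2]): a tuple of two TensorImage (the inputs)
# y       -> detuplify(b[2:]): a TensorCategory (the target)
# samples -> the fully decoded items, one tuple per sample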

Now, a few of the use cases are:

  1. Object detection is a kind of multi-modal task where n_inp=1 and the outputs are BBoxBlock, BBoxLblBlock. Since there is only one input TensorImage, it ends up calling the first version of show_batch, and the rest of the blocks share the same ctx, which is the desirable behavior (e.g. bounding boxes drawn on top of the base image).
  2. GANs clearly define two ImageBlocks, which triggers the call to the second show_batch method, so we get the two images shown side by side.

I’m guessing that, since y has no specified type in the first show_batch method, it can handle tuples, and somehow we end up calling the show method of each item in the tuple (maybe due to a list comprehension).

I can think of two ways to tackle this situation:

  1. Write a show_batch with type-dispatch for a tuple of TensorImage, like so:
def show_batch(x:(TensorImage,TensorImage), y:TensorImage, samples, ctxs=None, max_n=10, rows=None, cols=None, figsize=None, **kwargs):

I guess I need to change the slicing to ctxs[i::3] and it should work. But again, this won’t work when we have custom types, or even with TensorMask, TensorImageBw, etc.

  2. Count the number of items whose base class is fastai2.torch_core.TensorImageBase using something like inspect.getmro(type(t)), then make the grid accordingly. But what should be the type of x in this case?

This is as far as I was able to think :sweat_smile: Correct me if I’ve gotten anything wrong.

You need to create your own subclass of tuple with two TensorImage. x:(TensorImage,TensorImage) will be understood by Python as "either a TensorImage or a TensorImage". I’m putting together a tutorial that will show how to customize show_batch and show_results.
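For reference, a minimal sketch of such a subclass and its dispatch (along the lines of what the Siamese tutorial ends up doing; names and details here are illustrative, and Tuple is fastcore’s tuple subclass, later renamed fastuple):

from fastai2.vision.all import *

class ImageTuple(Tuple):
    "A tuple of two images that knows how to show itself."
    @classmethod
    def create(cls, fns): return cls(tuple(PILImage.create(f) for f in fns))
    def show(self, ctx=None, **kwargs):
        t1, t2 = self
        if not isinstance(t1, Tensor) or not isinstance(t2, Tensor) or t1.shape != t2.shape:
            return ctx
        line = t1.new_zeros(t1.shape[0], t1.shape[1], 10)   # thin separator strip
        return show_image(torch.cat([t1, line, t2], dim=2), ctx=ctx, **kwargs)

@typedispatch
def show_batch(x:ImageTuple, y, samples, ctxs=None, max_n=6, **kwargs):
    # Dispatching on our own type gives us one ctx per sample; the generic
    # show_batch[object] then calls each sample's show, defined above.
    if ctxs is None: ctxs = get_grid(min(len(samples), max_n))
    return show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)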


I’ve stumbled upon this part earlier as well; once you start with an ItemTransform, encodes keeps returning a plain tuple regardless of what you return. I’ll try this again, but I see one problem with this: how could I then define independent transforms for the elements of the tuple (which, in this case, will be my own subclass)? I can guess it should be something related to the order of the Transform, but I don’t know how.

Also, by doing this, do you mean I have to keep both images in a single Block? (In my case, they need to be dealt with differently.)

Thank you @sgugger for the siamese tutorial notebook!

It’s still a work in progress, so please tell me if there are things there that don’t seem clear.


When implementing the show_batch for the x:ImageTuple

def show_batch(x:ImageTuple, y, samples, ctxs=None, max_n=6, rows=None, cols=2, figsize=None, **kwargs):

you are saying:

Here we only dispatch on the x, but we could have custom behaviors depending on the targets.

1°. What do you mean by "only dispatch on the x"?

2°. The x and y in:

ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)

seem to be useless and can be replaced by None, None. What could we use them for, since, as you said, the actual samples are in the samples variable?

3°. If, instead of the CategoryBlock, I use a different target like MaskBlock (along with the ImageTupleBlock), should I change the show method of the MaskBlock if I want the mask drawn to the right of the ImageTupleBlock (drawing 3 images in a row), or is it better to do custom plotting/drawing of the samples instead of calling show_batch[object]?

Thanks!
