Is there a way to override show_batch/show_results when you have multiple inputs?

I have a function that returns a list of tuples that look something like this from the before_batch callback:

(BaseInput(inputs), BaseInput(inputs), *sample[2:])

show_batch bombs with an AttributeError: 'tuple' object has no attribute 'show'.

Do I need to wrap that whole tuple in a custom class (that derives from Tuple)? Or is there a way to set up the type-dispatched show_batch, such as x:(BaseInput, BaseInput), so that it would know to call it?

Yes you do. Python sucks and does not let us check types inside a tuple/collection :frowning: (e.g. Tuple[Thing, OtherThing] is not something you can use as a runtime type in Python, it’s just an artifice of the type-annotation system).
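To illustrate the point above in plain Python (no fastai needed): a parametrized typing.Tuple[...] can't be used in an isinstance check, so dispatch needs one concrete wrapper class to match on. The names PairedInput and the stub BaseInput below are just illustrative, not fastai API:

```python
from typing import Tuple

class BaseInput:
    """Stand-in for the BaseInput in the question above."""
    def __init__(self, data): self.data = data

# A concrete tuple subclass gives type dispatch one real class to match on.
class PairedInput(tuple):
    def __new__(cls, *items):
        return super().__new__(cls, items)

# Parametrized generics cannot be used with isinstance at runtime.
try:
    isinstance((1, 2), Tuple[int, int])
    usable_as_runtime_type = True
except TypeError:
    usable_as_runtime_type = False

pair = PairedInput(BaseInput("a"), BaseInput("b"))
print(usable_as_runtime_type)         # False
print(isinstance(pair, PairedInput))  # True -- this is dispatchable
```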


What is the best approach to “show” the things in my custom tuple?

Imagine my inputs is really just a TensorText and my *sample[2:] is a TensorCategory:

CustomTuple(BaseInput(inputs), BaseInput(inputs), *sample[2:])
def show_batch(x:CustomTuple, y, samples, ctxs=None, max_n=6, **kwargs):
    pdb.set_trace()
    pass

I would use samples, but it is empty when I go this route (y is empty too).

So just wondering what the best approach would be to get those TensorText and TensorCategory objects “showing” what they should?

Not sure since I haven’t done it in a while. Follow the siamese tutorial for examples.


Ah yah cool … forgot about that :slight_smile:

I see that show_batch accepts a kwargs argument … how can I pass custom values into it? Is there a way to pass something via a transform, or when I create the DataLoaders object via dblock.dataloaders(...)?
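For what it's worth, the general pattern (I haven't verified fastai's exact plumbing) is that keyword arguments given at the call site are simply forwarded down through **kwargs to the dispatched show_batch. A minimal sketch in plain Python, where dls_show_batch is just an illustrative stand-in for the caller, not fastai's API:

```python
# The dispatched show_batch: custom values passed by the caller land in kwargs.
def show_batch(x, y, samples, ctxs=None, max_n=6, **kwargs):
    return kwargs

# Stand-in for the wrapper that DataLoaders.show_batch would play:
# anything extra the user passes is forwarded untouched.
def dls_show_batch(b, **kwargs):
    x, y, samples = b
    return show_batch(x, y, samples, **kwargs)

result = dls_show_batch((None, None, []), figsize=(6, 6), title="demo")
print(result)  # {'figsize': (6, 6), 'title': 'demo'}
```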

Thanks!

I have worked on this, but it will take me a while to figure out what I did; I kind of set this up months ago.

class CycleImage(Tuple):
    def toTensor(self):
        selfie, anime = self
        return torch.cat([selfie, anime], dim=2)

    def show(self, ctx=None, **kwargs):
        # denorm and img_size are defined elsewhere in my project
        img = self.toTensor().detach().cpu()
        selfie = denorm('selfie', img[:, :, 0:img_size])
        anime = denorm('anime', img[:, :, img_size:])
        return show_image(torch.cat([selfie, anime], dim=2), ctx=ctx)

You then have to make sure your images are still encoded as the above tuple type, and do not lose it through your pipeline. Sorry I am still trying to figure out what I did myself.
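A quick illustration of how the tuple type gets lost (plain Python; as far as I know, fastcore's Tuple/fastuple exists partly to keep the subclass alive through operations like these):

```python
# A plain tuple subclass silently degrades to tuple under common operations,
# which is one way a custom type can get lost in a pipeline.
class CycleImage(tuple):
    pass

pair = CycleImage(("selfie", "anime"))
print(type(pair).__name__)        # CycleImage
print(type(pair[0:2]).__name__)   # tuple -- slicing drops the subclass
print(type(pair + ()).__name__)   # tuple -- concatenation does too
```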

Mine ended up working eventually, you can look through it here to see if anything helps. This is probably some of the worst code I have ever written, so only skim it. There may even be dead lines that are not used anymore. I was doing this when I had no idea how to use fastai2, and spent a few months working on the project before getting everything to work.
