[HELP] Making show_results work for custom dataset

I have been experimenting with fastai2 on a dataset with 4 segmentation masks. So far I have gotten almost everything to work except the show_results function. Here is my Colab notebook. I followed the Siamese tutorial to create it; the Siamese notebook runs fine, but mine fails. Can someone figure out what I’m doing wrong? I have been trying for days. :frowning_face:

Zach suggested I ask you, @sgugger. If you have time, could you please look into this? You can ignore the code inside show_results; it never even gets invoked, which seems to be something to do with the typedispatch system.

I’m having this problem as well. I need to call learn.show_results() with tensors of shape torch.Size([bs, 3, 10, 128, 128]) (yes, channels and seq_len are swapped, because I found a pretrained 3D ResNet that wants tensors this way; don’t ask me why). In the worst case I can rewrite it from scratch, but I hope there is a smarter way to proceed.
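To make the axis swap concrete, here is a toy illustration with plain tuples of what x.permute(0, 2, 1, 3, 4) does to that shape (not fastai or torch code, just shape bookkeeping):

```python
# (bs, channels, seq_len, H, W) as the pretrained 3D ResNet wants it
shape = (8, 3, 10, 128, 128)
# the index order passed to x.permute(0, 2, 1, 3, 4)
perm = (0, 2, 1, 3, 4)
# permute reorders the axes: position i of the result takes axis perm[i]
permuted = tuple(shape[p] for p in perm)
print(permuted)  # (8, 10, 3, 128, 128) -> (bs, seq_len, channels, H, W)
```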

Ok, I’ve made them work. The tricky part is figuring out how to trigger the @typedispatch system. In my case it was enough to annotate the arguments as x:TensorImage, y:TensorCategory in the function declaration.

@typedispatch
def show_batch(x:TensorImage, y:TensorCategory, samples, ctxs=None, max_n=6, nrows=None, ncols=2, figsize=None, debug=False, **kwargs):
    if debug:
        print(f'My custom show_batch() - {x.shape} - {type(x)} - {y.shape} - {type(y)} - x0: {x[0].shape}') # torch.Size([8, 3, 10, 28, 28])
    # the pretrained 3D ResNet wants (bs, channels, seq_len, H, W); swap back to (bs, seq_len, channels, H, W) for display
    xb_seq = x.permute(0, 2, 1, 3, 4)
    if figsize is None: figsize = (ncols*6, max_n//ncols * 3)
    # size the grid on the batch dimension (x.shape[0]), not on the sequence length, or the loop below can run past the batch
    if ctxs is None: ctxs = get_grid(min(x.shape[0], max_n), nrows=nrows, ncols=ncols, figsize=figsize)
    if debug:
        print(xb_seq.shape)   # torch.Size([8, 10, 3, 28, 28])
        print(len(ctxs))
    for i,ctx in enumerate(ctxs):
        if debug:
            print(i, ctx, xb_seq[i].shape, y[i].shape)
        # unroll_image_sequence is my own helper: it tiles the frames of one sequence into a single image
        final_image = TensorImage(unroll_image_sequence(xb_seq[i], dim=2, debug=False))
        final_image.show(ctx=ctx, title=y[i])

@typedispatch
def show_results(x:TensorImage, y:TensorCategory, samples, outs, ctxs=None, max_n=6, nrows=None, ncols=2, figsize=None, debug=False, **kwargs):
    if debug:
        print(f'My custom show_results() - {x.shape} - {type(x)} - {y.shape} - {type(y)} - y: {y} - x0: {x[0].shape}') # torch.Size([8, 3, 10, 28, 28])
        print(f'My custom show_results() - {type(samples)} - {type(outs)} - {len(samples)} - {len(outs)}')
        print(f'My custom show_results() - {type(samples[0])} - {type(outs[0])} - {len(samples[0])} - {len(outs[0])}')
        print(f'My custom show_results() - ({type(samples[0][0])}, {type(samples[0][1])}) - {type(outs[0][0])} - {len(samples[0][0])} - {len(outs[0][0])}')
        print(f'My custom show_results() - {samples[0][0].shape} - {outs[0][0]}')

    # same axis swap as in show_batch: (bs, channels, seq_len, H, W) -> (bs, seq_len, channels, H, W)
    xb_seq = x.permute(0, 2, 1, 3, 4)
    if figsize is None: figsize = (ncols*6, max_n//ncols * 3)
    # size the grid on the batch dimension (x.shape[0]), not on the sequence length
    if ctxs is None: ctxs = get_grid(min(x.shape[0], max_n), nrows=nrows, ncols=ncols, figsize=figsize)
    if debug:
        print(xb_seq.shape)   # torch.Size([8, 10, 3, 28, 28])
        print(len(ctxs))
    for i,ctx in enumerate(ctxs):
        if debug:
            print(i, ctx, xb_seq[i].shape, y[i].shape)
        final_image = TensorImage(unroll_image_sequence(xb_seq[i], dim=2, debug=False))
        # show the ground truth next to the prediction for this sample
        final_image.show(ctx=ctx, title=f'Actual: {y[i]}/Pred: {outs[i][0]}')
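The trigger mechanism above can be sketched with stand-in types. This is a toy model of fastai's @typedispatch, not its actual implementation: it keeps a table keyed by the annotations of the first two parameters and routes each call by isinstance, which is why annotating x:TensorImage, y:TensorCategory is what makes fastai pick your function over the generic one.

```python
import inspect

class ToyTypeDispatch:
    """Toy model of fastai's @typedispatch: pick the registered function
    whose annotations on the first two parameters match the argument types."""
    def __init__(self): self.funcs = []
    def add(self, f):
        # read the type annotations of the first two parameters
        params = list(inspect.signature(f).parameters.values())
        self.funcs.append((tuple(p.annotation for p in params[:2]), f))
        return self
    def __call__(self, x, y, *args, **kwargs):
        # first registered match wins (fastai resolves by type specificity)
        for (tx, ty), f in self.funcs:
            if isinstance(x, tx) and isinstance(y, ty):
                return f(x, y, *args, **kwargs)
        raise TypeError(f'no match for {type(x)}, {type(y)}')

class TensorImage(list): pass      # stand-ins for the fastai types
class TensorCategory(int): pass

show_batch = ToyTypeDispatch()

@show_batch.add
def _(x: TensorImage, y: TensorCategory, samples=None):
    return 'custom show_batch'

@show_batch.add
def _(x: object, y: object, samples=None):
    return 'generic fallback'

print(show_batch(TensorImage([1]), TensorCategory(0)))  # custom show_batch
print(show_batch('plain', 'types'))                     # generic fallback
```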

I’m also not sure I followed the Siamese tutorial properly… for example, x is always a TensorImage in show_batch(), show_results(), etc., never an ImageSequence or ImageSequenceBlock (my notebook’s equivalents of the tutorial’s ImageTuple/ImageTupleBlock). I don’t know if this will backfire later :slight_smile:
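For what it's worth, fastai's typedispatch resolves on isinstance, so if a hypothetical ImageSequence type were defined as a subclass of TensorImage, functions registered for TensorImage would still fire for it, and a registration for the more specific type would take priority. A toy sketch with stand-in classes (ImageSequence here is hypothetical, not fastai code):

```python
class TensorImage: pass                  # stand-in for fastai's TensorImage
class ImageSequence(TensorImage): pass   # hypothetical custom batch type

def best_match(x, registered):
    # Toy model of dispatch resolution: pick the most specific registered
    # type that the argument is an instance of (most-specific-first order),
    # so an ImageSequence reaches the TensorImage implementation unless a
    # more specific one is registered.
    for t in registered:
        if isinstance(x, t): return t.__name__

print(best_match(TensorImage(), [ImageSequence, TensorImage, object]))    # TensorImage
print(best_match(ImageSequence(), [ImageSequence, TensorImage, object]))  # ImageSequence
```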

Anyway, I’m happy to have this machinery working now! :partying_face:

I tried many things too, but nothing worked, so I gave up. :frowning_face: