Siamese network for one-shot learning in fastai

I was thinking of using a Siamese network for one-shot learning. Its forward method

def forward(self, x1, x2):
    """
    - x1: a Variable of size (B, C, H, W). The left image of each pair,
      stacked along the batch dimension.
    - x2: a Variable of size (B, C, H, W). The right image of each pair,
      stacked along the batch dimension.
    """

requires two inputs, whereas create_cnn expects one. How can it be used with create_cnn?

1 Like

I’ve been experimenting with that; the code below is how I managed to get a model with two inputs, x1 and x2.

The tricky part for me was making the data block API give a pair of images. I don’t know if there is an easy way of doing that.

I think it would be interesting if there were a way to combine multiple data block objects. Several applications require multiple inputs like this: Siamese networks, data distillation, or more complex networks combining different kinds of input.

from fastai.vision import *   # create_body, create_head, models, nn, torch

class SiameseResnet34(nn.Module):
    def __init__(self):
        super().__init__()
        # a single shared body: both images go through the same pretrained weights
        self.body = create_body(models.resnet34(True), cut=-2)
        # the two 512-channel feature maps are concatenated (1024 channels), and
        # create_head's AdaptiveConcatPool2d doubles that again, hence 2048
        self.head = create_head(2048, 1, [512])

    def forward(self, x1, x2):
        out1 = self.body(x1)
        out2 = self.body(x2)
        out = torch.cat((out1, out2), dim=1)   # concatenate along the channel dim
        out = self.head(out)
        return out.view(-1)                    # one similarity logit per pair
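
Since create_cnn only builds single-input models, one option is to skip it and construct the Learner directly around this model. A minimal sketch, assuming a DataBunch named data whose batches are ((x1, x2), target) with float targets (e.g. built with the ImageTuple approach below), and using BCEWithLogitsLoss since the head outputs a single logit per pair:

from fastai.vision import *
import torch.nn as nn

# hypothetical DataBunch `data` yielding ((x1, x2), target) batches;
# fastai's loss_batch calls model(*xb), so the pair unpacks into forward(x1, x2)
model = SiameseResnet34()
learn = Learner(data, model, loss_func=nn.BCEWithLogitsLoss())
learn.fit_one_cycle(4, 1e-3)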
3 Likes

Try making a “custom ItemBase” using an image tuple, as described in the docs:

https://docs.fast.ai/tutorial.itemlist.html

from fastai.vision import *   # ItemBase, Image

class ImageTuple(ItemBase):
    def __init__(self, img1, img2):
        self.img1,self.img2 = img1,img2
        # obj is what gets displayed; data (fed to the model) rescales each image tensor to [-1, 1]
        self.obj,self.data = (img1,img2),[-1+2*img1.data,-1+2*img2.data]
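
To actually feed those tuples through the data block API, the same tutorial builds a custom ItemList whose get returns an ImageTuple. A rough sketch along those lines: the pairing logic here is just a placeholder that picks a random partner from a second file list, itemsB; a real Siamese setup would pick same-class and different-class pairs deliberately, and you would still need reconstruct/show methods for display. Depending on your fastai version the base class may be called ImageItemList instead of ImageList.

import random
from fastai.vision import *   # ImageList, open_image

class ImageTupleList(ImageList):
    "ItemList whose items are ImageTuple pairs."
    def __init__(self, items, itemsB=None, **kwargs):
        super().__init__(items, **kwargs)
        self.itemsB = itemsB
        self.copy_new.append('itemsB')   # keep itemsB when the list is split or copied

    def get(self, i):
        img1 = super().get(i)
        # placeholder pairing: random partner image from itemsB
        fn = self.itemsB[random.randint(0, len(self.itemsB) - 1)]
        return ImageTuple(img1, open_image(fn))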
6 Likes

Is there an efficient way to do this for multiple images?

It depends on how far you want to stray from the fastai cookbook. I’m working on something similar right now with text. I found that making a custom dataloader for paired text inputs required reinventing too much of the wheel with respect to all the infrastructure supporting text (tokenizing, switching between tokens and ids, padding batches, sampling, etc.).

I found it easier to create two separate dataloaders with the same sampler to ensure they’re lined up, and to write a modified fit function that takes an optional second dataloader as input.

This way you can create dataloaders and your learner using standard fastai methods, and everything hooks together well.
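
For the “same sampler” part, here is a plain-PyTorch sketch of the idea (toy datasets, made-up names): give both loaders a sampler that yields one precomputed index order, so batch i of the first loader lines up with batch i of the second.

import torch
from torch.utils.data import DataLoader, Sampler, TensorDataset

class FixedOrderSampler(Sampler):
    "Yields a precomputed index order; two loaders sharing one order stay aligned."
    def __init__(self, order): self.order = order
    def __iter__(self):        return iter(self.order)
    def __len__(self):         return len(self.order)

# toy stand-ins for the two halves of each pair
left_ds  = TensorDataset(torch.randn(100, 10), torch.zeros(100))
right_ds = TensorDataset(torch.randn(100, 10), torch.zeros(100))

order    = torch.randperm(len(left_ds)).tolist()   # recompute per epoch to reshuffle
left_dl  = DataLoader(left_ds,  batch_size=16, sampler=FixedOrderSampler(order))
right_dl = DataLoader(right_ds, batch_size=16, sampler=FixedOrderSampler(order))

for (x1, y1), (x2, y2) in zip(left_dl, right_dl):
    pass  # the i-th batches of left_dl and right_dl index the same items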

4 Likes

Yeah, I also looked into creating a custom ImageList; so many attributes need to be defined to take full advantage of the library. So when creating the learner, how did you pass two DataBunches? There is no section on a custom Learner the way there is for a custom ImageList.

I added an optional data_pair input to the learner class, the fit function, and the validation function. It basically works like this:

        # inside the modified fit function: if a second DataBunch was passed,
        # iterate its train_dl in step with the main one
        if data_pair:
            it = iter(data_pair.train_dl)

        for xb,yb in progress_bar(data.train_dl, parent=pbar):
            xb, yb = cb_handler.on_batch_begin(xb, yb)

            if data_pair:
                x1, y1 = next(it)
                # loss_batch calls model(*xb), so the model's forward receives (xb, x1)
                xb = [xb, x1]

            loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)

You need to make sure both dataloaders are being iterated in the same order, though.
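
One possible way to get that alignment in fastai v1, assuming both DataBunches were built from item lists in the same order, is to recreate the train dataloaders with shuffling turned off (DeviceDataLoader.new rebuilds the underlying DataLoader with new kwargs):

# sketch: keep batch i of data.train_dl pointing at the same items
# as batch i of data_pair.train_dl by disabling shuffling in both
data.train_dl      = data.train_dl.new(shuffle=False)
data_pair.train_dl = data_pair.train_dl.new(shuffle=False)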

3 Likes

I created a custom ImageList (training works properly), but the problem is with get_preds, which won’t work. I don’t have labels for the test set, so how can I make sure of positive and negative samples? And while __get__ works for training, the procedure for the test set is going to be different, so when you do add_test on the ImageItemList, it requires its own modules separately.

2 Likes

Can you please share your code? I couldn’t make a custom ImageList for my Siamese network.