I’m trying to use the new fastai `ImageDataBunch` class to create a Siamese dataset of the form `((img1, img2), target)`, where the target marks the pair as similar/dissimilar (0/1). The issue is that the loss function infers its dimensions from the input, which in this case is the pair `(img1, img2)`. My Siamese network looks like this:
```python
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, embedding_net):
        super(SiameseNet, self).__init__()
        self.embedding_net = embedding_net
        # Pairwise distance collapses each pair to a single number,
        # so keepdim=True gives shape (batch, 1) for the final layer.
        self.pdist = nn.PairwiseDistance(keepdim=True)
        self.ffc = nn.Linear(1, 2)

    def forward(self, x1, x2):
        output1 = self.embedding_net(x1)
        output2 = self.embedding_net(x2)
        out = self.pdist(output1, output2)          # (batch, 1)
        out = F.log_softmax(self.ffc(out), dim=-1)  # (batch, 2) log-probs
        return out

    def get_embedding(self, x):
        return self.embedding_net(x)
```
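For context, here's a quick shape sanity check I ran with a stand-in embedding net (just a toy `nn.Linear` on random vectors, not my real model), confirming the forward pass produces `(batch, 2)` log-probabilities:

```python
import torch

emb = nn.Linear(32, 16)                 # dummy embedding, not my real network
model = SiameseNet(emb)
x1, x2 = torch.randn(8, 32), torch.randn(8, 32)
print(model(x1, x2).shape)              # torch.Size([8, 2]) -> log-probs per pair
```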
My loss function is `nn.NLLLoss`, but it doesn’t work because the dimension of the input is 1, as you can see in the code above. What are the best practices for training a Siamese network (multiple inputs) with fastai v1?
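To be explicit about what I think the loss expects (this is my understanding of plain PyTorch, nothing fastai-specific): `nn.NLLLoss` wants log-probabilities of shape `(batch, n_classes)` plus integer targets of shape `(batch,)`, so the model output above should in principle be compatible:

```python
import torch
import torch.nn as nn

loss_fn = nn.NLLLoss()
log_probs = torch.randn(8, 2).log_softmax(dim=-1)  # (batch, n_classes)
targets = torch.randint(0, 2, (8,))                # 0 = similar, 1 = dissimilar
print(loss_fn(log_probs, targets))                 # scalar loss tensor
```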
Ideally, I would like `ImageDataBunch` to be able to handle multiple inputs and apply transforms to all of them.
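To illustrate what I mean (the names here are hypothetical, loosely inspired by the custom ItemList tutorial in the docs, so please correct me if I'm off base), I'm imagining something like an `ImageTuple` item that applies the same transform list to both images:

```python
from fastai.vision import *  # ItemBase, Image, etc.

class ImageTuple(ItemBase):
    "Hypothetical paired-image item; not an existing fastai class."
    def __init__(self, img1, img2):
        self.img1, self.img2 = img1, img2
        self.obj = (img1, img2)
        self.data = [img1.data, img2.data]

    def apply_tfms(self, tfms, **kwargs):
        # Apply the transform list to both halves of the pair.
        self.img1 = self.img1.apply_tfms(tfms, **kwargs)
        self.img2 = self.img2.apply_tfms(tfms, **kwargs)
        self.data = [self.img1.data, self.img2.data]
        return self
```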
PS: Thank you for your effort with this library and the courses; I’ve learned a huge amount!