Detectron2 anyone?

If the problem is only in predict, I’d create a subclass of Learner and deal with predict (or get_preds if the problem comes from there) by overriding the method, like we do for the language model (predict) or for tabularLearner (get_preds).
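A minimal sketch of that idea, using a stand-in base class since the relevant bits of fastai's `Learner` are stubbed out here (`BaseLearner` and `MaskRCNNLearner` are illustrative names, not fastai API):

```python
class BaseLearner:
    """Stand-in for fastai's Learner (only the pieces needed here)."""
    def __init__(self, model):
        self.model = model

    def predict(self, item):
        raise NotImplementedError("default predict assumes collatable batches")


class MaskRCNNLearner(BaseLearner):
    """Hypothetical subclass: predict feeds the model a one-element
    list, sidestepping the default collation/decoding pipeline."""
    def predict(self, item):
        return self.model([item])[0]
```

The point is just that the custom behavior lives in an override, the same way the language model customizes `predict` and `TabularLearner` customizes `get_preds`.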

While your Bucket solution works, I feel it’s a lot of magic :wink:


There is, I also don’t like it

Alright then! I’ll try this solution today and report how it goes, thank you a lot!!


Yes, removing it works. I also suggest you change the MaskRCNN class name to solve the num_workers issue.

The only work left is finding where values are passed to the metrics, so I can extract the masks from the dict and use the segmentation metrics.

I was given some intuition here:

However, I don’t understand where I should make the changes he suggests.

I think he suggests creating new metrics that do this extraction.

Are your images already of the same size? I’m failing to see where you modified the collate function of your dataloader, and without changing that it cannot collate images of different sizes

No @lgvaz , my images are of different sizes.

In my learner subclass I append all TensorImages to a list and feed it into the model. I didn’t override or modify the collate function!

I see that, but the dataloader should not even work if the images are of different sizes, before the learner gets to say anything; this is what I’m failing to understand.

Can you explain how, in your case, a batch is collated when, let’s say, I have an image of size (224, 224) and another of size (512, 512)?

My bad, I looked at the dataloader line and batch_size is set to 1. That was a workaround I had in place and forgot about.

In the PyTorch forums there are some collate functions listed:

The most common solution is the following:

import torch

def my_collate(batch):
    # Keep variable-sized inputs in a plain Python list instead of stacking.
    data = [item[0] for item in batch]
    target = [item[1] for item in batch]
    # Scalar class labels can still be stacked into a tensor.
    target = torch.LongTensor(target)
    return [data, target]

We need to override the batch collate function that the fastai library is using.
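Note that `my_collate` converts the targets to a `LongTensor`, which works for scalar class labels but not for Mask RCNN, whose targets are dicts. A torch-free variant of the same transpose (`dict_collate` is a hypothetical name, not from any library) would keep them as a list:

```python
def dict_collate(batch):
    # Same transpose as my_collate, but targets stay a plain list,
    # since dicts of boxes/masks cannot become a LongTensor.
    data = [item[0] for item in batch]
    target = [item[1] for item in batch]
    return [data, target]
```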


Makes sense! =)

A simple zip collate function like the one I described before works great:

def _zip_collate(t): return Tuple(zip(*t))

I’m working on that rn

Where do you pass that function??

You have to modify DataLoader.create_batch
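A hedged sketch of what overriding that hook looks like; `MiniDataLoader` is a minimal stand-in for fastai's `DataLoader` (the real class does far more), just to show where `create_batch` sits:

```python
class MiniDataLoader:
    """Minimal stand-in mimicking the hook structure of fastai's
    DataLoader; illustrative only."""
    def __init__(self, items, bs=2):
        self.items, self.bs = items, bs

    def create_batch(self, b):
        # fastai's default implementation stacks tensors here;
        # this is the method to override.
        raise NotImplementedError

    def __iter__(self):
        for i in range(0, len(self.items), self.bs):
            yield self.create_batch(self.items[i:i + self.bs])


class ZipDataLoader(MiniDataLoader):
    def create_batch(self, b):
        # Zip-style collate: transpose samples into per-field tuples
        # instead of stacking, so image sizes can differ.
        return tuple(zip(*b))
```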

I was just looking there. However, I don’t know how to make DataBlock use that new DataLoader.

Take a look at my repo, I’ve done all that =)

You can ignore the bucket things, I’ll change all of that

However, the last commit is from 20 hours ago.

Yeah, the modifications on the dataloaders were already done, but I’ll update it soon to remove the buckets anyway

Okay, this kind of worked, but some other problems showed up.

The first problem was the above-mentioned find_bs; it wasn’t as trivial to solve as I was expecting, and I ended up having to monkey patch it anyway…

Another problem is in GatherPredsCallback, here:

     95         if self.with_input:     self.inputs  = detuplify(to_concat(self.inputs, dim=self.concat_dim))
     96         if not self.save_preds: self.preds   = detuplify(to_concat(self.preds, dim=self.concat_dim))
---> 97         if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))
     98         if self.with_loss:      self.losses  = to_concat(self.losses)

targets is a list of dicts, but to_concat relies on is_listy.

is_listy and is_iter are the reason I created the buckets in the first place; trying to remove the buckets now gets me back to the initial problem :sleepy: (just in a different place)

Any ideas? Buckets are starting to look not so bad after all…


You can use a different callback for your specific case, since you can rewrite get_preds instead of predict.
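A sketch of the gathering logic such a callback could use; `GatherDictPreds` is a hypothetical stand-in, not fastai's `GatherPredsCallback`, and only shows the accumulation step that would replace the failing `to_concat` call:

```python
class GatherDictPreds:
    """Hypothetical replacement for the gathering step: dict targets
    are accumulated in a plain list instead of being run through
    to_concat/is_listy, which choke on lists of dicts."""
    def __init__(self):
        self.preds, self.targets = [], []

    def after_batch(self, preds, targets):
        self.preds.append(preds)
        # Mask R-CNN targets are dicts; append them untouched.
        self.targets.extend(targets)
```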