It is working for variable image sizes. All the work is done in a subclass of Learner.
Oh, I’m so sorry, I forgot to answer this.
I don’t have Telegram, but I really like the forum system we have here. If you like, we can create a thread just for discussing this project, with the added benefit of being open to anyone who wants to collaborate =)
In the previous answer there is a topic that was created for that purpose. I have been posting my progress there.
So I believe this is your current DataBlock?
```python
maskrccnnDataBlock = DataBlock(
    blocks=(ImageBlock, MaskRCNNBlock),
    get_items=partial(get_image_files, folders=[manual_name]),
    getters=getters,
    splitter=RandomSplitter(valid_pct=0.1, seed=2020),
    item_tfms=Resize((size, size)),
    batch_tfms=Normalize.from_stats(*imagenet_stats)
)
```
Does it work if you remove the resize from item_tfms?
If the problem is only in predict, I’d create a subclass of Learner and deal with predict (or get_preds, if the problem comes from there) by overriding the method, like we do for the language model (predict).
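To make the pattern concrete, here’s a minimal, self-contained sketch of that idea. The base Learner here is a stand-in, not fastai’s real class, and the toy model is just `len`; the point is only the overriding pattern:

```python
class Learner:
    """Stand-in for fastai's Learner; only the pieces needed for the sketch."""
    def __init__(self, model):
        self.model = model

class MaskRCNNLearner(Learner):
    """Hypothetical subclass: override predict to sidestep batch collation."""
    def predict(self, items):
        # Feed images one by one (as a plain list), so tensors of
        # different shapes never need to be stacked into one batch.
        return [self.model(item) for item in items]

# Toy "model" that just reports the size of each input.
learner = MaskRCNNLearner(model=len)
preds = learner.predict([[1, 2, 3], [4, 5]])
```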
While your Bucket solution works, I feel it’s a lot of magic
There is, and I don’t like it either
Alright then! I’ll try this solution today and report how it goes, thank you a lot!!
Yes, after removing it, it works. I also suggest changing the MaskRCNN class name to solve the num_workers issue.
The only work left is finding the place where values are passed to the metrics, so I can extract the mask from the dict and use the segmentation metrics.
I was given some intuition here:
However, I don’t understand where I should make the changes he suggests. I think he is suggesting that I create new metrics that do this change.
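If that reading is right, the change could live in a small wrapper rather than in brand-new metric classes. A minimal sketch, assuming targets are dicts with a "masks" key (the key name and the toy metric below are assumptions, not your actual format):

```python
def dict_to_mask_metric(seg_metric, key="masks"):
    """Wrap a segmentation metric so it accepts dict targets."""
    def metric(preds, targets):
        # Extract the mask from each target dict before scoring.
        masks = [t[key] for t in targets]
        return seg_metric(preds, masks)
    return metric

# Toy segmentation "metric": fraction of exact matches.
def exact_match(preds, masks):
    return sum(p == m for p, m in zip(preds, masks)) / len(masks)

metric = dict_to_mask_metric(exact_match)
score = metric([1, 0], [{"masks": 1}, {"masks": 1}])
```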
Are your images already of the same size? I’m failing to see where you modified the collate function of your dataloader, and without changing that it cannot collate images of different sizes
No @lgvaz , my images are of different sizes.
In my Learner subclass I append all TensorImages to a list and feed it into the model. I didn’t override or modify the collate function!
I see that, but the dataloader should not even work if the images are of different sizes, before the learner gets to say anything; this is what I’m failing to understand.
Can you explain how, in your case, a batch is collated when, let’s say, you have an image of size (224, 224) and another of size (512, 512)?
My bad, I looked into the dataloader line and batch_size is set to 1. That was a problem I had and forgot about.
In the PyTorch forums there are some collate functions listed. The most common solution is the following:

```python
import torch

def my_collate(batch):
    data = [item[0] for item in batch]    # keep inputs as a plain list, no stacking
    target = [item[1] for item in batch]
    target = torch.LongTensor(target)
    return [data, target]
```
We need to override the batch collate function that the fastai library is using.
Makes sense! =)
A simple zip collate function like the one I described before works great:
```python
def _zip_collate(t): return Tuple(zip(*t))
```
I’m working on that rn
Where do you pass that function??
You have to modify
I was looking there just now. However, I don’t know how to link the DataBlock to use that new DataLoader.
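For what it’s worth, the zip collate is easy to sanity-check in isolation: it just transposes a list of (image, target) samples into tuples, without stacking anything. In fastai2 I believe it can be handed to the DataLoader through the `create_batch` hook (e.g. `dblock.dataloaders(source, create_batch=_zip_collate)`), but treat that kwarg as an assumption to verify:

```python
def _zip_collate(t):
    # Transpose [(x1, y1), (x2, y2), ...] into (xs, ys) without
    # stacking, so items of different sizes can live in one "batch".
    return tuple(zip(*t))

# Two samples with different (pretend) image sizes collate fine:
batch = [("img_224x224", {"masks": "m1"}), ("img_512x512", {"masks": "m2"})]
xs, ys = _zip_collate(batch)
```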
Take a look at my repo, I’ve done all that =)
You can ignore the bucket things, I’ll change all of that
However, the last commit is from 20 hours ago.
Yeah, but the modifications on the dataloaders were already done. I’ll update it soon to remove the buckets anyways
Okay, this kind of worked, but some other problems showed up.
The first problem was the above-mentioned find_bs. It wasn’t as trivial to solve as I was expecting, and I ended up having to monkey patch it anyways…
Another problem is in
```
     95 if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim))
     96 if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim))
---> 97 if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))
     98 if self.with_loss: self.losses = to_concat(self.losses)
```
targets is a list of dicts, and to_concat uses is_iter. Those two are the reason I created the buckets in the first place, so trying to remove the buckets gets me back to the initial condition (now in a different situation)
Any ideas? Buckets are starting to look not so bad after all…
You can use a different callback for your specific case, and rewrite get_preds instead of predict.
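A bare-bones sketch of that direction, with stand-in names (not fastai’s actual callback API): gather targets by extending a plain list, so the dicts never have to go through to_concat at all:

```python
class GatherPreds:
    """Stand-in for a prediction-gathering callback."""
    def __init__(self):
        self.preds, self.targets = [], []

    def after_batch(self, preds, targets):
        self.preds.extend(preds)
        # Dicts can't be concatenated like tensors, so keep them in a flat list.
        self.targets.extend(targets)

cb = GatherPreds()
cb.after_batch([0.1, 0.9], [{"masks": "a"}, {"masks": "b"}])
cb.after_batch([0.4], [{"masks": "c"}])
```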