Fastai v2 chat

@zlapp Do you have a full example I could test?
This will make it easier to propose a fix.

In the meantime you can use log_preds=False in the callback.
I’ll propose a PR so that the callback does not completely fail if there is an error in logging predictions.

Thanks @boris will try with log_preds=False
Attached is a standalone notebook I put together; hope it’s clear enough.
I artificially created a df from dogs vs cats using MultiCategoryBlock (since that is my use case), but I think the issue persists with CategoryBlock as well and was able to reproduce the error.

Thanks @zlapp, this PR should fix your issue and let WandbCallback handle dataframes.

getattr(self.dls.valid_ds.items, 'iloc', self.dls.valid_ds.items)[i]
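The line above relies on a simple fallback pattern: pandas objects need positional indexing via .iloc, while lists and arrays index directly, and getattr returns the object itself when there is no .iloc attribute. A minimal self-contained sketch of the idea (FakeSeries is a hypothetical stand-in for a pandas-like object, not real fastai code):

```python
class FakeSeries:
    """Hypothetical stand-in for a pandas Series: positional access via .iloc."""
    def __init__(self, data):
        self._data = data

    @property
    def iloc(self):
        return self._data  # simplified; real pandas .iloc is an indexer object


def get_item(items, i):
    # Use .iloc when it exists (pandas), otherwise index the object directly.
    return getattr(items, 'iloc', items)[i]


print(get_item(['a', 'b', 'c'], 1))              # 'b' (plain list)
print(get_item(FakeSeries(['a', 'b', 'c']), 1))  # 'b' (pandas-like object)
```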

:+1:

I ran into a similar indexing problem when DistributedDL “wraps” an underlying TabularPandas-based dataloader: any integer index reference such as dataset[i] will break.

Perhaps it’s worthwhile to do a blanket search for .items[ and .dataset[ (and other patterns), and consider a fix at the Datasets level…

Yeah, I guess the challenge is how to define those weights for multi-label… I am actually trying something like that at the moment; let’s see how it goes.

That code worked, now I can call weighted_dataloaders from a TabularPandas object, thanks!

However, I am getting this warning in the call:
wdls = to.weighted_dataloaders(wgts=range(len(to.train)), bs=16)
Could not do one pass in your dataloader, there is something wrong in it

And then an error when I call wdls.show_batch(). I am assuming that the weights of a weighted_dataloader must have the same length as the training dataset, is that correct?
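For intuition only: weighted sampling draws each training item with probability proportional to its weight, so the weight list has to line up one-to-one with the training items. A toy stdlib sketch of that idea (not fastai code; item names are made up):

```python
import random

train_items = ['cat0', 'cat1', 'dog0', 'dog1']
wgts = [1, 1, 1, 5]  # 'dog1' should be drawn roughly 5x as often
assert len(wgts) == len(train_items)  # one weight per training item

random.seed(0)
batch = random.choices(train_items, weights=wgts, k=8)  # weighted draw with replacement
print(batch)
```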

Does it work if you use a regular dataloader? It’s best to share a small reproducible example.

Apologies for cross-posting this from another forum. I am seeking advice on the best fastai-based solution for working with images where the input annotations are bounding boxes (not segmentation masks). New models seem to be cropping up very fast:

  • Fast R-CNN (Girshick 2015),
  • Faster R-CNN (Ren et al. 2016),
  • Feature Pyramid Networks (Lin et al. 2017),
  • Mask R-CNN (He et al. 2017),
  • Mask scoring R-CNN (Huang et al. 2019),
  • Detectron1 and now Detectron2 in PyTorch (Wu et al. 2019), etc.

Does anyone know what the state of the art would be in terms of models that are already implemented in either fastai v1 or fastai v2, for learning from bounding box annotations? The classes covered one approach in late 2018 but I’m wondering if there are better approaches now.


I think none of them is implemented; only U-Net for semantic segmentation is. However, you could just wrap torchvision models in a Learner and get all the advantages of the fastai v2 framework!

I did see one implementation of Mask R-CNN here, but the architecture is quite complicated and it doesn’t look like you could easily wrap it in a Learner. The author called it “mainly a personal learning exercise” and the repo doesn’t appear to have been very active, but I don’t understand why more people haven’t grabbed it (is it hard to use? slow to train? are there better approaches?). I also noticed that Detectron2 is a PyTorch rewrite of Detectron1, but I don’t know if anyone’s ported it into fastai yet.

You need to look at the torchvision docs:

torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, **kwargs)

I am going to try torchvision models with fastai v2 in a few days. I can let you know then!

Thank you for pointing that out; my ability to miss the obvious amazes me. BTW, the link didn’t work but this should: Torchvision models. I would love to hear about your experience.


Sorry, I copied a line of code as URL :joy:

I’ll let you know when I try it! I think I will look at it on Monday!

@jeremy apologies for bothering you, but I noticed that the code in the front-page fastai2 article (on fast.ai) still uses databunch several times, and one code snippet I tried didn’t work as a result.

In the COCO dataset section I think the line should be changed from dls = coco.databunch(...) to dls = coco.dataloaders(...) and I’m guessing similar refactoring is needed for the other snippets. I mention it because the article says it is specifically about v2.

John

What is the easiest way of getting an item from a DataLoader with the transforms already applied?

Sorry if I misunderstood the question, but dls.show_batch() should show the images with the transforms applied. I guess you could specify p=1 on a RandTransform you want to inspect, in order to see how it behaves.

Has anybody tried model ensembling in fastai2?
I found @muellerzr’s notebook with tabular data that averages the predictions. I was thinking more of merging the models to give a single prediction rather than averaging them.
I tried some PyTorch approaches following this thread but have been unsuccessful so far.

Practically, I am able to replace the last linear layer of a resnet34 using list(modelA.model.children())[1][8] = nn.Identity(), but I am not able to build the ensemble model for prediction after that.

Any ideas?
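One way to go beyond averaging is to keep both headless bodies as feature extractors and train a single new head on their concatenated features. A minimal plain-PyTorch sketch; the two nn.Sequential bodies here are toy stand-ins for the resnet34 bodies whose final Linear was swapped for nn.Identity, and all names and sizes are made up:

```python
import torch
import torch.nn as nn


class Ensemble(nn.Module):
    """Concatenate features from two headless models, then classify once."""
    def __init__(self, body_a, body_b, n_feats, n_classes):
        super().__init__()
        self.body_a, self.body_b = body_a, body_b
        self.head = nn.Linear(n_feats, n_classes)

    def forward(self, x):
        feats = torch.cat([self.body_a(x), self.body_b(x)], dim=1)
        return self.head(feats)


# Toy stand-ins for two bodies whose last Linear was replaced by nn.Identity:
body_a = nn.Sequential(nn.Linear(12, 8), nn.ReLU(), nn.Identity())
body_b = nn.Sequential(nn.Linear(12, 4), nn.ReLU(), nn.Identity())

ens = Ensemble(body_a, body_b, n_feats=8 + 4, n_classes=2)
out = ens(torch.randn(5, 12))  # batch of 5 fake inputs
print(out.shape)  # torch.Size([5, 2])
```

The wrapper itself is an nn.Module, so it can be trained end to end like any other model.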

I need to assign the image with the transforms to a variable
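In plain PyTorch terms, the pattern is just to pull one batch out of the loader and hold it in a variable; a toy sketch where the dataset applies its own "transform" (in fastai, dls.one_batch() plays a similar role):

```python
import torch
from torch.utils.data import DataLoader, Dataset


class Doubling(Dataset):
    """Toy dataset whose 'transform' doubles each value."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        return torch.tensor(float(i)) * 2  # transform applied here


dl = DataLoader(Doubling(4), batch_size=2)
batch = next(iter(dl))  # first batch, transforms applied, held in a variable
print(batch)  # tensor([0., 2.])
```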

This is probably a dumb question but how do I save my model as Fp32 if I trained it with fp16?

EDIT: I solved my question by doing

learn_inf = load_learner('drive/My Drive/pkls/export.pkl')
learn_inf = learn_inf.to_fp32()
learn_inf.export()

You can do learn.to_fp32()
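Under the hood this amounts to casting the parameters back to float32; the plain-PyTorch analogue is .float() (a rough sketch only, not fastai’s exact implementation):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2).half()   # fp16 parameters, as after fp16 training
assert model.weight.dtype == torch.float16
model = model.float()            # cast back to fp32 before exporting
print(model.weight.dtype)  # torch.float32
```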
