A walk with fastai2 - Vision - Study Group and Online Lectures Megathread

If you check the course notebooks in the fastai2 repo, every v3 notebook is converted :slight_smile:, so the GAN-based feature preservation notebook should be there. If you can't find it, I'll try looking later today.

Hi there, I am trying to implement the error_rate metric in the Retinanet notebook and have tried:
learn = Learner(data, model, loss_func=crit, metrics = error_rate) and
learn = Learner(data, model, loss_func=crit, metrics = error_rate())
The learning rate finder does display an error rate, but when running learn.fit_one_cycle(5, 1e-4)
I receive the error: TypeError: error_rate() takes 2 positional arguments but 3 were given.
Does anybody know how to get this metric to work with the multiple-object-detection Retinanet notebook?
Thank you :slight_smile:

It doesn't. Object detection needs special metrics to work well, which is why we only have our loss function. I'd recommend asking in this thread here about other metrics that could work.

Thank you very much. I was just looking for a metric I could use to monitor error rate or accuracy, but I have been having trouble. I want to use callbacks to save the best epoch; I was going to monitor error_rate, but maybe I could monitor validation loss instead.
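For reference, a minimal sketch of that idea with SaveModelCallback, monitoring validation loss (assuming the standard fastai2 callback API; 'valid_loss' is its default monitor):

from fastai.callback.tracker import SaveModelCallback

# Save the weights whenever validation loss improves and reload the best
# checkpoint ('model' is the default file name) at the end of training
learn.fit_one_cycle(5, 1e-4, cbs=SaveModelCallback(monitor='valid_loss'))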

Thanks a lot! I just found out about this.

Hi, does anyone know how to get the decoded predictions using learn.tta()?

I know that you can pass in with_decoded=True when using:
_, _, preds = learn.get_preds(dl=test_dl, with_decoded=True)  # third element is the decoded predictions

But learn.tta() does not have this with_decoded parameter?

You can't, basically. How far decoded do you want it? I can help with getting you there :slight_smile: (I may also look at this in fastinference in a moment too)

For now, not really :sweat_smile:. For simple decoding like taking the argmax of the predictions, I can simply write one extra line. I'm just curious why this option is not available for learn.tta?
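For the record, that extra line looks roughly like this (a sketch assuming a single-label classification head, so argmax over the class dimension):

preds, targs = learn.tta(dl=test_dl)  # probabilities averaged over the augmented passes
decoded = preds.argmax(dim=-1)        # class indices, roughly what with_decoded gives you from get_preds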

Because get_preds only has access to learn.loss_func's decodes, so it'll always run the softmax from your loss function, IIRC. (Presuming their thought process here.)

@muellerzr is there a way to get the decoded labels from a multi-label model in preds[0]? Right now the decoded labels are in preds[2][1], as far as I can see. I could just grab them from there, but preds[0] (just like for classification labels) would be easier :slight_smile:.

Not right now, however that is something that will definitely be a thing for multi-label. Let me see what I can do :slight_smile: (I noticed this before too, it's on my todo list :slight_smile:)
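In the meantime, a minimal sketch of grabbing the decoded labels yourself (assuming sigmoid-activated predictions from get_preds and the usual 0.5 threshold):

preds, targs = learn.get_preds(dl=test_dl)  # preds are already sigmoid-activated probabilities
thresh = 0.5
decoded = [[learn.dls.vocab[i] for i, p in enumerate(row) if p > thresh] for row in preds]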

I've added a minor change to the style-transfer notebook, to fix the slightly 'off' colours seen in the output generated from the second method (not via predict directly). Instead of showing the output activations directly as an image, they need to be 'decoded' first using the 'reverse-tfms' that are already in the dataloader. (Though arguably the colour changes can be regarded as a 'feature' and not a 'bug'! =P )

See my pull request here. It's just a couple of lines, pasted below.

# Decode the raw output activations back into image space using the
# dataloader's reverse transforms, then display the result
dec_res = dl.decode_batch(tuplify(res))[0][0]
dec_res.show();

There might be an easier / more direct way of decoding the activations for display, but I am not sure how.

Yijin

Hi, I'm trying to fine-tune a pretrained xresnet50 on the IMAGEWOOF dataset, but it cannot reach the same accuracy level as my baseline pretrained resnet50. Could someone give me some advice on this?

Here is the notebook that I used: XResNet

Quick question: is there a way to plot a CAM (class activation map) in fastai2, to see where the model is focusing?

Is this what you need @tcapelle?
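For reference, a minimal CAM sketch along the lines of the fastbook approach, assuming a standard cnn_learner (body at learn.model[0], linear head at the end of learn.model[1]) and a single test image img:

from fastai.vision.all import *

# Hook the body to grab its feature maps for one image, then weight them
# by the final linear layer to get a per-class activation map
x, = first(learn.dls.test_dl([img]))
with hook_output(learn.model[0]) as hook:
    with torch.no_grad():
        out = learn.model.eval()(x)
    acts = hook.stored[0]                      # (channels, h, w) feature maps
cam = torch.einsum('ck,kij->cij', learn.model[1][-1].weight, acts)
show_image(cam[out.argmax().item()].detach().cpu(), cmap='magma');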

1 Like

I was sure I had seen that!

How do I run EfficientNet on multiple GPUs? I was using this notebook from the course as an example of integrating EfficientNet into fastai2. To run on multiple GPUs I use this little trick, which works for standard fastai2 models, e.g. resnet18.

# https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html
import torch
import torch.nn as nn
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
  print("Let's use", torch.cuda.device_count(), "GPUs!")
  learn.model = nn.DataParallel(learn.model)
learn.model.to(device);

But with EfficientNet, cuDNN does not like it and throws an error. Has anybody experienced and solved this before?

When I was experimenting with multi-gpu training, I was not able to train EfficientNets using DataParallel with Fastai1 and ended up using DistributedDataParallel.

Fastai2 has support for both DataParallel and DistributedDataParallel via learn.to_parallel() and learn.to_distributed(), defined in 20a_distributed.ipynb and distributed.py. Minimal documentation here: https://dev.fast.ai/distributed

Due to the PyTorch implementation, distributed training did not work in Jupyter Notebooks when I tried it in Fastai1. I would not be surprised if that's still the case. More info in the Fastai1 docs: https://docs.fast.ai/distributed
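A minimal sketch of the DataParallel route (assuming the APIs mentioned above; check 20a_distributed.ipynb for the exact to_distributed signature):

from fastai.distributed import *

# Single-process DataParallel across all visible GPUs
learn.to_parallel()
learn.fit_one_cycle(5, 1e-4)

# DistributedDataParallel generally has to run as a script rather than in a
# notebook, launched with something like:
#   python -m fastai.launch train.py
# with learn.to_distributed(...) called inside the script, passing the GPU id
# that the launcher hands to each process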

I will check it out, thanks a lot! Any idea how to do TTA? I would like to adapt the code below, and I found only very minimal documentation here.

learn = load_learner('resnet50-u.pkl')
img = PILImage.create('data/images/087c949edb.jpg')
predictions = learn.predict(img)

I am looping over each image and would like to run tta instead of a simple predict.
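Something along these lines might work (a sketch: it batches all the images into one test DataLoader and calls tta once instead of looping over predict; the paths are the ones from the snippet above, and the argmax assumes single-label classification):

from fastai.vision.all import *

learn = load_learner('resnet50-u.pkl')
files = get_image_files('data/images')

# One test DataLoader over every image, then a single tta call
test_dl = learn.dls.test_dl(files)
preds, _ = learn.tta(dl=test_dl)   # predictions averaged over the augmented passes
decoded = preds.argmax(dim=-1)     # class indices
labels = [learn.dls.vocab[int(i)] for i in decoded]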