@muellerzr Hi Zach, I have been following your videos and getting some very good info out of them! Thank you very much! In particular, I have been trying to follow the style transfer video, because Jeremy uses a similar approach in the Course V3 lesson on no-GAN super-resolution, with the same feature-preservation analogy you use in your video. I have an application where I’m trying to use UNets to remove some noise from CT data, and that approach worked great for me. However, I’m trying to follow what you did to make a fastai v2 version of the same notebook, but I’m not very experienced and I’m getting lost in the details. I can see where all the basic blocks are, but I can’t piece them together. Do you have any advice on how I can structure the FeatureLoss module for this application? Any advice is appreciated.
If you check the course notebooks in the fastai2 repo, every v3 notebook is converted, so the GAN-based feature preservation notebook should be there. If you can’t find it I’ll try looking later today.
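As for how a FeatureLoss module is typically structured: a minimal stand-in sketch of the idea in plain numpy. In the real notebook the intermediate activations come from hooks on a pretrained VGG and the weights are tuned; `fake_features` and the `wgts` values here are made-up placeholders just to show the shape of the computation.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def fake_features(x):
    # Stand-in for activations grabbed at three depths of a
    # pretrained network (e.g. via hooks on VGG layers).
    return [x, x ** 2, np.tanh(x)]

def feature_loss(pred, target, wgts=(20.0, 70.0, 10.0)):
    """Pixel loss plus a weighted sum of feature-space MSE terms."""
    f_pred, f_targ = fake_features(pred), fake_features(target)
    pixel = mse(pred, target)                      # base pixel loss
    feats = sum(w * mse(fp, ft)                    # weighted feature terms
                for w, fp, ft in zip(wgts, f_pred, f_targ))
    return pixel + feats

x = np.ones((2, 2))
print(feature_loss(x, x))          # identical inputs give zero loss
print(feature_loss(x, x * 0.5))    # mismatched inputs give positive loss
```

The key design point is that the loss compares predictions and targets in feature space, not just pixel space, which is what preserves perceptual detail.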
Hi there, I am trying to implement the error_rate metric in the Retinanet notebook and have tried:
learn = Learner(data, model, loss_func=crit, metrics = error_rate) and
learn = Learner(data, model, loss_func=crit, metrics = error_rate())
The learning rate finder does display an error rate, but when running learn.fit_one_cycle(5, 1e-4)
I receive the error: TypeError: error_rate() takes 2 positional arguments but 3 were given.
Does anybody know how to get this metric to work with the multi-object-detection RetinaNet notebook?
Thank you
It doesn’t. Object detection needs special metrics to work well, which is why we only have our loss function. I’d recommend asking in this thread about other metrics that could work.
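To see where the TypeError comes from, here is a simplified, hypothetical reconstruction (not fastai’s actual internals): a classification metric expects two arguments, (preds, targs), but an object-detection model emits two outputs (class scores and box coordinates), so the metric ends up being called with three.

```python
def error_rate(preds, targs):
    # Simplified classification metric: fraction of wrong predictions.
    # Signature expects exactly 2 positional arguments.
    wrong = sum(p != t for p, t in zip(preds, targs))
    return wrong / len(targs)

# Hypothetical object-detection output: class preds AND box preds.
class_preds = [1, 0, 1]
box_preds = [(0, 0, 1, 1)] * 3
targs = [1, 1, 1]

try:
    # Three arguments reach a two-argument metric -> TypeError,
    # mirroring "takes 2 positional arguments but 3 were given".
    error_rate(class_preds, box_preds, targs)
except TypeError as e:
    print(e)
```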
Thank you very much. I was just looking for a metric I could use to monitor error rate or accuracy, but I have been having trouble. I want to use callbacks to save the best epoch and was going to keep an eye on error_rate, but maybe I could do this with validation loss instead.
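The “save the best epoch” idea can be sketched in plain Python. This is a simplified stand-in for what fastai’s SaveModelCallback does when monitoring validation loss; the loss values below are made up for illustration.

```python
# Track the best (lowest) validation loss seen so far and remember
# which epoch produced it; a real callback would save weights there.
history = [0.9, 0.7, 0.8, 0.6, 0.65]   # hypothetical per-epoch valid losses

best = float("inf")
best_epoch = None
for epoch, loss in enumerate(history):
    if loss < best:
        best, best_epoch = loss, epoch  # would checkpoint the model here

print(best_epoch, best)  # → 3 0.6
```

Monitoring validation loss this way works out of the box for object detection, since the loss is always defined even when no accuracy-style metric is.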
For now, not really. For simple decoding, like taking the argmax of the predictions, I can just write one extra line. But I’m curious why this option isn’t available for learn.tta?
Because get_preds only has access to learn.loss_func’s decodes, so IIRC it’ll always run the softmax from your loss function. (I’m presuming their thought process here.)
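The “one extra line” of decoding mentioned above amounts to an argmax over the class dimension. A numpy stand-in is shown here; the real predictions from get_preds or tta would be torch tensors, but the call is the same shape.

```python
import numpy as np

# Raw per-class probabilities as returned by get_preds/tta
# (hypothetical values, 2 samples x 2 classes).
probs = np.array([[0.1, 0.9],
                  [0.8, 0.2]])

# The extra decoding line: pick the highest-scoring class per sample.
labels = probs.argmax(axis=-1)
print(labels)  # → [1 0]
```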
@muellerzr is there a way to get the decoded labels with a multi label model in preds[0]? right now the decoded labels are in preds[2][1] as far as I could see. I could just grab them from there but preds[0] (just like for classification labels) would be easier .
Not right now, but that should definitely be a thing for multi-label. Let me see what I can do (I noticed this before too; it’s on my todo list).
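In the meantime, decoding multi-label predictions by hand roughly amounts to thresholding the sigmoid outputs and mapping the surviving indices to vocab names. A hypothetical numpy sketch follows; the vocab, probabilities, and 0.5 threshold are all made up for illustration.

```python
import numpy as np

# Hypothetical label vocabulary and sigmoid outputs for one sample.
vocab = ["cat", "dog", "bird"]
probs = np.array([0.8, 0.3, 0.6])

# Multi-label decode: keep every label whose probability clears
# the threshold (unlike single-label argmax, several can survive).
decoded = [v for v, p in zip(vocab, probs) if p > 0.5]
print(decoded)  # → ['cat', 'bird']
```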
I’ve added a minor change to the style-transfer notebook, to fix the slightly ‘off’ colours seen in the output generated from the second method (not via predict directly). Instead of showing the output activations directly as an image, they need to be ‘decoded’ first using the ‘reverse-tfms’ that are already in the dataloader. (Though arguably the colour-changes seen can be regarded as a ‘feature’ and not a ‘bug’! =P )
See my pull request here. It’s just a couple of lines, pasted below.
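The actual change lives in the PR; purely to illustrate the general idea of decoding with the reverse transforms, here is a rough numpy sketch. It assumes an ImageNet-style Normalize was applied by the dataloader (this is not the PR code, just the underlying arithmetic).

```python
import numpy as np

# Standard ImageNet normalization stats, shaped for (C, H, W) broadcasting.
mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

def decode(act):
    # Reverse the Normalize transform, then clip into the valid
    # image range so the display doesn't show 'off' colours.
    return np.clip(act * std + mean, 0.0, 1.0)

act = np.zeros((3, 2, 2))   # raw network output in normalized space
img = decode(act)
print(img[:, 0, 0])         # zeros decode back to the per-channel means
```

In fastai v2 the same effect comes from running the raw activations through the dataloader’s decode pipeline rather than hand-coding the stats.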
Hi, I’m trying to fine-tune the pretrained xresnet50 on the IMAGEWOOF dataset, but it could not reach the same accuracy level as my baseline pretrained resnet50. Could someone give me some advice on this?