Have you gotten better results on the PETS dataset?

Hey guys,

One of the tasks at the end of chapter 5 is to try to improve on the results obtained in the book. I’ve been trying to do just that without much success. I’ve tried different architectures, namely variants of densenet and xresnet, but the error_rate is about the same as in the chapter notebook.
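
For reference, here’s a rough sketch of the kind of thing I’ve been running (fastai v2; the exact architecture, epochs and learning rate varied between runs, so treat it as illustrative rather than my exact notebook):

```python
from fastai.vision.all import *

# Build the PETS DataLoaders roughly as in chapter 5
path = untar_data(URLs.PETS)
pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
    item_tfms=Resize(460),
    batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = pets.dataloaders(path/"images")

# Swapping resnet34/resnet50 for e.g. xresnet50 or densenet121 here
# gave me roughly the same error_rate as the book's notebook
learn = cnn_learner(dls, resnet50, metrics=error_rate)
learn.fine_tune(6, base_lr=1e-3)
```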

I’d be glad if you could share your approaches or ideas for improving the results. Thanks.

Did you also try TTA and the techniques described in chapter 7?
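
If you already have a trained learner from chapter 5, TTA is basically a one-liner. A minimal sketch, assuming your learner is called `learn`:

```python
# Test-time augmentation: average predictions over several augmented
# versions of each validation image, then re-score.
preds, targs = learn.tta()
print(error_rate(preds, targs))  # often a small improvement over plain validation
```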

I haven’t read chapter 7 yet; I guess I’ll have to jump ahead and read it. Thanks for the direction, @Albertotono.

I saw other people are trying progressive resizing as well: https://forums.fast.ai/t/resizing-images-for-progressive-img-resizing/69443. Let us know your findings; it would be great to see your results and tests (a blog post would be great). Thank you for doing this, @kofi.
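
The usual pattern looks something like this (just a sketch using the chapter 5 DataBlock; the sizes, batch sizes and epoch counts are placeholders to adjust for your setup):

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)

def get_dls(bs, size):
    # Same DataBlock as chapter 5, but with a configurable final image size
    dblock = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        splitter=RandomSplitter(seed=42),
        get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
        item_tfms=Resize(460),
        batch_tfms=aug_transforms(size=size, min_scale=0.75))
    return dblock.dataloaders(path/"images", bs=bs)

# Start training on small images...
learn = cnn_learner(get_dls(128, 128), resnet50, metrics=error_rate)
learn.fine_tune(4, base_lr=1e-3)

# ...then swap in larger images and keep training
learn.dls = get_dls(64, 224)
learn.fine_tune(4, base_lr=1e-3)
```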

Great, I’ll get right into it and let you know what I find.

Also, I would try FP16 if you can; it should make things a bit better, from what Jeremy said in lesson 6 around minute 32.
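
For example (just a sketch; `dls` is whatever DataLoaders you’re already using, and the epochs/learning rate are placeholders):

```python
from fastai.vision.all import *

# Mixed-precision (fp16) training: mostly a speed/memory win, which
# lets you fit a bigger batch size or architecture on the same GPU.
learn = cnn_learner(dls, resnet50, metrics=error_rate).to_fp16()
learn.fine_tune(6, base_lr=1e-3)
```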

Has anyone gotten any further with this? I tried various transformations, mixed-precision training, deeper architectures, progressive resizing, etc., and I maxed out at around 95.2% accuracy, which is only marginally better than the chapter 5 results.

I note that on Papers with Code this still seems like a good result. Ragdoll vs. Birman may be beyond both humanity and DL 🙂.