I created a post with a link to some quick and dirty benchmarks, training CIFAR-10 & CIFAR-100 on two GPUs: the cheapest current-gen RTX (the 2060) and the last-gen "King of the Hill" GTX (the 1080 Ti).
If you can use FP16 to take advantage of the RTX Tensor Cores, the 2060 will most likely be faster than the 1080 Ti, despite costing half the price and having half the VRAM.
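In fastai, enabling mixed precision is a one-line change on the Learner. Here is a minimal sketch, assuming fastai v1 and its built-in CIFAR-10 download; the resnet18, batch-size, and image-size choices are illustrative, not the benchmark's actual settings:

```python
from fastai.vision import *

# CIFAR-10, downloaded via fastai's built-in dataset URLs
path = untar_data(URLs.CIFAR)

# The extracted dataset has 'train' and 'test' folders
data = (ImageDataBunch
        .from_folder(path, valid='test', bs=256, size=32)
        .normalize(cifar_stats))

# to_fp16() switches the Learner to mixed-precision training,
# which is what lets the RTX Tensor Cores do the heavy lifting
learn = cnn_learner(data, models.resnet18, metrics=accuracy).to_fp16()

learn.fit_one_cycle(1)  # time this run on each GPU to compare
```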
Now, whether you should get two RTX 2060s over a single RTX 2080 for the same price is open for discussion, and beyond my pay grade.
Comparing the RTX 2060 vs the GTX 1080 Ti, using fastai for Computer Vision