Here's a very nice article by Tim Dettmers that details and compares GPUs for Deep Learning.
Maybe you already know it, but it might help in this discussion.
And for my personal touch, I started playing with VGG16 on a GTX 750 Ti. It was not exactly fast, but the pre-trained model fits in that card's 2GB of memory and allows a batch size of 4 to 8 on Cats vs. Dogs Redux (shape 3, 224, 224).
I later switched to a GTX 1070 and training is roughly 5-6x faster; its 8GB of VRAM fits batches of up to ~180 images (still on Cats vs. Dogs Redux, same shape).
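For reference, here's roughly what that setup looks like: a minimal sketch assuming Keras 2 with the TensorFlow backend (so channels-last `(224, 224, 3)` rather than the Theano-style `(3, 224, 224)` above). The `data/train` path is hypothetical; your directory layout will differ.

```python
# Sketch: fine-tuning pre-trained VGG16 on a two-class dataset with a
# small batch size so it fits in ~2GB of VRAM. Assumes Keras 2 with the
# TensorFlow backend and images in "data/train/<class>/" folders
# (hypothetical paths).
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Flatten, Dense
from keras.models import Model

# Load the convolutional base with ImageNet weights and freeze it,
# so only the small classifier head is trained.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)  # cats vs. dogs: binary output
model = Model(base.input, out)
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])

# batch_size=4 keeps activations small enough for a 2GB card;
# on an 8GB card you can push it much higher.
gen = ImageDataGenerator(preprocessing_function=preprocess_input)
train = gen.flow_from_directory('data/train', target_size=(224, 224),
                                batch_size=4, class_mode='binary')
model.fit_generator(train, steps_per_epoch=train.samples // 4, epochs=1)
```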
I would say you can give it a try if you already have a 2GB card, but it will be limiting with VGG16 (one of the largest pre-trained models available in Keras, so playing with the other models later on should only be easier).
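If you want to see how the bundled models compare size-wise, here's a quick sketch (assuming Keras 2.x, where these all live in `keras.applications`):

```python
# Compare the parameter counts of the pre-trained models that ship
# with keras.applications. Assumes Keras 2.x with the TensorFlow
# backend (Xception is TF-only).
from keras.applications import VGG16, VGG19, ResNet50, InceptionV3, Xception

for ctor in (VGG16, VGG19, ResNet50, InceptionV3, Xception):
    model = ctor(weights=None)  # weights=None skips the ImageNet download
    print('%s: %.1fM parameters' % (ctor.__name__,
                                    model.count_params() / 1e6))
```

Keep in mind parameter count is only part of the story: at training time the VGGs' large intermediate activations are what eat your VRAM, which is why batch size matters so much on a 2GB card.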
Any 4-6GB card should let you go through Part 1 without problems and have fun on some Kaggle competitions.
If you can go for a 1070 or better, you shouldn't have to upgrade until you tackle really serious tasks.