It would be cool to create some sort of common speed benchmark for testing our configurations. To account for different GPU memory sizes we can vary the batch size. The latest Keras (2.0.9) makes multi-GPU training easy, so we can test things like two 1070s vs. one 1080 Ti. The CPU/HDD can also be stressed by using CPU-intensive augmentation strategies.
We can compare:
- pure GPU speed (time spent per epoch on a common model/dataset)
- GPU/CPU speed (time spent on heavily augmented data held in memory)
- GPU/CPU/HDD speed (time spent on heavily augmented images read from HDD)
- Multi-GPU vs. a more powerful single GPU.
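For the "heavily augmented" cases, the CPU side could be timed on its own so it can be compared against pure GPU throughput. A minimal sketch (the specific transforms, pad-and-random-crop plus horizontal flips, are my assumption, not a fixed spec):

```python
import time
import numpy as np

def augment_batch(batch, rng, pad=4):
    """CPU-heavy augmentation: pad-and-random-crop plus random horizontal flip.

    A stand-in for the heavily augmented workload; swap in whatever
    transforms we agree on for the benchmark.
    """
    n, h, w, c = batch.shape
    padded = np.pad(batch, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                    mode="reflect")
    out = np.empty_like(batch)
    for i in range(n):
        # Random crop offset within the padded image.
        y = rng.integers(0, 2 * pad + 1)
        x = rng.integers(0, 2 * pad + 1)
        crop = padded[i, y:y + h, x:x + w]
        if rng.random() < 0.5:
            crop = crop[:, ::-1]  # horizontal flip
        out[i] = crop
    return out

def time_augmentation(n_batches=10, batch_size=128, seed=0):
    """Return seconds per batch spent purely on CPU-side augmentation."""
    rng = np.random.default_rng(seed)
    # CIFAR-sized synthetic images, so no download is needed for the sketch.
    batch = rng.random((batch_size, 32, 32, 3), dtype=np.float32)
    times = []
    for _ in range(n_batches):
        t0 = time.perf_counter()
        augment_batch(batch, rng)
        times.append(time.perf_counter() - t0)
    return times
```

Comparing these numbers against per-epoch GPU time would show whether a given rig is CPU-bound during augmentation.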
We can pick a standard dataset like CIFAR-100, which ships with Keras.
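For the per-epoch timing itself, a small shared harness would keep the numbers comparable across machines. A minimal sketch, with a warmup pass to absorb one-off costs like GPU memory allocation and data caching (the Keras wiring in the comment is hypothetical; `model` is assumed to already be built and compiled):

```python
import time

def time_epochs(train_one_epoch, n_epochs=3, warmup=1):
    """Time n_epochs calls of train_one_epoch, skipping warmup calls.

    Warmup epochs absorb one-off costs (GPU memory allocation, dataset
    caching) so the reported numbers reflect steady-state speed.
    """
    for _ in range(warmup):
        train_one_epoch()
    times = []
    for _ in range(n_epochs):
        t0 = time.perf_counter()
        train_one_epoch()
        times.append(time.perf_counter() - t0)
    return times

# With Keras this could wrap model.fit on CIFAR-100, e.g.
# (hypothetical wiring, assuming `model` exists):
#
#   from keras.datasets import cifar100
#   (x_train, y_train), _ = cifar100.load_data()
#   run = lambda: model.fit(x_train, y_train, batch_size=128,
#                           epochs=1, verbose=0)
#   print(time_epochs(run))
```

The same harness would work for the multi-GPU variant: build the parallel model, vary `batch_size` to match GPU memory, and report seconds per epoch.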