Regarding data augmentation: I was reading the https://petewarden.com/2017/10/29/how-do-cnns-deal-with-position-differences/ blog post retweeted by @jeremy on Twitter, and it made me wonder what difference data augmentation actually makes.
Data augmentation enables your system to identify previously unseen things in more scenarios than appear in the training data. I think Jeremy will discuss this in more detail in the second lesson.
The article is talking about data augmentation; it just doesn't use that term. From the blog post:

> Part of the secret is that training often includes adding artificial offsets to the inputs, so that the network has to learn to cope with these differences.
Adding artificial offsets is a form of data augmentation. Imagine your labeled dataset has only one photo of a particular shoe, positioned in the top-left corner of the image. To train a system to recognize that shoe in more scenarios, you can modify the image so the shoe appears in a few other positions, and then train on, say, five images of the shoe in different positions. Even though it's the same shoe, the extra variations help the training algorithm generalize. Without data augmentation, the system might fail to identify some images as shoes; with it, it will correctly identify more of them.
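To make the "artificial offsets" idea concrete, here is a minimal sketch of positional augmentation using plain NumPy. The `shift_image` helper and the toy 8×8 "image" are my own illustrative names, not anything from the blog post; real pipelines would use a library's transforms instead:

```python
import numpy as np

def shift_image(img, dx, dy):
    """Shift a HxW image by (dx, dy) pixels, padding the gap with zeros."""
    shifted = np.zeros_like(img)
    h, w = img.shape[:2]
    # Map the overlapping region of the source onto the destination.
    ys_src = slice(max(0, -dy), min(h, h - dy))
    ys_dst = slice(max(0, dy), min(h, h + dy))
    xs_src = slice(max(0, -dx), min(w, w - dx))
    xs_dst = slice(max(0, dx), min(w, w + dx))
    shifted[ys_dst, xs_dst] = img[ys_src, xs_src]
    return shifted

# Toy example: a bright 2x2 patch (our "shoe") in the top-left corner.
base = np.zeros((8, 8))
base[0:2, 0:2] = 1.0

# Generate shifted copies so the "shoe" appears in several positions --
# one original plus four offsets gives five training images.
offsets = [(0, 0), (3, 0), (0, 3), (3, 3), (5, 5)]
augmented = [shift_image(base, dx, dy) for dx, dy in offsets]
```

Each element of `augmented` contains the same patch, just at a different location, which is exactly what the network needs to see to stop caring about position.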