Training speeds and bottlenecks

While optimizing the design of my “German traffic signs” CNN, I noticed that training speed didn’t necessarily suffer when the number of layers or parameters increased. Instead, it seems like there are always one or two bottlenecks in the pipeline that determine the training speed.

Although this probably isn’t news, it was new to me. I wondered if there are any papers on it, but couldn’t find any. Do you have any resources on that topic?

Thanks!

Edit: to clarify, I did find papers on speed improvements but not on the bottleneck hypothesis.

I’m interested in this as well.

The first place my mind goes is the hardware. Are you using the AWS instance or something else?

One of the things mentioned (I think in the Lesson 3 video?) is saving the work that goes into preparing the images – decoding the JPEGs, reordering the RGB channels, and so forth – so it doesn’t have to be redone every epoch.
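Something along these lines, just as a rough sketch (the directory, file names and 32×32 size are made up for illustration, not from any particular lesson notebook):

```python
# Rough sketch: decode and preprocess the images once, then cache the arrays
# so later epochs can skip JPEG decoding and channel reordering entirely.
# Paths, file names and image size here are placeholders.
import numpy as np
from pathlib import Path
from PIL import Image

def cache_images(image_dir, cache_file, size=(32, 32)):
    arrays = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB").resize(size)  # decode + fix channel order
        arrays.append(np.asarray(img, dtype=np.uint8))
    data = np.stack(arrays)
    np.save(cache_file, data)  # later runs just np.load() this, no decoding needed
    return data

# first run:  images = cache_images("signs/train", "train_images.npy")
# later runs: images = np.load("train_images.npy")
```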

You don’t mention how many layers or parameters you added, or whether you are getting better results. Perhaps the additional layers and parameters aren’t actually being used? Just saying that I’ve had experiences (in other coding domains) where I didn’t add things correctly and therefore they didn’t affect results/performance/etc. :slight_smile:

Thanks for your feedback. I tried it on my MacBook Air and on my workstation (macOS) at home. Although the speeds were different, the behaviour was the same. Right now I’m using pickled images with the ImageDataGenerator in Keras; it’s possible that this on its own is the bottleneck.
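One way to check that (just a sketch – the shapes, batch size and augmentation settings are placeholders, and I’m using random arrays instead of the real pickled images) is to time the generator on its own and compare it with the per-batch time during training:

```python
# Sketch: time how long the generator needs to yield batches on its own.
# If that is close to the per-batch time during training, the data pipeline
# (not the model) is the bottleneck. x/y are random stand-ins for the real
# pickled GTSRB images.
import time
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

x = np.random.rand(2000, 32, 32, 3).astype("float32")  # stand-in images
y = np.random.randint(0, 43, size=2000)                 # stand-in labels (43 sign classes)

datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1)
flow = datagen.flow(x, y, batch_size=64)

n_batches = 100
start = time.time()
for i, _ in enumerate(flow):
    if i + 1 >= n_batches:
        break
elapsed = time.time() - start
print("data pipeline alone: %.1f ms per batch" % (elapsed / n_batches * 1000))
```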

I’ll investigate further and report back once I’m home from vacation. I do think there’s room for a standardised measurement of NN training performance.
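For what it’s worth, a crude way to get comparable numbers across machines might be a per-batch timing callback along these lines (the BatchTimer name is my own; only the Callback hooks themselves are standard Keras):

```python
# Sketch of a per-batch timing callback; BatchTimer is a made-up name,
# only the on_batch_begin / on_batch_end hooks are part of Keras.
import time
import numpy as np
from keras.callbacks import Callback

class BatchTimer(Callback):
    """Records wall-clock time per batch so different runs can be compared."""
    def on_train_begin(self, logs=None):
        self.times = []
    def on_batch_begin(self, batch, logs=None):
        self._start = time.time()
    def on_batch_end(self, batch, logs=None):
        self.times.append(time.time() - self._start)

# usage: timer = BatchTimer(); model.fit(x, y, callbacks=[timer])
# then compare np.median(timer.times) across runs rather than total epoch time
```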