My goal was to show people that deep learning is easier to get started with than they might expect. I believe I achieved this: everyone now knows about ImageNet, pre-trained models, and fine-tuning the FC layers or the last few convolutional layers.
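For reference, the whole recipe really does fit in a few lines. Here is a minimal sketch, assuming PyTorch/torchvision (the model choice and class count are illustrative, not anything specific from the talk):

```python
# Minimal fine-tuning sketch: load an ImageNet-pretrained model,
# freeze the convolutional backbone, replace the FC head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)

# Freeze the backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final FC layer with one sized for your own dataset
# (num_classes = 10 is a placeholder).
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)
```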
Another goal was to explain the basic components of a CNN: filters, stride, padding, gradient descent, model architectures, and what's going on in the model after layer 1. I think it's just too hard to grasp how all these elements fit together from a 45-minute talk. I'm sure the audience had heard of each element before, but only a few of them, the ones who studied something similar at university, had that "Oh, this is how it works!" moment.
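To show how compact these building blocks are in code, here is a toy example of one convolutional layer with explicit filters, stride, and padding, applied to a dummy image. It's a sketch assuming PyTorch; all the sizes are illustrative:

```python
# One conv layer: 16 filters of size 3x3, stride 2, padding 1.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16,
                 kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 3, 32, 32)   # one 32x32 RGB "image"
y = conv(x)
print(y.shape)  # torch.Size([1, 16, 16, 16]): stride 2 halves each side
```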
But I think it's OK for a first try.