I am trying to stay busy and prepare for the next round of the certification…I am interested in working with the dogscats-ensemble.ipynb.

Unless I’m totally blind (very possible!), it doesn’t appear to be up anywhere on the server.

Is this notebook available?


I think this is what you’re looking for:

Reading on the wiki here about dogscats-ensemble.ipynb, it appears that it existed at one point? Did it get folded into redux.ipynb?

I think I need to finish rewatching some videos to answer my own question – I remember an ensemble but I’m not sure where it happened. The redux script only has a single round of fine-tuning and a Dense layer.

Thanks for pointing this out. I’ve found the notebook and uploaded it. Apologies!

Thanks for putting that up! :slight_smile:

Seems to be referencing an object we haven’t created yet…

I think the issue is that we are trying to load up train_ll_feat.bc, which hasn’t been created and saved yet.

It’s interesting, but there’s only about one minute of class discussion on this one particular notebook. That discussion says that the success of the script is due to data augmentation and the inclusion of the batch normalization in the VGG script. But it seems there’s some interesting architecture here. Not sure why it’s called an ‘Ensemble’, because it appears to be a matter of training the dense layer lots of times…maybe not a true ensemble technique? Guess I know what I’ll be doing this weekend…digging deeper into the script! :laughing:

I pretty much never retrain conv layers for any model - there’s very little need to for anything that is a normal photo. So it’s still an ensemble. You might find the more detailed MNIST run-through in a following lesson better - it also shows ensembling.
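To make the “why is this still an ensemble” point concrete, here’s a minimal sketch (plain Python, not the notebook’s actual code): the conv features are computed once, and the variation between ensemble members comes entirely from the dense tops being trained from different random starts.

```python
import random

# Hedged sketch: conv features are fixed, several dense "tops" are
# trained from different random initialisations, and the members'
# predictions are averaged - which is all an ensemble needs.
conv_feat = [[0.1, 0.4], [0.9, 0.2], [0.5, 0.5]]   # stand-in for precomputed conv features

def train_dense_top(seed):
    # stand-in for fitting a dense layer on the fixed conv features;
    # each seed gives a differently-initialised (hence different) model
    rng = random.Random(seed)
    w = [rng.random(), rng.random()]
    return lambda row: sum(f * wi for f, wi in zip(row, w))

tops = [train_dense_top(s) for s in range(5)]       # the ensemble members
ensemble_preds = [sum(t(row) for t in tops) / len(tops)
                  for row in conv_feat]             # average the predictions
```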

I’ve tried to create conv_model myself but I can’t seem to create anything that has a .predict_generator as a method. Could you point me in the right direction as to what the conv_model is, and how I can create it?

I’ll definitely check out the MNIST notebook you mentioned…thanks!

Every keras model has a predict_generator() method. Any time I have something called ‘conv_model’, it means I’ve removed all the dense (and related) layers from a VGG network - in fact, everything after, and including, the Flatten() layer. Here are some other examples:✓&q=conv_model . The fish notebook is a good one to look at, since it uses a convenient function:

conv_layers, fc_layers = split_at(model, Convolution2D)
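For anyone without utils.py handy, split_at is roughly the following - a sketch inferred from how the notebooks use it, not a verbatim copy, with dummy layer classes so it runs standalone:

```python
# Sketch of split_at: split a model's layer list at the last layer of a
# given type, returning (layers up to and including it, layers after it).
def split_at(model, layer_type):
    # index of the last layer of the requested type
    idx = max(i for i, layer in enumerate(model.layers)
              if isinstance(layer, layer_type))
    return model.layers[:idx + 1], model.layers[idx + 1:]

# Dummy stand-ins for the Keras classes, just so the sketch is runnable:
class Convolution2D: pass
class Dense: pass

class Model:
    def __init__(self, layers):
        self.layers = layers

model = Model([Convolution2D(), Convolution2D(), Dense(), Dense()])
conv_layers, fc_layers = split_at(model, Convolution2D)
# conv_layers holds both Convolution2D layers; fc_layers the two Dense ones
```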

Thank you!! I’m on it!! :smiley:



We have looked through the code base and tried a variety of things, but can’t figure out where these features came from (train_ll_feat.bc,valid_ll_feat.bc ).

The code is
ll_feat = load_array(model_path + 'train_ll_feat.bc')
ll_val_feat = load_array(model_path + 'valid_ll_feat.bc')
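The .bc files are on-disk arrays written earlier by the course’s save_array helper (bcolz under the hood); if the cell that computes the features was never run, load_array has nothing to load. Here’s a sketch of the pattern, with pickle standing in for bcolz so it runs anywhere:

```python
import os
import pickle
import tempfile

# Stand-ins for the course's save_array/load_array helpers (the real
# ones use bcolz, which is what produces the .bc directories):
def save_array(fname, arr):
    with open(fname, 'wb') as f:
        pickle.dump(arr, f)

def load_array(fname):
    with open(fname, 'rb') as f:
        return pickle.load(f)

# In the notebook the features are created once, roughly like this
# (names assumed from the course's conventions, not copied verbatim):
#   ll_feat = conv_model.predict_generator(batches, batches.nb_sample)
#   save_array(model_path + 'train_ll_feat.bc', ll_feat)
# ...and only then can the load_array cells succeed.

model_path = tempfile.mkdtemp() + os.sep
save_array(model_path + 'train_ll_feat.bc', [[1, 2], [3, 4]])
ll_feat = load_array(model_path + 'train_ll_feat.bc')
```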

Without knowing how to create these .bc files, Group 16’s progress for the weekend is grounded. :fearful:

We are also hoping you could give us maybe a two-line description of what is happening with these functions:

We just basically don’t have the technical maturity to understand what is going on here.
We can’t experiment with it either, since we can’t run the code without the ll_feat and ll_val_feat.

We are especially flummoxed by the use of zip in both of these functions. How does this work?

Thanks again!

OK, I’ve fixed the setup section in . More importantly though, I’d like to help you guys get to the point where you can debug this on your own. All this notebook does is automate the steps you have to do every time you fully fine-tune a model:

  1. Create a model that retrains just the last layer
  2. Add this to a model containing all VGG layers except the last layer
  3. Fine-tune just the dense layers of this model (pre-computing the output of the convolutional layers)
  4. Add data augmentation, fine-tuning the dense layers without pre-computation.

So make sure that you can follow through these steps without the automation. To do so, simply move all the code in each function out of the functions, so each line of code is in a separate cell. (ctrl-shift-minus splits a cell, which makes this easy.) Then run each line of code one at a time, studying the input and output, and seeing what it does and why it works.

My hope is that by the time you get to the zip() line, and after running a few similar functions to see how it behaves, you’ll have answered your own question. If not, please do come back here and let us know where you’re up to.
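If you want a minimal warm-up before tackling the real thing, here’s the shape of the pattern with toy names (not the notebook’s data):

```python
# zip() pairs up elements of two sequences, position by position.
layers_a = ['conv1', 'conv2', 'dense1']
layers_b = ['c1', 'c2', 'd1']
pairs = list(zip(layers_a, layers_b))
# pairs == [('conv1', 'c1'), ('conv2', 'c2'), ('dense1', 'd1')]

# The recurring pattern in the course repo is walking two models'
# layer lists in lockstep, e.g. to copy weights between them:
#   for l1, l2 in zip(model1.layers, model2.layers):
#       l1.set_weights(l2.get_weights())
```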

A suggestion - don’t let yourself ever be stuck; note that each notebook is covering similar material in similar ways, so if you’re stuck on one notebook, try another one (eg in this case the statefarm, mnist, and fish notebooks cover similar territory). Also, try searching the new github repo to see how a function you’re interested in is used through the repo, eg:✓&q=zip


After some issues with this notebook (some parts have BatchNormalization and others don’t), I changed vgg16 to vgg16BN, which worked. That got me my highest score in the redux comp, but sadly I was well adrift of the top slot. Previously I had scores of 0.1, 0.09 and 0.08, where the top score was 0.03. You now see why there is no need for many significant figures. It is very interesting to understand how the leader arrived at this figure, which he reveals in his winner’s interview - check it out. I think it’s worth some discussion here, or in the dogs and cats redux forum.