Lesson 8 in-class

(Alex Haro) #81

@mo.shakirma I had to install these to get the notebooks running

pip install xgboost
pip install gensim
pip install keras-tqdm

(Alex Izvorski) #82

@mo.shakirma pip install xgboost

(Samuel Ekpe) #83

I shared a link on the part 1 forum. anyway here it is: https://github.com/dunovank/jupyter-themes

(Igor Barinov) #84

for me it was:

pip install matplotlib
pip install pandas
pip install xgboost
pip install bcolz
pip install gensim
pip install nltk
pip install keras_tqdm

from utils2 import * pulled in all of those dependencies

(Matthew Kleinsmith) #85

I found out about Part I of this course from Import AI.

(Hamel Husain) #86

If you import the VGG model that is built into keras (keras.applications), do you still have to re-order the channels, etc.?
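For context (not an answer from the lecture): keras.applications ships a per-model preprocess_input helper, and for VGG16 it does the RGB → BGR reorder and ImageNet mean subtraction itself. A plain NumPy sketch of the equivalent transform (the means below are the standard BGR ImageNet means used by VGG):

```python
import numpy as np

# Standard VGG/caffe-style preprocessing: reorder RGB -> BGR, then
# subtract the per-channel ImageNet means (given here in BGR order).
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def vgg_preprocess(rgb):
    """rgb: float array of shape (H, W, 3), values 0-255, RGB channel order."""
    bgr = rgb[..., ::-1]          # reverse the channel axis: RGB -> BGR
    return bgr - VGG_MEAN_BGR     # subtract channel means

img = np.zeros((2, 2, 3), dtype=np.float32)   # a dummy all-black image
out = vgg_preprocess(img)
# each pixel becomes the negated means: [-103.939, -116.779, -123.68]
```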

(kelvin) #87


(Rachel Thomas) #88

I like the Import AI and Wild ML newsletters: http://www.wildml.com/newsletter/

(nima) #89

Shouldn’t we use something like ResNet instead of VGG (with avg pooling), since the residual blocks carry more context?

(arthurconner) #90

Should/do we put in batch normalization like we did in lesson 5?

(Karthik Kannan) #91

Will the pre-trained weights change if we’re using average pooling instead of max pooling?

(Xinxin) #92

how about vgg16_bn_avg?

Does the MaxPool --> AvgPool change still preserve the “pretrained” weights?

(Thundering Typhoons) #94

Do you have to retrain VGG on imagenet when you change max pooling to avg pooling?
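On the pooling questions above, a general observation (not from the lecture): pooling layers have no trainable parameters, so swapping max for average pooling leaves the pretrained conv weights intact, though the downstream activations it produces will differ. A minimal NumPy illustration of both poolings as fixed, weight-free functions:

```python
import numpy as np

# Pooling is a fixed function of its input -- no weights to retrain --
# so max vs average pooling is a drop-in swap over the same conv weights.
def pool2x2(x, mode="max"):
    """2x2 pooling with stride 2 on a (H, W) array; H and W must be even."""
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2)   # split into 2x2 tiles
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 3.]])
pool2x2(x, "max")   # -> [[4., 8.], [4., 3.]]
pool2x2(x, "avg")   # -> [[2.5, 6.5], [1., 1.5]]
```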

(sai kiran) #95

Is it advisable/recommended to learn Tensorflow?

(Rachel Thomas) #96

@sakiran Yes

(Brendan Fortuner) #97

Can Jeremy scroll down a bit on the screen when he goes over code snippets, so they are centered on the board?

(nima) #98

Can you walk us through [loss] + grads? Why is there a plus sign there, and what does it do?

(kelvin) #99

It’s Python syntax for concatenating lists. (Make loss into a one-element list, then concatenate it with grads.)
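That point in plain Python, with stand-in values instead of Keras tensors:

```python
# `+` on two lists concatenates them. Wrapping the scalar loss in [loss]
# turns it into a one-element list so it can be joined with grads.
loss = 0.5              # stand-in for the loss tensor
grads = ["g0", "g1"]    # stand-in for the gradient tensors
combined = [loss] + grads
combined  # -> [0.5, 'g0', 'g1']
```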

(nima) #100

So it’s just [loss, grads] flattened?