Lesson 9 discussion

(Jeremy Howard) #1

Updated python files will live here from now on: https://github.com/jph00/part2

I’ve updated last week’s PowerPoint presentation with this week’s additions.


Helpful links:

Lesson 9 wiki
(Sahil Singla) #2

Hi Jeremy, can you please share details of what will be covered in each lecture?

(Jeremy Howard) #3

As I mentioned last night, the lesson contents change a lot during the week based on how the previous week went, new research results, new library features, etc, so it’s not possible to give a complete lesson plan.

If you let me know exactly what actions you’re wanting to do and what information you need to do them, I’ll do my best to help.

(Sahil Singla) #4

I want to learn about structured data and time-series analysis using deep learning, along with best practices and tricks for image segmentation. I need both of these for my work at farmguide (it’s the startup where I’m working).
We need both of these to predict which crops in which areas of India are likely to fail in a given season.

(Jeremy Howard) #5

Sure - we’ll be covering those in later lessons, so you’ll just need a little patience! :slight_smile:

(Jeremy Howard) #6


(Kent) #7

Is the “imagenet_process-full-Copy1.ipynb” shown in lesson 9 lecture going to be uploaded to git?

(Jeremy Howard) #8

I’ll share that notebook next week, since we didn’t get far with it this week.

(kelvin) #9

I missed this post.

I’m interested in where you see RL (Reinforcement Learning) being practically applied to real world problems. It seems mainly focused on game playing in the literature.

(Jeremy Howard) #10

I haven’t found anything much yet - not sure that RL is quite ready for the course yet. We’ll see how time goes; we might have time for a quick intro, but it’ll be game playing unless I find a useful application for which data is available soon…

(brianorwhatever) #11

Is the video for Lesson 9 available anywhere yet? I watched the stream of lesson 9 last night, but now it appears all of the streams have been taken down.

(Jeremy Howard) #12

Yes they’re in the wiki post.

(Jeremy Howard) #13

(David Gutman) #14

I’m not sure my setup could handle it, but it might be interesting to try the word vector approach with ImageNet10K…


(Jeremy Howard) #15

I guess the idea is that it’s not really necessary - you can already use words that aren’t in the 1000 imagenet categories… The paper shows examples.

(arthurconner) #16

I am trying to get the neural style notebook to work for image resizing before moving on to the homework. The notebook on the web site only uses inp_shape once.

Am I right to assume that it is (72, 72, 3)?

I also know that you used a different notebook in class (since the res function called conv_block twice).

Was this originally a Theano model?

I am asking because I can’t get the deconv to work with a None parameter and run into this issue

this function:
def deconv_block(x, filters, size, shape, stride=(2,2)):
    x = Deconvolution2D(filters, size, size, subsample=stride, border_mode='same',
                        output_shape=(None,) + shape)(x)
    x = BatchNormalization(axis=1, mode=2)(x)
    return Activation('relu')(x)

which I have no idea is correct.

(Jeremy Howard) #17

Yes - I used:

inp_shape = arr_lr.shape[1:]

No, I haven’t changed that function at all. Do you have the latest keras installed directly from github? And the latest tensorflow? In the issue you reference they have an older tensorflow. In class someone mentioned that axis should be -1 for batchnorm BTW.

The only changes I made were to define inp_shape and to refactor res_block to call conv_block twice (adding an appropriate option to conv_block to make that work). It doesn’t change anything in practice - it’s just aesthetic.

If you don’t use ‘None’ for the batch size, you’ll need to put the actual batch size you’re using (I had both 8 and 16 in my code). But you shouldn’t need to do that - at least, I didn’t need to, so I assume it must just be a difference in versions of some library.
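To see what that inp_shape line is doing: slicing shape[1:] simply drops the batch axis, leaving a per-image input shape for the model. A quick check with a stand-in array (arr_lr here is a dummy batch, not the actual notebook data):

```python
import numpy as np

# Stand-in for the notebook's batch of low-resolution images:
# (batch_size, height, width, channels)
arr_lr = np.zeros((16, 72, 72, 3), dtype=np.float32)

# shape[1:] drops the batch dimension, leaving a per-image input shape
inp_shape = arr_lr.shape[1:]
print(inp_shape)  # (72, 72, 3)
```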

(Kishore P. V.) #18

Stumbled upon this:

Keras makes the VGG, ResNet and Inception architectures accessible through keras.applications.*

Here is the doc: https://keras.io/applications/

A simple feature map generation (as in the sample tutorial with comments):

from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
import numpy as np

img = image.load_img(test_image)  # loads image in PIL format
img.size  # (2835, 4289)
img_rsz = image.load_img(test_image, target_size=(224, 224))  # resized PIL image (height, width)
img_rsz.size  # (224, 224)
img_rsz2 = image.load_img(test_image, target_size=(300, 300))
x = image.img_to_array(img_rsz2)  # convert PIL image to numpy array
inp = preprocess_input(np.expand_dims(x, axis=0))  # input is a batch (None, x, y, z)
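As a side note on what preprocess_input does for VGG16: in its default (Caffe-style) mode it flips the channels from RGB to BGR and subtracts the ImageNet channel means. A minimal numpy sketch of that behaviour (this is an approximation for illustration, not the Keras source):

```python
import numpy as np

# ImageNet channel means (in BGR order) used by the original Caffe VGG models
VGG_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def vgg_preprocess(batch):
    """Approximate VGG16 preprocess_input: flip RGB -> BGR, subtract channel means."""
    batch = batch[..., ::-1].astype(np.float32)  # reverse the channel axis: RGB -> BGR
    return batch - VGG_MEAN                      # zero-center each channel

# A dummy mid-grey "image" batch: (batch, height, width, channels)
x = np.full((1, 224, 224, 3), 128.0, dtype=np.float32)
out = vgg_preprocess(x)
print(out[0, 0, 0])  # [24.061 11.221  4.32 ]
```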


PS: Is there a better way to post code here?

(Karthik Kannan) #19

Using GitHub gists is a better idea.

(David Gutman) #20

You can also use triple backticks with the word python immediately after the first set of ticks.


def hello():
    print('hello world')