Lesson 9 discussion

Updated python files will live here from now on: https://github.com/jph00/part2

I’ve updated last week’s PowerPoint presentation with this week’s additions.

Papers:

Helpful links:


Hi Jeremy, can you please share details of what will be covered in each lecture?

As I mentioned last night, the lesson contents change a lot during the week based on how the previous week went, new research results, new library features, etc, so it’s not possible to give a complete lesson plan.

If you let me know exactly what actions you’re wanting to do and what information you need to do them, I’ll do my best to help.

I want to learn about structured data and time series analysis using deep learning, along with best practices and tricks for image segmentation. I need both of these for my work here at farmguide (it’s the startup where I work).
We need both of these to predict which crops in which areas of India are likely to fail in a given season.


Sure - we’ll be covering those in later lessons, so you’ll just need a little patience! :slight_smile:

Bump.

Is the “imagenet_process-full-Copy1.ipynb” shown in lesson 9 lecture going to be uploaded to git?

I’ll share that notebook next week, since we didn’t get far with it this week.

I missed this post.

I’m interested in where you see RL (Reinforcement Learning) being practically applied to real world problems. It seems mainly focused on game playing in the literature.

I haven’t found anything much yet - not sure that RL is quite ready for the course yet. We’ll see how time goes; we might have time for a quick intro, but it’ll be game playing unless I find a useful application for which data is available soon…

Is the video for Lesson 9 available anywhere yet? I watched the stream of Lesson 9 last night, but now it appears all of the streams have been taken down.

Yes they’re in the wiki post.


I’m not sure my setup could handle it, but it might be interesting to try the word vector approach with ImageNet10K…

http://vision.stanford.edu/documents/DengBergLiFei-Fei_ECCV2010.pdf

I guess the idea is that it’s not really necessary - you can already use words that aren’t in the 1000 imagenet categories… The paper shows examples.

I am trying to get the neural style notebook to work for image resizing before going on to the homework. The one on the website only uses inp_shape once in the notebook.

Am I right to assume that it is (72, 72, 3)?

I also know that you used a different notebook in class ( since the res function called conv_block twice ).

Was this originally a Theano model?

I am asking because I can’t get the deconv to work with a None parameter and run into this issue

this function:

def deconv_block(x, filters, size, shape, stride=(2,2)):
    x = Deconvolution2D(filters, size, size, subsample=stride, border_mode='same',
                        output_shape=(None,)+shape)(x)
    x = BatchNormalization(axis=1, mode=2)(x)
    return Activation('relu')(x)

becomes
def deconv_block(x, filters, size, shape, stride=(2,2)):
    x = Deconvolution2D(filters, size, size, subsample=stride, border_mode='same',
                        output_shape=(32,)+shape)(x)
    x = BatchNormalization(axis=1, mode=2)(x)
    return Activation('relu')(x)

and I have no idea whether that is correct.

Yes - I used:

inp_shape = arr_lr.shape[1:]
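
To confirm the resulting tuple, here's a minimal numpy sketch (the array dimensions are assumptions based on the thread, not the actual notebook's data):

```python
import numpy as np

# hypothetical batch of low-res images: (n_images, height, width, channels)
arr_lr = np.zeros((16, 72, 72, 3))

# drop the batch dimension to get the per-image input shape
inp_shape = arr_lr.shape[1:]
print(inp_shape)  # (72, 72, 3)
```

So yes, if your low-res array is 72x72 with 3 channels, inp_shape comes out as (72, 72, 3).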

No, I haven’t changed that function at all. Do you have the latest Keras installed directly from GitHub? And the latest TensorFlow? In the issue you reference they have an older TensorFlow. BTW, in class someone mentioned that axis should be -1 for batchnorm.

The only changes I made were to define inp_shape and to refactor res_block to call conv_block twice (adding an appropriate option to conv_block to make that work). It doesn’t change anything in practice - it’s just aesthetic.

If you don’t use ‘None’ for the batch size, you’ll need to put the actual batch size you’re using (I had both 8 and 16 in my code). But you shouldn’t need to do that - at least, I didn’t need to, so I assume it must just be a difference in versions of some library.
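
As a plain-Python illustration of the two output_shape variants being discussed (the dimensions below are illustrative, and with Theano dim ordering the channels would come before height/width):

```python
# output_shape for Deconvolution2D is an ordinary tuple whose first
# element is the batch size. Shape values here are illustrative only.
shape = (144, 144, 64)         # per-image shape after the deconv
symbolic = (None,) + shape     # leave the batch size unspecified
fixed = (32,) + shape          # or hard-code the batch size you train with
print(symbolic)  # (None, 144, 144, 64)
print(fixed)     # (32, 144, 144, 64)
```

The only difference between the two versions of deconv_block is which of these tuples gets passed along.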

Stumbled upon this:

Keras has the VGG, ResNet, and Inception architectures accessible through keras.applications.*

Here is the doc: https://keras.io/applications/

A simple feature map generation (as in the sample tutorial with comments):

from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
import numpy as np

vgg16_model = VGG16(weights="imagenet", include_top=False)
test_image = "Mona_Lisa.jpg"
img = image.load_img(test_image)  # loads image in PIL format
img.size  # (2835, 4289)
img_rsz = image.load_img(test_image, target_size=(224, 224))  # loads image in PIL format (height, width)
img_rsz.size  # (224, 224)
img_rsz2 = image.load_img(test_image, target_size=(300, 300))
x = image.img_to_array(img_rsz2)  # convert PIL-format image to numpy array
inp = preprocess_input(np.expand_dims(x, axis=0))  # input is a batch (None, x, y, z)
pred = vgg16_model.predict(inp)
pred.shape

vgg16_model.summary()
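
The expand_dims step above is what turns a single height-width-channels image array into a batch of one, which is what predict expects; a standalone numpy illustration (the image size is just an example):

```python
import numpy as np

# a single image in (height, width, channels) layout
x = np.zeros((224, 224, 3), dtype=np.float32)

# models predict on batches, so add a leading batch axis
batch = np.expand_dims(x, axis=0)
print(batch.shape)  # (1, 224, 224, 3)
```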

PS: Is there a better way to post code here?


Using GitHub gists is a better idea.

You can also use triple backticks with the word python immediately after the first set of ticks.

E.g.

def hello():
    print('hello world')