Lesson 9 discussion

Have you ever experienced an issue like this before?

Run the following command in a tmux pane:

watch -n 1 nvidia-smi

then run the training and let us know if you see the GPU Util. % increase.

Lastly, what versions of Keras and TensorFlow are you using? Is this the AWS instance set up using the fast.ai script? Or did you install TensorFlow yourself? If the latter, you may have installed regular tensorflow instead of tensorflow-gpu.
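One quick way to check is to see whether the tensorflow package is even importable from the interpreter you run the notebook with (a sketch; the `has_pkg` helper is hypothetical — on TF 1.x you can also import tensorflow and call `tf.test.is_gpu_available()` to confirm the GPU build is active):

```python
# Check which packages the current Python interpreter can actually import.
# (Hypothetical helper; on TF 1.x, tf.test.is_gpu_available() is another check.)
import importlib.util

def has_pkg(name):
    """Return True if `name` is importable by the current interpreter."""
    return importlib.util.find_spec(name) is not None

print("tensorflow importable:", has_pkg("tensorflow"))
```

If this prints False inside the notebook but pip says tensorflow-gpu is installed, the notebook kernel is likely using a different Python environment.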

Hi, thanks for replying to my question. I am running the Jupyter notebook in a Python 3.6 environment.
I tried the watch command and confirmed that the GPU Util % rose to 100% when I started training.
I am using Keras version 1.2.2 and TensorFlow version 1.1.0 (from tf.__version__).
I tried the Logging Device Placement script on https://www.tensorflow.org/tutorials/using_gpu and confirmed that all operations were running on the GPU.
So as far as I can see the code is running on the GPU, just painfully slowly. To add to my worries, I am now getting a ResourceExhaustedError when exactly the same code ran without producing this error last week. I can work around this by reducing the batch size from 18 to 8.
I am very confused. My p2.xlarge instance gives me slower run times and a smaller batch-size limit than my MacBook. The only positive is that I got some very good results after running my MacBook for 18 hours. Yet I still have to question what I am paying for on AWS. Should I try setting up the whole p2.xlarge instance again from scratch?
Any other suggestions gratefully accepted.
Yours cheerfully, Gavin

I’ve got to admit that I’m stumped. You could certainly try redoing your AWS, but man what a pain in the ass…

Hmmm… It’s obviously using the GPU like you said. Just doesn’t seem to be doing anything, which… just doesn’t make any sense… Man, sorry I can’t be of more help.

Oh well, thanks again. I’ll keep trying to crack it. And guess what? My AWS bill just came through!

Dear all

Do you know how to implement the fast style transfer model in Xcode using Core ML?
I have converted the model to a Core ML model, but I cannot get its prediction output and display it in Xcode.

Any comments?

Cheers,
Arash

Hi all,

I want to transform some popular songs into another style based on some other songs.

How do I deal with the files (maybe in mp3 format) and transform them into NumPy arrays? Is using Python alone enough, or do I need some other tools to do the conversion?
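One common route (a sketch, assuming the external ffmpeg tool is on your PATH; the file names are hypothetical): decode the mp3 to WAV first, then read the WAV into a NumPy array with the standard library. Libraries such as librosa or pydub can also decode mp3 directly if you prefer a pure-Python workflow.

```python
# Sketch: mp3 -> WAV via the external ffmpeg tool, then WAV -> NumPy array.
import subprocess
import wave

import numpy as np

def mp3_to_wav(mp3_path, wav_path):
    # Requires ffmpeg installed; -y overwrites any existing output file.
    subprocess.run(["ffmpeg", "-y", "-i", mp3_path, wav_path], check=True)

def wav_to_array(wav_path):
    """Read a 16-bit PCM WAV file into an int16 array of shape (frames, channels)."""
    with wave.open(wav_path, "rb") as w:
        frames = w.readframes(w.getnframes())
        data = np.frombuffer(frames, dtype=np.int16)
        return data.reshape(-1, w.getnchannels())
```

From there you can normalise the int16 samples to floats and feed them to whatever model you build.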

Hello, can I check if you resolved this issue, if it is indeed one? I’m facing the same with a GTX 1070 ti taking about 30mins as well. GPU is being used according to the nvidia-smi command.

Unfortunately I never resolved the issue, so I’m not sure what the problem was :frowning: I think I just skipped it and moved on.

Thanks, Jonathan!

Can someone explain to me why the loss function in super resolution is so complex?
If we get a “high resolution” picture as output, why can’t we compare it to the true high-resolution picture directly, i.e. compute a loss between the two pixel by pixel?
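For concreteness, the simple pixel-by-pixel comparison described above would be a plain mean squared error over the two images:

```python
# The pixel-by-pixel loss described above, written out in plain NumPy.
import numpy as np

def pixel_mse(y_true, y_pred):
    """Mean squared error over every pixel and channel of two same-shaped images."""
    diff = y_true.astype(np.float64) - y_pred.astype(np.float64)
    return float(np.mean(diff ** 2))
```

The lesson's loss is more complex because it instead compares VGG feature maps of the two images (a perceptual loss), which tends to penalise blurriness less forgivingly than raw pixel MSE does.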

Hi there,

I tried super resolution based on Jeremy’s code, but my generated pictures have some artifacts (maybe not checkerboard, but something like that). I can’t figure out how to eliminate this. It would be nice if you guys could help me :slight_smile:

I used a different training setup, but the architecture seems to be the same. Here is my code:

from keras.layers import Activation, BatchNormalization, Conv2D, Input, Lambda, UpSampling2D, add
from keras.models import Model


def conv_block(x, filters=64, size=(3, 3), strides=(1, 1), padding='same', act=True):
    x = Conv2D(filters, kernel_size=size, strides=strides, padding=padding)(x)
    x = BatchNormalization()(x)
    return Activation('relu')(x) if act else x


def res_block(x_in, filters=64, size=(3, 3)):
    x = conv_block(x_in, filters=filters, size=size)
    x = conv_block(x, filters=filters, size=size, act=False)
    return add([x, x_in])  # residual connection


def up_block(x, filters=64, size=(3, 3)):
    x = UpSampling2D()(x)
    return conv_block(x, filters=filters, size=size)

low_res_input = Input((None, None, 3))  # lr_shape + (3,)
x = conv_block(low_res_input, filters=64, size=(9, 9))
for i in range(4):
    x = res_block(x, filters=64, size=(3, 3))
x = up_block(x, filters=64, size=(3, 3))
x = up_block(x, filters=64, size=(3, 3))
predicted_output = Conv2D(3, kernel_size=(9, 9), strides=(1, 1), activation='relu', padding='same')(x)

I use this custom objective; I think this version is more understandable:

import numpy as np
from keras import backend as K
from keras.applications.vgg16 import VGG16

vgg_input = Input(hr_shape + (3,))
# Subtract the ImageNet channel means and flip RGB -> BGR before feeding VGG16
imagenet_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)
vgg = VGG16(include_top=False, input_tensor=Lambda(lambda x: (x - imagenet_mean)[:, :, :, ::-1])(vgg_input))
for l in vgg.layers:
    l.trainable = False  # VGG is used as a fixed feature extractor
vgg_featurizer = Model(vgg_input, vgg.get_layer('block2_conv1').output)

def custom_objective(y_true, y_pred):
    # Perceptual loss: RMSE between the VGG feature maps of target and prediction
    diff = vgg_featurizer(y_true) - vgg_featurizer(y_pred)
    dims = list(range(1, K.ndim(diff)))
    return K.expand_dims(K.sqrt(K.mean(diff ** 2, dims)), 0)

Here are some generated examples:
https://imgur.com/PNUXb3d
https://imgur.com/0PSILDl

Please help me; I’ve tried a lot of things unsuccessfully and I’m getting a bit frustrated :slight_smile:

Hi,

I came across the data for this lesson here: http://files.fast.ai/data/

Can someone please confirm whether the files Jeremy uses in his notebook resize the images so that the high-res and low-res images have the same shape?

e.g. both (x, x, 3). At the moment low res is (72, 72, 3) and high res is (288, 288, 3).

I’m getting a shape mismatch error, and I noticed that Jeremy’s file in the lesson has an extra _r in the file name (I guessed r means resized?).
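A quick sanity check on those shapes: the lesson's network upsamples with two UpSampling2D blocks, each doubling height and width, so a (72, 72, 3) input maps to a (288, 288, 3) output — the low-res and high-res images are not supposed to be the same shape. A nearest-neighbour upsample with the same effect, in plain NumPy:

```python
# Nearest-neighbour upsample: repeat each pixel `factor` times along
# height and width, so a 72x72 image at factor=4 becomes 288x288.
import numpy as np

def upsample_nn(img, factor=2):
    """Repeat each pixel `factor` times along height and width."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```

If your loss compares the model output against the high-res target, both should come out at (288, 288, 3); a mismatch usually means the wrong (unresized) image files were fed in.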

Old remark, but for anyone using this words.txt file: you’ll see only about 20k matches where Jeremy has about 50k. Here is the code I wrote to get up to 51k matches.

for k, v in classids.items():
    index = v.find(', ')
    if index != -1:
        v = v[:index]  # keep only the first synonym in the list
    classids[k] = v.replace(' ', '_')  # make multi-word names underscore-separated
classids['n02099601'], classids['n15295045']  # use this to check these two have been updated

You’ll see (‘golden_retriever’, ‘flower’) instead of (‘golden retriever’, ‘flower, prime, peak, heyday, bloom, blossom, efflorescence, flush’)