Lesson 9 discussion

I read the source code of the new API. It seems that mode=0 has become the default option and cannot be changed.

No, I haven’t changed that function at all. Do you have the latest Keras installed directly from GitHub, and the latest TensorFlow? In the issue you reference they have an older TensorFlow. BTW, in class someone mentioned that axis should be -1 for batchnorm.
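
For reference, a minimal Keras sketch of what that looks like (the Conv2D layer around it is just my own illustration, assuming Keras 2 and TensorFlow’s ‘channels_last’ image format):

from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization

model = Sequential([
    Conv2D(32, (3, 3), input_shape=(224, 224, 3)),
    # axis=-1 normalizes over the channels axis, which comes last
    # in TensorFlow's 'channels_last' image format
    BatchNormalization(axis=-1),
])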

At approximately 18:18 in the lesson 9 section 2 video this is referred to as a Theano configuration, when I think it should be a TensorFlow configuration.

I may be missing something here: I can’t locate the classids.txt file mentioned in the notebook. Any idea where to get it? Thanks!

Sorry, just solved it :slight_smile:
I downloaded it from http://image-net.org/archive/words.txt and changed the code a bit to split lines at ‘\t’ instead of ‘ ’ to create the dict.
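
In case anyone needs it, here is a minimal sketch of that parsing, assuming words.txt has tab-separated lines like “n00001740<TAB>entity” (the variable names are my own):

# build a {synset_id: label} dict from ImageNet's words.txt,
# whose lines are tab-separated, e.g. "n00001740\tentity"
with open('words.txt') as f:
    classids = dict(line.strip().split('\t', 1) for line in f)

print(classids['n00001740'])  # -> 'entity'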

I ran into the same problem. Did you fix it?

Hi all, I’m trying to understand the perceptual losses paper and having a hard time understanding this graph

I have a few questions

  1. Why does the loss remain constant for the perceptual losses method (green line)?
  2. Why does the x-axis say L_BFGS iteration when they are using Adam?
  3. In the supplementary material, in the case of super-resolution, they didn’t use any padding for the convolutional layers in the res blocks because it causes artifacts. Because of that, the output after two conv layers has a different shape from the input to the res block. To avoid this, they center-cropped the input to match the size of the output of the two conv layers. I can understand cropping raw images, but at the res block stage they are features (whatever that means), right? What is the intuition behind cropping features? (See the sketch after this list.)
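
Here is a minimal Keras sketch of that kind of res block, with ‘valid’ padding and a center crop on the identity path so the residual sum lines up (the layer sizes are my own illustration, not the paper’s exact architecture):

from keras.layers import Input, Conv2D, Cropping2D, add
from keras.models import Model

def res_block(x, filters=64):
    # two 3x3 convs with 'valid' padding shrink each spatial dim by 4 in total
    y = Conv2D(filters, (3, 3), padding='valid', activation='relu')(x)
    y = Conv2D(filters, (3, 3), padding='valid')(y)
    # center-crop the identity path by 2 pixels per side so the shapes match
    x = Cropping2D(cropping=((2, 2), (2, 2)))(x)
    return add([x, y])

inp = Input(shape=(72, 72, 64))
out = res_block(inp)  # output shape is (68, 68, 64)
model = Model(inp, out)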

Hi, I can’t seem to download the word2vec file from Google Drive, and I can’t find it on http://files.fast.ai/part2/

Anyone figure out a way to get it?

Hi, where can I get “trn_resized_72_r.bc” and “trn_resized_288_r.bc” datasets from?

Found it here: http://files.fast.ai/data/

Awesome, thanks! It’s actually in here as well: Lesson 8 Discussion

Note: the complete collection of Part 2 video timelines is available in a single thread for keyword search.
Part 2: complete collection of video timelines

Lesson 9 video timeline:

00:00:30 Contribute to and use the Lesson 8 Wiki

00:02:00 Experiments on Image/Neural Style Transfer

00:05:45 Advanced tips from Keras on Neural Style Transfer

00:10:15 More tips to read research papers & “A Neural Algorithm of Artistic Style, Sep-2015”

00:23:00 From Style Transfer to Generative Models

00:32:50 “Perceptual Losses for Real-Time Style Transfer & Super-Resolution, Mar-2016”

00:39:30 Implementation notebook w/ re-use of ‘bcolz’ arrays from Part 1.

00:43:00 Digress: how “practical” are the tools learnt in Part 2 vs. Part 1?

00:52:10 Two approaches to up-sampling: Deconvolution & Resizing

01:09:30 TQDM library: add a progress meter to your loops

01:17:30 Fast Style Transfer w/ “Supplementary Material, Mar-2016”

01:27:45 Ugly artifacts like “checkerboard”: cause and fixes; Keras UpSampling2D

01:31:20 ImageNet Processing in parallel

01:33:15 DeVISE research paper

01:38:00 Digress: Tips on path setup for SSD vs. HD

01:42:00 words, vectors = zip(*w2v_list)

01:49:30 Resize images

01:52:10 Three ways to make an algorithm faster: memory locality, SIMD/vectorization, parallel processing

I found it at

wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"

from https://groups.google.com/forum/#!topic/word2vec-toolkit/z0Aw5powUco
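
If it helps, a minimal sketch for loading and querying that file with gensim (the query word is just an example):

from gensim.models import KeyedVectors

# load the pretrained GoogleNews vectors (binary word2vec format)
w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin.gz', binary=True)

print(w2v.most_similar('Paris', topn=5))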

When I use this file, I get far fewer matches than what is shown in the video lecture. Are you sure this is the right file?

How long is training expected to take for the super-resolution network? I’m using a GTX 1080 and it looks like training will take 30 minutes, which is longer than I would have expected:

[loss: 58328.617] 7% 1296/19439 [02:24<33:00, 9.16it/s]

This is much slower than when training previous CNNs. (I made sure to set the VGG layers to not trainable, so that isn’t the problem.) I also checked my GPU usage while training using the nvidia-smi command, and GPU utilization seems to have jumped since starting the training, suggesting that the GPU is indeed being used:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 382.05                 Driver Version: 382.05                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080   WDDM  | 0000:02:00.0      On |                  N/A |
| 37%   59C    P2   185W / 180W |   7169MiB /  8192MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+

Hi Matthew

When I run arr_lr = bcolz.open('trn_resized_72_r.bc')[:]
I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'trn_resized_72_r.bc\meta\sizes'

Do you know what the problem is?

Thanks,
Arash

When I run arr_lr = bcolz.open('trn_resized_72_r.bc')[:]
I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'trn_resized_72_r.bc\meta\sizes'

I don’t recognize this problem, so by default I’d redownload the data. If that didn’t work, I’d investigate this mysterious “\meta\sizes” thing by studying how bcolz arrays are stored on disk (via the bcolz documentation). Maybe the bcolz package was updated and became out of sync with these arrays, which may have been created with an older version of bcolz.
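
A minimal sketch of that kind of check (the filename is from the notebook; the checks are just my suggestions):

import bcolz

print(bcolz.__version__)  # compare against the version that wrote the arrays

# an on-disk bcolz carray is a directory that must contain meta/ and data/
arr = bcolz.open('trn_resized_72_r.bc')  # fails if meta/sizes is missing
print(arr.shape, arr.dtype)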

Dear Matthew

Thanks for the reply.
I downgraded bcolz to 1.0.0 but still have the same problem.
When I try to download the files again, it says I don’t have access. Can I get access, or is it just for people registered in the course?

Thanks,
Arash

@arashaomrani

You can find the data here: http://files.fast.ai/data/

Fast.ai moved their files from platform.ai to files.fast.ai.


To help you transcend me / generalize beyond me:

Fast.ai moves fast, and so information about them can go out of date fast. This applies to many posts on this forum. I didn’t have access to these files either with respect to the original link. I found the new link by going to course.fast.ai and pretending I was new. I soon ended up at the lesson 1 page and saw this:

“Important note: All files in the course are now available from files.fast.ai, rather than platform.ai, as shown in the videos.”

The following is the assumption that led me to this solution: “Fast.ai cares deeply about being inclusive. Restricting access to learning materials is the opposite of being inclusive. Fast.ai would never do that. There must be another link.”

Dear Matthew

Thanks, I downloaded the files and it works.

Thanks,
Arash

Why is AWS p2 slower than my MacBook Pro? I am running a small sample of images through the fast style transfer main algorithm. This takes 9 seconds per iteration on my MacBook Pro and 16 seconds per iteration on a p2.xlarge. I thought TensorFlow was supposed to take advantage of a GPU automatically, resulting in a significant speed-up. Am I missing some configuration setting?
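
One thing worth checking: a minimal sketch to confirm TensorFlow actually sees the GPU on the p2 instance (this uses TensorFlow 1.x’s device-listing API; if no GPU device shows up, you may have installed tensorflow instead of tensorflow-gpu):

from tensorflow.python.client import device_lib

# lists the CPU and GPU devices visible to TensorFlow; a working GPU
# setup shows an entry with device_type 'GPU'
for d in device_lib.list_local_devices():
    print(d.device_type, d.name)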