I read the source code of the new API. It seems that mode=0 has become the default option and cannot be changed.
No, I haven't changed that function at all. Do you have the latest Keras installed directly from GitHub? And the latest TensorFlow? In the issue you reference, they have an older TensorFlow. BTW, in class someone mentioned that axis should be -1 for batchnorm.
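For reference, here's a minimal Keras sketch of that setting, assuming TensorFlow's channels-last image ordering (the BatchNormalization call is real Keras; the variable name is just illustrative):

```python
from keras.layers import BatchNormalization

# With TensorFlow's channels-last ordering, the channels sit on the
# last axis, so batchnorm should normalize over axis=-1.
bn = BatchNormalization(axis=-1)
```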
At approximately 18:18 in the Lesson 9 Section 2 video, this is referred to as a Theano configuration, when I think it should be a TensorFlow configuration.
I may be missing something here, but I can't locate the classids.txt mentioned in the notebook. Any idea where to get it? Thanks!
Sorry, just solved it.
I downloaded it from http://image-net.org/archive/words.txt and changed the code a bit to split lines at '\t' instead of ' ' to create the dict.
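Roughly like this minimal sketch, assuming every line of words.txt looks like 'n00001740<TAB>entity' (the variable name is illustrative):

```python
# Build a {class_id: description} dict from words.txt, whose lines are
# tab-separated rather than space-separated.
classids = dict(line.strip().split('\t', 1)
                for line in open('words.txt') if line.strip())
```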
I ran into the same problem. Did you fix it?
Hi all, I'm trying to understand the perceptual losses paper and having a hard time understanding this graph.
I have a few questions:
- Why does the loss remain constant for the perceptual losses method (green line)?
- Why does the x-axis say L-BFGS iteration when they are using Adam?
- In the supplementary material, in the case of super-resolution, the convolutional layers in the res blocks don't use any padding, because padding causes artifacts. Because of that, the output after two conv layers has a different shape than the input to the res block. To avoid this, they center-crop the input to match the size of the output of the two conv layers. I can understand cropping raw images, but at the res-block stage these are features (whatever that means), right? What is the intuition behind cropping features? (See the sketch after this list.)
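For concreteness, here's a minimal sketch of that res block in current Keras notation, assuming 3x3 convolutions and an illustrative filter count; the paper's block also includes batchnorm, so this only shows the shape bookkeeping:

```python
from keras.layers import Conv2D, Cropping2D, add

def res_block(x, filters=128):
    # Two 3x3 convs with padding='valid' each trim 2 pixels from the
    # height and width, so y ends up 4 pixels smaller than x overall.
    y = Conv2D(filters, 3, padding='valid', activation='relu')(x)
    y = Conv2D(filters, 3, padding='valid')(y)
    # Center-crop the shortcut by 2 pixels per side so the two tensors
    # have matching shapes for the residual addition.
    shortcut = Cropping2D(cropping=((2, 2), (2, 2)))(x)
    return add([y, shortcut])
```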
Hi, I can't seem to download the word2vec file from Google Drive, and I can't find it on http://files.fast.ai/part2/
Anyone figure out a way to get it?
Hi, where can I get the 'trn_resized_72_r.bc' and 'trn_resized_288_r.bc' datasets?
Found it here: http://files.fast.ai/data/
Awesome, thanks! It's actually in here as well: Lesson 8 Discussion
Note: the complete collection of Part 2 video timelines is available in a single thread for keyword search.
Part 2: complete collection of video timelines
Lesson 9 video timeline:
00:00:30 Contribute to, and use Lesson 8 Wiki
00:02:00 Experiments on Image/Neural Style Transfer
00:05:45 Advanced tips from Keras on Neural Style Transfer
00:10:15 More tips to read research papers & 'A Neural Algorithm of Artistic Style, Sep-2015'
00:23:00 From Style Transfer to Generative Models
00:32:50 'Perceptual Losses for Real-Time Style Transfer & Super-Resolution, Mar-2016'
00:39:30 Implementation notebook w/ re-use of 'bcolz' arrays from Part 1
00:43:00 Digress: how 'practical' are the tools learnt in Part 2 vs. Part 1?
00:52:10 Two approaches to up-sampling: Deconvolution & Resizing
01:09:30 TQDM library: add a progress meter to your loops
01:17:30 Fast Style Transfer w/ 'Supplementary Material, Mar-2016'
01:27:45 Ugly artifacts like 'checkerboard': cause and fixes; Keras UpSampling2D
01:31:20 ImageNet Processing in parallel
01:33:15 DeVISE research paper
01:38:00 Digress: Tips on path setup for SSD vs. HD
01:42:00 words, vectors = zip(*w2v_list)
01:49:30 Resize images
01:52:10 Three ways to make an algorithm faster:
memory locality,
SIMD/vectorization,
parallel processing
I found it at
wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
from https://groups.google.com/forum/#!topic/word2vec-toolkit/z0Aw5powUco
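In case it helps, here's a minimal loading sketch, assuming you use gensim (load_word2vec_format is gensim's real API; the probe word is just an example):

```python
from gensim.models import KeyedVectors

# Load the pre-trained GoogleNews vectors (word2vec binary format).
w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin.gz', binary=True)

print(w2v.most_similar('dog', topn=3))  # quick sanity check
```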
When I use this file, I get far fewer matches than what is shown in the video lecture. Are you sure this is the right file?
How long is training expected to take for the super-resolution network? I'm using a GTX 1080, and it looks like training will take about 30 minutes, which is longer than I would have expected:
[loss: 58328.617] 7% 1296/19439 [02:24<33:00, 9.16it/s]
This is much slower than when training previous CNNs. (I made sure to set the VGG layers to not trainable, so that isn't the problem.) I also checked my GPU usage while training with the nvidia-smi command, and GPU utilization seems to have jumped since starting the training, suggesting that the GPU is indeed being used:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 382.05 Driver Version: 382.05 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 WDDM | 0000:02:00.0 On | N/A |
| 37% 59C P2 185W / 180W | 7169MiB / 8192MiB | 98% Default |
+-------------------------------+----------------------+----------------------+
Hi Matthew
When I run arr_lr = bcolz.open('trn_resized_72_r.bc')[:]
I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'trn_resized_72_r.bc\meta\sizes'
Do you know what the problem is?
Thanks,
Arash
When I run arr_lr = bcolz.open('trn_resized_72_r.bc')[:]
I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'trn_resized_72_r.bc\meta\sizes'
I don't know this problem, so by default I'd redownload the data. If that didn't work, I'd explore this mysterious '\meta\sizes' thing by studying the nature of bcolz arrays and how they're stored (via the bcolz documentation). Maybe the bcolz package updated and became out of sync with these bcolz arrays, which may have been created with an older version of bcolz.
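If you want to poke at it directly, here's a small diagnostic sketch, assuming the array directory sits in your working directory (the path matches the error message above):

```python
import os
import bcolz

root = 'trn_resized_72_r.bc'
# An on-disk bcolz carray is a directory; 'meta/sizes' is one of the
# metadata files bcolz expects to find inside it.
print(os.path.isdir(root))
print(os.path.exists(os.path.join(root, 'meta', 'sizes')))
# Compare against the bcolz version that originally wrote the array.
print(bcolz.__version__)
```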
Dear Matthew
Thanks for the reply.
I have downgraded bcolz to 1.0.0, but I still have the same problem.
When I try to download the files again, it says I don't have access. Can I get access, or is it just for people registered in the course?
Thanks,
Arash
You can find the data here: Index of /data
Fast.ai moved their files from platform.ai to files.fast.ai.
To help you transcend me / generalize beyond me:
Fast.ai moves fast, and so information about them can go out of date fast. This applies to many posts on this forum. I didn't have access to these files either via the original link. I found the new link by going to course.fast.ai and pretending I was new. I soon ended up at the lesson 1 page and saw this:
'Important note: All files in the course are now available from files.fast.ai, rather than platform.ai, as shown in the videos.'
The following is the assumption that led me to this solution: 'Fast.ai cares deeply about being inclusive. Restricting access to learning materials is the opposite of being inclusive. Fast.ai would never do that. There must be another link.'
Dear Matthew
Thanks, I have downloaded the files and it works.
Thanks,
Arash
Why is AWS p2 slower than my MacBook Pro? I am running a small sample of images through the fast style transfer main algorithm. This takes 9 seconds per iteration on my MacBook Pro and 16 seconds per iteration on a p2.xlarge. I thought TensorFlow was supposed to take advantage of a GPU automatically, resulting in a significant speed-up! Am I missing some configuration setting?
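One quick thing to check is whether TensorFlow can see the GPU at all. A minimal diagnostic sketch, using device_lib (a real TF 1.x utility):

```python
from tensorflow.python.client import device_lib

# List the devices TensorFlow can see. If no GPU device shows up, TF is
# silently running on the CPU (e.g. the plain 'tensorflow' package is
# installed instead of 'tensorflow-gpu').
print(device_lib.list_local_devices())
```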