Part 2 Lesson 13 Wiki

@James_Ying,
It has been a while, so I don't remember exactly, but I think I changed the definition of actn_loss2:

I changed the line out = V(sf.features)

to out = V(sf.features, requires_grad=True)

My model did not converge. See if it does for you.

I found the reason.
Changing one line in core.py in fastai/dl2 will fix this error:

RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

from

to


@James_Ying
When I make the change in my local core.py,
I get an error at
optimizer = optim.LBFGS([opt_img_v], lr=0.5)

ValueError: can’t optimize a non-leaf Tensor

Instead, in the definition of actn_loss2 I changed the line
out = V(sf.features)
to out = torch.tensor(sf.features, requires_grad=True)

Now the model converges nicely.
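For anyone puzzled why this change matters, here is a minimal sketch (toy tensors standing in for the actual sf.features captured by the notebook's forward hook) of the failure mode and the fix — backward() fails when nothing in the graph requires grad, and re-wrapping the hooked features as a leaf tensor with requires_grad=True restores a differentiable graph:

```python
import torch

feats = torch.randn(4)            # stands in for sf.features captured by the hook
out = feats.detach()              # like V(sf.features): no grad tracking

try:
    (out ** 2).mean().backward()
except RuntimeError as e:
    print("without requires_grad:", type(e).__name__)   # RuntimeError

# The fix from this thread: re-wrap as a leaf tensor that requires grad
out = feats.clone().detach().requires_grad_(True)
(out ** 2).mean().backward()
print("grad shape:", tuple(out.grad.shape))             # (4,)
```

Note that recent PyTorch warns on torch.tensor(existing_tensor, ...) and suggests the clone().detach().requires_grad_(True) spelling used above; both produce a *leaf* tensor, which is also why optim.LBFGS accepts it — optimizers can only update leaf tensors, hence the "can't optimize a non-leaf Tensor" error mentioned below.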

I also changed the line in the definition of style_loss:
outs = [V(o.features) for o in sfs] to outs = [torch.tensor(o.features, requires_grad=True) for o in sfs]

and the same line in the definition of comb_loss:
outs = [V(o.features) for o in sfs] to outs = [torch.tensor(o.features, requires_grad=True) for o in sfs]

Thanks for that.
Running comb_loss now gives no error, but the earlier sections (1. Style transfer, 2. Forward hook, 3. Style match) still raise errors.

But only opt_img_v needs requires_grad=True;
all the weights in every layer should stay fixed.

There are three images:
0. the noise image
1. the original (content) image
2. the style image
We need to freeze the whole network and update only the noise image at every step, so that it comes to look similar to both the original image and the style image.
A normal network updates its weights, but here we update the noise image (image 0) instead.

outs is the set of activations you get by passing the input image through the network and capturing the output at each of the block_ends layers.
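The frozen-network / trainable-image setup described above can be sketched like this (a toy stand-in network rather than the lesson's VGG; names like net and opt_img are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for VGG truncated at the block ends
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
for p in net.parameters():
    p.requires_grad_(False)            # freeze every weight: the network is fixed

content = torch.rand(1, 3, 16, 16)     # the "original" (content) image
with torch.no_grad():
    target = net(content)              # fixed target activations

# The noise image is the only leaf tensor the optimizer updates
opt_img = torch.randn(1, 3, 16, 16, requires_grad=True)
optimizer = torch.optim.LBFGS([opt_img], lr=0.5)

def closure():
    optimizer.zero_grad()
    loss = F.mse_loss(net(opt_img), target)
    loss.backward()
    return loss

first = closure().item()
for _ in range(5):
    optimizer.step(closure)
print(F.mse_loss(net(opt_img), target).item() < first)   # loss went down
```

The same pattern scales up to the lesson's version: swap the toy net for VGG with hooks on the block ends, and the MSE for the content/style/combined losses.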

What does your image look like? Can you show me your result?

The images look good… similar to what Jeremy has in his notebook.

I am having the same issue.

I have downloaded the data to '…/fastai/courses/dl2/data0/datasets/cyclegan/horse2zebra'. Since that did not work, I also placed a copy of /data0/datasets/... in my home directory.

Can’t seem to figure out what’s causing the hang-up.

Update:

The problem was caused by the fact that I was using a Docker container that did not have access to my entire file system. Here is my solution.
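For anyone hitting the same thing: the usual remedy is a bind mount so the container can see the host's dataset directory. The image name and paths below are hypothetical placeholders, not the ones from my setup:

```shell
# Hypothetical image/path names: mount the host dataset directory into the container
docker run --gpus all \
    -v /home/you/fastai/courses/dl2/data:/workspace/data \
    -it my-fastai-image bash
```

(--gpus all assumes a recent Docker with the NVIDIA container toolkit; older setups used the nvidia-docker wrapper instead.)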

I would absolutely love to check out your notebook! Glad you’re working on the project, it’s another gem in the “Fast.ai Image Manipulation Canon”.

I am getting the error ‘cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:88’.
I am running the style transfer notebook, and I get this error on the line torch.cuda.set_device(3).
If I set it to torch.cuda.set_device(0), then I get the same error on the line targ_t = m_vgg(VV(img_tfm[None])).
I need help.


Just bumping this in case there’s a notebook out there for us to try!

Thanks!

Regards,
Theodore.

Yes, there is a blog post and a notebook now.


‘cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:88’
I have the same problem

Thanks Sylvain much appreciated!

Jeremy mentions multiple times that “it takes 5 seconds” and the image is generated.
Are those 5 seconds a figure of speech, or is that the real time it takes on his NVIDIA 1080 Ti?
I have 2 measurements here for the final style transfer optimizer (1000 iterations of comb_loss)

  • GTX 1050 w/ 4GB RAM: 96 seconds (Laptop GPU)
  • GTX 1060 w/ 6GB RAM: 55 seconds

So is the 1080 Ti actually 11 times faster than the 1060?
(I am aware the RAM size is irrelevant here, but I included it to show that these are the “real” cards; both GPUs also exist as cheaper versions with smaller RAM (and, I assume, downgraded performance).)


Comment out the line at the top of the notebook (or set it to 0 if you have just one GPU):

#torch.cuda.set_device(3)

Note, it’s the same issue in almost all dl2 notebooks.
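Alternatively, if you just want whatever GPU exists rather than a specific one (an assumption; multi-GPU users may want a fixed index), a small guard avoids the invalid-ordinal error on machines with fewer GPUs:

```python
import torch

# torch.cuda.set_device(3) fails with "invalid device ordinal" on machines
# with fewer than 4 GPUs; clamp the index to the devices that actually exist.
n_gpus = torch.cuda.device_count()               # 0 on a CPU-only machine
device_id = min(3, n_gpus - 1) if n_gpus > 0 else None
if device_id is not None:
    torch.cuda.set_device(device_id)
print(device_id)
```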


Looks like a missing cell in the notebook. Should be fixed by this PR.


It works for me after changing out = V(sf.features) to out = to_gpu(sf.features).

Did you ever figure this out?

I see the size is set to 288, and you seem to have a square image when the thing is done.

sz = 288
trn_tfms,val_tfms = tfms_from_model(vgg16, sz)

Is there a way to set the size to the original size of the content image (i.e. the image that is not the style image)?
Deepart.io does it somehow.
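One crude workaround, since tfms_from_model resizes to a square: optimize at sz × sz as in the notebook, then resize the result back to the content image's original dimensions. A sketch with PIL (the in-memory images stand in for real files; note the stretch can distort detail, so optimizing at the content image's own aspect ratio would be cleaner, since VGG's convolutional layers accept non-square inputs):

```python
from PIL import Image

content = Image.new("RGB", (640, 480))   # stands in for the content image
result = Image.new("RGB", (288, 288))    # stands in for the 288x288 optimized output

# Stretch the square result back to the content image's original size
restored = result.resize(content.size, Image.BILINEAR)
print(restored.size)  # (640, 480)
```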

Hello. I finished the course and thought I would have a go at some style transfer/matching.
I have a program which draws an image based on a set of points, e.g. an array of shape [20, 2]. I’ve been trying to alter the style transfer code to update these points and compare the newly generated image with the target image.
Is this possible using the VGG network in the code? Is it possible at all? I thought I could write a loss function which takes the set of points, generates the image, and compares that image with the target, but I cannot get it to make any alteration to the points.
Any and all advice would be appreciated.
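I can't speak for the fastai code specifically, but the usual culprit is that rasterizing points into pixels is not differentiable, so no gradient ever reaches the point coordinates. A hedged sketch of one common workaround: a "soft" renderer that splats each point as a Gaussian, making the image a differentiable function of the coordinates (toy sizes; soft_render is a made-up illustrative function, not from the course code):

```python
import torch

def soft_render(points, size=32, sigma=1.5):
    """Render [N, 2] points as Gaussian blobs; differentiable w.r.t. points."""
    ys = torch.arange(size, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(size, dtype=torch.float32).view(1, -1)
    img = torch.zeros(size, size)
    for p in points:
        img = img + torch.exp(-((xs - p[0]) ** 2 + (ys - p[1]) ** 2) / (2 * sigma ** 2))
    return img

points = (torch.rand(20, 2) * 31).requires_grad_()
loss = soft_render(points).mean()   # in practice: compare VGG features against a target
loss.backward()
print(points.grad.shape)            # gradients now reach the point coordinates
```

With a renderer like this in the loop, the existing style-transfer losses could in principle drive the points, since the whole chain points → image → VGG features → loss is differentiable.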

Not sure about the 1080 Ti. Mine is a GTX 1060, with similar performance to what you’ve mentioned. But it’s great to be able to do this work on a very cheap laptop. On my MacBook Pro with no GPU it’s really a joke, and even some of the low-end cloud options were slower than my GTX 1060 laptop.
