However, after training for half a day, the predicted result looks like random noise, and the content loss curve is almost flat.
The input image is:
The noised image looks like this:
However, when I inspect each RGB channel individually, the model does seem to be working: details and edges are somewhat recovered:
R channel:
The B and G channels look just like the R channel.
So my question is: what could the reason be? Or is this normal, and I should simply continue training?
If I should just continue training, how can I keep decreasing the loss? Currently the loss doesn't seem to be going down anymore. I'm using Adam and the learning rate is already at 1e-4.
Update: I managed to get the result right. Here's what I did. I trained for another half day with an even smaller learning rate, and the result was still noise. Then, thanks to @davecg, your answer enlightened me. The scale is correct: I'm using a 'tanh' activation followed by a Lambda layer, just like the teacher did, so the output range is 0-255. However, the data type was wrong. After I force-converted the result to int, the image displayed correctly.
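For reference, the tanh-plus-Lambda rescaling described above can be sketched in plain NumPy. This assumes the common `(x + 1) * 127.5` mapping; the exact Lambda used in the lesson may differ:

```python
import numpy as np

def rescale_tanh_output(t):
    # tanh produces values in [-1, 1]; map them linearly to [0, 255]
    return (np.asarray(t) + 1.0) * 127.5

# the endpoints of tanh map to the extremes of the pixel range
print(rescale_tanh_output(-1.0))  # 0.0
print(rescale_tanh_output(1.0))   # 255.0
```

Note that the result is still a float array on the 0-255 scale, which is exactly the situation that caused the display problem below.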
Here's the result after forcing to int:
(Due to the new-user limit of one image per reply, I'll post this later.)
And here's the result when left as float:
(Due to the new-user limit of one image per reply, I'll post this later.)
I'm not sure of the root cause, but it's probably something to do with the plotting library.
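If the plot was made with matplotlib, that would explain it: `imshow` treats float RGB arrays as values in [0, 1] and clips everything outside that range, while integer arrays are interpreted on the 0-255 scale. A small sketch of the effect, where `np.clip` mimics what `imshow` does to float input:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(64, 64, 3))  # float image on the 0-255 scale

# imshow assumes float RGB data lies in [0, 1] and clips it, so a
# 0-255 float image saturates almost everywhere and looks broken
clipped = np.clip(img, 0.0, 1.0)
print((clipped == 1.0).mean())  # nearly every pixel is saturated

# converting to an integer dtype makes imshow use the 0-255 interpretation
img_int = img.astype(np.uint8)
print(img_int.dtype)  # uint8
```

So `plt.imshow(pred.astype(np.uint8))` (or dividing the float output by 255 first) should both display correctly.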