Two things:

- `create_cnn` is now deprecated. `learn = create_cnn(data, models.resnet34, metrics=error_rate)` should now be `learn = cnn_learner(data, models.resnet34, metrics=error_rate)`.
- `doc(interp.plot_top_losses)` throws an error.
In the function `open_image` we call `pil2tensor`, which has the following lines of code:

```python
a = np.transpose(a, (1, 0, 2))
a = np.transpose(a, (2, 1, 0))
```

I replaced them with:

```python
a = np.transpose(a, (2, 0, 1))
```

and, not surprisingly, get the same result. Any idea what is going on here? Am I missing something? Is this done for optimization when converting the dimensions?
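For what it's worth, the two versions are provably equivalent: composing the permutations `(1, 0, 2)` and `(2, 1, 0)` gives the single permutation `(2, 0, 1)`, which maps the usual `(H, W, C)` image layout to the `(C, H, W)` layout tensors expect. A quick sketch to check this (standalone numpy, not fastai code):

```python
import numpy as np

# A dummy "image" with shape (H, W, C) = (2, 3, 4)
a = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# The two transposes applied in sequence, as in pil2tensor
two_step = np.transpose(np.transpose(a, (1, 0, 2)), (2, 1, 0))

# The single composed permutation: for p1 = (1, 0, 2) and p2 = (2, 1, 0),
# the composition is (p1[2], p1[1], p1[0]) = (2, 0, 1)
one_step = np.transpose(a, (2, 0, 1))

assert np.array_equal(two_step, one_step)
print(one_step.shape)  # (4, 2, 3), i.e. (C, H, W)
```

Note that `np.transpose` only returns a view (it rearranges strides, it doesn't copy data), so neither version should be meaningfully faster; the difference is likely just readability or history rather than optimization.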
I think the key here is that it's *stochastic* GD, so there's an element of chance that makes results differ from run to run. But I'm new here, so I can't be 100% sure.
I use a GNOME extension called System Monitor that sits in the top bar. "Show % GPU" is a preference you can turn on.