No problem, Richard. I’ll keep you posted, should I make any further progress.
I’m fairly convinced I should be able to configure a workstation for deep learning no matter what libraries I use, since that’s a valuable skill to learn in its own right.
So you managed to get pytorch working properly on windows 10, which is substantial! Do you have other libraries installed and/or other versions of cuda?
Did I manage to run the lesson 1 notebooks? Yes, but with cpu-only pytorch, which leaves me dissatisfied.
I could switch to linux, but for a number of reasons I’d rather stick with windows 10, primarily since I have other things to do on that windows box, and constantly rebooting is not an option.
I am already in week 4 of Part 1 (v1) course. Shall I first finish the V1 and then come back to V2 or is it a better idea to restart the V2 course from the beginning?
b) Install pytorch for cuda. If you already have a keras/TF setup which works only with cuda 8.0, and want to avoid headaches, you may prefer to install the cuda 8.0 build:
>activate fastai
>conda install -c peterjc123 pytorch cuda80
However, keep us posted should you manage a successful installation of the cuda 9 version alongside TF and cuda 8.
c) Install fastai library
>pip install fastai
be patient, that will install a lot of stuff.
d) Now you are done, but you may want to install the ipython kernel in order to use lessons’ notebooks and do your own experiments. In my case:
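(The original post cuts off here, so the exact command is missing. A typical invocation, assuming your conda environment is named fastai as above, would be something like:)

>activate fastai
>pip install ipykernel
>python -m ipykernel install --user --name fastai --display-name "fastai"

After that, the "fastai" kernel should show up in Jupyter's kernel menu when you open the lesson notebooks.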
The fastai library simplifies things a lot compared to part 1 v1; for a non-programmer like me, v2 is actually easier than v1. Hope to become a programmer by the time v3 is published!
After v2 is launched on the course website, will the v1 notebooks and videos still be available?
Is it possible to create a video example explaining the core advice that time is best spent playing with the application side of the code rather than the theory and fundamentals? For example, I don’t understand the difference when @jeremy says he wants to work in bash and not in the shell, or something like that, but I should ignore this for now because it is not on my shortest path to getting a feel for the model.
Also, is there a way to show how many parameters the model is trying to fit, when we are trying to determine if it needs more data or improved optimizer etc?
Everything will remain, with just some shuffling of directories, like putting all the notebooks in a directory named part1v2 or something equivalent…
Don’t know…
If we have the model named m, then in pytorch we can call m.parameters(). This will display everything… (it’s a bit ugly, so I tried it on small networks to make sense out of it…)
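To get just the total count rather than the raw tensors, you can sum over the parameter tensors. A minimal sketch, using a tiny hypothetical model (a single Linear layer) stand-in for m:

```python
import torch

# hypothetical tiny model standing in for m; any nn.Module works the same way
m = torch.nn.Linear(3, 2)

# m.parameters() yields the parameter tensors (weights, biases, ...);
# numel() gives the element count of each tensor
total = sum(p.numel() for p in m.parameters())
print(total)  # 3*2 weights + 2 biases = 8
```

On a real network the same one-liner tells you how many parameters the optimizer is trying to fit, which helps when judging whether you need more data.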