You should be able to get everything working (or at least the vast majority of it) directly in Windows 10 now (with the newest PyTorch). Everything you need to set up on Ubuntu is covered in the thread on setting up your own DL box (Personal DL box).
Basically:
Thanks @beacrett for your list of steps. But in fact, I would like to avoid dual-booting Windows/Linux.
I would like to run my PyTorch Jupyter notebooks on Ubuntu installed inside Windows 10, not from a true Linux partition.
I tried the conda install from peterjc123, but it did not work. I guess I'm missing steps in the installation of CUDA/cuDNN on Ubuntu under Windows.
What is your opinion on this package (peterjc123's)?
I don't believe the required NVIDIA and CUDA setup is supported under the Windows Ubuntu subsystem (pretty sure this is discussed in the DL box thread). As per one of Jeremy's recent posts, you should now be able to get everything running directly in Windows if you don't want to dual boot.
I do not quite understand the difference between "I don't believe the required NVIDIA and CUDA setup is supported under the Windows Ubuntu subsystem" and what seems to me the opposite: "you should now be able to get everything running directly in Windows". If you have time to explain more (or give a link to a post answering that), I would appreciate it very much.
To word my suggestion another way: if you want to run everything for this class in an Ubuntu environment (given that you have a Windows system), setting up a dual-boot machine or a virtual machine are options. Using the Windows Subsystem for Linux may be problematic (in the 'setting up a DL box' thread I linked above, the consensus seemed to be that this doesn't work for the class; it seems to revolve around issues getting the GPU configured, and this was the wall I hit).
As per Jeremy's recent post, however, you should now be able to get everything running in Windows 10 (without using the Ubuntu subsystem).
One thought I had for making the fastai lib more useful in Kaggle kernels was to upload pretrained model weights as Kaggle datasets, and then add functionality in fastai to allow loading weights from CSV files or SQLite DBs, so that we could use pretrained models in Kaggle. If anyone wants to give that a go and see if it works OK, I think it would be a cool project…
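A minimal sketch of the idea, assuming Kaggle's convention of mounting attached datasets read-only under `/kaggle/input/<dataset>/`; the dataset name, filename, and helper function here are hypothetical, just to show the lookup-before-download pattern:

```python
from pathlib import Path
from typing import Optional

# Kaggle mounts attached datasets read-only under /kaggle/input.
KAGGLE_INPUT = Path("/kaggle/input")

def find_local_weights(dataset: str, filename: str) -> Optional[Path]:
    """Return the path to pretrained weights if the Kaggle dataset
    is attached to this kernel, else None (caller falls back to the
    usual download)."""
    candidate = KAGGLE_INPUT / dataset / filename
    return candidate if candidate.is_file() else None

# Example (dataset name is made up):
# path = find_local_weights("resnet34-weights", "resnet34.pth")
# if path is not None:
#     ...load the state dict from `path` instead of downloading...
```

The library-side change would then be a small hook: before fetching weights over the network, check whether such a path exists and load from there instead.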
I am more into the "paper to code" concept, i.e. understanding how the advanced concepts from papers have been turned into a library. I remember in one lecture @jeremy said every efficient technique will be outdated by upcoming approaches, so we should be able to understand how to implement the techniques described in the papers. I started with this, and it is quite intense.
We usually store trained models, computed values, etc. in some folder (e.g. tmp). For that we need write access in uploaded Kaggle repos. I guess the problem will be that we can't change a Kaggle repository; we can only upload a repository and read from it (as far as I know). How would that work?
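One way around the read-only datasets, assuming Kaggle's usual layout where `/kaggle/working` is the kernel's writable output directory: redirect the tmp folder to a writable location instead of writing next to the read-only data. A small sketch (the helper name is made up):

```python
import os
import tempfile

def get_tmp_dir() -> str:
    """Pick a writable directory for model checkpoints and cached values.
    Prefers Kaggle's writable output dir when running in a kernel,
    otherwise falls back to the system temp directory."""
    for candidate in ("/kaggle/working", tempfile.gettempdir()):
        if os.path.isdir(candidate) and os.access(candidate, os.W_OK):
            return candidate
    raise RuntimeError("no writable directory found")
```

So the uploaded dataset/repo stays read-only, and everything the notebook computes goes to the writable side.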
Hello all! Sorry I’m a bit late to this thread. I’m also interested in joining a study group and participating in some projects. Has one been started already from this conversation? Thank you!
Transfer learning will be taught as one of the main drivers of the democratization of DL. In my lecture I will mention our fast.ai course as a good example of that (thanks @jeremy @rachel!).