Unofficial Setup thread (Local, AWS)

On a day-to-day basis, is it enough to run:

conda update -c fastai fastai 
conda update -c fastai torchvision-nightly

to update one's local install? If so, perhaps this should be added to the install instructions.

Great! Thanks. I’ll try it out tonight.
BTW, will v0.7 still be usable after installing v1.0?
(I noticed that @init_27 says he has both 0.7 and 1.0 working).

Hello @init_27, when you refer to a 2nd environment, do you mean a 2nd virtual machine?

You need to set up different conda environments for the two versions; then it is no problem.
You cannot have two versions of fastai in the same env at the same time. You switch between environments using conda activate fastai and conda activate fastaiv1 (just example names). A sketch of that setup follows.
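
As a minimal sketch of that setup (the env names and install commands here are examples only, not official instructions; follow each version's install guide for the full dependency list):

# one env per fastai version; names are arbitrary
conda create -n fastai python=3.6
conda activate fastai
pip install fastai==0.7.0            # old course library

conda create -n fastaiv1 python=3.6
conda activate fastaiv1
conda install -c fastai fastai       # v1.0 from the fastai channel

# then switch as needed
conda activate fastai                # v0.7
conda activate fastaiv1              # v1.0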

Hello, I just updated the wiki guide for setting up fastai v1 in Google Colab.
Please check it out; everything is working fine!
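
For reference, a minimal Colab cell along those lines (a sketch only; the wiki guide above is the authoritative, up-to-date version):

!pip install fastai
!python -c "import fastai; print(fastai.__version__)"   # confirm which version Colab picked up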

Like @marcmuc said, you can have both on the same machine, but in different environments. That's why in my step-by-step guide I've created another env for v1.
The only problem I've found is that when switching envs my Jupyter config sometimes gets messy, so I have to reconfigure it.
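
One hedged way around the messy-Jupyter problem (not from the post above) is to register each env as its own Jupyter kernel, so you never reconfigure Jupyter itself; the env and kernel names here are assumptions:

conda activate fastaiv1
conda install ipykernel
python -m ipykernel install --user --name fastai-v1 --display-name "Python (fastai v1)"
# repeat inside the v0.7 env with a different --name, then pick the kernel
# from Jupyter's Kernel > Change kernel menu instead of reconfiguring Jupyter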

Update on my help request to Paperspace - they promise to have a fast.ai v1.0 template ready by the start of class. That will be useful for people new to Paperspace.

Please feel free to add a note to the wiki. Note that anyone can edit the above wiki; I generally do some policing to keep redundant points from being added.

Will do!

Binary Updates. I've installed the binary version 1.0 on my local Ubuntu 16.04 LTS machine using conda and 'conda install …', and it seems to be working fine. I used the binary instructions and had only one minor problem with the source path: my bash was looking in the wrong place, so 'source activate' wasn't working. This was easily fixed by editing my .bashrc file and commenting out the added export PATH line. In the last iteration of the course (Part 2, v2) I would just do a git pull to get new source. Since I'm now using conda, my questions are:

  1. How often do the binaries change?
  2. How do I update my machine when they do?

Thanks

Some of you may have an MSDN account and therefore access to an Azure subscription (like me). You can use Azure to set up the VM. I found a nice guide to set this up -

Make sure to select Linux in place of Windows.
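
If you prefer the Azure CLI to the portal guide, a rough equivalent of that setup might look like this (resource group, VM name, region, and size are assumptions; an NC-series size gives you a GPU):

az group create --name fastai-rg --location eastus
az vm create --resource-group fastai-rg --name fastai-vm \
  --image UbuntuLTS --size Standard_NC6 \
  --admin-username ubuntu --generate-ssh-keys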

In case you didn’t get an answer:

I configured the Azure Ubuntu 16.04 Deep Learning Virtual Machine for FastAI v1.0 with no issues - just followed the guidance for Part 1 v3 config.

@Interogativ I think the updates should settle down a bit once we're into the course.
As for the dev version, they move fast! Fast isn't just fast, it's FastAI-fast!

I think a conda update should suffice.
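
To answer the update questions above more concretely, a minimal routine might be (a sketch; it assumes the install came from the fastai conda channel and that the env is named fastai, both of which are assumptions):

conda activate fastai                                  # or: source activate fastai
conda update -c fastai fastai
conda update -c fastai torchvision-nightly
python -c "import fastai; print(fastai.__version__)"   # confirm the installed version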

@arjundg If you've tested these and they are working nicely, please feel free to add them to the wiki.
Thanks.

Thanks. The instructions for AWS worked perfectly for me.

I have tested this Dockerfile by building a Docker image and then running the CUDA container with the NVIDIA Docker 2.0 runtime on Google Compute Engine, using the Deep Learning image (Debian GNU/Linux 9.5 (stretch)); Docker CE comes preinstalled there. I have not experienced the error. Note that I am using Python 3.6 and fastai 1.0.5. However, the Docker image takes up a huge 10 GB of disk space, understandably, because I think the Dockerfile layers have not been optimized for a minimal footprint.
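
For anyone reproducing that test, the build-and-run steps are roughly the following (a sketch; the image tag and port are assumptions, and --runtime=nvidia requires the nvidia-docker2 runtime mentioned above):

docker build -t fastai-v1 .
docker run --runtime=nvidia -it --rm -p 8888:8888 fastai-v1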

Be careful: pulling the latest FloydHub PyTorch Docker image doesn't give you the torchvision-nightly package from fast.ai. See FloydHub's Dockerfile below:
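
If you do start from that image, one hedged workaround is simply to add the missing package from the fast.ai channel on top of it (this assumes conda is available inside the image):

conda install -c fastai torchvision-nightly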

I got dependency errors when I tried to update my drivers. I solved the issue by running ‘sudo apt-get purge nvidia*’ and starting over.
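
As a hedged sketch of what "starting over" can look like on Ubuntu 16.04 (the driver package below is an example only; match the version to your GPU and CUDA requirements):

sudo apt-get purge 'nvidia*'
sudo apt-get autoremove
sudo apt-get update
sudo apt-get install nvidia-384      # example driver version, not a recommendation
sudo reboot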

Added note to the wiki.

I’ve installed FastAI 1.0 on two different computers (desktop/laptop) using the latest Linux Mint distribution. I did that as Ubuntu didn’t support my laptop hardware, and Mint did. It was really simple following the above instructions.