Here’s the launch announcement:
Great news Jeremy, it's been a massive undertaking! Is version 1 now the version we find in the GitHub fastai repository? I ask because I expected that environment.yml would reflect pytorch 1.0, but it still says pytorch < 0.4.
Running conda list pytorch in the shell, I get the following pytorch packages:
pytorch 0.4.0 py36_cuda0.0_cudnn0.0_1 pytorch
pytorch-nightly 1.0.0.dev20180921 py3.6_0 pytorch
The build test with
jupyter nbconvert --execute --ExecutePreprocessor.timeout=600 --to notebook examples/tabular.ipynb (see the installation instructions at https://github.com/fastai/fastai) and the cifar example notebook both work in my case.
What pytorch version(s) do you get?
Does the test work?
Congratulations on the launch, @jeremy!
I have a question about lr_find: if I use Adam as an optimizer (or another one with momentum), I will get very high momentum after learner.lr_find(), is that correct? Because fastai saves only the model, not the optimizer, via learner.save(), so after lr_find it will restore the initial model but keep the high moving average of our gradients, won't it?
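To make the concern concrete, here is a minimal sketch in plain PyTorch (not fastai's actual save/load code) showing that restoring only the model's state_dict leaves Adam's moving averages untouched:

```python
import torch
import torch.nn as nn

# Sketch, assuming plain PyTorch save/restore semantics: restoring only the
# model's weights does NOT reset the optimizer's moving averages
# (Adam's exp_avg / exp_avg_sq).
model = nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
saved = {k: v.clone() for k, v in model.state_dict().items()}  # like saving the model only

# Take a few steps, as lr_find would, building up optimizer state.
for _ in range(5):
    loss = model(torch.randn(4, 2)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

model.load_state_dict(saved)                       # weights are back to the start...
first_param_state = opt.state[next(iter(opt.state))]
print(first_param_state['exp_avg'])                # ...but Adam's averages are not reset
```

Whether this matters in practice depends on how quickly the averages decay once normal training resumes.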
I am on Windows 10. I get this feedback:
pytorch 0.4.0 py36_cuda91_cudnn7he774522_1 [cuda91] pytorch
I have not managed to create the fastai environment: I get errors to do with installing shapely, which I haven't figured out how to bypass, so I just use my base environment.
But after some reading I think I have understood that Pytorch 1.0 is not yet available for Windows. Is that correct? So maybe it's not yet time for me to adopt fastai version 1? I also see that the conda install fastai approach, which I have NOT used, is specific to getting up and running on v1. I am still using the fastai library as I always have, through git clone and pull.
Once I figure out what you are suggesting re build testing I will get back to you on that!
I have a couple of questions about the install:
- The GitHub README indicates a slightly different way of installing fastai compared to the docs page: conda install -c pytorch -c fastai fastai pytorch-nightly cuda92 vs conda install -c fastai fastai. Does installing fastai alone automatically install the required pytorch dependencies, or do they need to be installed separately as mentioned in the README?
- I don’t have
- I installed fastai with the following command: conda install -c pytorch -c fastai pytorch-nightly fastai. Everything got installed without any problems; however, when I run from fastai import *, it just hangs. Any thoughts?
Here is my
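One generic way to see where a hanging import is stuck (a standard-library debugging sketch, not fastai-specific) is to ask faulthandler to dump the traceback after a timeout:

```python
import faulthandler

# Generic sketch for diagnosing a hanging import (not fastai-specific):
# if the interpreter is still busy after 5 seconds, dump every thread's
# traceback to stderr so you can see which module/line it is blocked on.
faulthandler.dump_traceback_later(5, exit=False)
# from fastai import *   # the import under suspicion (left commented here)
faulthandler.cancel_dump_traceback_later()
print("import section finished")
```

If the timer fires, the dumped traceback points at the module that is taking all the time.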
I'm having that problem too. It looks like the kernel restarts when importing fastai.vision.transform. I could get past that point by commenting out the line from .vision.transform import * in the tta.py file, but I could not identify the root cause.
I can confirm this as well. It takes 7s 922ms to load fastai after commenting out the .vision.transform import * line in tta.py. Without that commented out, fastai eventually does load, but it takes a little over 4 minutes.
Congrats on the launch, @jeremy! Wishing you and the rest of the team continued impact!
Fastai v1 is unfortunately not available on Windows yet, since pytorch v1 isn't either. Until pytorch releases v1 (it's just a preview now), we recommend using Linux instances (pytorch v1 supports macOS on the CPU only).
Actually, having tta in fastai/__init__.py was a mistake; it should have been in fastai/vision/__init__.py. So rather than what @sgugger wrote above, please show us the first few lines of the result of:
%prun from fastai.vision import *
OK, I ran it, and it made over a million function calls. I'm not sure how to share it here; please see this notebook for the output.
Thanks @sgugger for confirming that. I have a big process to go through to get my PC running Linux so I have put it off. The upcoming classroom series will presumably use V1, so if I am able to participate in that then I’ll head over to AWS for a Linux instance.
But to confirm your intentions: will you be making fastai available on Windows once Pytorch 1.0 is? Do you have any idea when that might happen?
The only barrier to using fastai v1 on Windows is pytorch, so as soon as it's publicly available on Windows, fastai v1 will be too. At the conference they said it would be out for NIPS, so at the beginning of December.
Great, thanks. I can fix it so that the init isn't slow any more (doing that now), but I suspect it'll only push the issue to later… Can you try running this in a new notebook and tell me how long it takes? (It should be basically instant, but your profile suggests it'll be slow):
import torch
for i in range(10): a = torch.tensor([1., 2.]).cuda()
Yes. It ran very quickly:
for i in range(10): a=torch.tensor([1., 2.]).cuda()
CPU times: user 1 µs, sys: 0 ns, total: 1 µs
Wall time: 3.81 µs
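For context, the loop above mostly measures CUDA context initialization: the first transfer to the GPU is slow, the rest are cheap. A small device-agnostic sketch of that effect (it falls back to CPU when no GPU is present, where there is no such warm-up cost):

```python
import time
import torch

# Sketch: the first transfer to the GPU pays a one-time CUDA
# context-initialization cost; subsequent transfers are cheap.
# Falls back to CPU when no GPU is present.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

t0 = time.perf_counter()
a = torch.tensor([1., 2.]).to(device)      # first transfer (initializes CUDA on GPU)
first = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(10):
    a = torch.tensor([1., 2.]).to(device)  # repeated transfers, already warm
rest = time.perf_counter() - t0

print(f"first: {first:.4f}s, next 10: {rest:.4f}s")
```

A near-zero wall time for the loop, as reported above, suggests the context was already initialized before the cell ran.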
I installed fastai through conda. Do changes pushed to the GitHub repo automatically update the conda packages? If not, what is the current best approach to having the latest version installed?
I wonder if the Windows Subsystem for Linux could be used in the meantime.
No. Use the ‘developer install’ from the bottom of the readme.