Fastai v1 launched

Here’s the launch announcement:

http://www.fast.ai/2018/10/02/fastai-ai/

31 Likes

Congrats on the launch @jeremy! It seems awesome, thanks for building it.
Also the GCP arrangement is very convenient, much appreciated.
Props to @miguel_perez for the mention, well deserved.

2 Likes

Great news Jeremy, it’s been a massive undertaking! Is version 1 now the version we find in the GitHub fastai repository? I ask because I expected that the environment.yml would reflect pytorch 1.0, but it is still saying pytorch < 0.4.

Dear Chris,

With conda list pytorch in the shell I get the following pytorch packages:

pytorch                   0.4.0           py36_cuda0.0_cudnn0.0_1    pytorch
pytorch-nightly           1.0.0.dev20180921         py3.6_0    pytorch

The build test with jupyter nbconvert --execute --ExecutePreprocessor.timeout=600 --to notebook examples/tabular.ipynb (see the installation instructions at https://github.com/fastai/fastai) and the CIFAR example notebook both work in my case.

What pytorch version(s) do you get?
Does the test work?

Best regards
Michael

Congratulations on the launch @jeremy!
I have a question about learner.save() and the LRFinder class:
If I use Adam as the optimizer (or another one with momentum), I will end up with very high momentum after learner.lr_find(), is that correct?
Because fastai only saves the model, not the optimizer, via learner.save(), so after lr_find it will restore the initial model weights but keep the high moving averages of our gradients, won’t it?
Thank you!
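
In plain PyTorch terms, the distinction I’m asking about looks roughly like this (a minimal sketch with made-up file names, not fastai’s actual save code):

import torch
from torch import nn, optim

model = nn.Linear(10, 2)                       # toy stand-in for the learner's model
opt = optim.Adam(model.parameters(), lr=1e-3)

# ... an lr_find-style loop would run here, filling opt.state with the
# exp_avg / exp_avg_sq moving averages that Adam keeps per parameter ...

# saving only the model, which is what I understand learner.save() does:
torch.save(model.state_dict(), 'model.pth')

# restoring the model leaves the optimizer's moving averages untouched:
model.load_state_dict(torch.load('model.pth'))

# to restore (or reset) them as well, the optimizer state would need saving too:
torch.save({'model': model.state_dict(), 'opt': opt.state_dict()}, 'checkpoint.pth')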

Hi @MicPie

I am on Windows 10. I get this feedback:

pytorch 0.4.0 py36_cuda91_cudnn7he774522_1 [cuda91] pytorch

I have not managed to create the fastai environment - I get errors to do with installing shapely, which I haven’t figured out how to bypass. So I just use my base environment.

But after some reading I think I have understood that Pytorch 1.0 is not yet available for Windows - is that correct? So maybe it’s not yet time for me to adopt fastai version 1? I also see that the conda install fastai approach, which I have NOT used, is specific to getting up and running on v1. I am still dealing with the fastai library as I always have done, through git clone and pull.

Once I figure out what you are suggesting re build testing I will get back to you on that!

I have a couple of questions about the fastai install:

  1. The GitHub README indicates a slightly different way of installing fastai compared to the docs page:
    conda install -c pytorch -c fastai fastai pytorch-nightly cuda92 vs conda install -c fastai fastai. Does installing fastai alone automatically install the required pytorch dependencies, or do they need to be installed separately as mentioned in the README?
  2. I don’t have cuda92, does fastai work with cuda 9.1?
  3. I installed fastai with the following command: conda install -c pytorch -c fastai pytorch-nightly fastai. Everything installed without any problems; however, when I import fastai using from fastai import *, it just hangs there. Any thoughts?
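
A quick way to check points 2 and 3 independently of fastai (plain pytorch only, my own suggestion rather than anything from the docs) is:

import torch

print(torch.__version__)          # should show a 1.0.0.dev nightly for fastai v1
print(torch.version.cuda)         # CUDA version this pytorch build was compiled against
print(torch.cuda.is_available())  # False here (or a hang) points at a CUDA/driver problem rather than fastai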

Here is my environment.yml.

Thanks.

I’m having that problem too. It looks like the kernel restarts when importing fastai.vision.transform. I could get past that point by commenting out the line from .vision.transform import * in the tta.py file, but I could not identify the root cause.

I can confirm this as well. It takes 7s 922ms to load fastai after commenting out the from .vision.transform import * line in the tta.py file.

Without that line commented out, fastai eventually does load, but it takes a little over 4 minutes.

Congrats on the launch, @jeremy! Wishing you and the rest of the team continued impact!

Fastai v1 is unfortunately not available on Windows yet, since pytorch v1 isn’t either. Until pytorch releases v1 (it’s just a preview for now), we recommend using Linux instances (pytorch v1 supports macOS on the CPU only…).

Hi @shaun1 and @elmarculino
I don’t have that problem so I can’t do it myself, but could you please do a %prun from fastai import * and share the results with us? That would be very helpful.

Actually having tta in fastai/__init__.py was a mistake. It should have been in fastai/vision/__init__.py. So rather than what @sgugger wrote above, please show us the first few lines of the result of: %prun from fastai.vision import *

OK, I ran it and it made over a million function calls. I’m not sure how to share it here. Please see this notebook for the output.
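
For what it’s worth, IPython’s %prun takes a -T <filename> option that saves the printed report to a text file, which makes the output easier to share (the filename below is just an example):

%prun -T prun_fastai_vision.txt from fastai.vision import *
# full report lands in prun_fastai_vision.txt; -D <file> would dump raw pstats data instead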

Thanks @sgugger for confirming that. I have a big process to go through to get my PC running Linux, so I have put it off. The upcoming classroom series will presumably use v1, so if I am able to participate in that then I’ll head over to AWS for a Linux instance.

But to confirm your intentions, will you be making fastai available on Windows once Pytorch 1.0 is? Do you have any idea when that might happen?

The only barrier to using fastai v1 on Windows is pytorch, so as soon as it’s publicly available on Windows, fastai v1 will be too. At the conference, they said it would be out for NIPS, so beginning of December.

1 Like

Great, thanks. I can fix it so that the init isn’t slow any more (doing that now), but I suspect it’ll only push the issue later… Can you try running this in a new notebook and tell me how long it takes? (It should be basically instant, but your profile suggests it’ll be slow):

import torch
for i in range(10): a=torch.tensor([1.,2.]).cuda()

Yes. It ran very quickly:

%time
import torch
for i in range(10): a=torch.tensor([1., 2.]).cuda()

CPU times: user 1 µs, sys: 0 ns, total: 1 µs
Wall time: 3.81 µs
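
A note on the timing: %time as a line magic only measures the statement on its own line, so the microseconds above time an empty statement rather than the CUDA loop; timing the whole cell would use the %%time cell magic instead, e.g.:

%%time
import torch
for i in range(10): a = torch.tensor([1., 2.]).cuda()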

I installed fastai through conda. Do changes pushed to the GitHub repo automatically update the conda packages? If not, what is the current best approach to having the latest version installed?

I wonder if Windows Subsystem for Linux could be used in the meantime.

No. Use the ‘developer install’ from the bottom of the readme.
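
For reference, the developer install is the editable, from-source setup described at the bottom of the fastai readme; in rough outline (check the readme for the exact, current steps) it is something like:

git clone https://github.com/fastai/fastai
cd fastai
pip install -e .    # editable install, so a later git pull updates the library in place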