Fastai v1 launched

I’m having that problem too. It looks like the kernel restarts when importing ‘fastai.vision.transform’. I could get past that point by commenting out the line from .vision.transform import * in the tta.py file, but I couldn’t identify the root cause.

I can confirm this as well. It takes 7s 922ms to load fastai after commenting out the from .vision.transform import * line in the tta.py file.

With that line left in, fastai does eventually load, but it takes a little over 4 minutes.

Congrats on the launch, @jeremy! Wishing you and the rest of the team continued impact!

Fastai v1 is unfortunately not available on Windows yet, since pytorch v1 isn’t either. Until pytorch releases v1 (it’s just a preview now), we recommend using Linux instances (pytorch v1 supports macOS on the CPU only…)

Hi @shaun1 and @elmarculino
I don’t have that problem so I can’t do it myself, but could you please run %prun from fastai import * and share the results with us? That would be very helpful.

Actually, having tta in fastai/__init__.py was a mistake; it should have been in fastai/vision/__init__.py. So rather than what @sgugger wrote above, please show us the first few lines of the result of: %prun from fastai.vision import *
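
For anyone profiling outside a notebook, the same measurement %prun makes can be sketched with the standard library’s cProfile. Here the stdlib json module stands in for fastai.vision (an illustrative substitution; profiling the real import works the same way):

```python
import cProfile
import io
import pstats

# Profile an import, roughly what %prun does in a notebook.
profiler = cProfile.Profile()
profiler.enable()
import json  # stand-in for `from fastai.vision import *`
profiler.disable()

# Sort by cumulative time and show only the first few lines,
# as requested in the thread.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The cumulative sort puts the slowest top-level calls first, which is usually enough to spot where a slow import is spending its time.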

OK, I ran it and it made over a million function calls. I’m not sure how to share it here. Please see this notebook for the output.

Thanks @sgugger for confirming that. I have a big process to go through to get my PC running Linux so I have put it off. The upcoming classroom series will presumably use V1, so if I am able to participate in that then I’ll head over to AWS for a Linux instance.

But to confirm your intentions: will you make fastai v1 available on Windows once pytorch 1.0 is? Do you have any idea when that might happen?

The only barrier to using fastai v1 on Windows is pytorch, so as soon as it’s publicly available on Windows, fastai v1 will be too. At the conference they said it would be out for NIPS, so at the beginning of December.

Great thanks. I can fix it so that the init isn’t slow any more (doing that now), but I suspect it’ll only push the issue to later… Can you try running this in a new notebook and tell me how long it takes? (It should be basically instant, but your profile suggests it’ll be slow):

import torch
for i in range(10): a=torch.tensor([1.,2.]).cuda()

Yes. It ran very quickly:

%time
import torch
for i in range(10): a=torch.tensor([1., 2.]).cuda()

CPU times: user 1 µs, sys: 0 ns, total: 1 µs
Wall time: 3.81 µs
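
One caveat worth noting: the line magic %time on a line by itself times an empty statement, which is why the output above reports microseconds no matter what follows it; the cell magic %%time (as the first line of the cell) times the whole cell. Outside IPython, time.perf_counter gives the same measurement. In this sketch a simple list comprehension stands in for the torch loop, since the point is the timing pattern:

```python
import time

# Equivalent of %%time for a whole block, without IPython.
start = time.perf_counter()
result = [x * x for x in range(10)]  # stand-in for the torch .cuda() loop
elapsed = time.perf_counter() - start
print(f"Wall time: {elapsed * 1000:.3f} ms")
```

With %%time at the top of the cell instead, the reported wall time would reflect the full ten iterations, including any one-off CUDA initialization cost on the first .cuda() call.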

I installed fastai through conda. Do changes pushed to the GitHub repo automatically update the conda packages? If not, what is the current best way to install the latest version?

I wonder if Windows Subsystem for Linux could be used in the meantime.

No. Use the ‘developer install’ from the bottom of the README.

Unfortunately the Windows Subsystem for Linux doesn’t give access to the GPU…

Quick questions:

  • Do you plan to release full-fledged documentation? If so, when?

  • The FastaiV1 repository actually includes all the notebooks, which at first glance seem identical to the Fastai 0.7 notebooks. Can we use them with FastaiV1?

  • I’d really like to have v1 and v0.7 sitting together on my Linux box. Should I expect issues from the different cuda and cudnn versions?

The docs are fully released here. The notebooks are the ones from the old course, so they aren’t directly usable with v1. You’ll have to wait for the new course for compatible notebooks.
There’s no reason you can’t have v1 and v0.7 in different environments.
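
A quick sanity check for this setup, sketched here with only the standard library (the helper name fastai_location is just illustrative), is to print which interpreter and which fastai each environment resolves:

```python
import importlib.util
import sys

def fastai_location():
    """Return the file path of the fastai package visible to this
    interpreter, or None if it isn't installed in this environment."""
    spec = importlib.util.find_spec("fastai")
    return spec.origin if spec else None

# Run this once in each environment to confirm that the v1 and v0.7
# installs stay isolated from one another.
print(sys.executable)
print(fastai_location())
```

If both environments print different interpreter paths and each reports its own fastai location, the two versions won’t interfere with each other.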

Been exploring fastai v1 a bit and it looks even better than before! One quick question: with fastai v1, how do we run lr_find?
I tried learn.lr_find(), but afterwards learn.recorder.val_losses is an empty list, and learn.recorder.plot_losses() throws an exception complaining there are no output activations.

You need to use learn.recorder.plot(). See the docs on lr_find for more information.

Thanks!