Problem importing fastai2 today

Does anybody know why these import statements fail? I am running on Kaggle.
Please help!
cheers
sid

PS: I also notice that with fastai2 versions later than 0.0.20, the installation process takes much longer…

!pip install fastai2==0.0.20

from fastai2.vision.all import *
from fastai2.vision.core import *
from fastai2.callback.all import *
from fastai2.metrics import *

What is the error message that you are getting?

The same error is also seen in Colab:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 from fastai2.vision.all import *
      2 from fastai2.vision.core import *
      3 from fastai2.callback.all import *
      4 from fastai2.metrics import *
      5 '''

/opt/conda/lib/python3.7/site-packages/fastai2/vision/all.py in <module>
----> 1 from ..basics import *
      2 from ..callback.all import *
      3 from .augment import *
      4 from .core import *
      5 from .data import *

/opt/conda/lib/python3.7/site-packages/fastai2/basics.py in <module>
----> 1 from .data.all import *
      2 from .optimizer import *
      3 from .callback.core import *
      4 from .learner import *
      5 from .metrics import *

/opt/conda/lib/python3.7/site-packages/fastai2/data/all.py in <module>
----> 1 from ..torch_basics import *
      2 from .core import *
      3 from .load import *
      4 from .external import *
      5 from .transforms import *

/opt/conda/lib/python3.7/site-packages/fastai2/torch_basics.py in <module>
      2 from .imports import *
      3 from .torch_imports import *
----> 4 from .torch_core import *
      5 from .layers import *

/opt/conda/lib/python3.7/site-packages/fastai2/torch_core.py in <module>
    418     return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs))
    419
--> 420 class TitledTuple(Tuple, ShowTitle):
    421     _show_args = {'label': 'text'}
    422     def show(self, ctx=None, **kwargs):

NameError: name 'Tuple' is not defined


You also have to install the latest version of fastcore: it recently renamed Tuple to fastuple.
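
A quick way to check which of the two names your installed fastcore actually exposes is something like this (just a rough diagnostic sketch):

import fastcore, fastcore.utils
# Print the installed fastcore version and which of the two class names it exports.
print(fastcore.__version__)
print(hasattr(fastcore.utils, 'Tuple'), hasattr(fastcore.utils, 'fastuple'))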


Due to a bug, you have to force the installation of fastcore version 0.1.35 or later.

I have made a pull request to fix this.
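
In a Kaggle/Colab cell that would look something like the following (a sketch using the minimum version mentioned above; the exact pin may need adjusting for your setup):

# Force pip to resolve a recent fastcore together with fastai2
!pip install fastai2 "fastcore>=0.1.35"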

Thanks! It's quite strange to get an error when the package I am using is a stable, older version. I have
been running it for several weeks without any problems.

But I see something is still not right. Today I tried this on Kaggle:
!pip install fastai2
It installed:
Successfully installed dataclasses-0.6 fastai2-0.0.30 fastcore-0.1.38 torch-1.6.0 torchvision-0.7.0

I can now get past the above errors, but another error appears when I use learn.to_fp16():
/opt/conda/lib/python3.7/site-packages/fastai2/callback/fp16.py in before_fit(self)
     83
     84     def before_fit(self):
---> 85         assert self.dls.device.type == 'cuda', "Mixed-precision training requires a GPU, remove the call to_fp16"
     86         if self.learn.opt is None: self.learn.create_opt()
     87         self.model_pgs,self.master_pgs = get_master(self.opt, self.flat_master)

AssertionError: Mixed-precision training requires a GPU, remove the call to_fp16

This is surprising, since I already turned on GPU mode on Kaggle.

cheers
sid
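
A few quick checks may help narrow down the to_fp16() assertion (a sketch, assuming a Learner named learn as in the post above; learn.dls.to() is used here on the assumption that fastai2's DataLoaders can be moved between devices that way):

import torch

# If this prints False, the running kernel does not see a GPU at all, even if
# the Kaggle accelerator setting is on (restarting the kernel after switching
# the accelerator on sometimes helps).
print(torch.cuda.is_available())

# The fp16 callback asserts on learn.dls.device, so inspect it directly.
print(learn.dls.device)

# If the DataLoaders were built on the CPU, move them to the GPU before
# enabling mixed precision.
if torch.cuda.is_available():
    learn.dls.to(torch.device('cuda'))
    learn.to_fp16()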