Fastai v2 chat

Yes please. I think it’s because settings.ini is not in the MANIFEST. I just fixed that, but it will need a new release to work.

1 Like

Still not working :confused: Here is how I’m installing (I also restarted the instance):

!pip install -q git+https://github.com/fastai/fastai2 --upgrade

And running:

from fastai2.basics import *
from fastai2.vision.all import *
from fastai2.callback.all import *
from nbdev.showdoc import *
show_doc(untar_data)
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-3-71804236c9be> in <module>()
----> 1 show_doc(untar_data)

1 frames
/usr/local/lib/python3.6/dist-packages/nbdev/showdoc.py in show_doc(elt, doc_string, name, title_level, disp, default_cls_level)
    245         s = inspect.getdoc(elt)
    246         # doc links don't work inside markdown pre/code blocks
--> 247         s = f'```\n{s}\n```' if Config().get('monospace_docstrings') == 'True' else add_doc_links(s)
    248         doc += s
    249     if disp: display(Markdown(doc))

/usr/local/lib/python3.6/dist-packages/nbdev/imports.py in __init__(self, cfg_name)
     38         while cfg_path != Path('/') and not (cfg_path/cfg_name).exists(): cfg_path = cfg_path.parent
     39         self.config_file = cfg_path/cfg_name
---> 40         assert self.config_file.exists(), "Use `Config.create` to create a `Config` object the first time"
     41         self.d = read_config_file(self.config_file)['DEFAULT']
     42         add_new_defaults(self.d, self.config_file)

AssertionError: Use `Config.create` to create a `Config` object the first time
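From the traceback, the assertion comes from nbdev’s Config, which walks up the directory tree from the current working directory looking for a settings.ini; in a Colab/notebook environment there is none, so the search hits the filesystem root and the assert fires. A rough sketch of that search logic (illustrative, not nbdev’s exact code):

```python
from pathlib import Path

def find_config(start, cfg_name='settings.ini'):
    "Walk up from `start` until `cfg_name` is found; return None if we reach the root"
    p = Path(start).resolve()
    # p == p.parent only at the filesystem root, so this loop terminates there
    while p != p.parent and not (p/cfg_name).exists():
        p = p.parent
    return (p/cfg_name) if (p/cfg_name).exists() else None
```

Running a notebook outside any nbdev project means `find_config` comes back empty, which is exactly the situation the assertion guards against.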

Weird. What version of nbdev?

It’s running 0.2.5

I also tried the dev version (0.2.6), and upon import I get the config error as well:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-2-e0e43e5288cc> in <module>()
----> 1 from nbdev.showdoc import *

2 frames
/usr/local/lib/python3.6/dist-packages/nbdev/showdoc.py in <module>()
      8 from .imports import *
      9 from .export import *
---> 10 from .sync import *
     11 from nbconvert import HTMLExporter
     12 

/usr/local/lib/python3.6/dist-packages/nbdev/sync.py in <module>()
     41 
     42 # Cell
---> 43 _re_default_nb = re.compile(f'File to edit: {Config().nbs_path.relative_to(Config().config_file.parent)}/(\\S+)\\s+')
     44 _re_cell = re.compile(r'^# Cell|^# Comes from\s+(\S+), cell')
     45 

/usr/local/lib/python3.6/dist-packages/nbdev/imports.py in __init__(self, cfg_name)
     38         while cfg_path != Path('/') and not (cfg_path/cfg_name).exists(): cfg_path = cfg_path.parent
     39         self.config_file = cfg_path/cfg_name
---> 40         assert self.config_file.exists(), "Use `Config.create` to create a `Config` object the first time"
     41         self.d = read_config_file(self.config_file)['DEFAULT']
     42         add_new_defaults(self.d, self.config_file)

AssertionError: Use `Config.create` to create a `Config` object the first time

Note that the Cuda transform has been replaced, since we now use the device attribute of DataLoader. By default:

  • a DataLoader (or any subclass like TfmdDL) doesn’t do anything in terms of device placement (pass along a device to change this)
  • a DataBunch (or any subclass) uses default_device(); again, pass along the device you want at creation (either with .databunch or the regular init/factory methods).
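The behaviour described above can be sketched with a stand-in for a tensor and a toy loader (purely illustrative; names like `FakeTensor` and `SketchDL` are made up here, not fastai2 classes):

```python
class FakeTensor:
    "Stand-in for a torch.Tensor that just tracks which device it lives on"
    def __init__(self, device='cpu'): self.device = device
    def to(self, device): return FakeTensor(device)

class SketchDL:
    "Toy loader: performs no device placement unless a device is passed"
    def __init__(self, batches, device=None):
        self.batches, self.device = batches, device
    def __iter__(self):
        for b in self.batches:
            # only move the batch if a device was explicitly supplied
            yield b.to(self.device) if self.device is not None else b
```

With no device the batch is yielded untouched; pass `device='cuda'` and every batch gets moved, which mirrors the DataLoader/DataBunch split above.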
5 Likes

Torch Core

Class ArrayImageBW

Class ArrayImageBW, it seems, doesn’t cast/convert an RGB image to 1-channel; it only sets cmap='Greys'.

The impact: when one tries to cast an opened RGB image to ArrayImageBW, it stays RGB and show() displays it in color, no matter what cmap is set internally in _show_args or passed via an argument.

Nota bene: you need to use .convert(mode='L') when opening a PIL image if you intend it to be displayed as Greys (by default), or explicitly use cmap='Greys_r'.
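For reference, Pillow’s RGB to mode 'L' conversion uses the ITU-R 601-2 luma transform, so .convert(mode='L') genuinely collapses the image to one channel rather than just recoloring it. The function below is a plain-Python restatement of the formula Pillow documents:

```python
def rgb_to_l(r, g, b):
    "ITU-R 601-2 luma transform used by PIL's Image.convert('L'): L = R*299/1000 + G*587/1000 + B*114/1000"
    return (r * 299 + g * 587 + b * 114) // 1000

print(rgb_to_l(255, 0, 0))  # pure red maps to 76, a dark grey, not 255
```

This is why the green-heavy parts of a photo look brightest after conversion: green dominates the weighting.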

Note also this comment by @jeremy regarding why some monochrome images look “inverted”. Spoiler: it is not a bug.

1 Like

Like the latest torchvision release, the latest PyTorch release is not compatible with fastai v2 (yet). I have constrained the requirements while we figure out a solution.

3 Likes

@sgugger, when you have time, can you explain or improve the docs about TransformBlock: defining a Transform, linking them, and how they work together?

I’m trying to define a BlockFloat but am not sure where to start… I’m doing image regression. I did it with a plain TransformBlock and it works fine… but I want to improve it with float and maybe other types so I can use mixed models (tabular, image, etc.)

@s.s.o Once I finish the heatmap implementation I’m going to release a tutorial video of sorts showing what I looked at and how I implemented it, which may be of some help to you and anyone looking to add custom implementations to the DataBlock API.

(Trying to do this sooner rather than later)

2 Likes

@muellerzr, thank you… I’ll wait for it. It may help to solve show_batch problems as well.

@s.s.o For a hint, I’d peek at how CategoryBlock’s show looks. I was debating making a FloatBlock myself; compared to other implementations it’s an 'easier' one, so I’d be happy to help :slight_smile:

I tried to do:

d = {'files': ["file 1", "file 2"], 'ca': [3.2, 4.4]}
data = pd.DataFrame(data=d)
data['files'] = data['files'].apply(lambda x: path_img+"/{}{}".format(x, '.jpg')); data

class ToFloatTensor(Transform):
    "Transform to float tensor"
    order = 10  # Need to run after PIL transforms on the GPU
    def __init__(self, split_idx=None, as_item=True):
        super().__init__(split_idx=split_idx, as_item=as_item)

    def encodes(self, o): return o.astype(np.float32)
    def decodes(self, o): return o

def FloatBlock(vocab=None, add_na=False):
    "`TransformBlock` for single-label float targets"
    return TransformBlock(type_tfms=ToFloatTensor())

den_db = DataBlock(blocks=(ImageBlock, FloatBlock),
                   get_x=lambda x: x[0],
                   get_y=lambda x: x[1],
                   splitter=RandomSplitter())
item_tfms = [Resize(224), ToTensor()]
batch_tfms = [Normalize.from_stats(*imagenet_stats)]
dbunch = den_db.databunch(data, path=path_img, bs=128, num_workers=0, item_tfms=item_tfms, batch_tfms=batch_tfms)
dbunch.c = 1

this seems to work… but dbunch.show_batch() still gives an error, though it does show the image:

AttributeError: 'Tensor' object has no attribute 'show'

also,

show_image_batch(xys) gives a warning and wrong image coloring:

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).

Any suggestions?
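The AttributeError is consistent with show_batch decoding each target and then calling .show() on it: a decode that returns a plain float or Tensor has no such method. A minimal pure-Python sketch of the pattern (illustrative stand-ins, not fastai2’s actual classes):

```python
class ShowableFloat(float):
    "A float subclass that knows how to display itself"
    def show(self, ctx=None): return f'{self:.2f}'

def show_items(decoded):
    "Mimics show_batch calling .show() on each decoded item"
    return [o.show() for o in decoded]

print(show_items([ShowableFloat(3.2), ShowableFloat(4.4)]))  # ['3.20', '4.40']
```

Passing plain floats to `show_items` reproduces the AttributeError, which is why decoding to a type with its own `show` fixes show_batch.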

Hi! I am trying to port a project to v2 (mainly because of data loader improvements since I have a huge dataset). I am trying right now to understand some imports (still a bit allergic to import *, sorry :sweat_smile:).

Could anyone point out how to import RandomSplitter? So far I have found it at fastai_dev/dev/local/data/core.py but nowhere within fastai2.

from fastai2.text.all import *
does the trick, but I can’t figure out how.

The second import I can’t figure out is "Cuda", as in
self.data = dsrc.databunch(bs=batch_size, val_bs=batch_size, after_batch=Cuda)

Thanks!

http://dev.fast.ai/data.transforms#RandomSplitter

The page has a search function that, depending on your window size or adblocker, might not show directly.

PS: Using the * import does work quite well

1 Like

Thanks, silly me, I don’t know how I missed that. I usually search within GitHub, which works reasonably well. Also, PyCharm Ctrl+click takes you to the source, even for external code; for some reason I forgot about that. Interestingly, I found the Cuda reference as well, in fastai2/data/transforms.py, but it’s not in the online version. There has probably been a recent change there.
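A general trick for tracing where a name pulled in by `import *` actually lives: every Python object records its defining module in `__module__`, and `inspect` can point at the source file. Using stdlib names as stand-ins:

```python
import inspect
from collections import OrderedDict  # imagine this arrived via a wildcard import

# __module__ names the module that really defines the object
print(OrderedDict.__module__)                 # 'collections'

# inspect.getsourcefile gives the path of the defining file (for pure-Python objects)
print(inspect.getsourcefile(inspect.getdoc))  # .../inspect.py
```

The same calls work on fastai2 objects, e.g. `RandomSplitter.__module__` after a `from fastai2.text.all import *`.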

Defined in fastcore/01_foundation.ipynb as shown,

class _T(metaclass=PrePostInitMeta):
    def __pre_init__(self):  self.a  = 0; assert self.a==0
    def __init__(self,b=0):  self.a += 1; assert self.a==1
    def __post_init__(self): self.a += 1; assert self.a==2

_T() works, but it raises an error in __pre_init__ when initialized with an argument intended for b:

_T(1)
TypeError: __pre_init__() takes 1 positional argument but 2 were given

Thus, a better definition:

class _T(metaclass=PrePostInitMeta):
    def __pre_init__(self,b=0):  self.a  = 0; assert self.a==0
    def __init__(self,b=0):  self.a += 1; assert self.a==1
    def __post_init__(self,b=0): self.a += 1; assert self.a==2

Sylvain @sgugger, nota bene.

I don’t get any error with _T(1).

@muellerzr,

Deep down in the dungeons I found TitledFloat :grinning: so that I can use dbunch.show_batch() for regression. Something like the below works; it might be useful to others:

class TitledFloatShort(Float, ShowTitle):
    _show_args = {'label': 'text'}
    def show(self, ctx=None, **kwargs):
        "Show self"
        return show_title(f'{self:.2f}', ctx=ctx, **merge(self._show_args, kwargs))

class ToFloatTensor(Transform):
    "Transform to float tensor"
    order = 10  # Need to run after PIL transforms on the GPU
    _show_args = {'label': 'text'}
    def __init__(self, split_idx=None, as_item=True):
        super().__init__(split_idx=split_idx, as_item=as_item)
    def encodes(self, o): return o.astype(np.float32)
    def decodes(self, o): return TitledFloatShort(o)

def FloatBlock(vocab=None, add_na=False):
    "`TransformBlock` for single-label float targets"
    return TransformBlock(type_tfms=ToFloatTensor())

den_db = DataBlock(blocks=(ImageBlock, FloatBlock),
                   get_x=lambda x: x[0],
                   get_y=lambda x: x[1],
                   splitter=RandomSplitter())

item_tfms = [Resize(224)]
dbunch = den_db.databunch(data2, path=path_img, bs=128, num_workers=0, item_tfms=item_tfms)
dbunch.c = 1

and now it works with float titles :slight_smile:

dbunch.show_batch()

2 Likes

Good job finding that! I was about to post about it :slight_smile:

2 Likes

Sorry if this is too obvious. I am still working on moving a text model from v1 to v2. I am currently getting an AttributeError for mixed precision:

self.learner = language_model_learner(data, arch=AWD_LSTM, opt_func=opt_func, path=path, drop_mult=0.3).to_fp16()
AttributeError: 'LMLearner' object has no attribute 'to_fp16'

However, the train_imagenette example contains the line
if fp16: learn = learn.to_fp16()
so I am a bit confused :sweat_smile: