Fastai v2 chat

@muellerzr - Well, my digging in the forums did not turn up this gem :slight_smile:. Thanks for the pointer.
@ilovescience - Appreciate the crisp code example.


Not sure if you have seen this example notebook: https://github.com/fastai/fastai2/blob/master/nbs/14a_callback.data.ipynb


I’m not sure weighted cross entropy is working properly. I passed weights of 100:1 and 1:100 for a binary classification problem and it made no difference whatsoever in the precision/recall of the final model. I had expected it to skew in either the precision or the recall direction compared to 1:1 weights…
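For reference, here’s a toy pure-Python sketch (the `weighted_ce` helper is made up for illustration; it is not fastai’s loss) of what a 100:1 class weight should do to the loss on a single sample, if the weights actually reach the loss function:

```python
import math

def weighted_ce(probs, target, weights):
    """Weighted cross-entropy for one sample: -weights[target] * log(probs[target])."""
    return -weights[target] * math.log(probs[target])

p = [0.9, 0.1]                                  # model is confident in class 0, label is class 1
loss_equal  = weighted_ce(p, 1, [1.0, 1.0])     # 1:1 weights  -> ~2.30
loss_skewed = weighted_ce(p, 1, [1.0, 100.0])   # 1:100 weights -> ~230.26, a 100x larger penalty
print(loss_equal, loss_skewed)
```

Since the minority-class gradient gets scaled by the same factor, truly identical precision/recall across 1:1, 100:1, and 1:100 runs would suggest the weights never made it into the loss at all, rather than weighting having no effect.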


Hi @fmobrj75,

How did you end up resolving this issue? I am facing the same error.

The weights I created are only for the training data. I am still having trouble using weighted_dataloaders.
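For what it’s worth, a common way to build per-item weights is inverse class frequency over the training labels. This is a plain-Python sketch (the `sample_weights` helper is mine, not a fastai function); whatever shape weighted_dataloaders expects, the length of the weights list has to match the items it is weighting:

```python
from collections import Counter

def sample_weights(labels):
    """One inverse-frequency weight per training item."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

train_labels = [0, 0, 0, 0, 1]       # heavily imbalanced training set
print(sample_weights(train_labels))  # [0.25, 0.25, 0.25, 0.25, 1.0]
```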

Hello! I’ve got a dataset of pictures I’m using with the vision package. Some of the pictures have repeating patterns of the image in the corners.

It’s really prominent in the bottom middle one. Does this hurt model performance?

My transforms look like this:

batch_tfms = [*aug_transforms(size=224, max_warp=0,max_rotate=360.0), Normalize.from_stats(*imagenet_stats)]
item_tfms = RandomResizedCrop(460, min_scale=0.75, ratio=(1.,1.))
bs=64

The docs have been moved to a new fastai-docs repo.

The docs dir in fastai2 is now a symlink to ../fastai-docs. So you should clone the docs repo into the same dir that you have fastai2 in.


Heads up for Colab users:

To use PyTorch 1.6 in Colab, you need to run the following (before installing fastai2):

!pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

(then of course pip install fastai2, etc)

If you’re running CUDA 10.2 then you just need to do !pip install torch torchvision --upgrade


Are the Colab-preinstalled torch 1.5.1+cu101 and torchvision 0.6.1+cu101 incompatible with fastai2?

Yes. We use torch 1.6 now.


Hi everyone!

Just did a fresh pull and install of the fastai2 repo.

I am going through the 02_production notebook.
Ran into issues in the cell that introduces aug_transforms.

It seems the decode fails: the Flip class only stores its p attribute, but I had to store (basically) all the attrs to stop it complaining that Flip has no attribute "x".

Here is what I had to store:
self.store_attrs = 'p,mode,size,pad_mode,align_corners'
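As a toy sketch of why every name listed there has to be set in `__init__` (these are made-up minimal classes, not fastai source): anything listed in store_attrs but never assigned turns into exactly that "no attribute" error when the repr/decode machinery looks it up.

```python
class Tfm:
    store_attrs = 'p'
    def __repr__(self):
        # repr/decode only look up the names listed in store_attrs
        attrs = {a: getattr(self, a) for a in self.store_attrs.split(',')}
        return f'{type(self).__name__} -- {attrs}'

class Flip(Tfm):
    store_attrs = 'p,mode,size,pad_mode,align_corners'
    def __init__(self, p=0.5, mode='bilinear', size=None,
                 pad_mode='reflection', align_corners=True):
        # every attr named in store_attrs must actually be assigned here
        self.p, self.mode, self.size = p, mode, size
        self.pad_mode, self.align_corners = pad_mode, align_corners

print(Flip())  # Flip -- {'p': 0.5, 'mode': 'bilinear', ...}
```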

I remember something about this came up in the live coding session (or am I misremembering?). Is anyone facing the same?

Thanks!


This is currently being worked on (by me), so please keep that in mind :slight_smile: (Jeremy did not get all the way through it). Thank you for pointing it out, though :slight_smile:


Gotcha, thank you!

Hey, just followed your instructions. I got this warning in Colab:

/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py:125: UserWarning: 
Tesla T4 with CUDA capability sm_75 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the Tesla T4 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

Any idea what this is?
Also, would I be able to use fastai2 locally on a Mac?

@benihime91
I think he means this is only needed if you want to use PyTorch 1.6:

To use PyTorch 1.6 in colab, you need to do the following (and when installing fastai):

I did test fastai2 with just pip install fastai2, and it works fine.


Yes, it got updated a few hours ago. @ilovescience tried pinging all the chats I mentioned, but he missed one :wink: Thanks! :slight_smile:


A new PR was merged to help increase the readability of your transform pipelines. No changes are necessary on your end: when your pipeline shows up (such as when viewing dls.after_batch), or even an individual transform (below), it will now read something like this:

Resize -- {'p': 1.0, 'size': (224, 224), 'method': 'crop', 'pad_mode': 'reflection'}: (TensorBBox,object) -> encodes
(TensorPoint,object) -> encodes
(Image,object) -> encodes 

So as we can see, we now know exactly what it took to make our transforms :slight_smile:

For those wondering how to store away those terms when making your own Transform: they are stored in the class-level store_attrs variable; see the example below:

class Resize(RandTransform):
    split_idx,mode,mode_mask,order,store_attrs = None,Image.BILINEAR,Image.NEAREST,1,'size,method,pad_mode'
    "Resize image to `size` using `method`"
    def __init__(self, size, method=ResizeMethod.Crop, pad_mode=PadMode.Reflection,
                 resamples=(Image.BILINEAR, Image.NEAREST), **kwargs):
        super().__init__(**kwargs)
        self.size,self.pad_mode,self.method = _process_sz(size),pad_mode,method
        self.mode,self.mode_mask = resamples

Which of course means you can grab them all with attrdict(tfm, *tfm.store_attrs.split(','))
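In case attrdict is unfamiliar: it’s a fastcore helper that pulls named attributes into a dict. A minimal stand-in of that idea (the Resize stub here is made up for illustration, not the real transform):

```python
def attrdict(o, *ks):
    """Minimal stand-in for fastcore's attrdict: map each name to getattr(o, name)."""
    return {k: getattr(o, k) for k in ks}

class Resize:
    store_attrs = 'size,method,pad_mode'
    def __init__(self):
        self.size, self.method, self.pad_mode = (224, 224), 'crop', 'reflection'

tfm = Resize()
print(attrdict(tfm, *tfm.store_attrs.split(',')))
# {'size': (224, 224), 'method': 'crop', 'pad_mode': 'reflection'}
```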

This is not exclusive to vision augmentations, either: type transforms have them, as does tabular. For tabular you will specifically have to look in dls.normalize, etc.

Do note there is currently a bug when using aug_transforms: it will only show one transform. We’re working on fixing this :slight_smile:


Using --no-cache-dir fixed the issue for me, as described here:

pip install --no-cache-dir torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html


After a wise suggestion from @hamelsmu, I’ve now made the docs a submodule of fastai2, instead of a symlink. Here’s how to use submodules, if you haven’t before:

https://git-scm.com/book/en/v2/Git-Tools-Submodules

To add the submodule to your directory if you’ve already cloned it, do:

git submodule update --init --recursive

I recommend running the following to ensure that when you pull you also get docs updates:

git config --global submodule.recurse true

Hi, I don’t know whether I can ask this question here, but:

Should we also avoid converting nn.LayerNorm to fp16?
(All I know is that we don’t convert batch norm to fp16.)

# fp16_utils.py
def convert_network(network, dtype):
    """
    Converts a network's parameters and buffers to dtype.
    """
    for module in network.modules():
        if isinstance(module, torch.nn.modules.batchnorm._BatchNorm) and module.affine is True: # <----- here
            continue
        convert_module(module, dtype)
        if isinstance(module, torch.nn.RNNBase) or isinstance(module, torch.nn.modules.rnn.RNNBase):
            module.flatten_parameters()
    return network
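Note that, as quoted, this loop only skips _BatchNorm modules, so nn.LayerNorm would get converted along with everything else; whether that is numerically safe is exactly your question (norm layers are commonly kept in fp32 in mixed-precision setups). Here is a torch-free toy sketch (the Module classes are made up, not PyTorch’s) of what extending the skip set to LayerNorm would look like:

```python
class Module:
    def __init__(self, *children):
        self.children, self.dtype = list(children), 'fp32'
    def modules(self):
        # yield self, then recurse into children, like torch's Module.modules()
        yield self
        for c in self.children:
            yield from c.modules()

class Linear(Module): pass
class BatchNorm(Module): pass
class LayerNorm(Module): pass

def convert_network(net, dtype='fp16', skip=(BatchNorm, LayerNorm)):
    # same shape as the apex loop above, but with LayerNorm added to the skip set
    for m in net.modules():
        if isinstance(m, skip):
            continue
        m.dtype = dtype
    return net

net = convert_network(Module(Linear(), BatchNorm(), LayerNorm()))
print([f'{type(m).__name__}:{m.dtype}' for m in net.modules()])
# ['Module:fp16', 'Linear:fp16', 'BatchNorm:fp32', 'LayerNorm:fp32']
```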