Fastai v2 chat

That is that transform :slight_smile:

Check out dls.after_batch to grab that exact transform and check its parameters (if you look at what aug_transforms builds, the flip inherits from AffineCoordTfm as a type).
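For example (a sketch; exact attributes can vary by fastai version, but each RandTransform stores its probability as p):

print([type(t).__name__ for t in dls.after_batch])  # see what's in the pipeline
flip = next(t for t in dls.after_batch if isinstance(t, Flip))
print(flip.p)  # probability the flip is applied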

As I see it, AffineCoordTfm can hold various transforms; how can I make sure there are only flips inside?

Let's look at aug_transforms; this may help more:

def aug_transforms(mult=1.0, do_flip=True, flip_vert=False, max_rotate=10., min_zoom=1., max_zoom=1.1,
                   max_lighting=0.2, max_warp=0.2, p_affine=0.75, p_lighting=0.75, xtra_tfms=None, size=None,
                   mode='bilinear', pad_mode=PadMode.Reflection, align_corners=True, batch=False, min_scale=1.):
    "Utility func to easily create a list of flip, rotate, zoom, warp, lighting transforms."
    res,tkw = [],dict(size=size if min_scale==1. else None, mode=mode, pad_mode=pad_mode, batch=batch, align_corners=align_corners)
    max_rotate,max_lighting,max_warp = array([max_rotate,max_lighting,max_warp])*mult
    if do_flip: res.append(Dihedral(p=0.5, **tkw) if flip_vert else Flip(p=0.5, **tkw))
    if max_warp:   res.append(Warp(magnitude=max_warp, p=p_affine, **tkw))
    if max_rotate: res.append(Rotate(max_deg=max_rotate, p=p_affine, **tkw))
    if min_zoom<1 or max_zoom>1: res.append(Zoom(min_zoom=min_zoom, max_zoom=max_zoom, p=p_affine, **tkw))
    if max_lighting:
        res.append(Brightness(max_lighting=max_lighting, p=p_lighting, batch=batch))
        res.append(Contrast(max_lighting=max_lighting, p=p_lighting, batch=batch))
    if min_scale!=1.: xtra_tfms = RandomResizedCropGPU(size, min_scale=min_scale, ratio=(1,1)) + L(xtra_tfms)
    return setup_aug_tfms(res + L(xtra_tfms))

So we can see a bunch of if statements: if the condition is true the transform is there, if not it's not. And (upon reading this) you see the maximum magnitudes (max_rotate, max_lighting, max_warp) get multiplied by mult (1.0 by default), so set everything you don't want to 0, leave just the one you want above 0, and you should be good to go. Does this help? (Sorry, on mobile, I can walk us through it when I'm back on the computer if it's still confusing :slight_smile: )
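For instance, a sketch that keeps only the horizontal flip (parameter names taken from the signature above):

tfms = aug_transforms(do_flip=True, flip_vert=False,
                      max_rotate=0., max_warp=0., max_lighting=0.,
                      min_zoom=1., max_zoom=1.)
# every `if` above is now falsy except do_flip, so only Flip(p=0.5) remains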

Or if someone can point me to links where I can see how to predict and store the results for multi-label classification, that would be great…
Thanks

Yeah, I read this code before to choose parameters. You missed max_zoom, which is on by default.
I just thought there's something else which can definitely tell me what is on.
Maybe it's simpler to use Dihedral(p=0.5) or Flip() instead, although I am not sure what's happening inside Dihedral.
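For what it's worth, a sketch of that alternative (Flip only mirrors horizontally, while Dihedral applies a random one of the eight flip/rotation symmetries of the square):

from fastai.vision.all import *  # fastai2.vision.all at the time of this thread

batch_tfms = [Flip(p=0.5)]        # horizontal flip only
# batch_tfms = [Dihedral(p=0.5)]  # any of the 8 dihedral symmetries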


Has anyone managed to extract the column from ColReader within the Pipeline state?

Edit:

Okay, it took a very long time to get here; I wonder if there's a better way to do this?

idx = dls.tfms.name.index('ColReader')  # locate the ColReader transform by name
tfm = dls.tfms.encodes[idx]             # grab its encodes dispatch
tfm.first().cols                        # the column(s) it was built with

I tried to build a combined model but had trouble with the text part of the architecture. I'd be interested in an example of how to build a combined model using the AWD-LSTM. I didn't understand what's happening in PoolingLinearClassifier (esp. masked_concat_pool) and whether I should keep those layers in the combined architecture.

IIRC, to actually do the metrics etc. we only use the first output (not the mask), else we'd need a custom loss each time. (To make sure, check what's being passed to the metrics during a dummy train.) Otherwise I'd take a look at the kernels here for inspiration:

(The petfinder adoption)


Hello, I'm having an issue while coding a custom Transform that needs to be applied only to the train items:

class FlipRot(Transform):
    split_idx=0  # only apply on the training split
    def encodes(self,x):
        k = np.random.randint(8)                   # pick one of the 8 dihedral symmetries
        if k in [1,3,4,7]: x = x.flip(-1)          # horizontal flip
        if k in [2,4,5,7]: x = x.flip(-2)          # vertical flip
        if k in [3,5,6,7]: x = x.transpose(-1,-2)  # swap H and W
        return x

I understand split_idx=0 would do that, but when I:

dls = DataLoaders.from_dsets(train_ds,valid_ds,test_ds,bs=bs,after_item=[FlipRot()],
                             after_batch=[IntToFloatTensor,Normalize.from_stats(*imagenet_stats),],device=default_device())

…FlipRot() does not get called at all when split_idx=0 is set. I've debugged the Transform class, and the split_idx parameter it receives in _call is None, so it doesn't perform a _do_call, i.e. it never calls encodes. If I remove split_idx=0 from the FlipRot class it does get called, but it also gets called on validation.

What's amiss?

It needs to be typed. Try:

class FlipRot(Transform):
    split_idx=0
    def encodes(self,x:TensorImage):
        ....

I'm trying to fit a multi-label classification problem with a custom loss, and I face this problem:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I found that the root cause is that the target tensor is not a plain tensor but a TensorMultiCategory, so in the end the loss becomes TensorMultiCategory(0.7609, device='cuda:0') and pytorch can't handle it. I tried to cast the target with tensor, but it just returns the same TensorMultiCategory type; casting it with Tensor throws me this:

TypeError: expected CPUTensorId (got DispatchKeySet(CUDATensorId))

I don't know how to fix this.

EDIT: I fixed it by accessing the .data and setting requires_grad=True, but I don't like this hotfix; is there a proper way?
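For reference, one way to sidestep the cast (a sketch, assuming .data strips the TensorMultiCategory subclass as described above; binary_cross_entropy_with_logits is just a stand-in for the custom loss):

import torch.nn.functional as F

def custom_loss(pred, targ):
    # unwrap the subclass before computing the loss so the result is a
    # plain tensor with a grad_fn that pytorch's backward can handle
    return F.binary_cross_entropy_with_logits(pred, targ.data.float())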

Typing was optional in my case, but I managed to find the issue: my datasets are pytorch datasets, and as such they did not have the split_idx property, which is needed. I assumed that DataLoaders.from_dsets would inject it, but it didn't. The solution was to include the split_idx property in the datasets directly. Mental reminder: mixing pytorch and fastai2 requires knowing a lot of internals of fastai2.
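In code, that fix looks something like this (a minimal hypothetical sketch; 0 marks the training split, 1 the validation split):

from torch.utils.data import Dataset

class TrainDS(Dataset):
    # fastai2 checks the dataset's split_idx to decide which transforms
    # to run; plain pytorch datasets don't carry it, so set it here
    split_idx = 0  # use 1 on the validation/test datasets
    def __init__(self, items): self.items = items
    def __len__(self):         return len(self.items)
    def __getitem__(self, i):  return self.items[i]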


Ah! That's a really useful thing to know.

I suspect mixing the two will become a lot easier in a few months, once documentation is clearer. That would be really powerful.


While working with fastai2 I hit some problems with the TensorBase class, so I implemented an alternative using composition instead of direct Tensor inheritance, and using the __torch_function__ interface introduced in pytorch 1.5.

Code is here.

The class is designed so that all of the basic tensor operations still work the same way: you can call .cuda(), index it, clone it, or add it to a torch.Tensor, and the metadata stays present while doing so, unlike with TensorBase.

It also makes it possible to type-dispatch on torch functions (not implemented, but easy to do), and subclassing works.

Also, autograd's backward works, so it fixes this closed(?) issue from fastai.

One problem is that a small number of functions are not compatible with __torch_function__; in those cases you need to pass the underlying tensor to the function.
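For anyone curious how the mechanism looks, here is a much simplified hypothetical sketch (not the linked code) of metadata preservation via __torch_function__ with composition, using the current classmethod signature:

import torch

class MetaTensor:
    "Wraps a tensor plus metadata; torch functions return wrapped results."
    def __init__(self, data, **meta):
        self.data, self.meta = torch.as_tensor(data), meta

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # unwrap MetaTensor arguments and call the real torch function
        unwrapped = [a.data if isinstance(a, MetaTensor) else a for a in args]
        res = func(*unwrapped, **kwargs)
        # re-wrap tensor results, carrying over the first wrapper's metadata
        meta = next(a.meta for a in args if isinstance(a, MetaTensor))
        return MetaTensor(res, **meta) if isinstance(res, torch.Tensor) else res

t = MetaTensor(torch.ones(3), source='img1.png')
out = torch.add(t, 1)  # dispatches through __torch_function__
print(out.meta)        # {'source': 'img1.png'} survives the op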


Would it be useful to have a callback that overfits a single batch of the training set, to check the health of your model? ShortEpochCallback() does something similar, but doesn't print the loss (and does validation too until the cutoff, I think). I've usually done this with the standard PyTorch training loop, which works fine, but maybe a callback is a bit more elegant?
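A rough, untested sketch of what that callback could look like (event and exception names from the fastai2 callback API):

from fastai.basics import *  # fastai2.basics at the time of this thread

class OverfitOneBatch(Callback):
    "Replay the first batch on every step and skip validation."
    def before_batch(self):
        if not hasattr(self, 'one_xb'):
            self.one_xb, self.one_yb = self.xb, self.yb  # cache first batch
        # replace the incoming batch with the cached one
        self.learn.xb, self.learn.yb = self.one_xb, self.one_yb
    def before_validate(self):
        raise CancelValidException()  # skip validation while overfitting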


Hi all, I just finished 2 lessons from the 2019 DL1 course and I absolutely love it so far!

Jeremy mentioned that the 2020 course and fastai2 will be released "a couple of weeks" before Aug 4 - should I continue with the 2019 course for now, or do you think it's better to wait for the new version? Also, is there an expected release date for the course?

Apologies if this isn't the right place to post; this is my first time on the forums. Thanks in advance :smiley:

I had a similar query.
I wanted to do the course, but I heard that course-v4 will be released soon. Should I wait for it or do course-v3?

You have a few options. Yes, you can watch the old course for the knowledge/idea fundamentals, then do course-v4 here soon. Otherwise, to get a head start on v2 there are my walkthroughs, which cover a few things the course won't touch, but they're more about implementations than fundamentals (see the pinned threads on the v2 chat topic).

Personally, I'd recommend part 1 or mine until the new version comes out, as both are very intro-friendly, and watching a somewhat older course will certainly not do you wrong :slight_smile: (old being relative)


I've been in a similar quandary. I decided to follow the as-yet-unreleased book, which is based on the v2 course. It's a series of notebooks which should mirror the new course. It took a bit of know-how at the command line to get everything going (in particular, installing the fastai v2 libraries and resolving a few package conflicts).

There's something I don't get about the Precision and Recall metrics in v2. It looks like there is a configurable pos_label, i.e. Precision(pos_label=1) vs Precision(pos_label=0).

I'm doing a simple binary classification, and the Precision for pos_label=0 is not the same as that for pos_label=1. Same for Recall.

Is that supposed to be the case? In binary classification I only have 2 classes; how can you calculate different Precisions and Recalls?
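A toy example (plain Python, not fastai code) of why they differ: precision is measured relative to whichever class you call positive, so swapping pos_label changes which errors count.

preds = [1, 1, 0, 0]
targs = [1, 0, 0, 0]

def precision(preds, targs, pos_label):
    tp = sum(p == pos_label and t == pos_label for p, t in zip(preds, targs))
    predicted_pos = sum(p == pos_label for p in preds)
    return tp / predicted_pos

print(precision(preds, targs, pos_label=1))  # 1/2 = 0.5
print(precision(preds, targs, pos_label=0))  # 2/2 = 1.0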