Lesson 11 discussion and wiki

I’ve been using fastprogress even when I am not using fastai, and it’s flexible and works very well.


Looks really nice!

It is easy to evaluate the output of augmentation for images. How would you do augmentation for tabular data, text, or time series?


Read your texts, or look at your data. You have to judge whether the augmentation produces something that will help your model or not.


Isn’t putting arbitrary _order numbers a bit obscure? If you want to add a new one, you’d have to look up all possibly relevant functions/classes and their _order, right? Is it documented somewhere?


What does one do with high-res images, which is typically the case? Resize to 224 by 224 or 128 by 128 (information loss)? Or does one resize to whatever size can fit into the GPU?

It’s not documented in this case because this isn’t a library. In fastai, transforms have an internal order linked to their class (pixel/crop/lighting/affine/coord…).

The notional point of data augmentation is effectively to increase your training (and validation?) data volume, right?

If you crop the photo and resize, do you leave all the pixels outside of the original photo as black pixels? Doesn’t that bias the model training?

Training volume primarily. Unless you’re using TTA, there is no data augmentation at inference.
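TTA (test-time augmentation) just means averaging the model’s predictions over several augmented copies of the input. A minimal sketch, where the model and transforms are toy stand-ins rather than fastai’s API:

```python
import numpy as np

# Sketch of test-time augmentation (TTA): run the model on several
# augmented copies of the input and average the predictions.
def tta_predict(model, x, tfms):
    preds = [model(t(x)) for t in tfms]
    return np.mean(preds, axis=0)

model = lambda x: x.sum()                    # toy "model"
tfms = [lambda x: x, np.flipud, np.fliplr]   # identity + vertical/horizontal flip
x = np.arange(4.0).reshape(2, 2)
print(tta_predict(model, x, tfms))           # flips don't change the sum -> 6.0
```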


So if you want to add a new custom transform to fastai, how would you define where it should be?

You can use a few different approaches. By default I think fastai actually mirrors the nearby pixels, so they’re not black.

See: https://docs.fast.ai/vision.transform.html#_pad


Jeremy will show how to use reflection to fill in the black pixels in a moment.
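The reflection idea can be sketched with NumPy: instead of filling the area outside the crop with zeros (black), mirror the pixels near the edge.

```python
import numpy as np

img = np.arange(9).reshape(3, 3)
black  = np.pad(img, 1, mode="constant")  # zeros at the border (black pixels)
mirror = np.pad(img, 1, mode="reflect")   # nearby pixels mirrored instead
print(mirror)
```

This is the same difference as fastai’s padding modes for transforms (zeros vs. reflection).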

No, you crop to, say, 256 by 256, then resize that to 128 by 128. There are no black pixels.
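A minimal sketch of crop-then-resize with NumPy (nearest-neighbour downsampling for simplicity; the helper names are illustrative, not fastai’s):

```python
import numpy as np

def center_crop(img, size):
    # take a size x size crop from the middle of the image
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def resize_nearest(img, size):
    # nearest-neighbour downsample to size x size
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
out = resize_nearest(center_crop(img, 256), 128)
print(out.shape)  # (128, 128, 3)
```

Because the crop stays inside the original image, the resize never has to invent border pixels.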


In current fastai, each transform has a class like I said: TfmPixel, TfmCrop, etc.
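The course notebooks use the _order mechanism roughly like this minimal sketch (class names and values here are illustrative): each transform carries an _order class attribute, and the pipeline sorts by it before applying.

```python
class Transform:
    _order = 0

class Double(Transform):
    _order = 5
    def __call__(self, x): return x * 2

class AddOne(Transform):
    _order = 10
    def __call__(self, x): return x + 1

def compose(x, tfms):
    # sort by _order so transforms run in a fixed sequence,
    # regardless of the order they were passed in
    for t in sorted(tfms, key=lambda t: t._order):
        x = t(x)
    return x

# Double (_order=5) runs before AddOne (_order=10) either way:
print(compose(3, [AddOne(), Double()]))  # -> 7
```

So a new custom transform just picks an _order that places it before or after the existing ones.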


Just realized that Jeremy is wearing a checkered shirt, was that on purpose to make a point on the transforms?


Agreed. I think it could be an enumeration instead, like order.PROGRESS_BAR, order.AUGMENT, etc., or maybe some other “order groups”.
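That enumeration idea could be sketched with an IntEnum; the group names and values below are illustrative, not anything fastai defines:

```python
from enum import IntEnum

class Order(IntEnum):
    # named order groups instead of magic numbers
    PIXEL = 0
    CROP = 10
    LIGHTING = 20
    AFFINE = 30

class MyTransform:
    # a new transform slots in relative to a named group
    _order = Order.CROP + 1  # run just after the crop transforms

print(int(MyTransform._order))  # 11
```

IntEnum members still compare and sort as plain ints, so the existing sorted-by-_order pipeline would keep working.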

Is it 5 min per epoch (not batch)?

Yes, 5 minutes for a whole epoch.

LAMB is based on LARS, which was designed for ImageNet. When they tried to apply it to much larger models (like BERT), they needed the new tricks that turn LARS into LAMB. I’m not sure if there are places where it would work less well than Adam; I’m also interested in the answer. The paper doesn’t discuss trying to use LAMB on ImageNet.