PyTorch-based (GPU-accelerated) 3D data augmentation

Hello,

I’ve been googling around trying to find a good library for 3D data augmentation, but it seems most of them rely on NumPy/ITK underneath.

Are you aware of, or can you recommend, a 3D data augmentation library based only (or mostly) on PyTorch, so it can be GPU accelerated?

(By the way, are there any plans to add better support for 3D data in fastai? :innocent:)

NB edit: I’m talking about 3D volumes/images (e.g. medical), not 3D objects.


Hi,
I was interested in the same question a few weeks ago, but I haven’t found anything really good.
What I did find is that you can easily implement things like permutations of D, H, W or flipping along axes yourself using PyTorch’s permute() and flip().
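For instance, a minimal sketch of such flips and axis permutations (the function names and parameters here are my own, not from any library):

```python
import torch

def random_flip3d(vol: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    # vol: (C, D, H, W); flip each spatial axis independently with probability p
    for axis in (1, 2, 3):
        if torch.rand(1).item() < p:
            vol = torch.flip(vol, dims=[axis])
    return vol

def random_spatial_permute(vol: torch.Tensor) -> torch.Tensor:
    # randomly permute the D, H, W axes; the channel axis stays first
    # (only shape-preserving for cubic volumes)
    order = (torch.randperm(3) + 1).tolist()
    return vol.permute(0, *order)
```

Both work unchanged on CUDA tensors, so they run on the GPU if the input volume is already there.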
Another thing I found is that you cannot easily implement the more complex transforms directly in PyTorch, mostly for one simple reason: the grid_sample() method in PyTorch does not support trilinear interpolation (yet; there are a lot of open issues on this).

I would assume that for fancier augmentation you would want to construct a morphed/rotated/whatever index volume and then sample the original volume with it using grid_sample(). Off the top of my head that seems to be an efficient approach, and it also lines up with how data augmentation is done in fastai: for images it first computes an index image, then applies all the transforms, then samples accordingly (at least it was like this some time ago :D).
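For what it’s worth, more recent PyTorch releases do accept 5D inputs in affine_grid() and grid_sample(), where mode='bilinear' performs trilinear interpolation on volumes, so the index-volume approach described above can be sketched roughly like this (function name and parameters are my own assumptions, not any library’s API):

```python
import math
import torch
import torch.nn.functional as F

def random_rotation3d(vol: torch.Tensor, max_degrees: float = 10.0) -> torch.Tensor:
    # vol: (N, C, D, H, W) float tensor; rotate randomly about the depth axis
    # and resample; on 5D inputs mode='bilinear' means trilinear interpolation
    angle = math.radians(torch.empty(1).uniform_(-max_degrees, max_degrees).item())
    c, s = math.cos(angle), math.sin(angle)
    theta = torch.tensor([[c,  -s,  0.0, 0.0],
                          [s,   c,  0.0, 0.0],
                          [0.0, 0.0, 1.0, 0.0]],
                         dtype=vol.dtype, device=vol.device)
    theta = theta.unsqueeze(0).expand(vol.size(0), -1, -1)  # (N, 3, 4)
    grid = F.affine_grid(theta, list(vol.shape), align_corners=False)
    return F.grid_sample(vol, grid, mode='bilinear', align_corners=False)
```

Since theta, grid, and vol all live on the same device, this runs entirely on the GPU when the volume is a CUDA tensor.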

I would stick with the existing CPU libraries for now; once the sampling is supported nicely in PyTorch, bringing some of this onto the GPU will be doable. As for which libraries, I have not tested many of them thoroughly, but I had some good experiences with torchio, which is already based on PyTorch for many parts.
Depending on what kind of augmentations you need, you could probably do some on the GPU easily by using CUDA tensors here, but I would not assume that this gives a tremendous benefit for all kinds of augmentations, though it might for some.


Do you have a simple example of how you’d use torchio within fastai (v1)?
For instance, what would be the most elegant way of adding torchio’s RandomAffine after creating my label lists?

Unfortunately I have no example, since I was using it without fastai.
However, as far as I can tell, torchio’s transforms can be called like a function, and if I remember correctly a fastai Transform can be initialized with anything callable, so I would try

    tfms = [..., Transform(RandomAffine(...))]

and pass the transforms on as usual.

Hi all,
I’m the creator of torchio. I have almost no experience with fastai. @ChoJin, can you point me to an example where you’d like to include a torchio transform?

Hello,

It’s hard to share my exact code because I have a bunch of custom stuff for 3D volumes, but as an example we could take:

which is doing 2D segmentation.
In the dataset section, my code is similar to

    src = (SegmentationItemList.from_folder(path_img)
           .split_by_fname_file('../valid.txt')
           .label_from_func(get_y_fn, classes=codes))

After that line you can see that fastai applies its own transforms:

    data = (src.transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=bs)
            .normalize(imagenet_stats))

I’d like to call torchio instead.

I tried

    _rndaffine = RandomAffine(scales=(0.9, 1.1), degrees=10)
    rndaffine = Transform(_rndaffine)

but it doesn’t work, because fastai seems to expect a callable with __name__ and __annotations__:

    ~/Documents/Dev/miniconda3/envs/fastai10/lib/python3.7/site-packages/fastai/vision/image.py in __init__(self, func, order)
        458         if order is not None: self.order=order
        459         self.func=func
    --> 460         self.func.__name__ = func.__name__[1:] #To remove the _ that begins every transform function.
        461         functools.update_wrapper(self, self.func)
        462         self.func.__annotations__['return'] = Image

    AttributeError: 'RandomAffine' object has no attribute '__name__'

Also, I don’t think parse_sample() from torchio is going to be compatible with fastai code?


I just tried to figure out what the Learner object expects in the data argument, but it’s hard to navigate the fast.ai code with all those from X import * statements.

parse_sample tries to make sure that the sample has been generated by a torchio.ImagesDataset. The main issue with medical images is that orientation, voxel spacing, etc. must often be taken into account when applying the transforms.

fast.ai is nice to learn and to try experimental stuff, but at some point, you usually want to go lower level and understand a bit better what’s going on under the hood. I suggest you switch to pure PyTorch, letting torchio take care of the I/O. Here’s a notebook including training a 3D U-Net for brain segmentation.
