So, you want to create your custom pipeline with fastai

(Bobak Farzin) #23

I don’t understand your question. you have a 7 channel image? Maybe if you post an example it would be clear what you are trying to do and I could help out.

(Ayush) #24

I have 7 channels as input to the CNN. The dataset is similar to the one used in the DSTL competition. For one output mask there are 7 channels as input. It is basically a segmentation problem.

(Ayush) #25

@bfarzin Basically I have a (7 * size * size) input and I want to pass it to a U-Net to get (n_classes * sz * sz) output masks. How do I pass this to fastai? While using standard PyTorch I implemented basic transformations like horizontal flips and 90-degree rotations using NumPy. Will the transforms of fastai work for 7-channel input? If not, how can I pass my own transforms to fastai?

(Bobak Farzin) #26

This is outside the scope of my experience, really. I would suggest breaking out as a new topic in the forums and see if there is broader help. I suspect you can do all the transforms in fastai, but I have not spent a ton of time working on that part of the library.

I think you are discussing these two lines:

        #channel 4 is all 255, drop it
        label = cv2.resize(img[:,:,:3],(int(img.shape[1]/8),int(img.shape[0]/8)),interpolation = cv2.INTER_CUBIC)*64

Is that correct? Here I am just saving space, since the images are only three-channel (RGB). I don’t see why you couldn’t have it be N channels, but I have never tried that. Did you just try to allow all channels? Does something break?

(Ayush) #27

I think the transforms are only defined for channels <= 3, so this isn’t working. I have my own dataset with transforms available. All I need to know is how I can pass it to fastai. I guess I just need fit_one_cycle on a custom model and a custom dataset. Could you help me with that?

(Ayush) #28

Here is the function that performs the transforms:

import random
import numpy as np

def applytransform(x, y, mode):
    "Randomly flip/rotate each channel of `x` and the mask `y` (train mode only)."
    if mode == "val":
        return x, y
    if random.random() >= 0.5:
        for i in range(len(x)): x[i] = np.fliplr(x[i])
        y = np.fliplr(y)
    if random.random() >= 0.5:
        for i in range(len(x)): x[i] = np.flipud(x[i])
        y = np.flipud(y)
    if random.random() >= 0.5:
        k = random.randint(1, 3)  # number of 90-degree rotations
        for i in range(len(x)): x[i] = np.rot90(x[i], k)
        y = np.rot90(y, k)
    return x, y

I am going through each channel and applying the transformation to both the image and the mask.
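As a side note: since all channels share the same spatial axes, the per-channel loop can be replaced by a single vectorized flip over the whole (channels, height, width) stack. A minimal sketch (my own illustration, not from the thread):

```python
import numpy as np

# Hypothetical 7-channel 4x4 image stack, laid out (C, H, W).
x = np.arange(7 * 4 * 4).reshape(7, 4, 4)

h_flipped = np.flip(x, axis=2)             # horizontal flip (fliplr per channel)
v_flipped = np.flip(x, axis=1)             # vertical flip (flipud per channel)
rotated   = np.rot90(x, k=1, axes=(1, 2))  # rotate every channel by 90 degrees

# The vectorized result matches the per-channel loop exactly:
loop = np.stack([np.fliplr(c) for c in x])
assert (h_flipped == loop).all()
```

The same calls work on the mask (drop the channel axis, i.e. `axis=1`/`axis=0` for a (H, W) array), so image and mask stay aligned.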

(Bobak Farzin) #29

Here is one idea. You could extend the TensorDataset with your own __getitem__ that can pull your own transforms (from your own data) rather than having the data be static. From there, it should all flow, but I have not tried this myself. Let me know if that works for you.
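To make that idea concrete, here is a minimal sketch (my own illustration, untested against the thread’s data) of a dataset whose `__getitem__` applies a random transform lazily, so nothing transformed is stored up front. In real use the class would subclass `torch.utils.data.Dataset`; plain Python plus NumPy is used here so the example stands alone:

```python
import numpy as np

class LazyTransformDataset:
    "Applies a random flip on access, so no transformed copies are stored."
    def __init__(self, images, masks, mode="train"):
        self.images, self.masks, self.mode = images, masks, mode

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        x, y = self.images[i], self.masks[i]
        if self.mode == "train" and np.random.rand() >= 0.5:
            # flip the (C, H, W) image stack on the width axis, and the
            # (H, W) mask on its matching axis, so they stay aligned
            x = np.flip(x, axis=2).copy()
            y = np.flip(y, axis=1).copy()
        return x, y

# 4 samples of 7-channel 32x32 images with 32x32 masks
ds = LazyTransformDataset(np.zeros((4, 7, 32, 32)), np.zeros((4, 32, 32)))
x, y = ds[0]
```

Because the transform runs inside `__getitem__`, every epoch sees a freshly augmented sample without the whole augmented dataset living in memory.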

(Ayush) #30

@bfarzin Okay, I will try that. But using TensorDataset, does it mean I must have all the tensors in memory first? I have already created something that returns the final tensors after transformation.

(Ayush) #31


class GetTransformedSet(Dataset):
    def __init__(self, root, img_list, mask_list, transforms=None, mode="train"):
        self.root, self.img_list, self.mask_list = root, img_list, mask_list
        self.transforms, self.mode = transforms, mode
    def __len__(self):
        return len(self.img_list)
    def __getitem__(self, i):
        img, mask = open_multiple_channel_img(self.img_list[i], self.mask_list[i], self.mode)
        img = img.type('torch.FloatTensor')
        return img, mask

def open_multiple_channel_img(file_name, mask_path, mode):
    # Read the seven per-channel .tif files, "<file_name>_1.tif" .. "<file_name>_7.tif".
    # (The original reading call was lost in the paste; cv2.imread is one option.)
    images = [cv2.imread(file_name + "_" + str(i) + ".tif", cv2.IMREAD_UNCHANGED) for i in range(1, 8)]
    mask = cv2.imread(mask_path, cv2.IMREAD_UNCHANGED)
    images, masks = applytransform(np.stack(images, axis=0), mask, mode)
    return torch.from_numpy(images.copy()), torch.from_numpy(masks.copy())


I am trying to create a subclass of ImageList to override the open method.
fastai’s open_image opens an image using PIL and returns a tensor image. I want to apply a preprocessing step to every image in my dataset to normalize the stain color of histopathology slide images, and then follow the normal fastai pipeline (e.g. applying transformations for data augmentation).
However, while creating the ImageDataBunch, an Image is being passed as fn instead of a filename, and I am getting the following error.

Image (3, 96, 96)

AssertionError                    Traceback (most recent call last)
<ipython-input-24-8f1bfcf810dc> in <module>
      2                       max_lighting=0.05, max_warp=0)
----> 4 data = src.transform(tfms, size=sz, resize_method=ResizeMethod.SQUISH)
      6 # data.normalize(imagenet_stats);

~/.conda/envs/ankush/lib/python3.6/site-packages/fastai/ in transform(self, tfms, **kwargs)
    485         if not tfms: tfms=(None,None)
    486         assert is_listy(tfms) and len(tfms) == 2, "Please pass a list of two lists of transforms (train and valid)."
--> 487         self.train.transform(tfms[0], **kwargs)
    488         self.valid.transform(tfms[1], **kwargs)
    489         if self.test: self.test.transform(tfms[1], **kwargs)

~/.conda/envs/ankush/lib/python3.6/site-packages/fastai/ in transform(self, tfms, tfm_y, **kwargs)
    701     def transform(self, tfms:TfmList, tfm_y:bool=None, **kwargs):
    702         "Set the `tfms` and `tfm_y` value to be applied to the inputs and targets."
--> 703         _check_kwargs(self.x, tfms, **kwargs)
    704         if tfm_y is None: tfm_y = self.tfm_y
    705         if tfm_y: _check_kwargs(self.y, tfms, **kwargs)

~/.conda/envs/ankush/lib/python3.6/site-packages/fastai/ in _check_kwargs(ds, tfms, **kwargs)
    572     if (tfms is None or len(tfms) == 0) and len(kwargs) == 0: return
    573     if len(ds.items) >= 1:
--> 574         x = ds[0]
    575         try: x.apply_tfms(tfms, **kwargs)
    576         except Exception as e:

~/.conda/envs/ankush/lib/python3.6/site-packages/fastai/ in __getitem__(self, idxs)
    104     def __getitem__(self,idxs:int)->Any:
    105         idxs = try_int(idxs)
--> 106         if isinstance(idxs, Integral): return self.get(idxs)
    107         else: return[idxs], inner_df=index_row(self.inner_df, idxs))

<ipython-input-11-a346a42416b7> in get(self, i)
     14     def get(self, i):
     15         fn = super().get(i)
---> 16         res =
     17         self.sizes[i] = res.size
     18         return res

<ipython-input-11-a346a42416b7> in open(self, fn)
     10     def open(self, fn):
     11         "Open image in `fn`, subclass and overwrite for custom behavior."
---> 12         return open_image(fn, normalizer=self.normalizer)
     14     def get(self, i):

<ipython-input-10-88cb1fc4982e> in open_image(fn, normalizer, div, cls)
     15     "Return `Image` object created from image in file `fn`."
     16     print(str(fn))
---> 17     x = staintools.read_image(str(fn))
     18     x = staintools.LuminosityStandardizer.standardize(x)
     19     normalized_x = normalizer.transform(x)

~/.conda/envs/ankush/lib/python3.6/site-packages/staintools/preprocessing/ in read_image(path)
     10     :return: RGB uint8 image.
     11     """
---> 12     assert os.path.isfile(path), "File not found"
     13     im = cv.imread(path)
     14     # Convert from cv2 standard of BGR to our convention of RGB.
~/.conda/envs/ankush/lib/python3.6/ in isfile(path)
     28     """Test whether a path is a regular file"""
     29     try:
---> 30         st = os.stat(path)
     31     except OSError:
     32         return False

TypeError: stat: path should be string, bytes, os.PathLike or integer, not Image

Here is my custom open_image function and HistopathImageList class:

_img = staintools.read_image(str(ref_img))
_img_standardize = staintools.LuminosityStandardizer.standardize(_img)
normalizer = staintools.StainNormalizer(method='vahadane')
normalizer.fit(_img_standardize)  # fit to the reference image before transform

def open_image(fn:PathOrStr, normalizer=normalizer, div:bool=True, cls:type=Image)->Image:
    "Return `Image` object created from image in file `fn`."
    print(str(fn))   #Printing fn to check what is being passed to read_image
    x = staintools.read_image(fn)
    x = staintools.LuminosityStandardizer.standardize(x)
    normalized_x = normalizer.transform(x)
    normalized_x = pil2tensor(normalized_x,np.float32)
    if div: normalized_x.div_(255)
    return cls(normalized_x)

class HistopathImageList(ImageList):
    "`ItemList` suitable for histopathology images."
    _bunch,_square_show,_square_show_res = ImageDataBunch,True,True

    def __init__(self, *args, normalizer=normalizer, **kwargs):
        super().__init__(*args, **kwargs)
        self.normalizer = normalizer
        self.c,self.sizes = 3,{}

    def open(self, fn):
        "Open image in `fn`, subclass and overwrite for custom behavior."
        return open_image(fn, normalizer=self.normalizer)

    def get(self, i):
        fn = super().get(i)
        res = self.open(fn)
        self.sizes[i] = res.size
        return res

Thanks in advance. Any help is appreciated!
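For what it’s worth, the traceback suggests the problem is in `get` itself: in fastai v1, `ImageList.get` already calls `open`, so `super().get(i)` inside the subclass returns an opened `Image` rather than a filename, which is then fed to `open` a second time. A minimal, dependency-free mimic of the call chain (class names reused for illustration only, not the real fastai classes):

```python
# Stand-in classes that mimic the fastai v1 call chain, to show why `fn`
# arrives as an Image instead of a path.
class ItemList:
    def __init__(self, items): self.items = items
    def get(self, i): return self.items[i]              # returns the raw item (a path)

class ImageList(ItemList):
    def open(self, fn): return ("Image", fn)            # stands in for open_image
    def get(self, i): return self.open(super().get(i))  # already opens the image

class HistopathImageList(ImageList):
    def get(self, i):
        # BUG in the posted code: fn = super().get(i) resolves to
        # ImageList.get, which has already opened the image, so `fn` is an
        # Image, not a path, when it reaches open() a second time.
        # FIX: read the raw filename straight from self.items instead.
        fn = self.items[i]
        return self.open(fn)

il = HistopathImageList(["slide_0.tif"])
```

With the fix, `il.get(0)` opens `"slide_0.tif"` exactly once; with `fn = super().get(i)` it would try to open an already-opened Image, reproducing the `TypeError` above.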