PILBase.create fails to work after being patched and wrapped in Transform

I want to extend the functionality of PILBase.create so that it can read a list of image paths (for images with more than 3 channels). So naturally I tried patching to achieve this (the snippet assumes the usual fastai star import, e.g. from fastai.vision.all import *):

@patch(cls_method=True)
def create(cls: PILBase, fn:(Path,str,Tensor,ndarray,bytes,list), **kwargs)->None:
    if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
    if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
    if isinstance(fn,Tensor): fn = fn.numpy()

    # handle list of images for 4 channels
    if isinstance(fn, list):
        channel_imgs = [Image.open(img_path) for img_path in fn]
        fn = np.stack(channel_imgs, axis = -1)

    # return PILImage object for consistency
    if isinstance(fn,ndarray): 
        return cls(Image.fromarray(fn))
    if isinstance(fn,bytes): fn = io.BytesIO(fn)
    return cls(load_image(fn, **merge(cls._open_args, kwargs)))

However, something strange happens after I patch PILBase.create and then wrap it in Transform:

path = Path('../hpa-2019-data/external-data/external_512/10580_1610_C1_1_red.jpg')
after_func = Transform(PILBase.create)
after_func(path)

>> Path('../hpa-2019-data/external-data/external_512/10580_1610_C1_1_red.jpg')

As shown above, after_func should have returned a PILImage instance but ended up returning the input itself. When I further inspect the repr of the transformed function (which lists its registered encodes), I get the following, which doesn't seem right:

after_func
>>> PILBase.create:
encodes: (PILBase,Path) -> create
(PILBase,str) -> create
(PILBase,Tensor) -> create
(PILBase,ndarray) -> create
(PILBase,bytes) -> create
(PILBase,list) -> createdecodes: 

From my understanding, the printout for a correctly transformed function should look like this:

PILBase.create:
encodes: (Path,object) -> create
(str,object) -> create
(Tensor,object) -> create
(ndarray,object) -> create
(bytes,object) -> createdecodes: 

It is important for my patched PILBase.create to work properly after being wrapped by Transform, because that is exactly what happens inside DataBlock. I have no clue what is causing the issue. It would be great if someone could shed some light on it!

Hi @riven314. Use the version below instead. I'll try to put in a PR to fix an error in patch so that it removes the type hint on cls when classmethods are patched with cls_method=True.

@patch_to(PILBase, cls_method=True)
def create(cls, fn:(Path,str,Tensor,ndarray,bytes,list), **kwargs)->None:
    if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
    if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
    if isinstance(fn,Tensor): fn = fn.numpy()

    # handle list of images for 4 channels
    if isinstance(fn, list):
        channel_imgs = [Image.open(img_path) for img_path in fn]
        fn = np.stack(channel_imgs, axis = -1)

    # return PILImage object for consistency
    if isinstance(fn,ndarray): 
        return cls(Image.fromarray(fn))
    if isinstance(fn,bytes): fn = io.BytesIO(fn)
    return cls(load_image(fn, **merge(cls._open_args, kwargs)))
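With this version the first type hint on the copied method is the one on fn, so TypeDispatch keys on Path/str/Tensor/... as intended. A quick check mirroring the earlier snippet (same example path as above) would look something like this:

path = Path('../hpa-2019-data/external-data/external_512/10580_1610_C1_1_red.jpg')
after_func = Transform(PILBase.create)
img = after_func(path)
type(img)   # now a PILBase image instance rather than the untouched Path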

@Tendo
Thanks! Awesome, it works now!
I would like to understand why patch_to works but patch doesn't. I tried to inspect the source code:

def patch(f=None, *, as_prop=False, cls_method=False):
    "Decorator: add `f` to the first parameter's class (based on f's type annotations)"
    if f is None: return partial(patch, as_prop=as_prop, cls_method=cls_method)
    cls = next(iter(f.__annotations__.values()))
    return patch_to(cls, as_prop=as_prop, cls_method=cls_method)(f)

If I understand it correctly, it seems like cls = next(iter(f.__annotations__.values())) is causing the issue here; it doesn't seem to be handling the class annotation the way I expected.

The detail is actually in the code block below.
The patch function depends on patch_to. From the code, one of the things patch_to does is take all the type hints of the function being patched, f, and attach them to the new method nf. The bug, however, occurs with classmethods: when using patch, the class to be patched (in our case PILBase) has to be supplied as a type hint on the cls argument that every classmethod takes. We therefore end up sending the hint on cls, in addition to those of f's real arguments, into the new method nf. The fix is to remove the cls annotation from the wrapped method inside the patch function before sending it to patch_to. Let me know if you need more clarification.

def patch_to(cls, as_prop=False, cls_method=False):
    "Decorator: add `f` to `cls`"
    if not isinstance(cls, (tuple,list)): cls=(cls,)
    def _inner(f):
        for c_ in cls:
            nf = copy_func(f)
            nm = f.__name__
            # `functools.update_wrapper` when passing patched function to `Pipeline`, so we do it manually
            for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o))
            nf.__qualname__ = f"{c_.__name__}.{nm}"
            if cls_method:
                setattr(c_, nm, MethodType(nf, c_))
            else:
                setattr(c_, nm, property(nf) if as_prop else nf)
        # Avoid clobbering existing functions
        return globals().get(nm, builtins.__dict__.get(nm, None))
    return _inner
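For concreteness, here is a minimal sketch of that idea, applied to the patch function quoted earlier. It is only an illustration of the approach, not necessarily the exact code in the PR:

def patch(f=None, *, as_prop=False, cls_method=False):
    "Decorator: add `f` to the first parameter's class (based on f's type annotations)"
    if f is None: return partial(patch, as_prop=as_prop, cls_method=cls_method)
    cls = next(iter(f.__annotations__.values()))
    # sketch of the fix: drop the `cls` hint so it cannot leak into the copied
    # method's annotations (and from there into TypeDispatch)
    if cls_method: f.__annotations__.pop('cls', None)
    return patch_to(cls, as_prop=as_prop, cls_method=cls_method)(f)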

Here is a PR that fixes it: https://github.com/fastai/fastcore/pull/309


Really appreciate your prompt help! I think I need some time to wrap my head around this.

Could I say that the core issue arises because Transform is applied to a patched function whose type annotations (i.e. function.__annotations__.values()) have been contaminated by the class it was patched to, so we need to remove that class from the type annotations?

Exactly. So in what you did earlier, the method has an extra type hint for cls, which is PILBase.

This next(iter(f.__annotations__.values())) just gets the type hint of the first argument of the function f, which is PILBase.

I'd suggest you look at https://github.com/fastai/fastcore/blob/5975fb9acf8cfd4b720942b2cc08a799c1501618/fastcore/transform.py#L65 to see why it doesn't work when it is put into a Transform. The TypeDispatch inside Transform expects an uncorrupted function f (i.e. one without cls in its annotations).
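To see that behaviour in isolation, here is a tiny sketch with a toy function (assuming only from fastcore.transform import Transform); it shows how a Transform dispatches on the first parameter's annotation and silently passes unmatched inputs through:

from fastcore.transform import Transform

def square(x: int): return x * x   # first (and only) type hint: int

t = Transform(square)
t(4)       # 16 -- the int input matches the `x: int` hint, so `square` runs
t('oops')  # 'oops' -- no matching type, so the input is returned unchanged

# In the broken case above, the first hint on create was `cls: PILBase`, so a
# Path argument never matched and simply came back untouched.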

Umm
On the other hand, I am wondering why setting cls_method=True would make the difference here. Even if I set cls_method=False, the first entry in the type hints is still PILBase.
And I think the function would still contain PILBase in its type annotations.

So can I say that even if I am not patching it as a classmethod, the bug will still persist?

You could say that. I usually use patch_to when I have functions such as create whose first real argument already has type hints, for example fn:(Path,str,Tensor,ndarray,bytes,list). This avoids the complication of the PILBase annotation you have to put on cls (or self) when using plain patch, which then sits in front of fn's hints. That complication is especially noticeable when you use Transform on patched methods, as you have, because of the way TypeDispatch works; the comparison below shows the difference.
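A quick way to see the difference is to compare the annotations that end up on the patched method in both cases (a rough sketch, using the two create definitions from this thread):

# After @patch(cls_method=True) with `def create(cls: PILBase, fn:(...), ...)`:
list(PILBase.create.__func__.__annotations__)
# ['cls', 'fn', 'return']  -- `cls` comes first, so TypeDispatch keys on PILBase

# After @patch_to(PILBase, cls_method=True) with an un-annotated `cls`:
list(PILBase.create.__func__.__annotations__)
# ['fn', 'return']         -- `fn` comes first, so dispatch keys on Path/str/Tensor/...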


Thanks for your detailed explanation!
I got some time to read through the documentation of TypeDispatch, and now I finally see what is going wrong here :smile:

If I had understood how TypeDispatch works under the hood earlier, I would have been able to spot the underlying issue from what was printed for PILBase.create, i.e.:

after_func
>>> PILBase.create:
encodes: (PILBase,Path) -> create
(PILBase,str) -> create
(PILBase,Tensor) -> create
(PILBase,ndarray) -> create
(PILBase,bytes) -> create
(PILBase,list) -> createdecodes: 

It hinted that TypeDispatch was not working as expected, because the argument cls (type-annotated as PILBase) was mistakenly treated as the first argument to dispatch on, when it should have been recognised as part of a classmethod's signature. One would expect fn to be treated as the first argument (i.e. Path, str, Tensor, etc. to appear in the first entry of each tuple), which is not the case here.

It indicated that the PILBase annotation added to PILBase.create during patching was mistakenly propagated into the TypeDispatch.

Exactly! You’ve gotten it