Help: reading fastcore and fastai docs

I am currently exploring the fastcore library and documenting my learnings along the way in a different post. However, each update I post there is a bit too long for many to read, and my questions about the docs and sources can get buried too.

So, I would like this post to be exclusively about asking questions and getting help on reading and experimenting with the fastcore library.

If you are interested, please feel free to ask questions and help out on anything related to the docs and source code of the fastcore library.

Milestones
2022.9.26 - finished exploring and documenting fastcore.meta, and my little helper fastdebug has survived its first test :sweat_smile: Any questions related to fastcore.meta are welcome; I want to put my understanding of it to the test :grin:

2022.10.18 - started reading fastai 00_Torch_core.ipynb


Does anno_dict do anything more or new compared with __annotations__?

from fastcore.meta import *
from fastcore.test import *
import inspect

anno_dict docs

inspect.getdoc(anno_dict)
"`__annotation__ dictionary with `empty` cast to `None`, returning empty if doesn't exist"

I have to confess I don’t understand this docstring very well. So, I looked into the source code of anno_dict and empty2none.

print(inspect.getsource(anno_dict))
def anno_dict(f):
    "`__annotation__ dictionary with `empty` cast to `None`, returning empty if doesn't exist"
    return {k:empty2none(v) for k,v in getattr(f, '__annotations__', {}).items()}
print(inspect.getsource(empty2none))
def empty2none(p):
    "Replace `Parameter.empty` with `None`"
    return None if p==inspect.Parameter.empty else p

Dive in

If a parameter’s default value is Parameter.empty, then empty2none replaces it with None. So, I think it is reasonable to assume p is primarily meant to be a parameter’s default value. The cell below supports this assumption.

def foo(a, b:int=1): pass
sig = inspect.signature(foo)
for k,v in sig.parameters.items():
    print(f'{k} is a parameter {v}, whose default value is {v.default}; \
applying empty2none to the default value gives {empty2none(v.default)}')
    print(f'{k} is a parameter {v}, whose default value is {v.default}; \
applying empty2none to the parameter itself gives {empty2none(v)}')
a is a parameter a, whose default value is <class 'inspect._empty'>; applying empty2none to the default value gives None
a is a parameter a, whose default value is <class 'inspect._empty'>; applying empty2none to the parameter itself gives a
b is a parameter b: int = 1, whose default value is 1; applying empty2none to the default value gives 1
b is a parameter b: int = 1, whose default value is 1; applying empty2none to the parameter itself gives b: int = 1

So, what is odd is that in anno_dict, empty2none is applied to v, which is not a parameter’s default value but mostly classes like int, list, etc., as found in __annotations__.

Then I experimented with the section below and didn’t find anno_dict doing anything beyond __annotations__.

anno_dict doesn’t seem to add anything new to __annotations__

def foo(a, b:int=1): pass
test_eq(foo.__annotations__, {'b': int})
test_eq(anno_dict(foo), {'b': int})
def foo(a:bool, b:int=1): pass
test_eq(foo.__annotations__, {'a': bool, 'b': int})
test_eq(anno_dict(foo), {'a': bool, 'b': int})
def foo(a, d:list, b:int=1, c:bool=True): pass
test_eq(foo.__annotations__, {'d': list, 'b': int, 'c': bool})
test_eq(anno_dict(foo), {'d': list, 'b': int, 'c': bool})
from fastcore.foundation import L
def foo(a, b): pass
test_eq(foo.__annotations__, {})
test_eq(anno_dict(foo), {})

def _f(a:int, b:L)->str: ...
test_eq(_f.__annotations__, {'a': int, 'b': L, 'return': str})
test_eq(anno_dict(_f), {'a': int, 'b': L, 'return': str})

Question! So far anno_dict has done nothing new or extra, so what am I missing here?

Does fastcore want anno_dict to include params with no annos?

If so, I have written a lengthy anno_dict_maybe to do it.

def anno_dict_maybe(f):
    "Like `anno_dict`, but also includes params without annotations as `None`"
    anno = getattr(f, '__annotations__', {})
    new_anno = {k: anno.get(k) for k in inspect.signature(f).parameters}
    if 'return' in anno: new_anno['return'] = anno['return']
    return new_anno if any(new_anno.values()) else {}
def foo(a:int, b, c:bool=True)->str: pass
test_eq(foo.__annotations__, {'a': int, 'c': bool, 'return': str})
test_eq(anno_dict(foo), {'a': int, 'c': bool, 'return': str})
test_eq(anno_dict_maybe(foo), {'a': int, 'b': None, 'c': bool, 'return': str})
def foo(a, b, c): pass
test_eq(foo.__annotations__, {})
test_eq(anno_dict(foo), {})
test_eq(anno_dict_maybe(foo), {})

I’ve searched the source code for the main fast.ai libs, and this function isn’t used anywhere. So I’d guess that suggests it’s not actually needed and doesn’t do anything useful.


You are right; I only found one place where anno_dict is used.


Good find. That repo isn’t used for anything – it’s just an old proof-of-concept from when we were kicking around a few ideas for what became nbdev2.


I see, thanks a lot for your replies! I feel a little more confident in exploring fastcore day by day.


Recently I finished exploring the docs and sources of fastcore.meta; see my exploration documents if you are interested in the details.

I would like to test how much I really understood, so I have written a little summary of it. I don’t know how much sense my English makes, but putting the summary together certainly enhanced my understanding of the individual funcs and classes of this submodule, and of it as a whole.

I hope people who are interested in fastcore could have a look and correct anything I got wrong.

Or, if you are new to this submodule, you could ask me anything about it; I think I could be of good use to you here :grin:

What’s inside fastcore.meta

whatinside(fm, dun=True)
fastcore.meta has: 
13 items in its __all__, and 
43 user defined functions, 
19 classes or class objects, 
2 builtin funcs and methods, and
74 callables.

test_sig:            function    Test the signature of an object
FixSigMeta:          metaclass, type    A metaclass that fixes the signature on classes that override `__new__`
PrePostInitMeta:     metaclass, type    A metaclass that calls optional `__pre_init__` and `__post_init__` methods
AutoInit:            class, PrePostInitMeta    Same as `object`, but no need for subclasses to call `super().__init__`
NewChkMeta:          metaclass, type    Metaclass to avoid recreating object passed to constructor
BypassNewMeta:       metaclass, type    Metaclass: casts `x` to this class if it's of type `cls._bypass_type`
empty2none:          function    Replace `Parameter.empty` with `None`
anno_dict:           function    `__annotation__ dictionary with `empty` cast to `None`, returning empty if doesn't exist
use_kwargs_dict:     decorator, function    Decorator: replace `**kwargs` in signature with `names` params
use_kwargs:          decorator, function    Decorator: replace `**kwargs` in signature with `names` params
delegates:           decorator, function    Decorator: replace `**kwargs` in signature with params from `to`
method:              function    Mark `f` as a method
funcs_kwargs:        decorator, function    Replace methods in `cls._methods` with those from `kwargs`

Review individual funcs and classes

What is fastcore.meta all about?

It is a submodule that contains 4 metaclasses, 1 class built by a metaclass, 4 decorators, and a few functions.

Metaclasses give us the power to create new breeds of classes with new features.

Decorators give us the power to add new features to existing functions.

We can find their basic info above.

What can these metaclasses do for me?

We design/create classes to breed objects as we like.

We design/create metaclasses to breed classes as we like.

Before metaclasses, all classes are created by type and are born the same.
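To make this concrete, here is a small stdlib-only illustration (the class names are mine) showing that a class statement and a direct call to type produce equivalent classes:

```python
# A plain class statement...
class A:
    x = 1

# ...is equivalent to calling `type` directly with (name, bases, namespace)
B = type('B', (object,), {'x': 1})

print(type(A) is type)  # True: A was created by type
print(type(B) is type)  # True: so was B
print(A().x, B().x)     # 1 1
```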

With metaclasses, e.g., FixSigMeta, type is still used to create the instance classes exactly as above, but FixSigMeta then adds new features to them right before they are born.

FixSigMeta

can breed classes which are free of signature problems (or they can automatically fix signature problems).
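Here is a stdlib-only toy sketch of the idea (my own simplified metaclass, NOT fastcore's actual FixSigMeta, which handles more edge cases): when a class overrides __new__, inspect.signature normally falls back to __new__'s generic (*args, **kwargs), so the metaclass stores __init__'s signature (minus self) in __signature__ for inspect to find:

```python
import inspect

# Toy metaclass (NOT fastcore's actual code): store __init__'s signature,
# minus `self`, in __signature__ so `inspect.signature` reports it.
class MiniFixSigMeta(type):
    def __new__(cls, name, bases, dct):
        res = super().__new__(cls, name, bases, dct)
        if res.__init__ is not object.__init__:
            sig = inspect.signature(res.__init__)
            params = [p for p in sig.parameters.values() if p.name != 'self']
            res.__signature__ = sig.replace(parameters=params)
        return res

# Overriding __new__ normally hides __init__'s signature...
class Broken:
    def __new__(cls, *args, **kwargs): return super().__new__(cls)
    def __init__(self, a, b=1): self.a, self.b = a, b

# ...but the metaclass fixes it
class Fixed(metaclass=MiniFixSigMeta):
    def __new__(cls, *args, **kwargs): return super().__new__(cls)
    def __init__(self, a, b=1): self.a, self.b = a, b

print(inspect.signature(Broken))  # typically (*args, **kwargs), taken from __new__
print(inspect.signature(Fixed))   # (a, b=1)
```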

PrePostInitMeta

is inherited/evolved from FixSigMeta to breed classes which can initialize their objects using whichever of __pre_init__, __init__, and __post_init__ is available (allow me to abbreviate this as triple_init).
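To see the idea, here is a stdlib-only toy sketch (my own simplification, NOT fastcore's actual PrePostInitMeta, which also inherits from FixSigMeta): the metaclass's __call__ runs the three hooks in order, skipping any that are missing:

```python
# Toy metaclass (NOT fastcore's actual code): run __pre_init__, __init__,
# and __post_init__ in order when an instance is created, skipping any
# hook the class does not define.
class MiniPrePostInitMeta(type):
    def __call__(cls, *args, **kwargs):
        obj = cls.__new__(cls)
        for hook in ('__pre_init__', '__init__', '__post_init__'):
            f = getattr(obj, hook, None)
            if f: f(*args, **kwargs)
        return obj

calls = []
class T(metaclass=MiniPrePostInitMeta):
    def __pre_init__(self):  calls.append('pre')
    def __init__(self):      calls.append('init')
    def __post_init__(self): calls.append('post')

t = T()
print(calls)  # ['pre', 'init', 'post']
```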

AutoInit

is an instance class created by PrePostInitMeta, and together with its own __pre_init__, subclasses of AutoInit no longer have to worry about calling super().__init__(...).

  • As AutoInit is an instance class created by PrePostInitMeta, it can pass on both features (freedom from signature problems and triple_init) to its subclasses.
  • As it also defines its own __pre_init__ function, which calls its superclass’s __init__ function, its subclasses will inherit this __pre_init__ function too.
  • When subclasses of AutoInit create and initialize object instances through __call__ from PrePostInitMeta, AutoInit’s __pre_init__ runs super().__init__(...), so when we write the __init__ function of a subclass of AutoInit, we don’t need to write super().__init__(...) any more.
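The bullets above can be sketched with a stdlib-only toy version (my own names, NOT fastcore's actual code): the __pre_init__ hook calls the superclass __init__ before the subclass __init__ runs:

```python
# Toy metaclass (NOT fastcore's actual code): run __pre_init__ (if any)
# before __init__ when an instance is created.
class MiniPrePostInitMeta(type):
    def __call__(cls, *args, **kwargs):
        obj = cls.__new__(cls)
        for hook in ('__pre_init__', '__init__', '__post_init__'):
            f = getattr(obj, hook, None)
            if f: f(*args, **kwargs)
        return obj

# Toy AutoInit: its __pre_init__ calls super().__init__ for you
class MiniAutoInit(metaclass=MiniPrePostInitMeta):
    def __pre_init__(self, *args, **kwargs): super().__init__(*args, **kwargs)

class Parent:
    def __init__(self): self.h = 10

class Child(MiniAutoInit, Parent):
    # no super().__init__() needed: __pre_init__ already ran Parent.__init__
    def __init__(self): self.k = self.h + 2

c = Child()
print(c.h, c.k)  # 10 12
```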

NewChkMeta

is inherited from FixSigMeta, so any instance classes created by NewChkMeta can also pass on the no_signature_problem feature.

It defines its own __call__ to enable all the instance objects (e.g., t) created by all the instance classes (e.g., T) created by NewChkMeta to do the following:

  • T(t) is t returns True if isinstance(t, T) is True
  • T(t) is t returns False if t is not an instance of T; and T(t, 1) is t and T(t, b=1) is t return False even when t is an instance of T

In other words, NewChkMeta creates a new breed of classes (T, for example) which won’t recreate an existing instance object t. But if t is not an instance of T, or we choose to add more flavor to t, then T(t) or T(t, 1) will create a new instance object of T.
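A stdlib-only toy sketch of the idea (my own code, NOT fastcore's actual NewChkMeta, which also inherits from FixSigMeta):

```python
# Toy metaclass (NOT fastcore's actual code): if the constructor receives an
# existing instance and no extra args, return that instance unchanged.
class MiniNewChkMeta(type):
    def __call__(cls, x=None, *args, **kwargs):
        if not args and not kwargs and isinstance(x, cls): return x
        return super().__call__(x, *args, **kwargs)

class T(metaclass=MiniNewChkMeta):
    def __init__(self, o=None, b=1): self.o, self.b = o, b

t = T(1)
print(T(t) is t)       # True: same object returned, not recreated
print(T(t, b=2) is t)  # False: extra args force a new object
print(T(1) is t)       # False: 1 is not a T instance
```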

BypassNewMeta

is inherited from FixSigMeta, so it has the feature of free from signature problems.

It defines its own __call__, so that when its instance class _T creates and initializes objects with a param t which is an instance object of another class _TestB, it can do the following:

  • If _T likes _TestB and prefers t as it is, then when we run t2 = _T(t), t2 is t will be True, and both are instances of _T.
  • If _T is not pleased with t, it could be that _T no longer likes _TestB; then _T(t) is t will be False.
  • Or maybe _T still likes _TestB, but wants to add some flavors to t, e.g., by _T(t, 1) or _T(t, b=1); in this case _T(t, 1) is t will also be False.

In other words, BypassNewMeta creates a new breed of instance classes (_T, for example) which don’t need to create a new object but can simply adopt t as their own instance, if t is an instance object of _TestB which is liked by _T and _T likes t as it is.

What can those decorators do for me?

A decorator is a function that takes in a function and returns a modified function.

A decorator allows us to modify the behavior of a function.
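As a quick refresher, here is a tiny toy decorator (my own example, not from fastcore):

```python
# A decorator takes a function and returns a modified function
def shout(f):
    def inner(*args, **kwargs):
        return f(*args, **kwargs).upper()  # modify f's behavior
    return inner

@shout
def greet(name): return f'hello {name}'

print(greet('world'))  # HELLO WORLD
```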

use_kwargs_dict

allows us to replace an existing function’s **kwargs param with a number of params with default values.

The params with their default values are provided in a dictionary.

use_kwargs

allows us to replace an existing function’s **kwargs param with a number of params with None as their default values.

The params are provided as names in a list.
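The core idea behind both decorators can be sketched with the stdlib only (my own toy versions, NOT fastcore's actual code, which also supports a keep option): drop **kwargs from the reported signature and append the named params as keyword-only:

```python
import inspect

def mini_use_kwargs_dict(**defaults):
    "Toy version: replace `**kwargs` in f's signature with `defaults` as keyword-only params"
    def _decorate(f):
        sig = inspect.signature(f)
        params = [p for p in sig.parameters.values() if p.kind != p.VAR_KEYWORD]
        params += [inspect.Parameter(k, inspect.Parameter.KEYWORD_ONLY, default=v)
                   for k, v in defaults.items()]
        f.__signature__ = sig.replace(parameters=params)
        return f
    return _decorate

def mini_use_kwargs(names):
    "Toy version: same thing, with None as every default"
    return mini_use_kwargs_dict(**{k: None for k in names})

@mini_use_kwargs_dict(y=1, z='a')
def foo(x, **kwargs): pass

@mini_use_kwargs(['y', 'z'])
def bar(x, **kwargs): pass

print(inspect.signature(foo))  # (x, *, y=1, z='a')
print(inspect.signature(bar))  # (x, *, y=None, z=None)
```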

delegates

allows us to replace an existing function’s **kwargs param with a number of params, with their default values taken from another existing function.

In fact, delegates can work on functions, classes, and methods.
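A stdlib-only toy sketch of delegates' core idea (my own code, NOT fastcore's actual delegates, which also supports keep and but options and works on classes): replace **kwargs in f's reported signature with the defaulted params of `to`:

```python
import inspect

def mini_delegates(to):
    "Toy version: replace `**kwargs` in f's signature with `to`'s defaulted params"
    def _decorate(f):
        sig = inspect.signature(f)
        params = [p for p in sig.parameters.values() if p.kind != p.VAR_KEYWORD]
        # take only params of `to` that have defaults and aren't already in f
        extra = [p.replace(kind=p.KEYWORD_ONLY)
                 for p in inspect.signature(to).parameters.values()
                 if p.default is not p.empty and p.name not in sig.parameters]
        f.__signature__ = sig.replace(parameters=params + extra)
        return f
    return _decorate

def basefoo(e, c=2, d=3): pass

@mini_delegates(basefoo)
def foo(a, b=1, **kwargs): pass

print(inspect.signature(foo))  # (a, b=1, *, c=2, d=3) — `e` has no default, so it's skipped
```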

funcs_kwargs

is a decorator to classes. It can help classes to bring in existing functions as their methods.

It can make those methods use self or not.
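A stdlib-only toy sketch of the method-swapping idea (my own code, NOT fastcore's actual funcs_kwargs, which also rewrites the signature and supports @method to control whether self is passed):

```python
# Toy decorator (NOT fastcore's actual code): let the method names listed in
# cls._methods be overridden by functions passed to the constructor.
def mini_funcs_kwargs(cls):
    old_init = cls.__init__
    def __init__(self, *args, **kwargs):
        for m in getattr(cls, '_methods', []):
            f = kwargs.pop(m, None)
            if f is not None: setattr(self, m, f.__get__(self))  # bind f as a method
        old_init(self, *args, **kwargs)
    cls.__init__ = __init__
    return cls

@mini_funcs_kwargs
class T:
    _methods = ['b']
    def __init__(self): pass
    def b(self): return 1

t = T(b=lambda self: 2)  # bring in an external function as method `b`
print(t.b())   # 2
print(T().b()) # 1: without the kwarg, the class's own `b` is used
```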

The remaining functions

test_sig and method are straightforward, their docs tell it all clearly.

empty2none and anno_dict are not much in use at all; see the thread here.


Is torch_core.to_float missing a not in its source?

I am reading the docs of torch_core, and noticed that the source code of torch_core.to_float is a little strange; I think a not is missing from the source.

The original source code is as follows:

#|export
def to_float(b):
    "Recursively map lists of int tensors in `b ` to float."
    return apply(lambda x: x.float() if torch.is_floating_point(x) else x, b)

I think a not is missing between if and torch.is_floating_point(x). With the original source code, the following tests are all true, and the last one is not what we want from to_float.

test_eq(torch.is_floating_point(tensor(1,2)), False)
test_eq(torch.is_floating_point(tensor(1.,2)), True)
test_eq([t.float() for t in [tensor(1,2), tensor([3,4])]], [tensor([1., 2.]), tensor([3., 4.])])
test_eq(to_float([tensor(1,2), tensor([3,4])]), [tensor([1, 2]), tensor([3, 4])]) 

TensorBase.debug=True instead of a.debug=True

There is a minor typo, I guess, in the example below from 00_torch_core.

What the example intended to do, I think, is the following:

Some minor typos in

The docs below from 00_torch_core.ipynb should replace out with inp.

I think the docs in 01_layers were intended to say "convert x to a tensor class, defaulting to TensorBase", but the original docs are as follows:

#|export
@module(tensor_cls=TensorBase)
def ToTensorBase(self, x):
    "Remove `tensor_cls` to x"
    return self.tensor_cls(x)

@Daniel would it be possible to do PRs for errors you find as you go?


Thanks Jeremy!

I hesitated to do it, as my previous experience was that doing PRs can be time consuming.

Well, I guess I have learned to avoid unnecessary trouble this time, so the PR was much quicker to do. I’ll keep trying, and it should be easier next time.