Fastai v2 chat

Yes it’s best to include them, since that way people can use it right away without running the export script. Also, it means that they can view and navigate the python modules directly on github.

BTW, the work-flow for doing a PR is much easier if you use this:

https://hub.github.com/

2 Likes

Question moved to V2 core thread

So I have one doubt about notebook 01_core: in Python it isn’t generally recommended to define your own dunder methods, since those names are effectively reserved by the language. So won’t having __pre_init__ and __post_init__ in metaclasses cause a problem?

I believe the use or redefinition of the so-called dunder (from “double underscore”) methods is not prohibited at all, as long as you are aware of the implications.

Thus, in the PrePostInitMeta metaclass the optional custom __pre_init__ and __post_init__ are named as dunders simply to mimic the existing __init__ method, since they wrap it before and after. Unless defined in the subclasses, they are empty.
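Roughly, the mechanism can be sketched like this (a simplified illustration of the idea, not the exact fastai v2 source):

import functools

class PrePostInitMeta(type):
    "Call optional __pre_init__ and __post_init__ around __init__"
    def __new__(cls, name, bases, dct):
        x = super().__new__(cls, name, bases, dct)
        def _pass(self, *args, **kwargs): pass
        old_init = x.__init__

        @functools.wraps(old_init)
        def _init(self, *args, **kwargs):
            self.__pre_init__()
            old_init(self, *args, **kwargs)
            self.__post_init__()

        x.__init__ = _init
        # Empty defaults, so subclasses only define the hooks they actually need
        if not hasattr(x, '__pre_init__'):  x.__pre_init__  = _pass
        if not hasattr(x, '__post_init__'): x.__post_init__ = _pass
        return x

class _T(metaclass=PrePostInitMeta):
    def __pre_init__(self):  self.a = 0
    def __init__(self, b=1): self.a += 1; self.b = b
    def __post_init__(self): self.c = self.a + self.b

t = _T()
assert (t.a, t.b, t.c) == (1, 1, 2)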

1 Like

Aah, ok. Thanks for the explanation. :slight_smile:

Yeah it’s very common to add dunder methods for functionality like this. E.g. __array__ et al in numpy.
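For instance, a class can opt into numpy conversion just by defining that dunder (a toy illustration):

import numpy as np

class Box:
    def __init__(self, data): self.data = data
    # numpy calls __array__ when converting the object, e.g. via np.asarray
    def __array__(self, dtype=None): return np.asarray(self.data, dtype=dtype)

np.asarray(Box([1, 2, 3]))   # array([1, 2, 3])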

1 Like

Cuda Transform

tfm = Cuda()
t = tfm((tensor(1),))
test_eq(*t,1)
test_eq(t[0].type(),'torch.cuda.LongTensor')

If I run the Cuda tfm on a CPU-only machine, the test_eq(t[0].type(),'torch.cuda.LongTensor') test fails and we get an assertion error.

AssertionError: ==:
torch.LongTensor
torch.cuda.LongTensor

Perhaps because of

def __init__(self,device=None):
        self.device=default_device() if device is None else device

since default_device() returns the CPU device in that case.

Is there a need to update the test?
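For example, one way the test could be made device-aware (just a sketch - I’m assuming default_device() returns a torch.device here):

tfm = Cuda()
t = tfm((tensor(1),))
test_eq(*t, 1)
expected = 'torch.cuda.LongTensor' if default_device().type == 'cuda' else 'torch.LongTensor'
test_eq(t[0].type(), expected)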

1 Like

Yes that sounds like a good idea @arora_aman

Are we missing sklearn in the dependencies?
I am trying to set up the fastai v2 environment by running the commands below (following the instructions on GitHub):

conda install -c fastai -c pytorch jupyter "pytorch>=1.2.0" torchvision matplotlib pandas requests pyyaml fastprogress pillow scipy
pip install typeguard jupyter_nbextensions_configurator

Then I ran the tests:

for i in {0,1,2}*.ipynb; do sleep 1; python run_notebook.py --fn $i & done

Several errors appeared, and the one below suggests the environment doesn’t have sklearn yet.

Doing 21_tutorial_imagenette.ipynb
Exception in 20_metrics.ipynb
Traceback (most recent call last):
  File "run_notebook.py", line 27, in <module>
    fn:Param("Filename glob",str)=None):
  File "/home/gpuserver/fastai_dev/dev/local/script.py", line 39, in call_parse
    func(**args.__dict__)
  File "run_notebook.py", line 31, in main
    for f in sorted(fns): run_nb(f)
  File "run_notebook.py", line 17, in run_nb
    raise e from None
  File "run_notebook.py", line 14, in run_nb
    try: ExecutePreprocessor(timeout=600).preprocess(nb, {})
  File "/home/gpuserver/anaconda3/envs/fastai_dev/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 381, in preprocess
    nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
  File "/home/gpuserver/anaconda3/envs/fastai_dev/lib/python3.7/site-packages/nbconvert/preprocessors/base.py", line 69, in preprocess
    nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
  File "/home/gpuserver/anaconda3/envs/fastai_dev/lib/python3.7/site-packages/nbconvert/preprocessors/execute.py", line 424, in preprocess_cell
    raise CellExecutionError.from_cell_and_msg(cell, out)
nbconvert.preprocessors.execute.CellExecutionError: An error occurred while executing the following cell:
------------------
import sklearn.metrics as skm
------------------

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-8475d862872f> in <module>
----> 1 import sklearn.metrics as skm

ModuleNotFoundError: No module named 'sklearn'

2 Likes

Indeed. Good catch, @dhoa.
I’ve reproduced this error and was able to eliminate it by installing the scikit-learn package.
Will update the environment.yml file.
PR: https://github.com/fastai/fastai_dev/pull/178

1 Like

Hi,

I’d like to develop and contribute checkpoint-like functionality, and I just want to be sure it will match fastai v2’s philosophy. My apologies if this post belongs in another chat/topic/wiki like dev projects index, developer chat, or fastai-v2-callbacks-learner-optimizer.

In particular, I wonder what combinations of decorator, Pipeline, callback, monad, coroutine, etc., will fit fastai v2’s design most.

It is something I need for efficiency (especially with volatile Google Colaboratory sessions). For example, when experimenting on NLP augmentation with batch-size scaling, I want intermediate models to be reusable:

Control Group

ctrl_period_1 ← freeze();      fit_one_cycle(2)
ctrl_period_2 ← freeze_to(-2); fit_one_cycle(3)

Experiment Group

ctrl_period_1 ← resume(prng_states, mdl_states, opt_states, fp16_states)
exp_period_2  ← augment(); increase_bs(); freeze_to(-2); fit_one_cycle(3)

So far, what I have had to do manually (in a clumsy and verbose way; a rough sketch follows below) is:

  1. Restore every PRNG state;
  2. Restore MixedPrecision’s dynamic loss_scale.
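For reference, a rough sketch of that manual bookkeeping in plain Python/PyTorch (save_states/load_states and where loss_scale comes from are my own illustrative assumptions, not fastai API):

import pickle, random
import numpy as np
import torch

def save_states(path, loss_scale=None):
    # Capture every PRNG state (Python, numpy, torch CPU & CUDA) plus the dynamic loss scale
    states = {
        'py_rng':     random.getstate(),
        'np_rng':     np.random.get_state(),
        'torch_rng':  torch.get_rng_state(),
        'cuda_rng':   torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None,
        'loss_scale': loss_scale,
    }
    with open(path, 'wb') as f: pickle.dump(states, f)

def load_states(path):
    with open(path, 'rb') as f: states = pickle.load(f)
    random.setstate(states['py_rng'])
    np.random.set_state(states['np_rng'])
    torch.set_rng_state(states['torch_rng'])
    if states['cuda_rng'] is not None: torch.cuda.set_rng_state_all(states['cuda_rng'])
    return states['loss_scale']  # caller puts this back on the MixedPrecision callback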

AFAIK, MixedPrecision in v1 uses _order to control when it is invoked, while v2 specifies the ordering explicitly (the _after_something approach). Also, I’m aware that there are some more discussions about it on the forum.

I’m curious how I can help contribute a somewhat general API, because PRNG states are not only updated during fit_one_cycle but are also touched by the data transformation Pipeline, whereas loss_scale is only used by what is almost the last callback. Not to mention that the number of possible new callbacks is effectively unbounded.

Ideally, I imagine an extensible decorator could work like @Transform (and @differentiable?), pickling a state dict and passing it along like a coroutine/monadic computation.

Thank you.

Thanks @jeremy
I have created a PR for this :slight_smile:

Generally the way I do things is to start with the simplest possible API, and a good set of tests, and then refactor it from there. Ideally, I think when we save a model that includes the optimizer (which is an option in v1, and we should probably do the same in v2) it should be possible to continue training it after loading it later.

Frankly, I never quite got that working well, so any help would be most welcome!
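In plain PyTorch terms (not fastai API), the “save with the optimizer, reload, keep training” flow looks roughly like this - the toy model and file name are just for illustration:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                             # toy model
opt   = torch.optim.SGD(model.parameters(), lr=0.1)

# Save both the model and the optimizer state
torch.save({'model': model.state_dict(), 'opt': opt.state_dict()}, 'checkpoint.pth')

# Later: rebuild the same objects, restore their states, then keep training
model2 = nn.Linear(10, 2)
opt2   = torch.optim.SGD(model2.parameters(), lr=0.1)
ckpt   = torch.load('checkpoint.pth')
model2.load_state_dict(ckpt['model'])
opt2.load_state_dict(ckpt['opt'])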

1 Like

EDIT Please ignore this question! I am just confused :slight_smile: - just leaving it here in case anyone else is confused like me.

I’m just starting to look around and was wondering about this snippet (see below) from the 01_core notebook (http://localhost:8888/notebooks/dev/01_core.ipynb).

should the first functools.wraps be:

@functools.wraps(old_new)

snippet:

#export
class NewChkMeta(PrePostInitMeta):
    "Metaclass to avoid recreating object passed to constructor (plus all `PrePostInitMeta` functionality)"
    def __new__(cls, name, bases, dct):
        x = super().__new__(cls, name, bases, dct)
        old_init,old_new = x.__init__,x.__new__

        @functools.wraps(old_init)
        def _new(cls, x=None, *args, **kwargs):
            if x is not None and isinstance(x,cls):
                x._newchk = 1
                return x
            res = old_new(cls)
            res._newchk = 0
            return res

        @functools.wraps(old_init)
        def _init(self,*args,**kwargs):
            if self._newchk: return
            old_init(self, *args, **kwargs)

        x.__init__,x.__new__ = _init,_new
        return x

Thank you for the advice about the workflow of simple API → tests → refactoring, I love it and do practise in a similar fashion. :metal:
For v2’s model loading/saving mechanism, I will definitely try my best to contribute. :vulcan_salute:

Answering my own question: making the change I suggested breaks the tests immediately following, so clearly I need to better understand what is going on - please ignore.

My guess:
The goal is to make __new__ look like a constructor (or rather, have the same signature as the __init__ constructor) instead of an allocator. This probably makes inspection in notebooks more natural, especially for objects that look like functions.

@313V I’ve found that defining __new__ seems to break the signature of the class and all its subclasses - it uses the signature of __new__ instead of __init__, although the former is generally *args,**kwargs or something similar. So there are a couple of places where I work around that problem. I don’t know if there’s a better solution - I haven’t found anything online.
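A tiny illustration of that behaviour (as observed on Python 3.7, which the environment above uses; Broken and Fixed are toy classes, and the fix mirrors the functools.wraps trick in NewChkMeta):

import functools, inspect

class Broken:
    def __new__(cls, *args, **kwargs): return super().__new__(cls)
    def __init__(self, a, b=2): self.a, self.b = a, b

print(inspect.signature(Broken))   # (*args, **kwargs) - __new__ hides __init__'s signature

class Fixed:
    def __init__(self, a, b=2): self.a, self.b = a, b

@functools.wraps(Fixed.__init__)   # copy __init__'s metadata (incl. __wrapped__) onto the new __new__
def _new(cls, *args, **kwargs): return object.__new__(cls)
Fixed.__new__ = _new

print(inspect.signature(Fixed))    # (a, b=2) - inspect now follows the wrapped __init__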

2 Likes

I’ve added an index of useful topics to the FAQ.

1 Like

I’ve removed the Transform feature that uses the return type annotation to automatically cast the result to the return type. We used to need it to avoid problems with unwanted type casts, but we’ve now made it so it’s unnecessary. So if you want to cast o to type T in your transform, just use T(o) (assuming that T.__init__ works that way).
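For example (a sketch, not from the library; TitledInt is a made-up subclass, and I’m assuming Transform and its usual encodes method are in scope from the dev notebooks):

class TitledInt(int): pass   # stand-in for some semantic subclass

class AddOne(Transform):
    def encodes(self, o): return TitledInt(o + 1)   # cast explicitly with T(o) instead of a return annotation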

There is still one place that return type annotation is used in Transform: use return annotation None to specify that you want to disable any casting to subclass in your transform. (Note that I don’t think we’ve ever actually needed this yet in fastai - it’s just there “in case”, but you probably don’t need to know about it).

1 Like

I don’t know if this has been discussed, but I remember @jeremy mentioning in the first walk-thru that fastai v2 did not have a deadline, unlike earlier versions, which had to be ready for the course. Is there no fastai course this fall? And if there is one, does fastai v2 have to be done by then? Or is there something I misunderstood?

1 Like