Fastai v2 chat

@muellerzr nb 21 is fully trainable on Imagenette, FYI.

Awesome! I'll give it a look today :slight_smile: Thanks for the heads up!

@jeremy there seems to be a bug in the local.layers import. I get a "name 'nn' is not defined" error for class Module. (Can you reproduce this?)

I just split core into two modules, core and torch_core, and same for imports. There's a little torch_basics module you can now import that has all 4 of those modules imported.
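So in a notebook a single line should now pull everything in; assuming the new module follows the same local.* layout as the others:

    # One import for core, torch_core and both imports modules, per the post above.
    from local.torch_basics import *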

True - I've only fixed up to nb 09 in the change I just mentioned. Will do the rest now.

I wasn't fast enough so Jeremy posted before I could, but I'll keep this for reference:

If I try from local.layers import *, I get the following error:

/content/fastai_dev/dev/local/layers.py in <module>()
     15 
     16 #Cell 4
---> 17 class Module(nn.Module, metaclass=PrePostInitMeta):
     18     "Same as `nn.Module`, but no need for subclasses to call `super().__init__`"
     19     def __pre_init__(self): super().__init__()

NameError: name 'nn' is not defined

and it looks like there is an import missing at the beginning of 11_layers.ipynb. If I fix this, I get another error saying that torch is not defined. Am I doing something wrong in importing?
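As a stopgap I got past both errors by adding the missing imports by hand at the top of dev/local/layers.py; a quick hack, not the real fix:

    # Temporary workaround: give the generated module the names it expects.
    # (The proper fix presumably goes in 11_layers.ipynb's import cell.)
    import torch
    from torch import nn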

See Jeremy's previous comment. Fix coming soon :slight_smile:

You were just too fast for me :wink:

OK, all nbs up to 20 are now working again.

You could set your labelled test data as the validation set in the data loader and run whatever you need from there. Not too much effort.

Currently in the framework we can't do that, as test sets don't have labels :slight_smile: Hence the request.

But validation sets have labels. I'm suggesting to use your labelled test set as the validation set for inference. It's not exactly what you're asking for, but it works.

That's what I've been doing for a few months. In some cases it's even hackier to get working (especially tabular), and it'd be easier to just have it as a function call :slight_smile:
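Roughly, the hack in plain PyTorch terms (model, tensors and sizes are all made up, just to show the shape of it): build a loader from the labelled test data and run your metric over it as if it were the validation set.

    # Framework-agnostic sketch: treat a labelled "test" set as the
    # validation set and compute a metric over it. Names are made up.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    test_x = torch.randn(64, 10)            # stand-in labelled test data
    test_y = torch.randint(0, 2, (64,))
    test_dl = DataLoader(TensorDataset(test_x, test_y), batch_size=16)

    model = torch.nn.Linear(10, 2)          # stand-in model
    model.eval()
    correct = 0
    with torch.no_grad():
        for xb, yb in test_dl:
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
    print(f'accuracy: {correct / len(test_y):.3f}')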

Quick Q for everyone - what is the best way to navigate source code from a Jupyter notebook?
For example, untar_data is called inside 08_pets_tutorial. I know one possible way is ??untar_data, but that seems very cumbersome.

Any other better ways?

Inside vim we can just do ctrl+], and inside VSCode it's pretty easy as well.
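The closest I've found notebook-side is the inspect module, which at least tells you where to look (the import path here is my guess at where untar_data lives):

    import inspect
    from local.data.external import untar_data  # assumed module path

    print(inspect.getsourcefile(untar_data))  # which .py file defines it
    print(inspect.getsource(untar_data))      # dump the full source inline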

For navigation, it's easiest to use vim or VSCode on the Python modules (local/).

Thank you :slight_smile:

What would be a proper way to contribute to code in v2 notebooks?
Please correct me:

  1. Fork the master repo.
  2. Clone the forked repo and run tools/run-after-git-clone.
  3. Create a branch.
  4. Run jupyter notebook and introduce a correction.
  5. Save the notebook.
  6. Run notebook2script.py on the changed notebook (--fname; see the sketch after this list).
  7. Commit and push to the forked repo.
  8. Create a PR from the forked repo on GitHub.
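For step 6 I run the export from the last notebook cell; a sketch of what that looks like, assuming notebook2script is importable from local/notebook/export.py and takes an fname argument mirroring the script's --fname flag:

    # Assumed import path and keyword, per dev/local/notebook/export.py
    from local.notebook.export import notebook2script
    notebook2script(fname='05_data_core.ipynb')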

Question: when I follow this process, files appear in the local repo that I didn't explicitly change.

For example, I corrected dev/05_data_core.ipynb, and after running notebook2script.py in the last cell, git status reports:

  • changed: dev/05_data_core.ipynb
  • changed: dev/local/notebook/export.py
  • changed: dev/local/notebook/index.txt
  • changed: dev/local/tabular/core.py

Shall I include them all in the commit and PR?

I don't think so. Ideally I would include only the changes made in the notebook; those files are generated from the notebook changes, so I don't think they need to be committed.

TL;DR: Put everything into one commit.

There are usually two ways to go about this: (a) put the local folder in .gitignore so it won't appear in git, as these are generated files; this is not the approach taken by fastai_dev (obviously). Or (b) include the generated files in the repo, because you want to use them to distribute the "transpiled" library. In this scenario, you could either make two separate commits, one with the notebook changes and one with the generated files that changed, or put everything into one commit. Looking at the recent commit history, it seems that Jeremy and Sylvain prefer to keep everything in one commit.