@muellerzr nb 21 is a fully trainable Imagenette, FYI.
Awesome! I'll give it a look today. Thanks for the heads up!
@jeremy There seems to be a bug in the `local.layers` import. I get a "name 'nn' is not defined" error for the `Module` class. (Can you reproduce this?)
I just split `core` into two modules, `core` and `torch_core`, and did the same for `imports`. There's a little `torch_basics` module you can now import that has all 4 of those modules imported.
True - I've only fixed up to nb 09 in the change I just mentioned. Will do the rest now.
I wasn't fast enough so Jeremy posted before I could, but I'll keep this for reference:
If I try to `from local.layers import *` I get the following error:
```
/content/fastai_dev/dev/local/layers.py in <module>()
     15
     16 #Cell 4
---> 17 class Module(nn.Module, metaclass=PrePostInitMeta):
     18     "Same as `nn.Module`, but no need for subclasses to call `super().__init__`"
     19     def __pre_init__(self): super().__init__()

NameError: name 'nn' is not defined
```
and it looks like there is an import missing at the beginning of `11_layers.ipynb`. If I fix this, I get another error, saying that `torch` is not defined. Am I doing something wrong in importing?
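For context, the class that fails is built on fastai_dev's `PrePostInitMeta` metaclass; the `NameError` itself just means the exported module is missing its torch imports. Here is a minimal, torch-free sketch of the pre-init pattern (the metaclass body below is my simplified approximation, not the library source; `Base` stands in for `nn.Module`):

```python
# Simplified sketch of the PrePostInitMeta pattern: the metaclass calls
# __pre_init__ before __init__, so subclasses never need to call
# super().__init__() themselves. This body is an approximation, not fastai_dev code.
class PrePostInitMeta(type):
    def __call__(cls, *args, **kwargs):
        obj = cls.__new__(cls)
        if hasattr(obj, '__pre_init__'): obj.__pre_init__()
        obj.__init__(*args, **kwargs)
        if hasattr(obj, '__post_init__'): obj.__post_init__()
        return obj

class Base:
    "Stand-in for nn.Module: has an __init__ that must run before use."
    def __init__(self):
        self.initialized = True

class Module(Base, metaclass=PrePostInitMeta):
    "Same as Base, but no need for subclasses to call super().__init__"
    def __pre_init__(self): super().__init__()
    def __init__(self): pass

m = Module()
print(m.initialized)  # True: Base.__init__ ran via __pre_init__
```

The point of the pattern is that `Module.__init__` can be written without the `super().__init__()` boilerplate, because the metaclass guarantees the base initialiser already ran.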
See Jeremy's previous comment. Fix coming soon.
You were just too fast for me
OK all nbs up to 20 are now working again.
You could set your labelled test data as the validation set in the data loader and run whatever you need from there. Not too much effort.
Currently in the framework we can't do that, as test sets don't have labels. Hence the request.
But validation sets have labels. I'm suggesting to use your labelled test set as the validation set for inference. It's not exactly what you're asking for but it works.
That's what I've been doing for a few months. Some cases it's even more hacky (especially tabular) to get working, and it'd be easier to just have it as a function call.
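The workaround being discussed can be sketched framework-free: treat the labelled test set exactly like a validation set and run the same evaluation loop over it. The `model` and the accuracy metric below are toy stand-ins for illustration, not fastai API:

```python
# Hedged sketch: evaluate a labelled test set the same way a validation
# set would be evaluated. `model` is a toy stand-in, not fastai API.
def evaluate(model, labelled_data):
    """Treat (input, label) pairs like a validation set: predict, then score."""
    correct = 0
    for x, y in labelled_data:
        correct += (model(x) == y)
    return correct / len(labelled_data)

model = lambda x: x >= 0                      # toy "classifier": sign of the input
test_set = [(-2, False), (-1, False), (1, True), (3, True), (0, False)]
print(evaluate(model, test_set))              # 0.8: the last pair is misclassified
```

In a real framework the only change is which split the loader hands to this loop, which is why pointing the validation slot at the labelled test set works as a stopgap.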
Quick Q for everyone - what is the best way to navigate source code from Jupyter Notebook?
For example, we have `untar_data` being called inside `08_pets_tutorial` - I know one possible way is to run `??untar_data`, but that seems very cumbersome. Any better ways?
Inside vim we can just do `ctrl+]`, and inside VSCode it's pretty easy as well.
For navigation it's easiest to use vim or vscode in the python modules (`local/`).
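Besides `??`, Python's standard `inspect` module can locate or print the source of any importable object straight from a notebook cell, with no editor needed. A small stdlib-only sketch (demonstrated on `json.dumps` rather than `untar_data`, so it runs anywhere):

```python
import inspect
from json import dumps

# Locate and display the source of any importable function from a notebook,
# without leaving Python; shown on a stdlib function as a stand-in.
print(inspect.getsourcefile(dumps))       # path to json/__init__.py
print(inspect.getsource(dumps)[:80])      # first characters of the definition
```

Once you have the file path, you can open it in vim/VSCode at will, which combines nicely with the editor-based navigation mentioned above.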
Thank you
What would be a proper way to contribute to code in v2 notebooks?
Please correct me:
- Fork the master repo;
- Clone the forked repo and run tools/run-after-git-clone;
- Create a branch;
- Run jupyter notebook and introduce a correction;
- Save the notebook;
- Run notebook2script.py on the changed notebook (--fname);
- Commit and push to a forked repo;
- Create a PR from the forked repo on Github.
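The steps above can be sketched as a shell function (the fork URL, branch name, and notebook path are placeholders for your own; wrapping it in a function means nothing runs until you call it):

```shell
# Sketch of the v2 contribution workflow; $YOUR_USERNAME, the branch name,
# and the notebook path are placeholders, not prescribed values.
contribute() {
  git clone "https://github.com/$YOUR_USERNAME/fastai_dev.git"  # your fork
  cd fastai_dev
  tools/run-after-git-clone                                     # one-time setup
  git checkout -b my-fix                                        # work on a branch
  # ...edit the notebook in Jupyter, e.g. dev/05_data_core.ipynb, and save...
  python notebook2script.py --fname dev/05_data_core.ipynb      # regenerate modules
  git commit -am "Describe the fix"
  git push origin my-fix                                        # then open the PR on GitHub
}
```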
Question: When I follow this process, files appear in the local repo that I didn't explicitly change.
For example, I corrected `dev/05_data_core.ipynb`, and after running notebook2script.py in the last cell, `git status` reports:
- changed: dev/05_data_core.ipynb
- changed: dev/local/notebook/export.py
- changed: dev/local/notebook/index.txt
- changed: dev/local/tabular/core.py
Shall I include them all into commit and PR?
I don't think so. Ideally I would include only the lines that changed in the notebook. These files are generated from those changes, hence I don't think they need to be committed.
TL;DR: Put everything into one commit.
There are usually two ways to go about this: a) put the `local` folder in `.gitignore` so it won't appear in git, as these are generated files. This is not the approach taken by fastai_dev (obviously). Or b) include generated files in the repo, because you want to use it to distribute the "transpiled" library. In this scenario, you could either make two separate commits, one with the changes made and one with the generated files that changed, or you could put everything into one commit. Looking at the recent commit history, it seems that Jeremy and Sylvain prefer to keep everything in one commit.