You should use https://github.com/fastai/fastscript, it works well with nbdev. Haven’t tried argparse, so I don’t know if there are bad side effects.
I’ll try that, thanks. I was hoping to use test_tube’s HyperOptArgumentParser to enable hyperparameter search, but I guess it might not work well with nbdev.
Just as a side note, I tried completely isolating the argument parser in an independent notebook (in case the problem was that it got called multiple times in parallel from multiple imports), but that didn’t help. I would guess the problem comes from multiprocessing, but it still fails with n_workers=1, so I’m not sure.
EDIT: it probably comes from a compatibility problem between nbconvert and argparse, since notebook2script works fine, and it uses multiprocessing but not nbconvert.
Is anyone going to try to make this work with metaflow, mlflow, or kubeflow? Would love to hear reports of others’ experiences.
No, notebook2script uses a single process. It doesn’t try to execute the notebook either, so it could be either of those things.
I’m testing nbdev; congrats, it’s amazing.
Now, while running the CI after pushing, I’m getting an error on GitHub at the clean step.
However, if I run nbdev_clean_nbs locally, it runs fine with no errors.
When it fails on GitHub, it says:
Check if all notebooks are cleaned
##[error]Process completed with exit code 1.
Run echo "Check we are starting with clean git checkout"
Check we are starting with clean git checkout
Trying to strip out notebooks
Check that strip out was unnecessary
!!! Detected unstripped out notebooks
!!!Remember to run nbdev_install_git_hooks
##[error]Process completed with exit code 1.
However, I did run the nbdev_install_git_hooks command a few times earlier,
and also used nbstripout to clean the output of the notebooks.
Any tips? Thank you.
PS: just fixed it; there was a sync issue. Once everything got back in sync, the CI executed perfectly. Congratulations again for creating such an amazing library.
Just started using nbdev and it is great! Two quick questions:
- Is there a way to use nbdev with GitHub Desktop? It looks like the git hooks from
nbdev_install_git_hooks cause errors when using the Desktop client.
- What is the best way to take an existing repo and port it to nbdev?
I have no knowledge of GitHub Desktop, so I have no idea how to make the git hooks work with it.
To port an existing repo to nbdev, the easiest way is to create a new one from the template, copy your notebooks in, test, and when you’re ready, replace your old repo with the new content.
I expect it would also be fine to copy all the files from nbdev_template into your repo.
Oh, indeed. Executing the notebook shouldn’t fail though, as it works for me. And it still fails with
n_workers=1, so I guess it’s not multiprocessing. I’ll look into it; maybe I’ll find a way to at least see the stack trace.
OK, I found where it comes from. Nbconvert adds a custom argument,
--HistoryManager.hist_file, which triggers an
unrecognized-argument error from argparse (the same way notebooks add a
--file argument). Just adding it to my
ArgumentParser solved the problem.
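For reference, a minimal sketch of that fix (the flag value here is made up; in a real run nbconvert injects it into sys.argv):

```python
import argparse

# Declare the flag nbconvert injects so argparse stops raising
# "unrecognized arguments". The value passed below is just for illustration.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.1)
parser.add_argument("--HistoryManager.hist_file", default=None)

args = parser.parse_args(["--lr", "0.01", "--HistoryManager.hist_file", ":memory:"])
print(args.lr)  # 0.01

# Alternatively, parser.parse_known_args() collects any undeclared flags
# into a separate list instead of erroring, which also survives
# notebook-injected arguments like -f/--file.
```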
Hello, thanks for the release.
I’m giving it a try, and I’m not sure how to handle dependencies between notebooks.
I have two notebooks, 00_a and 01_b; how do I import/use a within b?
Do I have to run nbdev_build_lib each time I modify 00_a?
Also, in 00_a I have “# default_exp a”; I tried to run nbdev_build_lib and then call “import a” from within 00_b, but I got a “no module named…” error.
Could someone clarify the workflow for notebook dependencies?
Small edit: it seems using “nbs_path” in settings.ini to put the notebooks in a subdirectory doesn’t work well, since the library path does not include the root directory.
In other words, with nbs_path = ., you can import mylib.a properly within 01_b; but with “nbs_path = nbs” and the notebooks in that subdirectory, “import mylib.a” fails because “sys.path” doesn’t include the root directory (or the output directory from nbdev_build_lib).
(The question about having to run nbdev_build_lib each time I modify a dependency remains, though.)
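One possible stdlib-only workaround for the sys.path issue described above (the layout is an assumption: notebooks under a nbs/ subdirectory, library package at the repo root) is to prepend the repo root at the top of the notebook:

```python
import sys
from pathlib import Path

# Assumed layout: this cell runs from <repo>/nbs, and the built library
# lives at <repo>/mylib. Adding the repo root to sys.path makes
# `import mylib.a` resolvable without a symlink.
repo_root = Path.cwd().parent
if str(repo_root) not in sys.path:
    sys.path.insert(0, str(repo_root))

print(str(repo_root) in sys.path)  # True
```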
You should have a symlink to your library path if your notebooks path is different from ‘.’, so that Python can find your library.
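A hedged sketch of what that symlink could look like, assuming a package named mylib at the repo root and notebooks under nbs/ (both names are placeholders):

```shell
# Create a dummy layout for illustration, then link the package into nbs/
mkdir -p mylib nbs
touch mylib/__init__.py
ln -sfn ../mylib nbs/mylib

# Notebooks executed with nbs/ as the working directory can now `import mylib`.
ls nbs/mylib/__init__.py
```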
As for the build, you can do as we did in fastai2: have the last cell of each notebook call
notebook2script() (imported from nbdev.export) and just run that last cell each time you change the notebook.
Got a new question: is there any way to export a script, rather than a module, from a notebook? For instance, I have a notebook where I fit the model, which I don’t want to happen when I call
nbdev_build_docs. Is there a way to export this notebook into a Python file but still prevent it from being run when building docs? I know I can prefix it with an underscore, but that would prevent it from being converted to Python as well. I guess I could force conversion while still prefixing with an underscore, so that it is converted but never actually run. Just wanted to be sure there is no better way to do it.
The easiest way I can see is to have your notebook that becomes a script in a different folder. It won’t be converted automatically when you call nbdev_build_lib, but you can still pass a manual path to that command to do the conversion, and since it’s not in the same path, it won’t be considered for the doc building.
While adding nbdev to existing projects, if there are already existing modules in
lib_path, I believe there is no way to convert those to notebooks. A script can only be converted back if it already has an associated notebook. Is that correct? Would it be possible to convert scripts to notebooks that don’t have a notebook to start with?
No. As documented, the command that updates the notebooks from the scripts can only work with small changes.
Hi, is it expected behavior that only the first failed test per notebook raises an AssertionError, and then the notebook stops executing? Here’s what I mean:
of these two cells, only the top one is executed and fails, then notebook execution stops.
For a quick one-time conversion, if you have VS Code, you can use the
#%% tag in your Python code and then convert to notebooks… not the most elegant, but it may ease things up.
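For example, a plain .py file annotated with cell markers (the function below is just a placeholder); VS Code’s interactive window, and tools like jupytext, treat each # %% marker as a notebook cell boundary when converting:

```python
# %% [markdown]
# A script split into cells with "# %%" markers, ready for conversion.

# %%
def add(a, b):
    """Placeholder function standing in for real notebook code."""
    return a + b

# %%
print(add(2, 3))  # 5
```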
Yes, this is expected. This is not like pytest, which runs tests in parallel: it stops at the first problem in each notebook (you know you have to go there and fix things in any case).