Question regarding tests

When I attempt to push my code, one last test fails. These notebooks are run in Colab, so they include a cell that calls drive.mount() to mount one's Google Drive (and this call is part of the library). drive.mount() expects a user response, which I think is why I'm getting the following failure:
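For context, the mount cell looks something like this (a sketch; the `sys.modules` guard is my addition so the cell is a no-op outside a Colab runtime):

```python
import sys

# google.colab is only importable inside a Colab runtime, hence the guard.
if "google.colab" in sys.modules:
    from google.colab import drive
    # Opens an interactive auth prompt; a headless CI kernel never answers it,
    # so the test runner times out ("Kernel didn't respond in 60 seconds").
    drive.mount("/content/drive")
```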

ValueError: signal only works in main thread
testing: /home/runner/work/nbdev_template/nbdev_template/nbs/01_MoreFunctions.ipynb
Error in /home/runner/work/nbdev_template/nbdev_template/nbs/01_MoreFunctions.ipynb:
Kernel didn't respond in 60 seconds
testing: /home/runner/work/nbdev_template/nbdev_template/nbs/00_core.ipynb
Error in /home/runner/work/nbdev_template/nbdev_template/nbs/00_core.ipynb:
Kernel didn't respond in 60 seconds
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.6.9/x64/bin/nbdev_test_nbs", line 8, in <module>
  File "/opt/hostedtoolcache/Python/3.6.9/x64/lib/python3.6/site-packages/fastscript/", line 73, in _f
  File "/opt/hostedtoolcache/Python/3.6.9/x64/lib/python3.6/site-packages/nbdev/", line 71, in nbdev_test_nbs
    raise Exception(msg + '\n'.join([ for p,f in zip(passed,files) if not p]))
Exception: The following notebooks failed:
##[error]Process completed with exit code 1.

Is there a way to ignore the cells with this particular call?

To ignore specific cells during tests, you would flag them with one of the test flags defined in your settings.ini. For instance, you could have a colab flag for those cells; a normal run of nbdev_test_nbs would then ignore them.
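Concretely, this is a sketch of the settings.ini test-flag mechanism (the flag name colab is just an example; check your nbdev version's docs for the exact syntax):

```
# settings.ini
tst_flags = colab

# first line of any notebook cell that should be skipped by default:
#colab
from google.colab import drive
drive.mount('/content/drive')
```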


I wanted to ask about this as well, as I’ve seen the flags mentioned a couple of times, but not sure I’ve got it.
Basically, I can define a list of flags (not limited to a predefined set), but they all have the same semantics, i.e. "ignore this cell in tests". Is that right?
Also, is there a way to ignore a cell in GitHub actions but keep it when running nbdev_test_nbs? For example I don’t upload large datasets to GitHub, but I want these cells to run when I test locally.

A flagged cell is ignored unless you pass the matching flag. So it allows you to have different test suites: the core suite, plus the parts that need something extra (for instance a GPU, an extra dependency, or a large dataset).
Which means you can use the same mechanism for your second question. Define a custom alias or make command for

nbdev_test_bvs --flags large_dataset

and run this locally, while the CI on GitHub will only run the core test suite.
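The alias/make setup could look like this (a sketch; the target and alias names are made up, large_dataset must match a flag listed in tst_flags, and the command is nbdev_test_nbs):

```
# Makefile
test-all:
	nbdev_test_nbs --flags large_dataset

# or, equivalently, a shell alias in your ~/.bashrc
alias nbdev_test_all='nbdev_test_nbs --flags large_dataset'
```

Running make test-all (or the alias) locally then exercises the flagged cells too, while CI, which runs plain nbdev_test_nbs, skips them.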


Ah, great, I wasn’t aware of the --flags option, so it appeared all flags did the same thing. And that solves my dataset issue. Very cool, thanks.
[ I think that nbdev_test_bvs is meant to be nbdev_test_nbs in your answer]