I’m using fastai v2 and nbdev to build a timeseries package. At the beginning, I saved my Jupyter notebooks in the root folder, as in the walkthrus back then. I wanted to move them to a new nbs folder that I created under the root folder, as is currently done in fastai2, fastcore, etc. I changed my settings.ini like this:

lib_name = timeseries
nbs_path = nbs
doc_path = docs
lib_path = timeseries
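
(For anyone debugging this: nbdev keeps these keys in the [DEFAULT] section of settings.ini and reads them with Python’s configparser, so you can sanity-check what it sees. The ini content below just mirrors the excerpt above.)

```python
import configparser

# Mirrors the settings.ini excerpt above; nbdev keeps its keys in [DEFAULT].
ini = """\
[DEFAULT]
lib_name = timeseries
nbs_path = nbs
doc_path = docs
lib_path = timeseries
"""

cfg = configparser.ConfigParser()
cfg.read_string(ini)
print(cfg["DEFAULT"]["nbs_path"])  # nbs
```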

When I run from timeseries.core import *, it triggers the following error:

ModuleNotFoundError                       Traceback (most recent call last)
----> 1 from timeseries.core import *

ModuleNotFoundError: No module named 'timeseries'

If I add this

import sys

the error disappears and everything works well. In both fastai2 and fastcore, this trick is not necessary, so I am wondering what I am missing.
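
(Note: import sys on its own doesn’t change Python’s module search path, so the working cell presumably also appends the repo root. A hypothetical reconstruction:)

```python
# Hypothetical reconstruction of the full cell: append the repo root
# (the parent of the nbs folder) so `from timeseries.core import *` resolves.
import sys

if ".." not in sys.path:
    sys.path.append("..")
```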

Thank you for your help.

PS: if you feel that this post should be in the nbdev section, please feel free to move it there.


I did that :slight_smile:

1 Like

I’m having the same kind of problem, but on fastpages. I’m running the server locally (make server), but my local Docker image does not have torch installed. I did insert !pip install torch in my notebook, but it goes away when I stop and reboot my local server. Shouldn’t the Docker image used for running the local version of fastpages/nbdev keep its dependencies? Am I missing something?

I’ll answer my own question, in case someone else is facing the ModuleNotFoundError problem.

On Linux, add a symlink to your lib_path (timeseries in my case) in your nbs folder by running this from inside the nbs folder:

ln -s ../timeseries/ timeseries

Replace timeseries with the lib_path value that you entered in the settings.ini file.
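
(To make the layout concrete, here’s a self-contained sketch of the same trick, run from above the repo root; the folder names follow this thread’s example, so substitute your own lib_path.)

```shell
# Demo layout: a repo root containing the package folder and an nbs folder.
mkdir -p repo/timeseries repo/nbs

# Create the symlink inside nbs so notebooks there can `import timeseries`.
ln -s ../timeseries repo/nbs/timeseries

ls -l repo/nbs   # shows: timeseries -> ../timeseries
```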

The problem I am facing now is that when I use my repo on Windows, the Linux symlink is not recognized. Any thoughts?

Also having the same issue. Wonder if it’s a Windows thing?

Alright, progress. I’m running notebooks in the app/nbs folder and trying to import the module from app/app, and it can’t find the module. But if I make a notebook in the app/ folder, I can import the autogenerated module.

Farid, maybe this is what you were saying. How do I work around this?

No, it will go away unless you edit the Dockerfile to add this dependency. Every time you restart the server, it builds from scratch.
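
(A minimal, illustrative sketch of that Dockerfile edit; the exact file location and base image depend on your fastpages checkout, so treat this as a guide rather than a recipe.)

```dockerfile
# Illustrative only: appended to the existing fastpages Dockerfile so the
# dependency is baked into the image and survives server restarts.
RUN pip install torch
```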

Here is my workaround:
1. On Linux, I created a symlink like this:

ln -s ../timeseries/ timeseries

2. In my notebooks, I add this cell before importing my own modules (i.e. timeseries in my case):

# hide
# Only for Windows users, because the symlink to the `timeseries` folder is not recognized by Windows
import sys
sys.path.append("..")  # add the repo root (parent of nbs) to the module search path

This way, I’m able to store my notebooks in the nbs folder, and everything works well. You can check out one of my notebooks in my repo (e.g. 81_timeseries_core.ipynb, in the import section at the top).
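
(A symlink-free, cross-platform variant of the same idea, so Windows users don’t need the symlink at all; it assumes the layout in this thread, with notebooks in nbs and the package one level up.)

```python
# Cross-platform alternative to the symlink: put the repo root on
# sys.path so `import timeseries` works from notebooks in nbs/.
import sys
from pathlib import Path

repo_root = Path("..").resolve()  # parent of the nbs folder
if str(repo_root) not in sys.path:
    sys.path.insert(0, str(repo_root))
```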

I hope you find it helpful.


Thank you, you saved me some time by finding this!
It might be worth showing this somewhere, or adding a configuration option for it, as I guess most people will want some extra packages installed at some point, no?

1 Like

Great idea. I’ll start writing docs on this.

Hmm, I needed GPU support for a training loop too. docker-compose does not yet support an equivalent of the --gpus all flag, so I had to hack around my local Docker container a bit. Not sure how this will play with the GitHub workflow anyway, as I’m just doing this locally for now. I’m guessing they don’t provide GPU machines there anyway, but you could develop your notebook on Colab or locally (with those hacks) and just push with the regular flow.

Reference issue for docker-compose

Just FYI

1 Like

@Narsil GitHub / fastpages doesn’t run your notebook; it only renders it.

I am not completely sure you should be stuffing the Docker container full of dependencies as that might interfere with your blog environment and might hinder your ability to test things locally (because you want to mirror the environment of GitHub Actions). We also may make changes and bug fixes to this container over time.

You might want to build a separate Docker container, personalized to you, with all of your desired dependencies. I’m still thinking about this. Perhaps I should remove Jupyter from the development environment and move it to a standalone service, which means you would have to start the blog and the Jupyter server separately. That’s two separate commands, but at least you can more easily personalize your development environment.

@jeremy @xnutsive any thoughts or preferences here?

My advice in the meantime is to write notebooks and manage environments the way you normally do, using conda, etc., and copy the notebook into the _notebooks folder when you want to preview it locally; don’t worry about the Jupyter server that starts in the Docker container. Just make sure you start your own Jupyter service on a different port from the one used by the existing Docker process.

Hope that helps

Keep it simple :slight_smile: Folks who want complex things can make them as complex as they want!


Good point. I went ahead and removed Jupyter from the development environment to simplify things. My rationale is detailed in this PR, but I’ll copy and paste it here as well. I hope this helps. Thank you for the feedback, everyone.

Summary of Changes

  • Removed Jupyter Server (not the watcher) from the development environment
  • Edited the instructions so that users start/manage their own Jupyter server, in the manner they are most comfortable with.

Reason for this change

  • Encourages people to use the package management tools they are comfortable with, which might be easier for them
  • Simplifies the development environment considerably (no tokens, no jupyter server, etc)
  • Encourages folks to maintain a development environment with all their dependencies (fastai, pytorch, tensorflow, etc., whatever) that is decoupled from the converter and Jekyll server, which is much safer and a better way to ensure that the development environment mirrors what happens in GitHub Actions.
  • Makes this project easier to maintain as we can focus on fastpages, not managing a global “data science” dependency manager
1 Like

What’s shown there now is too complex for me to handle, FYI. I’m terrified of Docker, despite using it in three different projects. There seem to be a lot of assumptions about my knowledge of Docker, and that I’d already have things set up in some way, but I’m not sure what those assumptions are, or how to satisfy them.

So, at least for me, it’s currently not simple - or at least not simple enough!.. (I may be slower than most folks, however… :slight_smile: )

For me, “simple” is: press the following buttons, and it will work.

When I googled “docker windows”, the first hit is a long page with lots of steps. I’m not sure what I need to do.

I think the docker stuff sounds like a nice optional extra, but (unless I’m misreading) at the moment it seems to be the recommended way. If that’s what we want, then we need simple step-by-step instructions for each OS to make it work.

Alternatively, regular Jekyll could be the recommended way, and Docker could be an advanced alternative. I haven’t tried it, but there’s a conda package. If that worked, that might be the easiest. Or we could have a little script that sets up rvm, Ruby, and the gems.

With conda, we have the benefit that we could create a single meta-package that sets up everything automatically, in a cross-platform way.

Happy to discuss all this live if that’s helpful.

1 Like

@jeremy Thank you so much for this feedback, this is very helpful as I may have developed blindspots regarding what is easy and hard.

In this case it might save us time to discuss live (rather than having too many back-and-forth questions). I’ll email you so we can set up a quick Zoom; I anticipate the discussion won’t take longer than 15 minutes.

For anyone following this thread, I will be making a local development environment based on conda (but will retain Docker as well, for those who like that). This will give folks multiple ways of configuring their local environment, depending on what they want.

I’m currently trying to upgrade fastpages to use Jekyll 4.0 (which has been much more difficult than it sounds) before trying to create the conda environment.


I have been trying to figure out how to do this. If anyone watching this thread would be willing to pair-program, that would be helpful; I am really stuck trying to get Jekyll to work on conda properly (perhaps I am just slow and don’t know how to look for the right answers; I am new to conda).

I keep banging my head against this error:

/Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:456:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:590:in `try_cpp'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:1098:in `block in have_header'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:948:in `block in checking_for'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:350:in `block (2 levels) in postpone'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:320:in `open'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:350:in `block in postpone'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:320:in `open'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:346:in `postpone'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:947:in `checking_for'
	from /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/2.5.0/mkmf.rb:1097:in `have_header'
	from extconf.rb:10:in `system_libffi_usable?'
	from extconf.rb:34:in `<main>'

To see why this extension failed to compile, please check the mkmf.log which can be found here:


extconf failed, exit code 1

Gem files will remain installed in /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/gems/2.5.0/gems/ffi-1.12.2 for inspection.
Results logged to /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/gems/2.5.0/extensions/x86_64-darwin-17/2.5.0/ffi-1.12.2/gem_make.out

I tried to understand why the install of ffi is not working, and the logs give me this:

cat /Users/hamelsmu/anaconda3/envs/jekyll/lib/ruby/gems/2.5.0/extensions/x86_64-darwin-17/2.5.0/ffi-1.12.2/mkmf.log
"pkg-config --exists libffi"
dyld: Symbol not found: _iconv
  Referenced from: /usr/lib/libarchive.2.dylib
  Expected in: /Users/hamelsmu/anaconda3/envs/jekyll/lib/libiconv.2.dylib
 in /usr/lib/libarchive.2.dylib
package configuration for libffi is not found
"x86_64-apple-darwin13.4.0-clang -o conftest -I/Users/hamelsmu/anaconda3/envs/jekyll/include/ruby-2.5.0/x86_64-darwin17 -I/Users/hamelsmu/anaconda3/envs/jekyll/include/ruby-2.5.0/ruby/backward -I/Users/hamelsmu/anaconda3/envs/jekyll/include/ruby-2.5.0 -I. -D_FORTIFY_SOURCE=2 -mmacosx-version-min=10.9 -I/Users/hamelsmu/anaconda3/envs/jekyll -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_UNLIMITED_SELECT -D_REENTRANT   -march=core2 -mtune=haswell -mssse3 -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -I/Users/hamelsmu/anaconda3/envs/jekyll/include -fdebug-prefix-map=/usr/local/miniconda/conda-bld/ruby_1570429193060/work=/usr/local/src/conda/ruby-2.5.7 -fdebug-prefix-map=/Users/hamelsmu/anaconda3/envs/jekyll=/usr/local/src/conda-prefix -fno-common conftest.c  -L. -L/Users/hamelsmu/anaconda3/envs/jekyll/lib -L. -Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,/Users/hamelsmu/anaconda3/envs/jekyll/lib -L/Users/hamelsmu/anaconda3/envs/jekyll/lib -fstack-protector -L/usr/local/miniconda/conda-bld/ruby_1570429193060/_build_env/lib/clang/4.0.1/lib     -lruby.2.5.7  -lpthread -lgmp -ldl -lobjc "
checked program was:
/* begin */
1: #include "ruby.h"
3: int main(int argc, char **argv)
4: {
5:   return 0;
6: }
/* end */

dyld: Symbol not found: _iconv

cc: @jeremy

I guess the only thing I can tell is that something is going wrong in the Ruby installation or environment.