How to set up a fast.ai environment for a real-world, production application?

I'm about to start working on an actual project at the university where I work, and I'd like to hear what you all would recommend for setting up a suitable environment.

To date, I've just done the typical git clone of fastai and symlinked the core fastai folder from wherever I run the notebooks. BUT now I need to create an environment that is stable, that multiple developers can potentially use, and that can be set up for dev, QA, and production uses.

I can continue to work as is (if that really is the best approach), or pip install (from PyPI or from the GitHub repo), etc., but I'd love to hear what you all suggest (especially those using fastai in a real-world system).

Thanks - wg


Personally I think it’s best for each dev to have their own anaconda in their home dir. To make fastai available to all projects, just symlink it into the python site-packages folder.
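That workflow might look something like this; the clone location and the Python version in the site-packages path below are illustrative and will differ per machine:

```shell
# Sketch only -- paths are examples, not prescriptive.
# Clone the library once into the dev's home directory...
git clone https://github.com/fastai/fastai.git ~/fastai

# ...then symlink the inner package folder into the environment's
# site-packages so `import fastai` resolves from any project.
ln -s ~/fastai/fastai ~/anaconda3/lib/python3.6/site-packages/fastai
```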


Thanks Jeremy!

Do you know offhand how to get the site-packages folder for an Anaconda environment? Btw, I'm assuming that with this approach I just git clone fastai onto a network share and have each dev symlink from there.

It’s in your home dir in anaconda3/lib/python3.6/site-packages. I suggest you let each person maintain their own fastai repo so they can update it when they’re ready. If you’re going to have all share one fastai repo, then they’ll probably want to share the whole anaconda lib too (otherwise they’ll have issues with versioning).

Correct me if I’m wrong, but I think you mean anaconda/envs/{my-env}/lib/python3.6/site-packages/ rather than anaconda3/lib/python3.6/site-packages.

I tried the latter, and then when I created the environment it wasn't there.
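Rather than guessing the directory, you can ask the active interpreter directly; this one-liner (using the standard-library sysconfig module) prints the directory where the current environment installs pure-Python packages, whether it's a conda env or a virtualenv:

```shell
# Activate the environment first, then ask its own interpreter.
python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"
```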


Apologies, that's exactly what I mean 🙂


Any thoughts on using pipenv instead of Anaconda?

The core lecturers at the Full Stack Deep Learning Bootcamp were really pushing it as the recommended de facto methodology for package management. I'm not even sure if something like your recommendation is possible with it.

I use virtualenv+pip instead of Anaconda. Works fine.

I imagine pipenv would also be good.
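For reference, a minimal virtualenv+pip setup might look like the following; the env name is a placeholder, and pinning via a requirements file is one common way to keep dev, QA, and prod in sync:

```shell
# Create and activate an isolated environment (name is arbitrary).
python3 -m venv fastai-env          # or: virtualenv fastai-env
source fastai-env/bin/activate

# Install dependencies into it, then pin versions for reproducibility.
pip install fastai
pip freeze > requirements.txt       # share this so other envs match
```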

@wgpubs you should get in touch with @matttrent. He’s done the same getting fast.ai up and running within facebook and I believe he explored some of these issues as well.

I am working on productionizing a fastai model. I am using an Anaconda environment. I use Anaconda's pip to install the fastai library from GitHub. So far, so good.
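In case it helps anyone, that setup is roughly the following shape; treat the env name and Python version as examples:

```shell
# Create a dedicated conda env and use ITS pip, not the system pip.
conda create -n fastai-prod python=3.6
conda activate fastai-prod

# Install fastai straight from the GitHub repo into this env.
pip install git+https://github.com/fastai/fastai.git
```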


I am using a fast.ai image classifier as part of a geoinformatics production environment (drone & satellite data), in the same way as @samh describes. Most packages used in geoinformatics are available through the conda channel conda-forge, so this solution works perfectly for me!

Hi Jeremy,

I am a novice to fastai. Can we have multiple environments for fastai, one for v3 and one for fastai 0.7? If I need to do custom PyTorch model building, I only know part 2, which uses fastai 0.7 at this point in time, as explained in lessons 6 and 7 of fastai part 2.
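The usual pattern for running two library versions side by side is one conda environment per version; a sketch (env names are arbitrary, and the 0.7 release is pinned explicitly via pip):

```shell
# One env per fastai version so the two installs never collide.
conda create -n fastai-v1 python=3.6
conda activate fastai-v1
pip install fastai                  # current release

conda create -n fastai-07 python=3.6
conda activate fastai-07
pip install fastai==0.7.0           # the older API used in part 2
```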

minimal API server boilerplate
One more Flask server example

about environment managers:


https://docs.fast.ai/dev/develop.html#switching-conda-environments-in-jupyter


You can also use the Automatic Environment Kernel Detection for Jupyter. It's super easy to use!
