Wiki: Fastai Library Feature Requests

How can we use multiple augmentations (RandomFlip, center crop, top-down, etc.) on the same dataset?

Would doing so make the learner more versatile, since it could then handle a wider variety of images?

Could it be done this way:

  • Train the learner with any one set of augmentations.
  • Save and load the weights, then re-run the learner exactly as before, except with different augmentations.
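The two-phase idea above can be sketched in plain Python. Everything here (`ToyLearner`, the `augment_*` functions) is a hypothetical stand-in for fastai's real learner and transforms; the point is only the workflow of saving weights and resuming with different augmentations.

```python
# Hypothetical stand-ins for fastai's learner and transforms: the point is
# the two-phase train / save / reload-with-new-augs workflow, not the real API.
def augment_flip(x):
    return list(reversed(x))

def augment_crop(x):
    return x[1:-1]

class ToyLearner:
    def __init__(self, augs):
        self.augs = augs
        self.weights = [0.0]

    def fit(self, data):
        # "Train" on augmented copies of each sample.
        for sample in data:
            for aug in self.augs:
                self.weights[0] += sum(aug(sample)) * 1e-3

    def save(self):
        return list(self.weights)

    def load(self, weights):
        self.weights = list(weights)

data = [[1, 2, 3, 4], [5, 6, 7, 8]]

# Phase 1: train with one augmentation, then save the weights.
learner = ToyLearner(augs=[augment_flip])
learner.fit(data)
saved = learner.save()

# Phase 2: rebuild the learner with different augs and reload the weights,
# so training continues from where phase 1 left off.
learner2 = ToyLearner(augs=[augment_crop])
learner2.load(saved)
learner2.fit(data)
```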

Is it just me, or do the notebooks run way slower after my latest conda env update? It seems to be something about opencv; even opening single images is taking forever. My epochs are 3x slower, and nothing else changed on my p2 instance. I am working with the lesson7-CAM notebook.

I restarted my p2 instance for good measure.

Did something change in the env about pytorch? opencv? or cuda?


It is one of my goals in the near future to improve this, even though the validation metric could be a varying function of lists and whatnot! The notebooks just don’t look right with those numbers spread out like that.

Also, I want to output something about the wall-time used per epoch.

I’m hoping to use some kind of reflection to determine which loss function and metrics were passed, and print them out. That way we can get something more specific than a plain “val metrics” label.
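Both ideas (per-epoch wall time and reflecting on metric names) can be sketched with the standard library alone. `fit_with_timing` and `accuracy` are hypothetical names for illustration, not fastai functions; the "epoch" here is a stand-in computation.

```python
import time

def accuracy(preds, targs):
    return sum(p == t for p, t in zip(preds, targs)) / len(targs)

def fit_with_timing(epochs, metrics):
    # Use reflection (__name__) to print the names of the metric functions
    # that were passed in, plus the wall time spent in each epoch.
    header = ["epoch"] + [getattr(m, "__name__", str(m)) for m in metrics] + ["time"]
    rows = [header]
    for epoch in range(epochs):
        start = time.perf_counter()
        preds, targs = [1, 0, 1], [1, 1, 1]   # stand-in for a real training epoch
        elapsed = time.perf_counter() - start
        row = [epoch] + [round(m(preds, targs), 4) for m in metrics] + [f"{elapsed:.2f}s"]
        rows.append(row)
    return rows

table = fit_with_timing(epochs=2, metrics=[accuracy])
for row in table:
    print(*row, sep="\t")
```

With this, the header row reads `epoch  accuracy  time` instead of a generic “val metrics”, because the metric's own function name is recovered at runtime.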

I will try to rerun the entire set of notebooks, and submit them for PR.

Would that be nice to have @jeremy ?



I just added a line for compatibility with scikit-learn:

I feel like this could make code simpler by making scikit-learn more accessible from fastai. It would also enable using tools like xcessiv, which seems like a really cool way to do parameter tuning and stacking/ensembling.


Hi all experienced coders,

Is it possible to put the fastai environment into a Docker container (see example below)? @radek, any comment?

Thank you to @hamelsmu for sharing his knowledge, and congratulations on the recent success.

It would be great if we could make this wiki accessible in the Part 2 category as well.


I think so 🙂 Today or tomorrow I am going to find out, as I work through the tutorial from @hamelsmu (I have not used docker before).


Hey folks, I have made a docker container for this class:

This uses Nvidia-Docker v2.0 and contains all the dependencies for the class. You still have to download the data once you are in the container. You can view the Dockerfile here:
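For readers who have not built a GPU image before, a Dockerfile for this kind of setup might look roughly like the sketch below. This is an illustration only, not the Dockerfile from the post above: the base-image tag and repository paths are assumptions.

```dockerfile
# Illustrative sketch; image tag and paths are assumptions, not the
# actual Dockerfile referenced above.
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

RUN apt-get update && apt-get install -y git wget bzip2 && \
    rm -rf /var/lib/apt/lists/*

# Miniconda, then the fastai conda environment
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/conda.sh && \
    bash /tmp/conda.sh -b -p /opt/conda && rm /tmp/conda.sh
ENV PATH=/opt/conda/bin:$PATH

RUN git clone https://github.com/fastai/fastai /fastai && \
    cd /fastai && conda env create -f environment.yml

WORKDIR /fastai
CMD ["/bin/bash"]
```

Under Nvidia-Docker v2.0 you would then run the image with `docker run --runtime=nvidia -it <image>` so the container can see the GPU.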

I hope this is helpful!!! cc: @radek @Moody


Request: Multi-GPU automagic scaling

Description: An option in fastai to utilize a set of GPUs for a task, with fastai automatically handling parallelization, data distribution, and synchronization. For example, I know Jeremy has 4 GPUs, as do others; I have 2. With this, you could set a list, e.g. GPUS=[0, 2], for the first and third GPU to automagically work together on a task.
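The data-distribution half of this request can be sketched in plain Python: given `GPUS=[0, 2]`, shard each batch round-robin across those device ids. The device ids are just labels here; real parallel execution would come from something like `torch.nn.DataParallel`, which this sketch does not attempt.

```python
# Hypothetical sketch of automagic sharding: split a batch across the
# user-chosen device ids round-robin. No actual GPU work happens here.
def shard_batch(batch, gpus):
    shards = {g: [] for g in gpus}
    for i, sample in enumerate(batch):
        shards[gpus[i % len(gpus)]].append(sample)
    return shards

GPUS = [0, 2]                      # first and third GPU, as in the request
batch = ["img0", "img1", "img2", "img3", "img4"]
shards = shard_batch(batch, GPUS)  # each device id gets its slice of the batch
```

The synchronization step (averaging gradients back from each shard) is the hard part the request is really asking the library to hide.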


Hi @Moody I’ve posted some docker files for fastai on github


Make the t_up, t_st, and t_mx flags in Tokenizer optional, so a user can turn them off. I am currently working on a project with SQL, and it would be nice to have a quick way to stop capitalization from being flagged. I am going to code this, but not tonight, so if somebody else wants to do it, I think it would be a good one.
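A toy illustration of what the optional flag would do. The flag name mirrors the post (`t_up` marks all-caps tokens), but the implementation is a hypothetical sketch, not fastai's Tokenizer; with the flag off, SQL keywords like `SELECT` pass through untouched.

```python
# Hypothetical sketch: a tokenizer whose all-caps flag (t_up) is optional.
# With use_t_up=True, all-caps tokens are lowercased and prefixed with a
# "t_up" marker token; with use_t_up=False, tokens pass through unchanged.
def tokenize(text, use_t_up=True):
    out = []
    for tok in text.split():
        if use_t_up and tok.isupper() and len(tok) > 1:
            out.extend(["t_up", tok.lower()])
        elif use_t_up:
            out.append(tok.lower())
        else:
            out.append(tok)
    return out
```

For SQL, `tokenize("SELECT name FROM users", use_t_up=False)` keeps the keywords intact instead of emitting flag tokens.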

arch = senet154
learn = ConvLearner.pretrained(arch, data)

Hi Rob, so was this implemented at the end?


Is anybody aware of an sklearn-like wrapper for fastai?

[Feature Request] Weld end-to-end optimization

Let’s face it, performance matters but proper optimization takes hard work, time, and a lot of testing.

Weld enables end-to-end optimization across disjoint libraries and functions without changing the libraries’ user-facing APIs. So why not bake it into fastai?

Can I request mermaid support in nbdev?