Wiki: Fastai Library Feature Requests

It's RMSE for Train and Validation, if this is coming from print_score(m). print_score is defined near the top of the notebook.

Got it! Thank you. @ramesh

For CV I think it’s helpful to have the same split each time.


They are. I believe the point he is making is that, when you do a relatively large number of cycles with different cycle_len and cycle_mult, it is not clear right away after which epochs our model was saved. We need to do some calculations to find out where our cycles ended, so I do believe that highlighting the end of each cycle in some way could be very helpful, so that we can see right away what kind of loss and accuracy we get from the saved weights.
Based on what @KevinB proposed, I suggest something like the following:

[ 0.       0.29109  0.22709  0.9116 ] [cycle 0]                       
[ 1.       0.27596  0.21606  0.91596]                        
[ 2.       0.25896  0.21201  0.91738] [cycle 1]                        
[ 3.       0.23578  0.21034  0.91812]
[ 4.       0.25659  0.20687  0.92041]                        
[ 5.       0.2449    0.1977   0.9226]                            
[ 6.       0.23827  0.1925   0.92457] [cycle 2]
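For reference, with cycle_len=1 and cycle_mult=2 (the settings that would produce the markers above), the epochs at which cycles end can be computed with a small helper like this (hypothetical, not part of fastai):

```python
def cycle_end_epochs(n_cycles, cycle_len=1, cycle_mult=2):
    """Return the 0-based epoch index at which each SGDR cycle ends.

    Cycle i runs for cycle_len * cycle_mult**i epochs.
    """
    ends, epoch, length = [], -1, cycle_len
    for _ in range(n_cycles):
        epoch += length
        ends.append(epoch)
        length *= cycle_mult
    return ends

print(cycle_end_epochs(3))  # [0, 2, 6] -- matches the [cycle n] markers above
```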

Yeah, thanks for explaining it in a much better way. Usually you can figure it out because the loss pops back up, but sometimes it's tough to tell for sure.

I like that! If anyone wants to try a PR for this I'd be interested - although I'm not quite sure of the best design for this. Might be a tricky one!

Extracting Output from Intermediate Layer (potential feature request)

I am interested in extracting the output of an intermediate layer. For example, the layer right before classification. Is there a relatively straightforward way to do this with Fastai/Pytorch?

I'm thinking I should be able to just refer to the layer by name or index and simply ask for its output. I'm sure it must be possible, but I think it would be nice to have a simple high-level function for doing this (e.g. model.get_layer(layer_name).output).

Yes, you can use a forward hook. We’ll learn about them on Monday :slight_smile:
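For anyone reading along before Monday, here is a minimal sketch of a PyTorch forward hook on a toy model (the layers, sizes, and the "penultimate" name are just for illustration):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 16),  # intermediate layer whose output we want
    nn.ReLU(),
    nn.Linear(16, 2),   # classification head
)

activations = {}

def save_output(name):
    # The hook receives (module, inputs, output) on every forward pass
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register the hook on the layer by index, run a forward pass, then clean up
handle = model[2].register_forward_hook(save_output("penultimate"))
x = torch.randn(4, 10)
_ = model(x)
handle.remove()

print(activations["penultimate"].shape)  # torch.Size([4, 16])
```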


A very small bug that you must have already seen.

When you define the data object and provide test_name='test':

data = ImageClassifierData.from_csv(PATH, folder='train', csv_fname=f'{PATH}labels.csv',
                                    tfms=tfms, val_idxs=get_cv_idxs(n=4750), test_name='test', bs=12)

If there's no test folder, or the test folder is empty, this assignment results in an unhelpful error.

The function read_dir() is used to read the test folder. The function already contains a TODO: warn or error if no files are found?

def read_dir(path, folder):
    # TODO: warn or error if no files found?
    full_path = os.path.join(path, folder)
    fnames = iglob(f"{full_path}/*.*")
    return [os.path.relpath(f,path) for f in fnames]

read_dir() is only used to read the test data folder.


I think this should raise an error if it returns an empty list.

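For instance, a minimal sketch of such a check (the exact exception type and message are just a suggestion):

```python
import os
from glob import iglob

def read_dir(path, folder):
    full_path = os.path.join(path, folder)
    fnames = [os.path.relpath(f, path) for f in iglob(f"{full_path}/*.*")]
    if not fnames:
        # Fail loudly instead of silently returning an empty list
        raise FileNotFoundError(f"No files found in {full_path}")
    return fnames
```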

That way the error message would be more explicit.

You can edit your fastai code and check whether that works, and feel free to create a PR; Jeremy will look at it after that.


Yes, I think it’s better to report the issue there and submit a PR.

How can we use multiple augmentations (RandomFlip, center crop, top-down, etc.) on the same dataset?

By doing so, will the learner become more versatile, since it can now handle different images better?

Can it be done in this way:

  • train the learner with any one set of augs.
  • save the weights, re-create the learner exactly as before except for the augs, and load the weights back in.
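Roughly, yes: rebuild the data/learner with the new augs and load the saved weights back in before continuing. A plain-PyTorch sketch of the pattern (the fastai data-object rebuild follows the same idea; the model, loader, and augmentation lambdas here are just stand-ins):

```python
import io
import torch
import torch.nn as nn

def make_loader(augment):
    # Stand-in for rebuilding the data object with a different aug_tfms
    xs, ys = torch.randn(32, 10), torch.randint(0, 2, (32,))
    return [(augment(xs), ys)]

flip = lambda x: -x        # stand-in for RandomFlip
crop = lambda x: x * 0.9   # stand-in for center crop

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

# 1. Train with one set of augs, then save the weights.
train_one_epoch(make_loader(flip))
buf = io.BytesIO()
torch.save(model.state_dict(), buf)

# 2. Rebuild the loader with different augs, load the weights, keep training.
buf.seek(0)
model.load_state_dict(torch.load(buf))
train_one_epoch(make_loader(crop))
```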

Is it just me, or do the notebooks run way slower after my latest conda env update? It seems to be something about opencv; even opening single images is taking forever. My epochs are 3x slower. Nothing else changed on my p2 instance. I am working with the lesson7-CAM notebook.

I restarted my p2 instance for good measure.

Did something change in the env about pytorch? opencv? or cuda?


It is one of my goals in the near future to improve this, despite the fact that the validation metrics can be a varying list of functions and whatnot! The notebooks just don't look right with those numbers spread out like that.

Also, I want to output something about the wall-time used per epoch.

I'm hoping to use some kind of reflection to determine what loss functions and metrics were passed, and print them out. That way we can get something more specific than a plain "val metrics" label.
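A sketch of the reflection part (the helper names and header layout are hypothetical, not fastai's actual printing code):

```python
def metric_name(fn):
    # Reflection: pull a readable name off the callable itself
    return getattr(fn, "__name__", type(fn).__name__)

def epoch_header(loss_fn, metrics):
    """Build a column header from the loss/metric callables that were passed."""
    cols = ["epoch", f"trn_{metric_name(loss_fn)}", f"val_{metric_name(loss_fn)}"]
    cols += [metric_name(m) for m in metrics]
    return cols + ["time"]  # wall time per epoch would fill the last column

def accuracy(preds, targs):  # example metric callable
    return sum(p == t for p, t in zip(preds, targs)) / len(targs)

def nll_loss(preds, targs):  # example loss callable
    return 0.0

print(epoch_header(nll_loss, [accuracy]))
# ['epoch', 'trn_nll_loss', 'val_nll_loss', 'accuracy', 'time']
```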

I will try to rerun the entire set of notebooks and submit them as a PR.

Would that be nice to have @jeremy ?



I just added a line for compatibility with scikit-learn:

I feel like this could simplify code by making scikit-learn more accessible from fastai. It would also enable tools like xcessiv, which seems like a really cool way to do parameter tuning and stacking/ensembling.
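The added line itself isn't shown above, but as an illustration of what scikit-learn compatibility buys: any object implementing the estimator interface (fit/predict, plus BaseEstimator for get_params) plugs straight into sklearn tooling such as cross_val_score or stacking ensembles. A toy example (MajorityClassifier is hypothetical, not a fastai class):

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import cross_val_score

class MajorityClassifier(BaseEstimator, ClassifierMixin):
    """Toy estimator: always predicts the most common training label."""
    def fit(self, X, y):
        vals, counts = np.unique(y, return_counts=True)
        self.majority_ = vals[np.argmax(counts)]
        return self

    def predict(self, X):
        return np.full(len(X), self.majority_)

X = np.random.rand(20, 3)
y = np.array([0] * 14 + [1] * 6)

# Because the estimator follows the sklearn interface, cross-validation
# (and stacking, grid search, etc.) works out of the box.
scores = cross_val_score(MajorityClassifier(), X, y, cv=2)
print(scores)  # accuracy of the majority baseline on each fold
```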


Hi all experienced coders,

Is it possible to put the fastai environment into a Docker container (see example below)? @radek Any comment?

Thank you to @hamelsmu for sharing his knowledge, and congratulations on the recent success.

It would be great if we can make this wiki accessible in the Part 2 category as well.


I think so :slight_smile: Today or tomorrow I am going to find out, as I work through the tutorial from @hamelsmu (I have not used Docker before).


Hey folks, I have made a docker container for this class:

This is using Nvidia-Docker v2.0 and contains all dependencies for the class. You still have to download the data once you are in the container. You can view the Dockerfile here:

I hope this is helpful! cc: @radek @Moody


Request: Multi-GPU automagic scaling

Description: An option in fastai to utilize a set of GPUs for a task, and have fastai automatically handle parallelization, data distribution, and synchronization. For example, I know Jeremy has 4 GPUs, as do others; I have 2. With this, you could pass a list, GPUS=[0,2], for the first and third GPU to automagically work together on a task.
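PyTorch's nn.DataParallel already covers much of this, so a fastai option could presumably wrap it. A sketch of the idea (the GPUS list is the hypothetical option from the request; this falls back to plain CPU execution when those GPUs aren't present):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Hypothetical option from the request: first and third GPU
GPUS = [0, 2]

if torch.cuda.device_count() > max(GPUS):
    # DataParallel replicates the model on the listed GPUs; input batches
    # are scattered across them and gradients gathered automatically.
    model = nn.DataParallel(model, device_ids=GPUS).cuda(GPUS[0])

out = model(torch.randn(8, 10))
print(out.shape)  # torch.Size([8, 2])
```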