Wiki: Fastai Library Feature Requests

I’ve seen a lot of threads in the forum about adding new features to the fastai library (e.g. new models, new functionality, etc.), but I personally think it would be better and more convenient for everyone to have a master thread, preferably a wiki so everyone can edit it.

The wiki would hold all the ongoing feature requests for the fastai library, including the status of each and who the task is assigned to (some of us students could potentially take on the requests). By keeping a list of open/completed requests we would also see far fewer duplicate threads asking for the same things :slight_smile: Since we are all coming from different backgrounds and skill levels, it would also help when a feature is already in the library but some of us just didn’t realize it or didn’t know how to use it, so the wiki could address situations like that too.

What do you guys think? cc @jeremy

Here are a few Feature Requests I pulled together from other threads…anyone can feel free to add more or edit as needed!

New Models
NASNet - *completed
SENet - *completed
Resnet152 and VGG19 - *completed
VGG16 - *completed

New Data Engineering

Before building any model, data needs to be prepared, cleaned up, labeled, and so on. That takes a considerable amount of time, and no proper solution exists for the problem of going from messy data to something fastai can ingest.

Proposal: bake Snorkel’s programmatic data engineering into fastai to provide an end-to-end solution from idea, to data engineering, to model building & deployment.

Sources & resources:

New Functionality
Saving model location - report where models are saved (e.g. when using cycle_save_name)
Enable tqdm progress meter for learn.predict & learn.TTA - *completed
Add from_df, which will accept a pandas df (from_csv only accepts a path to a csv) - a rough sketch follows this list
Add a texts_from_df function to load text from inside a pandas df
Allow duplicate files in from_csv (i.e. upsampling minority classes)
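A minimal sketch of how a from_df could be shimmed on top of the current API, assuming ImageClassifierData.from_csv(path, folder, csv_fname, tfms=...) works as in the lessons; the helper name image_data_from_df is hypothetical:

import os
from fastai.dataset import ImageClassifierData   # import location as used in the lessons

def image_data_from_df(path, folder, df, tfms, **kwargs):
    # Hypothetical helper: accept a pandas DataFrame of (file, label) rows by
    # round-tripping it through a CSV and delegating to the existing from_csv.
    csv_fname = os.path.join(path, 'tmp_from_df.csv')
    df.to_csv(csv_fname, index=False)             # from_csv expects a path, so write one out
    return ImageClassifierData.from_csv(path, folder, csv_fname, tfms=tfms, **kwargs)

A native from_df would obviously skip the temporary file, but this shows the intended interface.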

Performance
Weld End-to-End optimization

Compatibility
Add a scikit-learn wrapper for fastai, as has been done for Keras and XGBoost, or for PyTorch with skorch (a rough sketch of the idea follows).
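A rough sketch of what such a wrapper could look like, so a fastai model could be dropped into cross_val_score or a Pipeline. The class name, the build_learner callable and the learner methods it calls are assumptions for illustration, not the library’s API:

from sklearn.base import BaseEstimator, ClassifierMixin

class FastaiClassifier(BaseEstimator, ClassifierMixin):
    # Hypothetical scikit-learn-style wrapper, in the spirit of skorch.
    # build_learner is a user-supplied callable taking (X, y) and returning an
    # object with .fit(lr, n_cycle) and .predict() -- an assumed interface.
    def __init__(self, build_learner, lr=1e-2, n_cycle=3):
        self.build_learner = build_learner
        self.lr = lr
        self.n_cycle = n_cycle

    def fit(self, X, y):
        self.learner_ = self.build_learner(X, y)
        self.learner_.fit(self.lr, self.n_cycle)
        return self                               # sklearn convention: fit returns self

    def predict(self, X):
        # Simplification: assumes the learner predicts on data bound at
        # construction; a real wrapper would rebuild its dataset from X.
        return self.learner_.predict()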

Recent Updates
2018-11-17: Returned to OpenCV
2018-11-08: Removed OpenCV dependency

Data Processing
Currently the size of an image is specified as a single int value. Add an option to specify height x width.
(Ramesh) - I looked into this; it’s not straightforward because of how we use size in our training process. It’s entirely possible and the library doesn’t constrain us, I just have not found any easy way to do it. So far, for what I needed, I just increase the background on the tall image to make it a square (a padding sketch follows).
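A quick sketch of that workaround in plain numpy: pad the shorter dimension of an HxWxC image array with a constant background so it becomes square before resizing.

import numpy as np

def pad_to_square(im, fill=0):
    # Pad the shorter dimension of an HxWxC image so it becomes square.
    h, w = im.shape[:2]
    diff = abs(h - w)
    before, after = diff // 2, diff - diff // 2
    if h > w:    # tall image: pad left/right
        pads = ((0, 0), (before, after), (0, 0))
    else:        # wide image: pad top/bottom
        pads = ((before, after), (0, 0), (0, 0))
    return np.pad(im, pads, mode='constant', constant_values=fill)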

Unfreeze Options
Currently unfreeze automatically unfreezes everything, down to layer group 0. This causes long runs when working with larger-sized images (see Lesson 3 In-Class Discussion). It would be good to have options for how many layers / sub-layers to unfreeze.

Jeremy suggested - you can use freeze_to() for that.
But most pre-trained networks have only two layer groups above the finetune layer, and both of them are huge. Is it possible to freeze_to a sub-layer, or break the pre-trained network down into more layer groups?

The caveat is that we then have to give more learning rates. Might it be better to give an option to specify a dictionary of layer names we want to unfreeze and learning rates for them? Thoughts / suggestions? (See the sketch below.)
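For reference, the pattern from the lessons today (learn is assumed to be a ConvLearner built earlier; granularity is per layer group), with the proposed name-based API shown only as an illustration:

import numpy as np

lrs = np.array([1e-4, 1e-3, 1e-2])            # one learning rate per layer group
learn.freeze_to(1)                            # keep only the earliest layer group frozen,
                                              # instead of unfreezing all the way to 0
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)

# Hypothetical API being proposed here -- not in the library, illustration only:
# learn.unfreeze_layers({'layer4': 1e-3, 'fc': 1e-2})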

Learner Fit Options

  • Return a history of train and validation loss / metrics from the learner.fit method
  • Add Early Stopping with Patience (similar to Keras)
  • Add Model Checkpoint - *available in the fastai library, see cycle_save_name

TTA Enhancements

Environment


Sounds reasonable :slight_smile: I’ve made this a wiki.


I was actually posting the same questions somewhere else…

Would it be a good idea if I worked on some segmentation architecture and tried to include it in the fastai library? I can’t promise to recreate the original results, but I will sure give it a good shot. If nothing else, I suppose, someone else could help debug. It would also be a good exercise to become familiar with PyTorch and fastai.

Let me know… thanks


I was planning to do this for part2. Feel free to have a go, but my first thoughts are:

  • I suspect it would be more valuable to your learning and to the library to help with other stuff, such as adding docstrings to the functions, creating examples showing how to solve other Kaggle competitions with it, etc.
  • I feel like I need to spend a week or two thinking about the segmentation architecture, so I can’t promise I’d merge your changes.

Is there a way to enable the tqdm progress bar during learn.TTA()? I am doing TTA on a validation set of 4000 images and for some reason it is taking wayyy too long on a g2.xlarge machine. It would be nice to see the progress, to tell whether it is stuck or actually moving. (A generic sketch of the idea is below.)
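A generic sketch of the idea - wrapping whatever loop TTA runs internally in tqdm gives a live progress bar; the loader below is just a stand-in:

from tqdm import tqdm

augmented_passes = range(4)                   # stand-in for the n_aug augmented passes
for dl in tqdm(augmented_passes, desc='TTA'):
    pass                                      # run prediction for this augmented pass here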

Also, if someone well versed in Git could make a video on how to make a pull request to fastai, that would be awesome. I would love to help document the library but have never contributed to open source packages before.


It’s funny, I was literally thinking the same thing today. Totally agree we need this!

A fastai student actually posted a blog about how to contribute to fastai. You should probably check that out!


Created a Pull Request for adding progress bar to TTA:


BTW creating a pull request is far far easier if you use this: https://github.com/github/hub


Nice!

I would like two features and want to see if others would be interested -

  1. Modify the learner.fit method to return a history dictionary of train loss, valid loss and the metrics, so that you can plot them if you choose to. I generally used this feature in Keras to plot the loss curves for train and valid in the subsequent cell (a plotting sketch follows this list).

  2. I remember Jeremy said in Lesson 2 that he doesn’t believe in Early Stopping, because the loss could go down further. But I think it’s a useful feature to have, which can be turned ON if our model is overfitting too much.
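The plotting sketch mentioned in point 1 - the shape of the returned history dict is hypothetical and the values are made up:

import matplotlib.pyplot as plt

# Hypothetical return value for the proposed feature: one entry per epoch.
hist = {'trn_loss': [0.291, 0.276, 0.259, 0.236],
        'val_loss': [0.227, 0.216, 0.212, 0.210]}

plt.plot(hist['trn_loss'], label='train loss')
plt.plot(hist['val_loss'], label='valid loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()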


I would definitely like both of these. I would also like to add that it would be great to have a Model Checkpoint equivalent so that the best weights are always saved automatically as long as you use the callback.

Although I will add, re Early Stopping: I think one of the reasons we relied on it so much is that in Keras, when val loss went south, you were kinda just SOL. With fastai we have the luxury of SGDR, where val loss can correct itself at seemingly any moment, so we may not find as much use for it :slight_smile: (but still nice to have as an option)


Agree with this. The callback feature in Keras is quite powerful; the metrics are returned to the callback as a dict, which helps with checkpointing, visualizations and various other nifty little things. Adding https://keras.io/callbacks/ for reference (a usage sketch follows).
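For reference, the Keras pattern being described - early stopping with patience plus a best-weights checkpoint (the model and data are assumed to exist already):

from keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop once val_loss hasn't improved for `patience` epochs, and keep only the
# best weights on disk.
callbacks = [
    EarlyStopping(monitor='val_loss', patience=3),
    ModelCheckpoint('best_weights.h5', monitor='val_loss', save_best_only=True),
]
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=50, callbacks=callbacks)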

Wow, hub is pretty amazing! Created my first pull request using hub (albeit a simple one).

Had a bit of a struggle separating my work account from my personal account. But once I got that sorted out, it was straightforward from there.


You can do this with cycle_save_name.
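Roughly like this, as I understand the current options (learn is assumed to be a learner from the lessons):

learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2, cycle_save_name='dogscats')
# Weights for the end of each cycle are written under models/, e.g.
# dogscats_cyc_0.h5, dogscats_cyc_1.h5, and can be restored with:
learn.load_cycle('dogscats', 0)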


We have callbacks in fastai too, much inspired by Keras in fact! Have a look at sgdr.py to see an example of their use.


These both sound like nice additions.

Added a new feature request to the Wiki Above for Unfreeze of Layers:

Currently unfreeze automatically unfreezes everything, down to layer group 0. This causes long runs when working with larger-sized images (see Lesson 3 In-Class Discussion). It would be good to have options for how many layers / sub-layers to unfreeze.

Jeremy suggested - you can use freeze_to() for that.
But most pre-trained networks have only two layer groups above the finetune layer, and both of them are huge. Is it possible to freeze_to a sub-layer, or break the pre-trained network down into more layer groups?

The caveat is that we then have to give more learning rates. Might it be better to give an option to specify a dictionary of layer names we want to unfreeze and learning rates for them? Thoughts / suggestions?

One idea I’ve had: if you have a cycle_save_name in a fit, it would be nice if it noted where the models were saved, so something like this:

[ 0.       0.29109  0.22709  0.9116 ]                        
[ 1.       0.27596  0.21606  0.91596]                        
[ 2.       0.25896  0.21201  0.91738]                        
[ 3.       0.23578  0.21034  0.91812]<-----PtH_cyc_0.h5
[ 4.       0.25659  0.20687  0.92041]                        
[ 5.      0.2449  0.1977  0.9226]                            
[ 6.       0.23827  0.1925   0.92457]                        
[ 7.       0.22647  0.19253  0.92468]<-----PtH_cyc_1.h5   

You could keep the three learning rates while adding more flexibility in how many layers to unfreeze by making a function, something like (a rough sketch follows):

freeze_to_layer()
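A rough sketch of what that might look like in plain PyTorch, operating on individual child modules rather than the library’s layer groups; the function itself is hypothetical:

import torch.nn as nn

def freeze_to_layer(model: nn.Module, n: int):
    # Freeze the first n child modules of `model`, leave the rest trainable.
    for i, child in enumerate(model.children()):
        trainable = i >= n
        for p in child.parameters():
            p.requires_grad = trainable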

I’m considering changing freeze_to to make it take a layer number, rather than a layer group number. I do agree the current behavior isn’t necessarily what we want…
