Share your V2 projects here

After lesson 7, I was curious how much better random forests are at predicting rows where the trees agree versus when they don’t.

I ran a simple experiment: Split the rows of the validation set into quintiles based on preds_std (i.e. the standard deviation of the tree predictions for a given row). Then, for each quintile, calculate the RMSE.

[chart: RMSE by preds_std quintile]

The results, at least on the bulldozers dataset, definitely validate that the model performs better on rows where the trees are in agreement. I think the most interesting takeaway is that, even in the lowest-variance quintile (where the trees agree most), the RMSE is still 0.15 (versus ~0.23 for the whole validation set).

Notably, this means that the tree variance generally overstates the model performance:
[chart: actual RMSE vs. tree variance by quintile]

For example, while the predictions in that most-confident quintile ended up having an RMSE of 0.15, the tree variance method indicated 0.11.

The code behind these charts is simple. You can run it right after preds_std is defined. Running it later on is tricky, since the model variable gets redefined later in the notebook.
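Something along these lines reproduces the experiment (just a sketch; it assumes preds, preds_std and valid_y from the bulldozers notebook are still in scope):

import numpy as np
import pandas as pd

def r_mse(pred, y): return round(np.sqrt(((pred - y)**2).mean()), 6)

df = pd.DataFrame({'pred': preds.mean(0), 'std': preds_std, 'y': np.array(valid_y)})
df['quintile'] = pd.qcut(df['std'], 5, labels=False)

# actual RMSE vs. the mean tree std dev, per quintile of prediction uncertainty
summary = df.groupby('quintile').apply(
    lambda g: pd.Series({'rmse': r_mse(g['pred'], g['y']), 'mean_std': g['std'].mean()}))
print(summary)
summary.plot.bar()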

Hi all

So I took on the challenge of rebuilding the MNIST classifier for all images in the MNIST dataset, using the code we used in class. I think I came up with good results and would like to post it on my blog, but I'd appreciate some feedback first. If anyone can take a look and let me know whether what I did actually makes sense and is correct, that would be great.

Here is my notebook: https://github.com/victor-vargas2009/FastAI_Experiments/blob/master/nbs/MNIST_Classifier.ipynb

Thanks a lot in advance

1 Like

Done :slight_smile:

See here for more info: https://ohmeow.github.io/blurr/

6 Likes

Fastai V2 now running on the Nvidia Jetson Nano!

As the speedy new GPU-accelerated image transforms in fastai V2 need some functions not included in Nvidia’s stock PyTorch wheels, I decided to write up the recipe for rolling your own.

If you want to play with fastai V2 on your Jetson Nano, check out https://github.com/streicherlouw/fastai2_jetson_nano

15 Likes

A mini project: I created a callback that shows a chart of GPU utilization as you train. I find it useful for debugging, and handier than watching nvidia-smi in the console. The code is here. Feel free to try it out.
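Roughly, the idea is a callback that samples GPU utilization (e.g. via pynvml) after each batch and plots the trace. The actual callback linked above draws the chart as you train; this stripped-down sketch (not the repo code) just plots at the end:

import matplotlib.pyplot as plt
import pynvml  # requires: pip install pynvml
from fastai2.vision.all import *  # brings Callback into scope, as used elsewhere in this thread

class GPUUtilChart(Callback):
    "Sample GPU utilization after every batch and plot the trace when training ends."
    def begin_fit(self):  # begin_fit is the fastai2-era event name (before_fit in later releases)
        pynvml.nvmlInit()
        self.handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        self.samples = []
    def after_batch(self):
        self.samples.append(pynvml.nvmlDeviceGetUtilizationRates(self.handle).gpu)
    def after_fit(self):
        plt.plot(self.samples)
        plt.xlabel('batch'); plt.ylabel('GPU utilization (%)')
        plt.show()

# usage: learn.fit_one_cycle(1, cbs=GPUUtilChart())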

20 Likes

I just finished writing and recording a tutorial on the fastai2 DataLoader and how to easily use it with NumPy/tabular data as a simple example. Read more here: DataLoaders in fastai2, Tutorial and Discussion

5 Likes

I built a package to help interpret models better and export them to other environments; it also includes some experimental data augmentations.

Repo: https://github.com/Synopsis/fastai2_extensions
Forum Post: Fastai-extensions Package

5 Likes

I’ve been working for several weeks on a new AI feature for Camera+ 2, my company’s photography app. It examines any photo you took with your phone and determines the best adjustments to apply to improve exposure and color. Most of the work is based on fastai2 and nbdev; I found both fantastic. We released the feature today, and I wrote a blog post to explain how we did it: https://camera.plus/blog/magic-ml-the-making-of/.

I tried to make the post readable for a non-technical audience, so I apologize if many of you find it lacking sufficient technical detail. The most interesting technical achievement, I think, is that we created custom network layers to implement rendering operations as part of the training process.

I also apologize if this message is considered self-promotion, but I did not want to miss the opportunity to thank Jeremy, Sylvain, Rachel and, very specially, the fastai community. Whenever I seemed to hit a wall in the direction I was following I always found some hint (in old or recent posts) that helped me get back on track. These forums are my go-to resource to start learning about any DL topic.

9 Likes

As someone who used your app and its precursor before I took fastai (and still do), this is absolutely amazing to hear. Well done! :slight_smile:

2 Likes

So, bad news: fastshap and ClassConfusion are now gone. Good news? Instead we have fastinference :slight_smile: What all does it do?

  • Speed up inference
  • A more verbose get_preds and predict
    • You can fully decode the classes, choose to skip the loss function’s decodes or its final activation, return the input, and get the other behaviors you would expect
  • ClassConfusion and SHAP
  • Feature Importance for tabular models with custom “rank” methods
  • ONNX support

All while never leaving the comfortable fastai language!

See the attached screenshots. To install, do pip install fastinference. Documentation is a WIP; please see the /nbs for examples for now. I still need to deal with some fastpages issues.




14 Likes

Hi Zachary,

That’s great! I was trying to build an inference class to handle all the different inputs and tasks, but yours looks WAY better :smiley:.

I’m trying to use fastinference with load_learner but I think I’m doing something wrong:

from fastai2.vision.all import *
from fastinference import *
import glob

learn = load_learner('export_ml_resnet50_200_15ep.pkl')
files = glob.glob("_images/*.png")
dl = learn.dls.test_dl(files)

preds = learn.get_preds(dl=dl, fully_decoded=True)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 preds = learn.get_preds(dl=dl, fully_decoded=True)

~/miniconda3/envs/fastai2/lib/python3.7/site-packages/fastai2/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, **kwargs)
    221             idxs = dl.get_idxs()
    222             dl = dl.new(get_idxs = _ConstantFunc(idxs))
--> 223         cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs)
    224         ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()]
    225         if with_loss: ctx_mgrs.append(self.loss_not_reduced())

TypeError: __init__() got an unexpected keyword argument 'fully_decoded'

I guess get_preds doesn’t get “patched”?!

1 Like

Thanks Florian!

Good news and bad news: you’re doing nothing wrong! My __init__s got adjusted at some point. I pushed a new release that fixes this (and tried it myself) :slight_smile: Thanks!

Have you seen the medical research (on kids, as far as I know) showing that EEG enables early autism diagnosis? It would be amazing to actually build some early pre-diagnosis tool using consumer EEG headsets with deep learning. I have no idea how to deal with EEG data at the moment, but that would be a very cool project.

That’s great, thank you for this! I was actually banging my head a bit trying to port my most_confused method to fastai v2 :smiley:

It is a bit rough :slight_smile: There’s a bug in Colab with the tab outputs, sadly, but it works :slight_smile:

1 Like

I have been spending the last couple of weeks on a number of medical-based Kaggle competitions and wanted to share a couple of the kernels, as well as how to get fastai working in internet-off competitions. I have seen a couple of discussions on this, but for some reason they did not work for me. The approach I came up with involves just using the fastcore, fastprogress and fastai2 .whl files.

Here are the current kernels:

The Kaggle dataset that lets you easily load all fastai2 dependencies with the internet off: fastai017_whl. This kernel, Balanced Data Starter | Submission Example, shows how to submit to internet-off competitions. Hope this is useful, as it took me a while to get this to work ;)
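Roughly, the trick is just to pip install the wheels straight from the attached dataset before importing fastai2, something like this (the exact input path is an assumption and depends on how the dataset is attached to your kernel):

# install the offline wheels in dependency order; --no-deps stops pip from reaching for the internet
!pip install ../input/fastai017-whl/fastcore-*.whl --no-deps
!pip install ../input/fastai017-whl/fastprogress-*.whl --no-deps
!pip install ../input/fastai017-whl/fastai2-*.whl --no-deps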

8 Likes

I hadn’t seen that; it’s super interesting though! Worth investigating. Let me know if you come across any useful resources :slight_smile: Thank you.

Here’s my latest blog post introducing natural language processing with fastai: I build a text classifier for Kaggle’s “Real or Not? NLP with Disaster Tweets” competition, following the ULMFiT approach and decoding the paper in detail.
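For anyone who hasn’t seen it, the ULMFiT recipe in fastai2 boils down to fine-tuning a pretrained language model on the competition text, saving its encoder, and reusing that encoder in a classifier that is unfrozen gradually. A rough sketch (hyperparameters are illustrative, not the ones from the post):

from fastai2.text.all import *

# df: the competition's train.csv as a DataFrame (it has 'text' and 'target' columns)
# 1. fine-tune the pretrained AWD-LSTM language model on the tweets
dls_lm = TextDataLoaders.from_df(df, text_col='text', is_lm=True, valid_pct=0.1)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
learn_lm.fine_tune(3)
learn_lm.save_encoder('finetuned_enc')

# 2. build the classifier on the same vocab, load the fine-tuned encoder, unfreeze gradually
dls_clas = TextDataLoaders.from_df(df, text_col='text', label_col='target',
                                   text_vocab=dls_lm.vocab)
learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
learn.load_encoder('finetuned_enc')
learn.fit_one_cycle(1, 2e-2)
learn.freeze_to(-2); learn.fit_one_cycle(1, slice(1e-2/(2.6**4), 1e-2))
learn.unfreeze();    learn.fit_one_cycle(2, slice(1e-3/(2.6**4), 1e-3))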

Please feel free to reach out to me and let me know of any feedback! :slight_smile:

8 Likes

Do you have a paper reference perhaps? It sounds really interesting.

One of the further research questions in chapter 17 of the fastai book is to use the unfold function in PyTorch to create a CNN module and train a model with it. I tried my hand at it and I would be happy to hear your thoughts about it and how you think it could be made better.
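For context, the core idea is that a convolution can be written as unfold (extract sliding patches) followed by a matrix multiply. A minimal sketch of such a module (my own illustration, not the notebook itself; names and initialization are arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldConv2d(nn.Module):
    "Convolution expressed as unfold + matrix multiply."
    def __init__(self, in_ch, out_ch, ks=3, stride=1, padding=1):
        super().__init__()
        self.ks, self.stride, self.padding = ks, stride, padding
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * ks * ks) * (in_ch * ks * ks) ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        bs, _, h, w = x.shape
        # unfold extracts sliding ks x ks patches: (bs, in_ch*ks*ks, n_patches)
        patches = F.unfold(x, self.ks, stride=self.stride, padding=self.padding)
        out = self.weight @ patches + self.bias[:, None]  # (bs, out_ch, n_patches)
        oh = (h + 2 * self.padding - self.ks) // self.stride + 1
        ow = (w + 2 * self.padding - self.ks) // self.stride + 1
        return out.view(bs, -1, oh, ow)

Swapping this in for nn.Conv2d in a small model and training on MNIST is a good sanity check that it learns the same way, just slower than the fused cuDNN kernel.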

1 Like