Share your V2 projects here

I built a package that helps interpret models better and export them to other environments, plus some experimental data augmentations.

Repo: https://github.com/Synopsis/fastai2_extensions
Forum Post: Fastai-extensions Package

5 Likes

I’ve been working for several weeks on a new AI feature for Camera+ 2, my company’s photography app. It examines any photo you took with your phone and determines the best adjustments to apply to improve exposure and color. Most of the work is based on fastai2 and nbdev; I found both fantastic. We released the feature today, and I wrote a blog post to explain how we did it: https://camera.plus/blog/magic-ml-the-making-of/.

I tried to make the post readable for a non-technical audience, so I apologize if many of you find it lacking sufficient technical detail. The most interesting technical achievement, I think, is that we created custom network layers to implement rendering operations as part of the training process.
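To give a flavor of the idea (a simplified illustration, not our actual production code), a rendering operation like exposure can be expressed as a differentiable layer, so the network can learn to predict its parameters end to end:

import torch
import torch.nn as nn

class ExposureLayer(nn.Module):
    "Applies a photographic exposure adjustment as part of the network."
    def forward(self, img, stops):
        # Exposure in stops scales linear pixel values by 2**stops.
        # This is plain tensor math, so gradients flow back into the
        # model that predicted `stops`.
        return (img * 2.0 ** stops.view(-1, 1, 1, 1)).clamp(0, 1)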

I also apologize if this message is considered self-promotion, but I did not want to miss the opportunity to thank Jeremy, Sylvain, Rachel and, most especially, the fastai community. Whenever I seemed to hit a wall in the direction I was following, I always found some hint (in old or recent posts) that helped me get back on track. These forums are my go-to resource to start learning about any DL topic.

9 Likes

As someone who used your app and its precursor before I took fastai (and still uses it), this is absolutely amazing to hear. Well done! :slight_smile:

2 Likes

So, bad news: fastshap and ClassConfusion are now gone. Good news? Instead we have fastinference :slight_smile: What all does it do?

  • Speed up inference
  • A more verbose get_preds and predict
    • You can fully decode the classes, optionally skip the loss function’s decodes or its final activation, return the input, and get the other behaviors you would expect
  • ClassConfusion and SHAP
  • Feature Importance for tabular models with custom “rank” methods
  • ONNX support
All while never leaving the comfortable fastai language!

See the attached screenshots. To install, do pip install fastinference. Documentation is a WIP; please see the /nbs for examples for now. Need to deal with some fastpages issues.
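As a quick sketch of the patched get_preds (file and folder names here are placeholders; assumes a previously exported vision Learner):

from fastai2.vision.all import *
from fastinference import *  # patches Learner.get_preds and Learner.predict

learn = load_learner('export.pkl')                  # placeholder file name
dl = learn.dls.test_dl(get_image_files('images'))   # placeholder folder
preds = learn.get_preds(dl=dl, fully_decoded=True)  # `fully_decoded` comes from fastinference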




14 Likes

Hi Zachary,

That’s great! I was trying to build an inference class to handle all the different inputs and tasks, but yours looks WAY better :smiley: .

I’m trying to use fastinference with load_learner but I think I’m doing something wrong:

import glob

from fastai2.vision.all import *
from fastinference import *

learn = load_learner('export_ml_resnet50_200_15ep.pkl')
files = glob.glob("_images/*.png")
dl = learn.dls.test_dl(files)

preds = learn.get_preds(dl=dl, fully_decoded=True)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 preds = learn.get_preds(dl=dl, fully_decoded=True)

~/miniconda3/envs/fastai2/lib/python3.7/site-packages/fastai2/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, **kwargs)
    221             idxs = dl.get_idxs()
    222             dl = dl.new(get_idxs = _ConstantFunc(idxs))
--> 223         cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs)
    224         ctx_mgrs = [self.no_logging(), self.added_cbs(cb), self.no_mbar()]
    225         if with_loss: ctx_mgrs.append(self.loss_not_reduced())

TypeError: __init__() got an unexpected keyword argument 'fully_decoded'

I guess get_preds doesn’t get “patched”?!

1 Like

Thanks Florian!

Good news and bad news: you’re doing nothing wrong! My __init__'s got adjusted at some point. I pushed a new release that fixes this (and tested it myself) :slight_smile: Thanks!

Have you seen the medical research done on kids (as far as I know) showing that EEG enables early autism diagnosis? It would be crazy to actually build an early pre-diagnosis tool using consumer EEG headsets with deep learning. I have no idea how to deal with EEG data at the moment, but that would be a very cool project.

That’s great, thank you for this. I was actually smashing my head a bit trying to port my most_confused method to fastai v2 :smiley:

It is a bit rough :slight_smile: There’s a bug in Colab with the tab outputs, sadly, but it works :slight_smile:

1 Like

I have been spending the last couple of weeks on a number of medical Kaggle competitions and wanted to share a couple of the kernels, as well as how to get fastai working in internet-off competitions. I have seen a couple of discussions on this, but for some reason they did not work for me. The approach I created involves just using the fastcore, fastprogress and fastai2 .whl files.

Here are the current kernels:

The Kaggle dataset so that you can easily load all fastai2 dependencies with internet off: fastai017_whl. This kernel, Balanced Data Starter | Submission Example, shows you how to submit to internet-off competitions. Hope this is useful, as it took me a while to get this to work ;)
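In short: attach the wheel dataset as an input to your kernel and install from it locally. Something like this (the input path assumes the dataset’s default slug; adjust it to whatever shows up under /kaggle/input):

# In a Kaggle notebook with internet disabled: install from the attached
# .whl files instead of PyPI, dependencies first.
!pip install /kaggle/input/fastai017-whl/fastcore*.whl --no-deps
!pip install /kaggle/input/fastai017-whl/fastprogress*.whl --no-deps
!pip install /kaggle/input/fastai017-whl/fastai2*.whl --no-deps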

8 Likes

I hadn’t seen that, that is super interesting though! Worth investigating, let me know if you come across any useful resources :slight_smile: . Thank you.

Here’s my latest blog post introducing natural language processing with fastai, building a text classifier on Kaggle’s “Real or Not? NLP with Disaster Tweets” competition following the ULMFiT approach and decoding the paper in detail.
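The core of the ULMFiT recipe boils down to a few lines of fastai (a simplified sketch; column names follow the competition’s train.csv):

import pandas as pd
from fastai2.text.all import *

df = pd.read_csv('train.csv')

# 1. Fine-tune the pretrained language model on the tweets themselves
dls_lm = TextDataLoaders.from_df(df, text_col='text', is_lm=True)
lm = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
lm.fine_tune(3)
lm.save_encoder('finetuned_enc')

# 2. Train the classifier on top of the fine-tuned encoder
dls_clf = TextDataLoaders.from_df(df, text_col='text', label_col='target',
                                  text_vocab=dls_lm.vocab)
clf = text_classifier_learner(dls_clf, AWD_LSTM, metrics=accuracy)
clf.load_encoder('finetuned_enc')
clf.fine_tune(3)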

Please feel free to reach out to me and let me know of any feedback! :slight_smile:

8 Likes

Do you have a paper reference perhaps? It sounds really interesting.

One of the further research questions in chapter 17 of the fastai book is to use the unfold function in PyTorch to create a CNN module and train a model with it. I tried my hands on it and I would be happy to hear your thoughts about it and how you think it could be made better.
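For reference, the core idea looks roughly like this (a condensed sketch, not the full notebook):

import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldConv2d(nn.Module):
    "2D convolution written as unfold (im2col) + matrix multiply."
    def __init__(self, in_ch, out_ch, ks=3, stride=1, padding=1):
        super().__init__()
        self.ks, self.stride, self.padding = ks, stride, padding
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * ks * ks) * (in_ch * ks * ks) ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        bs, _, h, w = x.shape
        # unfold extracts every ks*ks patch as a column: (bs, in_ch*ks*ks, n_patches)
        patches = F.unfold(x, self.ks, stride=self.stride, padding=self.padding)
        out = self.weight @ patches + self.bias[None, :, None]  # (bs, out_ch, n_patches)
        oh = (h + 2 * self.padding - self.ks) // self.stride + 1
        ow = (w + 2 * self.padding - self.ks) // self.stride + 1
        return out.view(bs, -1, oh, ow)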

1 Like

Hello. Based on the Transformers tutorial by @sgugger, I published a “from English to any language” fine-tuning method for models based on generative English pre-trained transformers like GPT-2, using the Hugging Face libraries (Tokenizers and Transformers) and fastai v2.

There is a Medium post, a notebook on GitHub, and a model in the Hugging Face model hub.

As a proof of concept, I fine-tuned GPorTuguese-2 (Portuguese GPT-2 small), a language model for Portuguese text generation (and more NLP tasks…), from an English pre-trained GPT-2 small downloaded from the Hugging Face Transformers library. The post shows examples of what it can generate.

If you want to test it online (without running my notebook), you can, thanks to the Hugging Face model hub: https://huggingface.co/pierreguillou/gpt2-small-portuguese
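You can also load it directly with the Transformers library (the generation settings below are just an example):

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("pierreguillou/gpt2-small-portuguese")
model = AutoModelWithLMHead.from_pretrained("pierreguillou/gpt2-small-portuguese")

input_ids = tokenizer.encode("Quem era Jim Henson?", return_tensors="pt")
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))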

13 Likes

This is brilliant, thanks for sharing, so many great little tips and tricks!

Not sure if I missed it in the article but I’m curious if you think the Portuguese text is as good as the English text it can generate?

Also, is there much research on fine-tuning text generation models on different languages? I know cross-lingual models can help for translation, but I don’t think I had seen it for text generation before…

Nice work!

I’ve just released an update to fastinference, and two new libraries:

fastinference_pytorch and fastinference_onnx

The goal of these two is to be lightweight modules to run your fastai models within a familiar API. Currently it just supports tabular models but vision and NLP are on the way! See below for a numpy example (there is zero fastai code being used :wink: )

Not to mention there is a speed boost here as well :slight_smile:

For more, see the documentation.
Note: To use this you must export your model from fastinference with learn.to_fastinference

5 Likes

Hello @morgan. Well, it was the objective of my work :wink:

Yes, I think that fine-tuning a small generative model (like GPT-2 small) from English to another language like Portuguese lets you get a working model with a relatively small fine-tuning dataset (I used a bit more than 1 GB of Portuguese Wikipedia).

Well, it would be great if I opened up some research, but I’m sure some already existed.

About applying the same fine-tuning method to encoder transformer-based models like BERT (RoBERTa, ALBERT, etc.): I’m currently testing my method on your FastHugs code. It works very well (because you did great work, Morgan!) :slight_smile: I will publish soon.

By the way, I have 2 questions on your code that I will publish in the FastHugs thread. Many thanks!

1 Like

Hey guys, I recently took the time to write about an experimental study I did a few months back on Edge Detection models.

Also, it’s on my brand-new blog, where I publish annotated research papers that I read.

1 Like

My first two models ever. Just finished Chapter Two of the book!
3 Likes