So I took on the challenge of rebuilding the MNIST classifier for all images in the MNIST dataset, using the code we used in class, and I think I came up with good results. I would like to post it on my blog, but I'd appreciate some feedback first — if anyone can take a look and let me know whether what I did actually makes sense and is correct, that would be great.
As the speedy new GPU-accelerated image transforms in fastai v2 need some functions not included in Nvidia’s stock PyTorch wheels, I decided to write up the recipe for rolling your own.
A mini project: I created a callback that shows a chart of GPU utilization as you train. I find it useful for debugging, and handier than watching nvidia-smi in the console. The code is here. Feel free to try it out.
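The callback itself hooks into fastai internals, but the sampling half is easy to sketch on its own. Here is a minimal, hypothetical version (the function names are mine, not from the callback above) that polls nvidia-smi for per-GPU utilization percentages:

```python
import subprocess

def parse_utilization(smi_output):
    """Parse the output of `nvidia-smi --query-gpu=utilization.gpu
    --format=csv,noheader,nounits`: one integer percentage per GPU."""
    return [int(line) for line in smi_output.strip().splitlines() if line.strip()]

def gpu_utilization():
    """Return current utilization (%) for each visible GPU.
    Requires the NVIDIA driver (nvidia-smi) to be installed."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_utilization(out)
```

A training callback would then call `gpu_utilization()` after each batch and append the result to a list used to redraw the chart.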
I just finished writing and recording a tutorial on the fastai2 DataLoader and how to easily use it with NumPy/tabular data as a simple example. Read more here: DataLoaders in fastai2, Tutorial and Discussion
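For readers who haven't used a DataLoader before, the core job it does over NumPy arrays can be sketched in a few lines. This is just the concept, not fastai2's actual API:

```python
import numpy as np

def numpy_batches(X, y, bs=4, shuffle=True, seed=0):
    """Yield (X_batch, y_batch) minibatches from NumPy arrays --
    the core of what any DataLoader does, before transforms and collation."""
    idx = np.arange(len(X))
    if shuffle:
        np.random.default_rng(seed).shuffle(idx)
    for i in range(0, len(idx), bs):
        sel = idx[i:i + bs]
        yield X[sel], y[sel]
```

fastai2's DataLoader layers batch-time transforms, device placement, and other conveniences on top of this basic idea; the tutorial walks through those.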
I’ve been working for several weeks on a new AI feature for Camera+ 2, my company’s photography app. It examines any photo you took with your phone and determines the best adjustments to apply to improve exposure and color. Most of the work is based on fastai2 and nbdev; I found both fantastic. We released the feature today, and I wrote a blog post explaining how we did it: https://camera.plus/blog/magic-ml-the-making-of/.
I tried to make the post readable for a non-technical audience, so I apologize if many of you find it lacking sufficient technical detail. The most interesting technical achievement, I think, is that we created custom network layers to implement rendering operations as part of the training process.
I also apologize if this message is considered self-promotion, but I did not want to miss the opportunity to thank Jeremy, Sylvain, Rachel and, most especially, the fastai community. Whenever I seemed to hit a wall in the direction I was following, I always found some hint (in old or recent posts) that helped me get back on track. These forums are my go-to resource for starting to learn about any DL topic.
So, bad news: fastshap and ClassConfusion are now gone. The good news? In their place we have fastinference. What does it do?
Speed up inference
A more verbose get_preds and predict
You can fully decode the classes, opt out of the loss function’s decodes or its final activation, return the input, and get the other behaviors you would expect
ClassConfusion and SHAP
Feature Importance for tabular models with custom “rank” methods
ONNX support
All while never leaving the comfortable fastai language!
See the attached screenshots. To install, run pip install fastinference. Documentation is a WIP; please see the /nbs folder for examples for now. I still need to deal with some fastpages issues.
Good news and bad news: you’re doing nothing wrong! My `__init__`s got adjusted at some point. I pushed a new release that fixes this (and tried it myself). Thanks!
Have you seen the medical research (on kids, as far as I know) showing that EEG enables early autism diagnosis? It would be amazing to build an early pre-diagnosis tool using consumer EEG headsets and deep learning. I have no idea how to deal with EEG data at the moment, but it would be a very cool project.
I have been spending the last couple of weeks on a number of medical Kaggle competitions and wanted to share a couple of the kernels, as well as how to get fastai working in internet-off competitions. I have seen a couple of discussions on this, but for some reason they did not work for me. The approach I came up with involves just using the fastcore, fastprogress and fastai2 .whl files.
Here is the Kaggle dataset so that you can easily load all fastai2 dependencies with the internet off: fastai017_whl. This kernel, Balanced Data Starter | Submission Example, shows you how to submit to internet-off competitions. Hope this is useful, as it took me a while to get it working ;)
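For reference, the whole internet-off install boils down to pointing pip at the local .whl files with networking disabled. The exact /kaggle/input path depends on how the dataset is named when you attach it, so treat the paths below as an illustration:

```shell
# Install the fastai2 stack from local wheels only; --no-index stops pip
# from ever touching PyPI. Heavy deps like torch are already preinstalled
# on Kaggle, so pip finds them satisfied locally.
pip install --no-index \
    /kaggle/input/fastai017-whl/fastcore-*.whl \
    /kaggle/input/fastai017-whl/fastprogress-*.whl \
    /kaggle/input/fastai017-whl/fastai2-*.whl
```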
Here’s my latest blog post, which introduces natural language processing with fastai by building a text classifier for Kaggle’s “Real or Not? NLP with Disaster Tweets” competition, following the ULMFiT approach and decoding the paper in detail.
Please feel free to reach out to me and let me know of any feedback!
One of the further research questions in chapter 17 of the fastai book is to use PyTorch’s unfold function to create a CNN module and train a model with it. I tried my hand at it, and I would be happy to hear your thoughts about it and how you think it could be improved.
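For anyone who wants to try the same exercise, here is one naive sketch of the idea (my own version, not the code from my post): `F.unfold` turns each kernel-sized window into a column, so the convolution becomes a single matrix multiply.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldConv2d(nn.Module):
    """A 2-D convolution implemented with unfold + matrix multiply."""
    def __init__(self, in_ch, out_ch, ks=3, stride=1, padding=1):
        super().__init__()
        self.ks, self.stride, self.padding = ks, stride, padding
        # One row of weights per output channel, flattened over in_ch*ks*ks
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * ks * ks) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        bs, _, h, w = x.shape
        # Extract sliding patches: (bs, in_ch*ks*ks, n_patches)
        patches = F.unfold(x, self.ks, stride=self.stride, padding=self.padding)
        # (out_ch, in_ch*ks*ks) @ (bs, in_ch*ks*ks, n_patches) -> (bs, out_ch, n_patches)
        out = self.weight @ patches + self.bias[:, None]
        oh = (h + 2 * self.padding - self.ks) // self.stride + 1
        ow = (w + 2 * self.padding - self.ks) // self.stride + 1
        return out.view(bs, -1, oh, ow)
```

Reshaping the flat weight matrix back to `(out_ch, in_ch, ks, ks)` and running `F.conv2d` with it gives the same result, which is a handy sanity check for the exercise.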
Hello. Based on the Transformers tutorial by @sgugger, I published a “from English to any language” fine-tuning method for models based on generative English pre-trained transformers such as GPT-2, using the Hugging Face libraries (Tokenizers and Transformers) and fastai v2.
As a proof of concept, I fine-tuned GPorTuguese-2 (a Portuguese GPT-2 small), a language model for Portuguese text generation (and other NLP tasks…), from an English pre-trained GPT-2 small downloaded from the Hugging Face Transformers library. Here are examples of what it can generate: