I built a web application, “insightsR”, based on the old machine learning course from fast.ai. It provides automated insights for tabular data, taking just two inputs from the user: the dataset and the target column. It’s completely based on Jeremy’s lessons; my contribution was automating things, putting the pieces together, and using Streamlit for a more appealing web application…
I was hesitant to share it at first, since my own additions weren’t much…
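For anyone curious about the underlying idea from the ML course, here is a minimal sketch (hypothetical code, not insightsR’s actual implementation) of how automated “insights” can be derived from just a dataset and a target: fit a random forest and surface its feature importances.

```python
# A minimal sketch (hypothetical, not insightsR's actual code) of the
# ML-course approach the app builds on: fit a random forest on the
# user's dataset/target and report feature importances as "insights".
# Assumes a numeric target and no missing values, for brevity.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def basic_insights(df: pd.DataFrame, target: str, top_n: int = 5) -> pd.Series:
    X = pd.get_dummies(df.drop(columns=[target]))  # naive categorical handling
    y = df[target]
    model = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(X, y)
    importances = pd.Series(model.feature_importances_, index=X.columns)
    return importances.sort_values(ascending=False).head(top_n)
```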
Details below:
Github: GitHub - Vinothsuku/insightsR: automated insights for tabular data
Blog: insightsR — automated insights for tabular data | by Vinoth Sukumaran | Analytics Vidhya | Apr, 2021 | Medium
Online: https://insightsr.herokuapp.com/
Please let me know your comments.
Hi everyone,
I released a new, fastai2-compatible version of my library for neural network compression, shamelessly called Fasterai.
The focus is on sparse neural network training, i.e. replacing most of the weights in the network with zeroes, but other techniques such as knowledge distillation are also available.
It was all made possible thanks to the magic of the fastai callback system.
Details:
Give it a try; you’d be surprised by how many weights you can remove from your network while keeping performance intact!
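To give a flavour of the approach, here is a hypothetical sketch (not Fasterai’s actual API) of sparse training with a fastai v2 callback: after each batch, the smallest-magnitude weights in every Conv2d/Linear layer are reset to zero so the network stays sparse.

```python
# Hypothetical sketch (not Fasterai's API) of magnitude pruning via a
# fastai v2 callback: after each training batch, zero out the smallest
# weights in every Conv2d/Linear layer.
import torch
from fastai.callback.core import Callback

class SimpleSparsifyCallback(Callback):
    def __init__(self, sparsity=0.5):
        self.sparsity = sparsity  # fraction of weights to keep at zero

    def after_batch(self):
        if not self.training: return  # only prune during training
        for m in self.learn.model.modules():
            if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
                w = m.weight.data
                k = int(w.numel() * self.sparsity)
                if k == 0: continue
                # the k-th smallest absolute value becomes the pruning threshold
                thresh = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > thresh).float())

# Usage: learn = cnn_learner(dls, resnet18, cbs=SimpleSparsifyCallback(0.5))
```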
Congrats! Also I love the theme colors!
For the first project, I took the fastai tutorial and created an image classifier that attempts to classify native trees in Kentucky, USA. It works to a point, depending on how you take the picture. The inspiration came from my recent move out of the suburbs and into a more wooded area with lots of wildlife. I really didn’t know how to tell which tree was which type, so I developed this app.
Hey! I started implementing what I have learned; here is my first Medium article, explaining how I tackled the problem of classifying malware.
Have a nice day!
I like that! I like how you added interaction to the site. How did you do that? Did you upload a video?
I’ve documented my work on Medium for those who may be interested.
Hi all, sharing a mini-project on using an external dataset to perform multi-label classification. Enjoy!
Here’s a simple step-by-step process for determining what the accuracy_multi metric measures.
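As a quick illustration of what it measures: fastai’s accuracy_multi passes the predictions through a sigmoid, thresholds each label independently, compares them element-wise against the multi-hot targets, and averages the matches over every label slot.

```python
# Worked example of fastai's accuracy_multi: per-label thresholding,
# then element-wise comparison with the multi-hot targets.
import torch
from fastai.metrics import accuracy_multi

preds = torch.tensor([[ 2.0, -1.0,  0.3],   # sigmoid -> [0.88, 0.27, 0.57]
                      [-0.5,  1.5, -2.0]])  # sigmoid -> [0.38, 0.82, 0.12]
targs = torch.tensor([[1., 0., 0.],
                      [0., 1., 0.]])

# With thresh=0.5 the thresholded preds are [[1,0,1],[0,1,0]]; 5 of the
# 6 label slots match the targets, so the result is 5/6 ≈ 0.8333.
print(accuracy_multi(preds, targs, thresh=0.5, sigmoid=True))
```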
FastServe - Generate API endpoints from fast.ai models
Hi all,
We’ve just launched the private beta for FastServe: a service that turns pre-trained fast.ai models into APIs you can plug into your applications. You upload a model file (e.g., export.pkl) and get back an API endpoint that can serve inference. We’re hoping this will help data scientists deploy models more quickly and easily.
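To make the workflow concrete, here is a hypothetical sketch of the round trip; the endpoint URL, route, and response fields below are illustrative, not FastServe’s actual API.

```python
# Export a trained fastai model (real fastai API), then call a
# hypothetical FastServe-style endpoint with an image to classify.
import requests

# learn.export('export.pkl')  # fastai: produces the file you would upload

with open("test_image.jpg", "rb") as f:
    resp = requests.post(
        "https://example.launchable.ai/models/1234/predict",  # hypothetical URL
        files={"file": f},
    )
resp.raise_for_status()
print(resp.json())  # e.g. {"prediction": "bagel", "probs": [...]} (illustrative)
```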
If you are interested in trying it out, please check out the introduction video on YouTube (link below), and sign up for the beta at https://launchable.ai/fastserve-beta. We’d love to hear what folks think!
Thanks!
Is it designed around fastai v1? Or fastai v2?
fastai v2
Using Images and an Algorithm to Triage Ill Babies
Maybe you have seen thispersondoesnotexist.
My partner and I are trying to do the same for interior/housing images.
So, we are using this repository: https://github.com/lucidrains/stylegan2-pytorch
We are running on Amazon P2 instances, which are recommended for this application.
As a rehearsal, we used the following:
- 50 images
- Instance: p2.xlarge = 1 GPU, 4 vCPUs, 61 GiB RAM
Now we want to use our actual dataset and a faster machine:
- 15K images
- Instance: p2.16xlarge = 16 GPUs, 64 vCPUs, 732 GiB RAM
- Progress: 0.5% after 1 hour
We are concerned about the progress so far: the training speed looks similar to our rehearsal run, even on this monster instance. We increased the number of GPUs, but also the size of the training dataset. How can we estimate how long this will take?
What metrics on the AWS instance can we look at to make sure we are using its full capacity? Like GPU utilization?
This is probably my first post here on this forum. I am learning through the 2019 course and created a simple donut vs. bagel vs. vada classifier. While the model itself is simple, rather silly even, I used this opportunity to experiment with deploying it as a serverless deep learning inference function using AWS Lambda.
The web application can be accessed here: https://bit.ly/donutornot.
You can read about how I did it on my blog: Donut or Not! - Deploying Deep Learning Inference as Serverless Functions | Atma's blog, or check out the GitHub repo at: https://github.com/AtmaMani/donut_or_not
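For those curious what such a Lambda function can look like, here is a minimal sketch (hypothetical names and paths, not the author’s actual code) of a fastai v1 handler: the exported learner is loaded once at cold start, then each invocation decodes an image and returns the prediction.

```python
# Hypothetical sketch of a serverless fastai v1 inference handler.
import base64, io, json
from fastai.vision import load_learner, open_image

learner = load_learner("/opt/model")  # loads /opt/model/export.pkl (assumed path)

def handler(event, context):
    # Assumes the image arrives base64-encoded in the request body.
    img = open_image(io.BytesIO(base64.b64decode(event["body"])))
    pred_class, pred_idx, probs = learner.predict(img)
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": str(pred_class),
                            "confidence": float(probs[pred_idx])}),
    }
```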
I was doing something pretty similar to this before; however, I switched from lucidrains (although he is awesome) to NVIDIA’s official repo.
As far as estimating how long something will take, you might be able to roughly calculate it based on this: GitHub - NVlabs/stylegan2-ada-pytorch: StyleGAN2-ADA - Official PyTorch implementation
But you never really know with GANs, so I can’t comment on your resources/timing. I did find I had to checkpoint saved outputs for manual checking, since the % isn’t the same as, say, a loss.
To check GPU usage, I used to use nvidia-smi (I don’t remember the flags) to make sure.
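For reference, two standard invocations (these flags are part of nvidia-smi’s documented query interface): `watch -n 1 nvidia-smi` refreshes the full per-GPU summary every second, while `nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 5` logs just utilization and memory every 5 seconds. If GPU utilization sits near zero during training, the bottleneck is usually data loading rather than compute.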
Using Fastai to recognize Kissing Bugs using mobile phone images
I am proud to have my first paper published! It took almost 2 years, which is why it uses fastai 1. I used and copied a lot of material from Jeremy’s courses.
Any comments will be more than welcome.
The paper reports the use of mobile phone images to identify kissing bugs using deep learning. Kissing bugs are vectors of Chagas disease.
Chagas disease is endemic in 21 countries in the Americas and affects an estimated 6 million people. In the Americas, there are 30,000 new cases each year, 12,000 deaths on average, and 8,600 newborns are infected during gestation.
I am very proud of it for these reasons:
- It has been published in Ecological Informatics, a journal dedicated to articles on all aspects of computational ecology, data science, biogeography, and ecosystem analysis. I am a systems engineer with an amateur interest in ecology and diseases, so the paper combines these two fields.
- The images for this publication come from photos collected through the GeoVin project, developed at CEPAVE by the team members who coauthored the paper. The photos were taken by people around Argentina with regular mobile phones using the GeoVin app.
- The complete project is shared on GitHub, where anyone interested can access the code, the data used to train the CNN, and the image recognition app.
- Finally, the most important reason: the breakthrough comes from combining image recognition with images from mobile phones, allowing a very fast response plus a geolocation for the spotted bug.
Abstract
There are several identification tools that can assist researchers, technicians and the community in the recognition of Chagas vector insects (triatomines) from other insects with similar morphologies. They involve using dichotomous keys, field guides, expert knowledge or, in more recent approaches, the classification by a neural network of high-quality photographs taken in standardized conditions. The aim of this research was to develop a deep neural network to recognize triatomines (insects associated with vectorial transmission of Chagas disease) directly from photos taken with any commonly available mobile device, without any other specialized equipment. To overcome the shortcomings of taking images using specific instruments and a controlled environment, an innovative machine-learning approach was used: Fastai with Pytorch, a combination of open-source software for deep learning. The Convolutional Neural Network (CNN) was trained with triatomine photos, reaching a correct identification in 94.3% of the cases. Results were validated using photos sent by citizen scientists from the GeoVin project, resulting in 91.4% correct identification of triatomines. The CNN provides a lightweight, robust method that works even with blurred images, poor lighting, and the presence of other subjects and objects in the same frame. Future steps include the inclusion of the CNN into the framework of the GeoVin science project, which will also allow further training of the network using the photos sent by the citizen scientists. This would allow the participation of the community in the identification and monitoring of the vector insects, particularly in regions where government-led monitoring programmes are not frequent due to their low accessibility and high costs.
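For readers who want a feel for the training setup the abstract describes, here is a minimal sketch (hypothetical paths and hyperparameters, not the paper’s actual code) of a fastai v1 pipeline that fine-tunes a pretrained CNN on folders of labeled photos:

```python
# Hypothetical fastai v1 sketch: fine-tune a pretrained CNN on
# folder-per-class triatomine photos, as the abstract describes.
from fastai.vision import (ImageDataBunch, cnn_learner, models, accuracy,
                           get_transforms, imagenet_stats)

data = ImageDataBunch.from_folder(
    "data/bugs", train=".", valid_pct=0.2, seed=42,  # assumed layout
    ds_tfms=get_transforms(),  # default augmentations suit variable phone photos
    size=224,
).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)                            # train the new head first
learn.unfreeze()                                  # then fine-tune all layers
learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-4))
```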