For the first project, I took the fastai tutorial and created an image classifier that attempts to classify native trees in Kentucky, USA. It works to a point, depending on how you take the picture. The inspiration came from my recent move out of the suburbs and into a more wooded area with lots of wildlife. I really didn’t know how to tell one type of tree from another, so I developed this app.
FastServe - Generate API endpoints from fast.ai models
We’ve just launched the private beta for FastServe: a service that turns pre-trained fast.ai models into APIs you can plug into your applications. You upload a model file (e.g., export.pkl) and get back an API endpoint that serves inference. We’re hoping this will help data scientists deploy models more quickly and easily.
If you are interested in trying it out, please check out the introduction video on YouTube (link below), and sign up for the beta at https://launchable.ai/fastserve-beta. We’d love to hear what folks think!
We are concerned about the progress so far. The training speed is similar to our rehearsal run, even on this monster instance. We added GPUs, but we also increased the training data set. How can we estimate how long this will take?
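One common approach is to time a short run and extrapolate. The sketch below is a back-of-the-envelope estimate, not a fastai utility; all the numbers in the example are hypothetical placeholders, so plug in your own measurements (e.g., time a few hundred batches with `time.time()` and divide).

```python
def estimate_training_hours(seconds_per_batch, batches_per_epoch, n_epochs):
    """Extrapolate total wall-clock hours from a measured per-batch time."""
    return seconds_per_batch * batches_per_epoch * n_epochs / 3600.0

# Hypothetical example: if the data set grew 4x, batches_per_epoch grew 4x too.
hours = estimate_training_hours(seconds_per_batch=0.5,
                                batches_per_epoch=8000,
                                n_epochs=10)
print(f"estimated: {hours:.1f} hours")  # 0.5 * 8000 * 10 / 3600 ≈ 11.1
```

Note that per-batch time usually stabilizes after the first epoch (once data caching and GPU warm-up settle), so measure after a warm-up period for a better estimate.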
What metrics on the AWS instance can we look at to make sure we are using its full capacity? Like GPU utilization?
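Yes, GPU utilization is the first thing to check. Assuming the instance has NVIDIA drivers installed, `nvidia-smi` can report utilization and memory in a machine-readable form; below is a small sketch that queries it and parses the CSV output (the parsing is separated out so it can be tested without a GPU).

```python
import subprocess

def parse_gpu_stats(csv_text):
    """Parse `nvidia-smi ... --format=csv,noheader,nounits` output into a list
    of (utilization %, memory used MiB, memory total MiB) tuples, one per GPU."""
    rows = []
    for line in csv_text.strip().splitlines():
        util, used, total = (int(x.strip()) for x in line.split(","))
        rows.append((util, used, total))
    return rows

def query_gpu_stats():
    """Call nvidia-smi on the instance (requires NVIDIA drivers)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_gpu_stats(out)
```

If utilization sits well below 100% while training, the data pipeline (CPU preprocessing, disk/network I/O) is often the bottleneck rather than the GPUs themselves; CloudWatch covers the CPU, disk, and network side of the same question.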
This is probably my first post here on this forum. I am learning through the 2019 course and created a simple donut vs bagel vs vada classifier. While the model itself is simple, even a bit silly, I used this opportunity to experiment with deploying it as a serverless deep learning inference function using AWS Lambda.
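For anyone curious what that looks like, here is a hedged sketch of the Lambda plumbing, not the poster's actual code. The model path and payload keys are hypothetical; in a real deployment you would call fastai's `load_learner` once at cold start and reuse the learner across invocations.

```python
import json

_learner = None

def _get_learner():
    """Lazily load the exported model once per container (stubbed here;
    a real deployment needs fastai bundled in the Lambda package or layer)."""
    global _learner
    if _learner is None:
        # from fastai.vision import load_learner   # real deployment
        # _learner = load_learner("/opt/ml", "export.pkl")
        raise NotImplementedError("wire up your exported model here")
    return _learner

def handler(event, context, predict=None):
    """Lambda entry point; `predict` is injectable so the plumbing is testable
    without a model. fastai's predict returns (label, index, probabilities)."""
    if predict is None:
        predict = lambda img: _get_learner().predict(img)
    label, _idx, probs = predict(event["image"])
    return {
        "statusCode": 200,
        "body": json.dumps({"label": str(label),
                            "confidence": float(max(probs))}),
    }
```

The main practical constraints with this route are Lambda's deployment-package size limits (PyTorch plus fastai is large) and cold-start latency from loading the model, which is why the learner is cached in a module-level variable.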
I am proud to have my first paper published! It took almost 2 years, which is why it uses fastai 1. I used and adapted a lot of material from Jeremy’s courses.
Any comments are more than welcome.
The paper reports the use of mobile phone images to identify kissing bugs using deep learning. Kissing bugs are vectors of Chagas disease.
Chagas disease is endemic in 21 countries in the Americas and affects an estimated 6 million people. In the Americas, there are 30,000 new cases each year, 12,000 deaths on average, and 8,600 newborns are infected during gestation.
I am very proud of it for these reasons:
It has been published in Ecological Informatics, a journal dedicated to articles on all aspects of computational ecology, data science, biogeography, and ecosystem analysis. I am a systems engineer with an amateur interest in ecology and diseases, so the paper combines these two fields.
The images for this publication come from photos collected by the GeoVin project, developed at CEPAVE by the team members who coauthored the paper. The photos were taken by people around Argentina with regular mobile phones using the GeoVin app.
Finally, the most important reason: the breakthrough comes from combining image recognition with images taken on mobile phones, allowing a very fast response plus geolocation of the spotted bug.
There are several identification tools that can assist researchers, technicians and the community in distinguishing Chagas vector insects (triatomines) from other insects with similar morphologies. They involve dichotomous keys, field guides, expert knowledge or, in more recent approaches, classification by a neural network of high-quality photographs taken under standardized conditions. The aim of this research was to develop a deep neural network to recognize triatomines (insects associated with vectorial transmission of Chagas disease) directly from photos taken with any commonly available mobile device, without any other specialized equipment. To overcome the shortcomings of taking images with specific instruments in a controlled environment, an innovative machine-learning approach was used: fastai with PyTorch, a combination of open-source software for deep learning. The Convolutional Neural Network (CNN) was trained with triatomine photos, reaching a correct identification in 94.3% of cases. Results were validated using photos sent by citizen scientists from the GeoVin project, with 91.4% of triatomines correctly identified. The CNN provides a lightweight, robust method that works even with blurred images, poor lighting, and the presence of other subjects and objects in the same frame. Future steps include incorporating the CNN into the framework of the GeoVin citizen science project, which will also make it possible to further train the network using the photos sent by citizen scientists. This would allow the community to participate in the identification and monitoring of the vector insects, particularly in regions where government-led monitoring programmes are infrequent due to low accessibility and high costs.