Hey! I started implementing what I've learned; here is my first Medium article explaining how I tackled the problem of classifying malware.
Have a nice day!
I like that, especially how you added interaction to this site. How did you do that? Did you upload a video?
I’ve documented my work on Medium for those who may be interested.
Hi all, sharing a mini-project on using an external dataset for multi-label classification. Enjoy!
Here’s a simple step-by-step walkthrough of what the accuracy_multi metric measures.
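As a sketch of the idea, here's a hand-rolled NumPy version mirroring the logic of fastai's accuracy_multi (this is not the library code itself, and the logits/targets below are made up): apply a sigmoid to the raw outputs, threshold each prediction, then average element-wise correctness over all labels and samples.

```python
import numpy as np

def accuracy_multi(preds, targs, thresh=0.5, sigmoid=True):
    """Mirror of fastai's accuracy_multi: the fraction of individual
    label predictions that are correct after thresholding."""
    if sigmoid:
        preds = 1 / (1 + np.exp(-preds))  # squash logits into (0, 1)
    return float(((preds > thresh) == targs.astype(bool)).mean())

# Hypothetical raw logits for 2 samples x 3 labels
logits = np.array([[ 2.0, -1.0,  0.5],
                   [-0.5,  3.0, -2.0]])
targs = np.array([[1, 0, 1],
                  [0, 1, 1]])

print(accuracy_multi(logits, targs))  # 5 of the 6 label predictions match -> 0.8333...
```

Note that, unlike plain accuracy, a sample where two of three labels are right still contributes partial credit: the metric averages over every label slot, not over whole rows.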
FastServe - Generate API endpoints from fast.ai models
Hi all,
We’ve just launched the private beta for FastServe: a service that turns pre-trained fast.ai models into APIs you can then plug into your applications. You upload a model file (e.g., export.pkl) and get back an API endpoint that serves inference. We’re hoping this will help data scientists deploy models more quickly and easily.
If you are interested in trying it out, please check out the introduction video on YouTube (link below), and sign up for the beta at https://launchable.ai/fastserve-beta. We’d love to hear what folks think!
Thanks!
Is it designed around fastai v1? Or fastai v2?
fastai v2
Using Images and an Algorithm to Triage Ill Babies
Maybe you have seen thispersondoesnotexist.
My partner and I are trying to do this for interiors/housing images.
So, we are using this repository: https://github.com/lucidrains/stylegan2-pytorch
We are running on the Amazon P2 instances recommended for this application.
As a rehearsal, we used the following:
Now we want to use our actual dataset and a faster machine:
We are concerned about the progress so far: training speed is similar to our rehearsal run, even on this monster instance. We increased the number of GPUs, but we also increased the size of the training dataset. How can we estimate how long this will take?
What metrics on the AWS instance can we look at to make sure we are using its full capacity, such as GPU utilization?
This is probably my first post on this forum. I am learning through the 2019 course and created a simple donut vs. bagel vs. vada classifier. While the model itself is simple, rather silly even, I used this opportunity to experiment with deploying it as a serverless deep learning inference function using AWS Lambda.
The web application can be accessed here: https://bit.ly/donutornot.
You can read about how I did it on my blog: Donut or Not! - Deploying Deep Learning Inference as Serverless Functions | Atma's blog, or check out the GitHub repo at https://github.com/AtmaMani/donut_or_not
I was doing something pretty similar to this before. However, I switched from lucidrains’ repo (although he is awesome) to NVIDIA’s official one.
As far as estimating how long training will take, you might be able to make a rough calculation based on this: GitHub - NVlabs/stylegan2-ada-pytorch: StyleGAN2-ADA - Official PyTorch implementation
But you never really know with GANs, so I can’t speak to your resources/timing. I did find I had to checkpoint saved outputs for manual checking, since the progress percentage isn’t the same as, say, a loss.
To check GPU usage, I used to use nvidia-smi (don’t remember the flags) to make sure.
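For reference, a sketch of the kind of flags meant here, assuming nvidia-smi's CSV query interface (the sample output string below is fabricated for illustration; run the command in the comment on the actual instance):

```python
# On the instance, something like:
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
#              --format=csv,noheader,nounits -l 5
# prints one CSV line per GPU, refreshed every 5 seconds.

def parse_gpu_stats(csv_text):
    """Turn nvidia-smi's noheader/nounits CSV output into per-GPU dicts."""
    stats = []
    for line in csv_text.strip().splitlines():
        idx, util, used, total = (field.strip() for field in line.split(","))
        stats.append({
            "gpu": int(idx),
            "util_pct": int(util),        # % GPU utilization
            "mem_used_mib": int(used),    # memory in use (MiB)
            "mem_total_mib": int(total),  # memory available (MiB)
        })
    return stats

# Fabricated sample output for a hypothetical 2-GPU instance
sample = "0, 87, 10500, 11441\n1, 12, 800, 11441"
for gpu in parse_gpu_stats(sample):
    print(gpu)
```

If utilization sits near zero on some GPUs while you're paying for all of them, that's the signal the run isn't using the instance's full capacity; low utilization with a busy data pipeline often points to an input bottleneck rather than the GPUs themselves.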
I am proud to have my first paper published! It took almost two years, which is why it uses fastai v1. I used and adapted a lot of material from Jeremy's courses.
Any comments will be more than welcome.
The paper reports the use of mobile phone images to identify kissing bugs using deep learning. Kissing bugs are vectors of Chagas disease.
Chagas disease is endemic in 21 countries in the Americas and affects an estimated 6 million people. In the Americas, there are 30,000 new cases each year, 12,000 deaths on average, and 8,600 newborns are infected during gestation.
I am very proud of it for these reasons:
There are several identification tools that can assist researchers, technicians and the community in distinguishing Chagas vector insects (triatomines) from other insects with similar morphologies. They involve using dichotomous keys, field guides, expert knowledge or, in more recent approaches, classification by a neural network of high-quality photographs taken under standardized conditions. The aim of this research was to develop a deep neural network to recognize triatomines (insects associated with vectorial transmission of Chagas disease) directly from photos taken with any commonly available mobile device, without any other specialized equipment. To overcome the shortcomings of taking images with specific instruments in a controlled environment, an innovative machine-learning approach was used: fastai with PyTorch, a combination of open-source software for deep learning. The Convolutional Neural Network (CNN) was trained with triatomine photos, reaching a correct identification in 94.3% of the cases. Results were validated using photos sent by citizen scientists from the GeoVin project, with 91.4% correct identification of triatomines. The CNN provides a lightweight, robust method that works even with blurred images, poor lighting, and the presence of other subjects and objects in the same frame. Future steps include the inclusion of the CNN into the framework of the GeoVin citizen science project, which will also allow further training of the network using the photos sent by the citizen scientists. This would allow the participation of the community in the identification and monitoring of the vector insects, particularly in regions where government-led monitoring programmes are infrequent due to low accessibility and high costs.
I’ve made a bird classifier hosted on Hugging Face, trained on the BIRDS 450 SPECIES - IMAGE CLASSIFICATION dataset; here’s the repo with the model I trained. This project follows Tanishq Abraham’s blog post on Gradio.
Hi, beginner DL student here. I fine-tuned the resnet18 presented in Lesson 1 to create a penguin species classifier! As a first quick experiment, I fine-tuned the model to recognize:
Side notes:
Link to the notebook: here
Pretty challenging to differentiate, especially for a newbie.
And you don’t have a concertina yet. Let me find one for you.
I completed Chapter 2 and made a model and application that recognizes whether a given human face is male or female. Sharing my work:
model_creation
app_development