Share your work here ✅

Genius!

1 Like

I’ve just finished porting over ClassConfusion into fastai2:

And for those who are unaware of ClassConfusion, it lets you further examine classification models (image and tabular) to spot trends. It also shows the filenames of misclassified images. Colab only at the moment (regular Jupyter support once I have time) :slight_smile:

Here’s the old documentation to read more:
https://docs.fast.ai/widgets.class_confusion.html

9 Likes

I found some time to play with URL classification in fastai, a relevant application of RNNs to cybersecurity. The paper I considered is called Classifying Phishing URLs Using Recurrent Neural Networks.
The problem looks like this:


where we try to predict whether a URL is a phishing one or a benign one.
The authors were kind enough to share their dataset: roughly 1,000,000 samples for each class.

Approach:
The approach starts simple: an LSTM plus all the goodies provided by fastai, including fit_one_cycle, lr_find, etc.


Finally training the network:

Results
Initial results seem quite interesting, with the F1-score going from 98.76 to 99.25 in our model. Similarly, all the metrics cited in the paper (AUC, accuracy, precision, recall) are improved, even without the 3-fold cross-validation:


It is very interesting how far we can go using the power of fastai and a straightforward LSTM model.
Of course more involved techniques can further improve this model. Stay tuned!
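For readers curious what such a model can look like: here is a minimal, hypothetical sketch of a character-level LSTM URL classifier in plain PyTorch. All names and sizes (`URLClassifier`, embedding and hidden dimensions, the 60-character cutoff) are my own illustrative choices, not the author's actual architecture.

```python
import torch
import torch.nn as nn

class URLClassifier(nn.Module):
    """Toy character-level LSTM: embed each character, run an LSTM,
    classify from the final hidden state (phishing vs. benign)."""
    def __init__(self, vocab_size=128, emb_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, seq_len) of character codes
        _, (h, _) = self.lstm(self.emb(x))
        return self.head(h[-1])  # logits over the two classes

def encode(url, max_len=60):
    # Map characters to codes 0..127, truncating long URLs
    codes = [min(ord(c), 127) for c in url[:max_len]]
    return torch.tensor(codes).unsqueeze(0)

model = URLClassifier()
logits = model(encode("http://example.com/login"))
print(logits.shape)  # torch.Size([1, 2])
```

In practice you would wrap the encoded URLs in a DataLoader and train with fastai's usual fit_one_cycle, as the post describes.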

10 Likes

Hi everyone! I’ve created an open source web app that allows a user to take a picture using an interface similar to a native camera app. After the picture is taken, it’s sent to a fast.ai CNN model running on Render. The modified server will then return a classification of the image along with related content.

There are a handful of really helpful prebuilt apps that allow a user to upload a photo (this and this), but I couldn’t find an app that allowed the user to skip the step of manually uploading it.

You can check out a demo app here that recognizes everyday objects (cars, people, trees, computers, chairs, etc.).

I hope it’s helpful to someone and I welcome any feedback or pull requests that could make it more helpful or clear.

Thanks!

5 Likes

I’ve tried out SimCLR, and it seems to be a good direction to go with self-supervised learning.
Pre-training on ImageNet with SimCLR for 50 epochs and then fine-tuning on ImageNette gives us 92% accuracy on the validation set, whereas starting from random weights gives 79%. Starting from weights pre-trained on supervised ImageNet gives 95.5%.

I think the first-layer filters the model came up with are just mesmerizing:
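At the heart of SimCLR is the NT-Xent contrastive loss, which pulls together the two augmented views of each image and pushes apart everything else in the batch. Here is a minimal sketch of that loss in PyTorch; it is my own simplified version (temperature, shapes, and names are assumptions), not the exact implementation behind the results above.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Minimal NT-Xent loss: z1 and z2 are (N, D) projections of two
    augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D) unit vectors
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))             # never match an example with itself
    n = z1.shape[0]
    # the positive for row i is its other view: i+N for the first half, i-N for the second
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 16), torch.randn(8, 16))
print(float(loss))  # a positive scalar
```

During pre-training this loss replaces the usual classification loss; labels are never used.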

7 Likes

I’ve trained a model on MNIST images downscaled to 14x14.
This model knows the structure of digits at that scale.
The original 28x28 images have the same structure, just at twice the size.
If we adjust the first layer’s scale accordingly, we can use this network directly on 28x28 pictures without any fine-tuning: just set the first layer’s dilation and stride to twice their original values and we’re done; we get the same accuracy as we had on the 14x14 pictures.
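The dilation/stride trick can be sketched with a toy first layer (my own illustrative channel counts and kernel size, not the actual model): the doubled-stride, doubled-dilation conv shares the 14x14 model's weights yet produces the same-sized feature map from a 28x28 input.

```python
import torch
import torch.nn as nn

# A first conv as it might have been trained on 14x14 inputs
conv14 = nn.Conv2d(1, 8, kernel_size=3, stride=1, padding=1)

# The same weights reused on 28x28 inputs: double the stride and dilation
# (and the padding, to keep the geometry consistent)
conv28 = nn.Conv2d(1, 8, kernel_size=3, stride=2, dilation=2, padding=2)
conv28.weight = conv14.weight
conv28.bias = conv14.bias

small = torch.randn(1, 1, 14, 14)
big = torch.randn(1, 1, 28, 28)
print(conv14(small).shape, conv28(big).shape)  # both torch.Size([1, 8, 14, 14])
```

Because both convs emit a 14x14 feature map, the rest of the trained network runs unchanged.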

Alternatively, we can make the first-layer filters twice their original size, with weights obtained by resizing the 14x14 model’s first-layer weights as if they were images.
This gives a slight accuracy drop, but after training for just a little bit with a learning rate of 1e-100(!), we get our accuracy back.
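The filter-resizing variant can be sketched like this: bilinearly upsample the trained 3x3 filters to 6x6 as if they were tiny images, then run them with stride 2 on the larger input. Again, the channel counts and kernel sizes here are my own toy assumptions, not the author's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# First conv as trained on 14x14 inputs
conv_small = nn.Conv2d(1, 8, kernel_size=3, padding=1)

# Resize its (8, 1, 3, 3) filters to (8, 1, 6, 6), treating each as an image
big_w = F.interpolate(conv_small.weight.data, scale_factor=2,
                      mode='bilinear', align_corners=False)

# A doubled-size conv that uses the resized filters on 28x28 inputs
conv_big = nn.Conv2d(1, 8, kernel_size=6, stride=2, padding=2)
conv_big.weight.data = big_w
conv_big.bias.data = conv_small.bias.data

out = conv_big(torch.randn(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 8, 14, 14]), same as on 14x14 inputs
```

The interpolation is only an approximation of the original filters, which matches the slight accuracy drop the post reports before the brief retraining.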

Take a look at my notebook for details.

Nice work.

I was trying to use your repository to make my own image classifier.

I am getting this error when I try to download the images from Google:

"Unfortunately all 100 could not be downloaded because some images were not downloadable. 0 is all we got for this search filter!

Errors: 0"

Do you have any advice?

Thank you.

Just wanted to say thanks so much for the fastai course, it was a brilliant introduction and way to get started!

I’ve been using what I learned on this course and others to explore generative algorithms. Here are some of the pictures I’ve generated so far using style transfer

I wrote a blog piece describing the process for anyone that’s interested

More recently I’ve been exploring semantic segmentation of point clouds in architectural models, which has proved trickier than I initially thought :rofl:

Thanks again

1 Like

Hi everyone,
as you know, the WHO suggests that washing your hands and not touching your face will protect you from coronavirus infection,
but not touching your face is easier said than done! You can check out this video for example. :roll_eyes:

So, based on Lessons 1 and 2, we built a face-touching classifier:
Touching your face is bad for ya!

When you touch your face, a warning sound is played. That’s it.

You can check out our demo here.

Source code

2 Likes

I have just released an implementation of Gaussian processes compatible with the fastai tabular API:

See here for a topic on the subject where I detail the pros of the algorithm.
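For anyone new to the algorithm, here is a tiny illustration of what Gaussian process regression gives you; I'm using scikit-learn's off-the-shelf `GaussianProcessRegressor` purely for brevity, so this is not the linked fastai-compatible implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP with an RBF kernel to a noiseless sine curve
X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X, y)

# A GP predicts a mean AND a calibrated uncertainty at each point
mean, std = gp.predict([[2.5]], return_std=True)
print(mean, std)
```

That built-in uncertainty estimate is one of the main pros of the algorithm for tabular problems.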

3 Likes

Do check out my experience participating in the Bengali AI competition here.

Training Notebook

Inference Notebook

1 Like

Hi everyone,

I’m the creator of SeeMe.ai, an AI marketplace (currently in beta).

We focus on letting you train, use and share AI models with your friends and the rest of the world. We have been fans of fast.ai since its early days, and are happy to contribute to this wonderful community.

Once you have trained your model, you can easily deploy it and use it on the web, on mobile, via our Python SDK, or via the API. After testing, you can share it with others so they can use your model.

We have just created a fastai v1 quick guide to help you get up and running, and we would love for you to try it out and share your feedback!

All the best and stay safe.

Thanks!

4 Likes

Hi there @sabou.teodor .

Sorry, not quite sure what that would imply. Do you have an actual error trace when this happens?

C

Thank you for the answer. I’ve found another JavaScript snippet that worked for me just fine. The original one was generating an empty txt file in Chrome after searching the images.

Teodor

Hi everyone,

I wrote an article on Medium on how to deploy your machine learning model on AWS using Cortex. In the example, I used the first lesson of Part I (2019) and deployed a pets classifier as a web service. Dig in and tell me what you think!

2 Likes

Predicting the origin of an ancient, ceramic artifact

Covid-19 has been depressing for a data guy like me: always thinking about stats, and these stats suck. To keep my mind off things, I finished up a passion project of mine: IndAIna. I love how I can combine my love of history with data science. Check out the link, and stay safe, y’all! :cowboy_hat_face: https://bit.ly/2UlJfmv


https://www.linkedin.com/pulse/belongs-musaim-discovering-artifacts-origin-using-deep-eerdmans/

4 Likes

Hi everyone.

It’s been a while, but I have finally managed to get round to training a model using DeOldify to re-colour anime sketches. I’ve got some amazing results so far!


If you would like to try it out, try here!
https://colab.research.google.com/github/Dakini/AnimeColorDeOldify/blob/master/ImageColorizerColab.ipynb

9 Likes

I’ve been following your results on twitter very closely and I absolutely love this. Fantastic work!

2 Likes

Thank you! That means a lot to me :smile:

That’s great!

1 Like