Share your work here ✅

Hi All,

I’m happy to find this thread — there are a lot of cool projects going on! I’d like to join in and share a small project I built while working through part 1.

My model classifies water as hot or cold from the sound of pouring (it was inspired by an NPR piece I heard some time ago).

Links:

I kept a fairly detailed build log in my notebooks, and made sure to MIT license my work and CC license the data I collected. Feel free to reference and remix my work if you find it helpful :+1:


Feedback is tremendously appreciated. This is my first ML model, and my first fully self-hosted web server.

I’m sure I didn’t hit the mark on best practices in many places. I want to improve this project where I can so it can act as a quick refresher for myself when starting subsequent projects.

5 Likes

Here are my first reactions.
To begin with, this is a misuse of AI: using subjective choices and tastes to produce absolute results, and publishing them as public apps.

On testing this further, it confirms the racial bias that we already have in pretrained AI models. Please see the results.


You classify fair-skinned people (Caucasians in general) as “hot” and dark-skinned people as “not”.
I’m not sure whether you made any sincere effort (or even bothered) to look into your training image sets.
Nothing personal, but these practices should be minimized (or not entertained at all).
Maybe it is just my personal view, but it’s distasteful.

5 Likes

Hi robert_deep, hope you are having a wonderful day!

It’s good to see you have completed your first model.

I think it may be helpful if you read the https://www.fast.ai/ homepage; there are many good articles about responsibility and ethics in the use of AI.

I can’t help thinking that if google.com, microsoft.com, twitter.com, github.com, amazon.com, or any other company deployed your model, serious concerns would be raised.

Cheers mrfabulous1 :smiley: :smiley:

4 Likes

Hello everyone! This is my first time posting in the forums. I have done many projects but here is one I did today.
It learns from little bits of color in the image and fills in the rest: a sort of super-powered adaptive filling.
Repository
Feedback would be much appreciated.
Cheers!
PS. I really love your way of teaching @jeremy

3 Likes

Nice idea! What data did you use?

Hello! Thanks a lot :pleading_face:
I used the CelebA dataset and preprocessed it using PIL’s FIND_EDGES filter to retain a tiny bit of color and remove the rest. Unlike Canny, it still keeps a bit of color information, which is what I wanted. @jeremy
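The preprocessing step can be sketched roughly like this (a minimal version for illustration, assuming standard Pillow; the function name is mine, not from the project):

```python
from PIL import Image, ImageFilter

def edges_with_color(img: Image.Image) -> Image.Image:
    # FIND_EDGES is a simple 3x3 edge-detection kernel applied per channel,
    # so unlike a grayscale Canny edge map the result retains a little color.
    return img.convert("RGB").filter(ImageFilter.FIND_EDGES)
```

Because the filter runs on each RGB channel independently, edges between regions of different hue keep a hint of those hues, which is what makes it usable as a colorization input.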

I’ve been working with multiband image data and fastai2; quick tests for BigEarthNet19 can be found here. Because the dataset is huge, I’ve only tested with smaller samples (20k train, 6k valid, 6k test), and after 25 epochs the results (MultiPre, MultiRec, MultiF1, JaccardMulti, and HammingLoss, all relevant micro averages) are worse, as expected when randomly selecting portions of the splits.

For some reason, learn.validate() gives the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-20-631604a2e07b> in <module>
----> 1 learn.validate()

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in validate(self, ds_idx, dl, cbs)
    187             self(_before_epoch)
    188             self._do_epoch_validate(ds_idx, dl)
--> 189             self(_after_epoch)
    190         return getattr(self, 'final_record', None)
    191 

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in __call__(self, event_name)
    106     def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
    107 
--> 108     def __call__(self, event_name): L(event_name).map(self._call_one)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    360              else f.format if isinstance(f,str)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 
    364     def filter(self, f, negate=False, **kwargs):

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    313     @property
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    317     def copy(self): return self._new(self.items.copy())

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    304         if items is None: items = []
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:
    308             if is_coll(match): match = len(match)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in _listify(o)
    240     if isinstance(o, list): return o
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]
    244 

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    206             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 
    210 # Cell

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in _call_one(self, event_name)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in <listcomp>(.0)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/callback/core.py in __call__(self, event_name)
     21         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     22                (self.run_valid and not getattr(self, 'training', False)))
---> 23         if self.run and _run: getattr(self, event_name, noop)()
     24         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     25 

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/callback/progress.py in after_epoch(self)
     82         iters = range_of(rec.losses)
     83         val_losses = [v[1] for v in rec.values]
---> 84         x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses))
     85         y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses)))))
     86         self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds)

IndexError: list index out of range

Also, the matplotlib errors are not technically errors but log messages. If anyone knows how to disable them, please tell me.

Some more articles. Let me know what y’all think :slight_smile:

A walkthru for writing better functions

Decorators in Python

3 Likes

@dipam7 Nice walkthrough on the writing-functions part. Are you going to expand on it?

Not sure if

read_image(x,mode)

would take more execution time when mode=None, compared to a function that conditionally executes .convert().

So if I just want to convert an image into an np.array and not do any mode conversion, mode=None in the function would still force a call to .convert() and take extra time (even though it is doing nothing).
Instead, would it be a good check to do:
if mode == None,
then just do an np.array(x) conversion;
if mode != None,
then also do the .convert(mode=mode)?

Hey, thanks. I might expand it. What sort of expansion would you expect?

Nice question. However, it won’t make much of a difference. Let’s try it.

The reason it is written the way it is is that we want to give the user a lot of options, but we don’t want to pile up a lot of if statements, because that would make our code clunky. We want to keep it neat. But yes, if the overhead is too much, we can prefer an if over an operation that does nothing.
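The overhead is easy to measure. A rough sketch (the `read_image_*` functions here are hypothetical stand-ins, not fastai’s actual implementation):

```python
import timeit
from PIL import Image

def read_image_always(img, mode="RGB"):
    # Unconditionally calls .convert(); even a same-mode convert copies pixels.
    return img.convert(mode)

def read_image_checked(img, mode=None):
    # Skips the call entirely when no conversion is requested.
    return img.convert(mode) if mode else img

img = Image.new("RGB", (224, 224))
t_always = timeit.timeit(lambda: read_image_always(img), number=1000)
t_checked = timeit.timeit(lambda: read_image_checked(img), number=1000)
print(t_always, t_checked)
```

On a typical image size, the difference is in the microseconds per call, which is usually dwarfed by disk I/O and decoding, so readability tends to win.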

1 Like

Genius!

1 Like

I’ve just finished porting over ClassConfusion into fastai2:

And for those who are unaware of ClassConfusion: it lets you further examine classification models (images and tabular) to see any trends. It also shows the filenames of misclassified images. Colab only at the moment (regular Jupyter support once I have time) :slight_smile:

Here’s the old documentation to read more:
https://docs.fast.ai/widgets.class_confusion.html

9 Likes

I found some time to play with URL classification in fastai, a relevant application of RNNs to cyber security. The paper I considered is called Classifying Phishing URLs Using Recurrent Neural Networks.
The problem looks like this:


where we try to predict whether a url is a phishing one or a good one.
The authors were kind enough to share their dataset, roughly 1,000,000 samples for each class.

Approach:
The approach starts simple by using an LSTM plus all the goodies provided by Fastai, including fit_one_cycle, lr_find, etc.


Finally training the network:

Results
Initial results seem quite interesting, with the F1-score going from 98.76 to 99.25 in our model. Similarly, all the metrics cited in the paper (AUC, accuracy, precision, recall) are improved, even without the 3-fold cross-validation:


It is very interesting how far we can go using the power of fastai and a straightforward LSTM model.
Of course more involved techniques can further improve this model. Stay tuned!
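For anyone curious, a character-level LSTM classifier along these lines can be sketched in plain PyTorch (illustrative dimensions and tokenization; not the exact model from the paper or my notebook):

```python
import torch
import torch.nn as nn

class URLClassifier(nn.Module):
    """Character-level LSTM that predicts phishing vs. benign."""
    def __init__(self, vocab_size=128, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # two classes: phishing / benign

    def forward(self, x):
        out, _ = self.lstm(self.emb(x))
        return self.head(out[:, -1])  # classify from the last hidden state

# Encode a URL as ASCII codepoints (a simple illustrative tokenization).
url = "http://example.com/login"
x = torch.tensor([[min(ord(c), 127) for c in url]])
logits = URLClassifier()(x)
print(logits.shape)  # torch.Size([1, 2])
```

In practice you would pad/pack variable-length batches and train with cross-entropy; fastai’s fit_one_cycle and lr_find then work on top of any such nn.Module.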

10 Likes

Hi everyone! I’ve created an open source web app that allows a user to take a picture using an interface similar to a native camera app. After the picture is taken, it’s sent to a fast.ai CNN model running on Render. The modified server then returns a classification of the image along with related content.

There are a handful of really helpful prebuilt apps that allow a user to upload a photo (this and this), but I couldn’t find one that let the user skip the step of manually uploading it.

You can check out a demo app here that recognizes everyday objects (cars, people, trees, computers, chairs, etc).

I hope it’s helpful to someone and I welcome any feedback or pull requests that could make it more helpful or clear.

Thanks!

5 Likes

I’ve tried out SimCLR, and it seems to be a good direction to go with self-supervised learning.
Pre-training on ImageNet with SimCLR for 50 epochs and then fine-tuning on ImageNette gives 92% accuracy on the validation set, whereas starting from random weights gives 79%. Starting from supervised ImageNet pre-trained weights gives 95.5%.
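The NT-Xent contrastive loss at the core of SimCLR can be sketched roughly like this (a minimal PyTorch version for illustration; batch size and embedding dimension are arbitrary):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (N, D) projection-head embeddings of two augmented views
    # of the same N images.
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D), unit norm
    sim = (z @ z.t()) / temperature               # scaled cosine similarities
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    # Row i's positive is the other view of the same image, at index i+n mod 2N.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 16), torch.randn(8, 16))
```

Each embedding is pulled toward the other augmented view of its image and pushed away from the other 2N-2 embeddings in the batch, which is why SimCLR benefits from large batches.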

I think the first-layer filters the model came up with are just mesmerizing:

7 Likes

I’ve trained a model on MNIST 14x14 downscaled images.
This model knows the structure of numbers on this scale.
The original 28x28 images have the same structure, just at twice the size.
If we adjust the first layer’s scale accordingly, we can use this network directly on 28x28 pictures without any fine-tuning: just set the first layer’s dilation and stride to twice their original values, and we’re done — we get the same accuracy as we had on 14x14 pictures.

Or we can make the first-layer filters twice their previous size, with the 14x14 model’s first-layer weights resized as if they were images.
This gives a slight accuracy drop, but after training for just a little bit with a learning rate of 1e-100(!), we get our accuracy back.

Take a look at my notebook for details.
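The first trick can be sketched in plain PyTorch (a minimal illustration with a single conv layer, not the notebook’s actual model):

```python
import torch
import torch.nn as nn

# First layer as trained on 14x14 inputs.
conv = nn.Conv2d(1, 8, kernel_size=3, stride=1, padding=1)

# Rebuild it with doubled stride and dilation so that, on a 28x28 input,
# each filter samples the same relative pixel pattern it saw on 14x14.
scaled = nn.Conv2d(1, 8, kernel_size=3, stride=2, dilation=2, padding=2)
scaled.weight.data = conv.weight.data.clone()  # reuse the trained weights
scaled.bias.data = conv.bias.data.clone()

small = torch.randn(1, 1, 14, 14)
large = torch.randn(1, 1, 28, 28)
print(conv(small).shape, scaled(large).shape)  # both torch.Size([1, 8, 14, 14])
```

Because both versions produce 14x14 feature maps, all the downstream layers can stay exactly as trained.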

Nice work.

I was trying to use your repository to make my own image classifier.

I am getting this error when I try to download the images from Google:

"Unfortunately all 100 could not be downloaded because some images were not downloadable. 0 is all we got for this search filter!

Errors: 0"

Do you have any advice?

Thank you.

Just wanted to say thanks so much for the fastai course, it was a brilliant introduction and way to get started!

I’ve been using what I learned on this course and others to explore generative algorithms. Here are some of the pictures I’ve generated so far using style transfer.

I wrote a blog piece describing the process for anyone that’s interested

More recently I’ve been exploring semantic segmentation of point clouds in architectural models, which has proved trickier than I initially thought :rofl:

Thanks again

1 Like

Hi everyone,
As you know, the WHO suggests that washing your hands and not touching your face can protect you from coronavirus infection,
but not touching your face is easier said than done! You can check out this video for example. :roll_eyes:

So, based on Lessons 1 and 2, we built a face-touching classifier:
Touching your face is bad for ya!

When you touch your face, a warning sound will be played. That’s it.

You can check out our demo here.

source code

2 Likes

I have just released an implementation of Gaussian processes compatible with the fastai tabular API:

See here for a topic on the subject where I detail the pros of the algorithm.

3 Likes