Share your work here ✅

Hi JonathanSum, you’re cleverer than you think! :trophy:

Cheers mrfabulous1 :smile: :smile:

1 Like

All Results


I think the model is pretty successful, and you will understand it if you watch a lot of anime too.

I think if I were building a website for anime fans, I’d have no problem determining which anime they like based on their reviews. In addition, I can also group anime by taste.

One more thing: I deleted the anime entries that had no reviews. Maybe your favorite was in there, but what should I do? Should I give them a 2-star rating, since that’s not as bad as one star? If you can give me a good suggestion, I will train it again, even though it takes about 4 hours.

For the whole map, I can tell that the upper-left region is mostly about loners, violence, and sexiness, while the middle-left is mostly about teams, clans (groups), and decent young protagonists. The upper-right region is about monster tales or strange things, for example Bakemonogatari and the headless biker. The middle-right is a region that I don’t like, and I don’t know why, so I don’t think I can really explain it. The lowest part is mostly anime aimed at a female audience.
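If anyone wants to draw a similar map: roughly, it’s a 2-D projection of the learned embeddings. Here is a minimal sketch using t-SNE; `anime_embs` (an `(n_anime, d)` embedding matrix) and `anime_titles` are placeholders, not my actual variables.

```python
# Sketch: project learned anime embeddings onto a 2-D "taste map".
# `anime_embs` and `anime_titles` are assumed to come from the trained
# model; they are placeholder names for illustration only.
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

coords = TSNE(n_components=2).fit_transform(anime_embs)

plt.figure(figsize=(12, 12))
plt.scatter(coords[:, 0], coords[:, 1], s=4)
for (x, y), title in zip(coords, anime_titles):
    plt.annotate(title, (x, y), fontsize=6)
plt.show()
```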

a) https://gitlab.com/huix/leaf-disease-plant-village/-/tree/master/plantvillage_deeplearning_paper_dataset%2Fcolor

b) https://github.com/spMohanty/PlantVillage-Dataset/tree/master/raw/color
This also has all plants (slightly modified).

c) If you have access to the crowdAI platform (registrations are closed now), you can download it from there.

d) This link (see the download sketch after this list):
https://zenodo.org/record/1204914/files/plantvillage_deeplearning_paper_dataset.7z?download=1

e) Alternatively, you can write to the maintainer of the dataset, who will provide you with a Dropbox link.
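For option (d), a minimal download sketch; streaming via requests is my assumption, and you still need a separate 7z extractor (py7zr, 7-Zip, ...) afterwards:

```python
# Sketch: stream the PlantVillage archive from Zenodo to disk.
import requests

url = ('https://zenodo.org/record/1204914/files/'
       'plantvillage_deeplearning_paper_dataset.7z?download=1')

with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open('plantvillage.7z', 'wb') as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
```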

Hi everyone! I was trying to make an interesting classifier, and I decided to make one that classifies an image of a guitar as acoustic, electric, archtop, resonator, or double-neck. Here is the Google Colab notebook:
https://colab.research.google.com/drive/1MLBWSYBRqhLzLj-IooyxthqnRIXoJ14M
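In outline, the recipe is the standard fastai v1 one; a sketch only, not the exact notebook code (the folder layout, transforms, and epoch count here are placeholders):

```python
from fastai.vision import *

# Rough sketch: assumes one subfolder per guitar class under data/guitars.
path = Path('data/guitars')

data = (ImageDataBunch.from_folder(path, valid_pct=0.2,
                                   ds_tfms=get_transforms(), size=224)
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)   # a few epochs of the one-cycle policy
```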

Please suggest any ideas for improving my work :smiley:

Hey everyone. :herb: Let me share a classifier with you that distinguishes between seven different herbs. I wrote a tutorial that sums up the first three lessons of part 1, and I’m sharing the friend link to it (so you don’t need to pay anything to have a look…).
In the article you’ll also find a link to a starter kernel, for those who are interested. Let me know what you think. By the way: I’m so glad that fastai exists! :green_heart:

1 Like

I build sorters.

I have built two of them so far: a LEGO sorter and a Magic: The Gathering card sorter.

The LEGO sorter sorts the parts by mold or color. In its final form it uses a set of standard single-label CNNs based on ResNet34. The CNNs are organized hierarchically: the first one sorts into basic categories (brick, plate, technic, slope, etc.), and further networks do the more specific sorting within each category. I have data for about 150 different molds so far, with at least 500 images for most of them.

I also tried to build just one multi-label CNN to cover the whole hierarchy, but without much success. The common categories worked well; the detailed ones didn’t learn well (obviously, since they have much smaller datasets).
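The hierarchical dispatch itself is simple; a minimal sketch in fastai v1 style (the learner file names and category labels are made up for illustration):

```python
from fastai.vision import load_learner, open_image

# Sketch of the two-stage dispatch: a top-level net picks the basic
# category, then a per-category net picks the exact mold. File names
# and labels here are hypothetical.
top_level = load_learner('models', 'basic-category.pkl')
specialists = {c: load_learner('models', f'{c}.pkl')
               for c in ['brick', 'plate', 'technic', 'slope']}

def classify_part(img_path):
    img = open_image(img_path)
    category = str(top_level.predict(img)[0])           # e.g. 'brick'
    mold = str(specialists[category].predict(img)[0])   # e.g. '3001'
    return category, mold
```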

The MtG sorter originally used a similar hierarchical approach, and it worked quite well. I tried a single CNN too, but with no success: 50,000 categories were too much for ResNet34, and even if it had been successful, I’d have to rebuild the CNN with every new card.
Finally, I adapted @radek’s siamese network from the whale competition. It still sometimes has problems distinguishing between almost identical cards, but it works quite well. The controlling software is connected to current card prices, so I can sort out the cards worth selling.
The dataset contains one image per card.
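At inference time the siamese setup boils down to nearest-neighbour search over precomputed reference embeddings; a rough sketch (all names are placeholders, not my actual code):

```python
import torch
import torch.nn.functional as F

# Sketch: identify a card by comparing its embedding against one
# precomputed reference embedding per card. `embed` (the trained
# network), `ref_embs` ((n_cards, d) tensor) and `ref_names` are
# hypothetical.
@torch.no_grad()
def identify_card(embed, query_img, ref_embs, ref_names):
    q = embed(query_img.unsqueeze(0))          # (1, d) query embedding
    sims = F.cosine_similarity(q, ref_embs)    # similarity to every card
    best = sims.argmax().item()
    return ref_names[best], sims[best].item()
```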

You can see both machines in the video from a LEGO exhibition in Olomouc, Czech Republic.

There are a few more videos on my channel:
https://www.youtube.com/channel/UCfc7oHyDpceKFabTuM9Hzew/

11 Likes

Hello!

I finished lesson 2, and I have to say this is the best course I have ever encountered, hands down. The top-down approach is amazing, and being able to train and deploy deep networks from day 1 is incredible.

I decided to make a model that my friends and family would be able to have some fun with, so I googled pictures of attractive and unattractive people and made a CNN classifier to try to tell the difference between the two. After cleaning up the data (a lot; the searches produced a very noisy dataset), my model achieved about 94% accuracy. I deployed it as a web app on Render using the bear-classification template you provided, and my friends are having lots of fun with it!
Here is the link: hot-or-not.onrender.com

Thank you so much for making this course, and the fastai library!

Here is the link to the notebook. I have no idea how to link it properly; it would be cool if someone could tell me :slight_smile:
https://gist.github.com/Robertleoj/8e64cda6188a7beb993c8e330a28f186

Hi All,

I’m happy to have found this thread; there are a lot of cool projects going on! I’d like to join in and share a small project I built while working through part 1.

My model classifies water as hot or cold from the sound of pouring (it was inspired by an NPR piece I heard some time ago).
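The usual recipe for this kind of audio task is to turn each recording into a spectrogram image and train an ordinary vision model on those. A minimal sketch using librosa (illustrative only; see my build log for the real pipeline):

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Sketch: convert a recording to a mel spectrogram image, so a normal
# fastai image classifier can be trained on the saved PNGs.
def to_spectrogram(wav_path, png_path):
    y, sr = librosa.load(wav_path)
    S_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr),
                               ref=np.max)
    fig = plt.figure(figsize=(4, 4), frameon=False)
    librosa.display.specshow(S_db, sr=sr)
    plt.axis('off')
    fig.savefig(png_path, bbox_inches='tight', pad_inches=0)
    plt.close(fig)
```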

Links:

I kept a fairly detailed build log in my notebooks, and made sure to MIT license my work and CC license the data I collected. Feel free to reference and remix my work if you find it helpful :+1:


Feedback is tremendously appreciated. This is my first ML model, and my first fully self-hosted web server.

I’m sure I missed the mark on best practices in many places. I want to improve this project where I can, so it can act as a quick refresher for me when starting subsequent projects.

5 Likes

Here are my first reactions.
To begin with, this is a wrong use of AI: taking subjective choices and tastes and turning them into absolute results, and then publishing it as a public app.

Further, on testing this, it confirms the racist bias that we already have in pretrained AI models. Please see the results.


It classifies fair-skinned people (Caucasians in general) as “hot” and dark-skinned people as “not”.
I’m not sure you made any sincere effort (or even bothered) to look into your training image sets.
Nothing personal, but these practices should be minimized (or not entertained at all).
Maybe it is just my personal view on this, but it’s distasteful.

5 Likes

Hi robert_deep, hope you are having a wonderful day!

It’s good to see you have completed your first model.

I think it may be helpful if you read the https://www.fast.ai/ homepage; there are many good articles there about responsibility and ethics in the use of AI.

I can’t help thinking that if google.com, microsoft.com, twitter.com, github.com, amazon.com, or any other company deployed your model, some serious concerns would be raised.

Cheers mrfabulous1 :smiley: :smiley:

4 Likes

Hello everyone! This is my first time posting in the forums. I have done many projects, but here is one I did today.
It learns from little bits of color in the image and fills in the rest: a sort of super-powered adaptive fill.
Repository
Feedback would be much appreciated.
Cheers!
PS. I really love your way of teaching @jeremy

3 Likes

Nice idea! What data did you use?

Hello! Thanks a lot :pleading_face:
I used the CelebA dataset and preprocessed it using PIL’s FIND_EDGES filter to retain a tiny bit of color and remove the rest. Unlike Canny, it still keeps a bit of color information, which is what I wanted. @jeremy
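For the curious, the preprocessing step is essentially this (a sketch; the paths are hypothetical):

```python
from PIL import Image, ImageFilter

# Sketch of the preprocessing: FIND_EDGES keeps faint traces of colour
# (unlike a binary Canny map), which is what the model needs as input.
img = Image.open('celeba/000001.jpg').convert('RGB')
edges = img.filter(ImageFilter.FIND_EDGES)

edges.save('inputs/000001.png')   # model input
img.save('targets/000001.png')    # reconstruction target
```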

I’ve been working with multiband image data and fastai2; quick tests for BigEarthNet19 can be found here. Because the dataset is huge, I’ve only tested with smaller samples (20k train, 6k valid, 6k test), and after 25 epochs the results (MultiPre, MultiRec, MultiF1, JaccardMulti, and HammingLoss, all the relevant micro averages) are worse (as expected when randomly selecting only portions of the splits).

For some reason, learn.validate() gives the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-20-631604a2e07b> in <module>
----> 1 learn.validate()

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in validate(self, ds_idx, dl, cbs)
    187             self(_before_epoch)
    188             self._do_epoch_validate(ds_idx, dl)
--> 189             self(_after_epoch)
    190         return getattr(self, 'final_record', None)
    191 

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in __call__(self, event_name)
    106     def ordered_cbs(self, cb_func): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, cb_func)]
    107 
--> 108     def __call__(self, event_name): L(event_name).map(self._call_one)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    360              else f.format if isinstance(f,str)
    361              else f.__getitem__)
--> 362         return self._new(map(g, self))
    363 
    364     def filter(self, f, negate=False, **kwargs):

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    313     @property
    314     def _xtra(self): return None
--> 315     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    316     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    317     def copy(self): return self._new(self.items.copy())

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     39             return x
     40 
---> 41         res = super().__call__(*((x,) + args), **kwargs)
     42         res._newchk = 0
     43         return res

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    304         if items is None: items = []
    305         if (use_list is not None) or not _is_array(items):
--> 306             items = list(items) if use_list else _listify(items)
    307         if match is not None:
    308             if is_coll(match): match = len(match)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in _listify(o)
    240     if isinstance(o, list): return o
    241     if isinstance(o, str) or _is_array(o): return [o]
--> 242     if is_iter(o): return list(o)
    243     return [o]
    244 

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    206             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    207         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 208         return self.fn(*fargs, **kwargs)
    209 
    210 # Cell

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in _call_one(self, event_name)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/learner.py in <listcomp>(.0)
    109     def _call_one(self, event_name):
    110         assert hasattr(event, event_name)
--> 111         [cb(event_name) for cb in sort_by_run(self.cbs)]
    112 
    113     def _bn_bias_state(self, with_bias): return bn_bias_params(self.model, with_bias).map(self.opt.state)

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/callback/core.py in __call__(self, event_name)
     21         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     22                (self.run_valid and not getattr(self, 'training', False)))
---> 23         if self.run and _run: getattr(self, event_name, noop)()
     24         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     25 

/projappl/project_2001325/miniconda3/envs/ibc-carbon/lib/python3.7/site-packages/fastai2/callback/progress.py in after_epoch(self)
     82         iters = range_of(rec.losses)
     83         val_losses = [v[1] for v in rec.values]
---> 84         x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses))
     85         y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses)))))
     86         self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds)

IndexError: list index out of range
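My best guess from the last frame: the graph-drawing callback’s after_epoch assumes nb_batches was filled by at least one training epoch, which never happens when validate() is called on its own. A workaround I’m trying (a sketch, assuming ShowGraphCallback is what’s attached, not a confirmed fix):

```python
# Hypothetical workaround: detach the graph callback before validating,
# since its after_epoch indexes into the empty nb_batches list.
from fastai2.callback.progress import ShowGraphCallback

learn.remove_cbs([cb for cb in learn.cbs if isinstance(cb, ShowGraphCallback)])
learn.validate()
```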

Also, the matplotlib errors are not technically errors but log messages. If anyone can tell me how to disable them, please do.
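Edit: it seems matplotlib routes those messages through Python’s standard logging module, so raising its logger’s level should silence them:

```python
import logging

# matplotlib logs through the standard logging module; raising the
# level of its logger hides the DEBUG/INFO chatter.
logging.getLogger('matplotlib').setLevel(logging.WARNING)
```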

Some more articles. Let me know what y’all think :slight_smile:

A walkthru for writing better functions

Decorators in Python

3 Likes

@dipam7 Nice walkthrough on the writing-functions part. Are you going to expand on it?

Not sure if

read_image(x,mode)

would take more execution time when mode=None, compared to a version that skips the .convert() call entirely.

So if I just want to convert an image into an np.array and not do any mode conversion, mode=None in the function would still force a call to .convert() and take extra time (even though it is doing nothing). Instead, would it be a good idea to check: if mode is None, just do the np.array(x) conversion; if mode is not None, also do the .convert(mode=mode)?

Hey, thanks. I might expand it. What sort of expansion would you expect?

Nice question. However, it won’t make much of a difference. Let’s try it.

The reason it is written the way it is is that we want to give the user a lot of options, but we don’t want to pile up if statements, because that would make the code clunky. We want to keep it neat. But yes, if the overhead were too high, we could prefer an if over an operation that does nothing.
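A quick way to measure it (a sketch; the image path is hypothetical and the exact numbers depend on the image):

```python
import numpy as np
from PIL import Image
from timeit import timeit

img = Image.open('some_image.jpg')   # hypothetical image

# convert() with the image's own mode still copies the image, so this
# times exactly the overhead being discussed.
with_convert = timeit(lambda: np.array(img.convert(img.mode)), number=1000)
no_convert   = timeit(lambda: np.array(img), number=1000)
print(f'with convert: {with_convert:.3f}s, without: {no_convert:.3f}s')
```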

1 Like

Genius!

1 Like

I’ve just finished porting ClassConfusion over to fastai2:

And for those who are unaware of ClassConfusion: it lets you further examine classification models (image and tabular) to see any trends. It also shows the filenames of misclassified images. Colab only at the moment (I’ll support regular Jupyter once I have time) :slight_smile:

Here’s the old documentation to read more:
https://docs.fast.ai/widgets.class_confusion.html

9 Likes

I found some time to play with URL classification in fastai, a relevant application of RNNs to cyber security. The paper I considered is called Classifying Phishing URLs Using Recurrent Neural Networks.
The problem looks like this: given a URL, we try to predict whether it is a phishing URL or a benign one.
The authors were kind enough to share their dataset, roughly 1,000,000 samples for each class.

Approach:
The approach starts simple: an LSTM plus all the goodies provided by fastai, including fit_one_cycle, lr_find, etc., and then training the network.
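In fastai v1 terms, the setup looks roughly like this (a sketch: the character-level tokenizer, DataFrames, and column names are my placeholders, not the exact code behind the screenshots):

```python
from fastai.text import *

# Sketch: treat each URL as a sequence of characters and train an
# LSTM-based classifier on it. train_df/valid_df are assumed to have
# 'url' and 'is_phishing' columns.
class CharTokenizer(BaseTokenizer):
    def tokenizer(self, t): return list(t)   # one token per character

tok = Tokenizer(tok_func=CharTokenizer, pre_rules=[], post_rules=[])

data = TextClasDataBunch.from_df('.', train_df, valid_df, tokenizer=tok,
                                 text_cols='url', label_cols='is_phishing')

learn = text_classifier_learner(data, AWD_LSTM, pretrained=False,
                                metrics=accuracy)
learn.lr_find()
learn.fit_one_cycle(4)
```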

Results
Initial results seem quite interesting, with the F1 score going from 98.76 in the paper to 99.25 for our model. Similarly, all the metrics cited in the paper (AUC, accuracy, precision, recall) are improved, even without the 3-fold cross-validation.

It is very interesting how far we can go using the power of fastai and a straightforward LSTM model.
Of course, more involved techniques can further improve this model. Stay tuned!

10 Likes