Share your V2 projects here

Hi everyone,

Thought I would share my project here. I am working on analyzing Li-ion battery degradation with deep-learning techniques, mostly LSTM-RNN networks. Li-ion batteries have over 10 major degradation mechanisms that may be co-dependent, and together they result in battery capacity fade. However, only three factors (voltage, current, and temperature) are controllable in the operation of commercial Li-ion batteries. I plan to study how these different mechanisms interact (shown in the figure below, from a research paper).

I converted the cycling data from my battery experiments into model inputs and trained an LSTM-RNN network to predict capacity. As Jeremy pointed out, you shouldn't shuffle data in a time-series problem, so I trained on unshuffled data, using the first 20% of cycles for training and validating on the remaining 80% (figure shown). Since the test condition is constant, the degradation mechanisms remain more or less the same, which makes this a fairly easy prediction problem.
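For anyone curious, the split itself just slices the cycle sequence in order rather than sampling randomly; a minimal sketch (the synthetic array is a stand-in for my real data):

import numpy as np

# Synthetic stand-in for per-cycle capacity measurements, in cycle order
capacity = np.linspace(1.0, 0.8, 1000)

# Chronological split: first 20% of cycles for training, remaining 80% for validation
n_train = int(0.2 * len(capacity))
train_y, valid_y = capacity[:n_train], capacity[n_train:]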

However, Li-ion batteries suffer from accelerated degradation when they reach end of life (you may know this from experience, when the battery in your phone or laptop suddenly starts to die fast). My model is unable to capture that end effect yet. I could increase the training share beyond 20% of cycles to resolve this. But if the model could capture the long-term effect accurately from just the first 20% of cycles, it would solve a major problem for the research and battery-development community: reducing battery development and testing time. Battery cycling can take months or even years, which makes it a very costly process (I have personally tested some Li-ion battery samples for as long as 10 months on a single test condition). Several researchers have studied these degradation mechanisms and put forth physical equations that govern them. However, using these physical equations requires parameterizing the cell, which can involve 50+ variables, and these parameters are often not provided by battery manufacturers. I was hoping the LSTM-RNN would somehow be able to recover these parameters from the data (I don't know how, though). The next step, I thought, was to embed the known physical equations in the network by modifying the loss function and defining boundary conditions (penalizing the network for predicting physically impossible results, such as battery capacity increasing).
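To make that concrete, here is a minimal sketch of the kind of physics-guided loss I have in mind (PyTorch; the penalty weight and the simple monotonicity constraint are my own assumptions, not an established formulation):

import torch.nn.functional as F

def physics_guided_loss(pred, target, lam=0.1):
    # pred, target: (batch, sequence) predicted/measured capacity per cycle
    mse = F.mse_loss(pred, target)
    # Physics penalty: capacity should never increase from one cycle to the
    # next, so positive cycle-to-cycle differences are physically impossible
    capacity_gain = (pred[:, 1:] - pred[:, :-1]).clamp(min=0)
    return mse + lam * capacity_gain.mean()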

Looking forward to connecting with folks who are working with RNNs (especially LSTMs). If anyone knows of or has tried physics-guided networks and would be willing to help me out (I have lots of doubts), I would appreciate it. Stay safe!

Thanks
Ravin Singh


Here's the GitHub repo with code, notebooks, data, etc.


Tried line art using a GAN and got some amazing results. After a lot of struggle, I am getting proper lines around the face.

Hope you guys like it.

https://twitter.com/Vijish68859437

(Result images: beard2-horz, kaenu-horz, rob d-horz)


Hi Vijish, hope all is well!

Those results look great. :smiley:

Can you share any tips on what you did to get good results like that?

Cheers mrfabulous1 :smiley: :smiley:


Sorry, what did you do to solve the problem? I am getting the same error as in Binder, and when I “debug” it, I get:

AttributeError: Can't get attribute 'CrossEntropyLossFlat' on <module 'fastai.layers' from '/app/.heroku/python/lib/python3.6/site-packages/fastai/layers.py'>

Also, your app seems to be working, whereas your comment at the end says it doesn't.

Edit: I had the wrong Python version. It works now. https://github.com/tkravichandran/First-DL-Classifier

To the gardeners among us: want to classify a tomato you saw somewhere? Check out tomatoClassifier on GitHub. (OK, the model can only classify three sorts, but hey.)


Hi meinzer1899, hope all is well! I had a play with your model. Good work!

Cheers mrfabulous1 :smile: :smiley:


Hello, I have published a notebook that attempts to use all of the fastai v2 techniques from lessons 1 to 6 (the idea behind it was to help my students gain practical insight into these lessons and into the use of deep learning models in real life).

There is also a Medium post that explains the project:


Since it's almost Halloween, I made a Harry Potter character recognizer app. Here's the repo. It should be able to match

people = ['Harry Potter', 'Hermione Granger', 'Ron Weasley',
          'Dobby the House Elf', 'Minerva McGonagall', 'Ginny Weasley',
          'Dudley Dursley', 'Lord Voldemort', 'Luna Lovegood',
          'Bellatrix Lestrange', 'Remus Lupin']

within a 15% error rate. I wanted to try something with a lot of categories. I could test with more data, but it's more accurate than I thought it would be. Bing's image search is fun.
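In case it helps anyone trying something similar, the core fastai pipeline for a many-category classifier like this is short; a rough sketch, assuming the downloaded images sit in one folder per character (the path and hyperparameters are illustrative, not taken from the repo):

from fastai.vision.all import *

path = Path('harry_potter_images')  # assumed layout: one subfolder per character
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224), batch_tfms=aug_transforms())
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)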


I created my first model using fastai. It's pretty simple and straightforward: you upload an image of the interior or exterior of a car (it only works for BMW/Mercedes/Audi), and it tells you which brand it thinks it is and whether it's the interior or the exterior. You can try it out here:


Just click on the Binder button (it takes some time to launch, so some patience is needed).
That's pretty much it. I'll also be writing about it in my first blog post here (not written yet):
https://igrek-code.github.io/
(Screenshots: the classifier in action on exterior and interior images.)

My first blog post using fastpages: a performance comparison between fastai's ULMFiT and a basic PyTorch model on StockTwits sentiment classification. The results are close, but fastai's ULMFiT wins by about 3%.
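For readers who haven't used ULMFiT in fastai v2, the classifier side is only a few lines; a rough sketch with a toy DataFrame standing in for the StockTwits data (column names and hyperparameters are illustrative, not the post's actual code):

from fastai.text.all import *
import pandas as pd

# Toy stand-in for the StockTwits messages and labels
df = pd.DataFrame({'body': ['to the moon!', 'dumping my shares'] * 50,
                   'sentiment': ['bullish', 'bearish'] * 50})

dls = TextDataLoaders.from_df(df, text_col='body', label_col='sentiment', valid_pct=0.2)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)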


Marcos, congrats! When I went to access the notebooks at the link near the end of the paper (Marcosuff/projects), I got a 404. Is there another way to access them?

This is really cool! Do you have a notebook you would be willing to share?

We recently managed to deploy a fastai-trained PyTorch model in TorchServe. In case anyone runs into a similar situation and is looking for a recipe:

Let me know if you found it useful.


I've added TensorBoard Projector support (currently word embeddings and image embeddings) to fastai's TensorBoard integration. Here's a notebook that shows how to use the TensorBoardProjectorCallback to visualize image embeddings in TensorBoard Projector. You can run the notebook in Colab.

Docs: https://docs.fast.ai/callback.tensorboard
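A rough usage sketch, distilled from those docs (the learner, file list, and log directory are placeholders):

from fastai.callback.tensorboard import TensorBoardProjectorCallback

# Run inference with the callback attached; it writes the image embeddings
# to the log directory for TensorBoard Projector
dl = learn.dls.test_dl(files, with_labels=True)
cbs = [TensorBoardProjectorCallback(log_dir='tmp/projector')]
_ = learn.get_preds(dl=dl, cbs=cbs)
# then launch: tensorboard --logdir tmp/projector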

If you have any questions let me know :slight_smile: .

Florian


Did you try enabling the self-attention flag on the U-Net? It greatly helps performance, but I was unable to use it for inference: "illegal memory access" errors everywhere.

Do you mind sharing your code snippet and the full error trace?

I ran the following with no issues:

FastAI Training

config = unet_config(self_attention=True, act_cls=Mish)
learn = unet_learner(dls, resnet50, metrics=acc_camvid, config=config)
learn.fine_tune(20)

PyTorch Model Definition

class DynamicUnetDIY(SequentialEx):
    "Create a U-Net from a given architecture."

    def __init__(
        self,
        arch=resnet50,
        n_classes=32,
        img_size=(96, 128),
        blur=False,
        blur_final=True,
        y_range=None,
        last_cross=True,
        bottle=False,
        init=nn.init.kaiming_normal_,
        norm_type=None,
        self_attention=True,  # attention flag enabled here
        act_cls=Mish,  # matching the Mish activation used in training
        n_in=3,
        cut=None,
        **kwargs
    ):
...

TorchServe

>>> time http POST http://127.0.0.1:8080/predictions/fastunet_attention @sample/street_view_of_a_small_neighborhood.png

HTTP/1.1 200
Cache-Control: no-cache; no-store, must-revalidate, private
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Pragma: no-cache
connection: keep-alive
content-length: 16413
x-request-id: 35600873-8657-4998-b822-26340bf2bd1a

{
  "base64_prediction": "GhoaGhoaGhoaGh...RERERERERERERHh4eHh4eHh4eHh4e"
}

real    0m0.386s
user    0m0.309s
sys     0m0.030s

Hey, thanks for your answer and the awesome tutorial.
My issue appears to be solved by setting:

torch.backends.cudnn.benchmark = False

I have another question: why do you reconstruct the U-Net in your inference code instead of just using torch.load? In my fastai code I do:

torch.save(learn.model, 'model.pt')

and then in plain PyTorch just:

model = torch.load('model.pt')

With this, the U-Net comes packaged in the file.


Thanks! Great tips that will be very helpful to anyone who runs into the same issue.

As to the reconstruction, the reasons are:

  1. For TorchServe eager mode, it requires an explicit model definition:
--model-file MODEL_FILE
Path to python file containing model architecture. This parameter is mandatory for eager mode models. The model architecture file must contain only one class definition extended from torch.nn.modules.
  2. According to the answer quoted below (the alternative it recommends is sketched after the quote):

This way is still not bullet proof and since pytorch is still undergoing a lot of changes, I wouldn’t recommend it.
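For comparison, the pattern that answer recommends is to save only the weights and rebuild the architecture in code; a minimal sketch using the DynamicUnetDIY class above (file names are placeholders):

import torch

# Save only the weights, not the pickled model object
torch.save(learn.model.state_dict(), 'fastunet.pth')

# At inference time, rebuild the architecture and load the weights
model = DynamicUnetDIY()
model.load_state_dict(torch.load('fastunet.pth', map_location='cpu'))
model.eval()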

Good to know. I do think that answer is pretty old, though.

  1. I was able to run TorchServe without passing the model definition, only passing the saved model as I showed you above. As you say, it's probably not the right way, so I converted the U-Net to TorchScript.
  2. Do you have any tricks for making inference work in FP16?

I am curious how the autonomous-driving people do segmentation at 30 FPS on FHD images. These U-Net-type models are heavy as hell: the forward pass (eval mode) for the resnet34 U-Net on one FHD image takes 15 GB of VRAM!!
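For reference, the kind of FP16 inference I have in mind is roughly this (a sketch with placeholder names, using PyTorch's automatic mixed precision):

import torch

model.eval().cuda()  # model: the trained U-Net (placeholder)
with torch.no_grad(), torch.cuda.amp.autocast():
    # img_batch: a preprocessed FHD image tensor (placeholder)
    out = model(img_batch.cuda())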