[Project] Stanford-Cars with fastai v1

Sorry I didn’t explain better. I’m just really tired right now.
Where I used to get confidence scores, I now get these weird numbers. Is this by design? They were between 0 and 1 before.

Are you doing anything to the raw output predictions from the model? Can you post the code of your predictor script?

I’m using this code:

```
from starlette.applications import Starlette
from starlette.responses import JSONResponse, HTMLResponse, RedirectResponse
from fastai import *
from fastai.vision import *
import torch
from pathlib import Path
from io import BytesIO
import sys
import uvicorn
import aiohttp
import asyncio

app = Starlette()
path = Path("data")
classes = [...]  # truncated list of classes
learn = load_learner(Path("data/"), "export.pkl")

@app.route("/")
def form(request):
    return HTMLResponse("""
        <h3>This app will classify cars</h3>
        <form action="/upload" method="post" enctype="multipart/form-data">
            Select image to upload:
            <input type="file" name="file">
            <input type="submit" value="Upload Image">
        </form>
        Or submit a URL:
        <form action="/classify-url" method="get">
            <input type="url" name="url">
            <input type="submit" value="Fetch and analyze image">
        </form>
    """)

@app.route("/upload", methods=["POST"])
async def upload(request):
    data = await request.form()
    img_bytes = await (data["file"].read())
    return predict_image_from_bytes(img_bytes)

@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
    img_bytes = await get_bytes(request.query_params["url"])
    return predict_image_from_bytes(img_bytes)

async def get_bytes(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.read()
        
def predict_image_from_bytes(img_bytes):
    img = open_image(BytesIO(img_bytes))
    # learn.predict returns (predicted category, class index, output tensor)
    pred_class, pred_idx, outputs = learn.predict(img)
    return JSONResponse({
        "scores": sorted(
            zip(learn.data.classes, map(float, outputs)),
            key=lambda p: p[1],
            reverse=True
        )[:5]
    })

if __name__ == "__main__":
    if "serve" in sys.argv:
        uvicorn.run(app, host="0.0.0.0", port=80)
```
I've tried adding up the confidences across all classes for each prediction, and the sums come out as seemingly random numbers.
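If the exported learner uses a loss function fastai doesn't recognise, `learn.predict` can return the raw model outputs (logits) rather than softmax probabilities, which would explain values outside 0–1 that don't sum to 1. A minimal sketch of normalising them manually, assuming `outputs` is the tensor returned by `learn.predict` in the script above:

```
import torch

# Raw scores (logits) -> probabilities between 0 and 1 that sum to 1
probs = torch.softmax(outputs, dim=0)

scores = sorted(
    zip(learn.data.classes, map(float, probs)),
    key=lambda p: p[1],
    reverse=True
)[:5]
```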

@morgan I’ve published a Medium story on fp16 vs fp32 with my tests and observations, building on another Medium article. Sorry I couldn’t add graphs; I didn’t save the raw data when I ran the tests.

2 Likes

@morgan Lukemelas moved the hosting for EfficientNet, which broke the old version. I’ve made a PR to your repo that relies on the new version and still uses Mish.

1 Like

Nice, thanks for the write-up and the PR!

Hello @morgan. I’m having an issue using the method on another dataset. Whenever I attempt to train my model, with both EfficientNet-b3 and EfficientNet-b7, I always get the following error:

```
RuntimeError                              Traceback (most recent call last)
<ipython-input-21-96a7be495d6c> in <module>()
 18                ).to_fp16()
 19 
---> 20 fit_fc(learn, tot_epochs=40, lr=15e-4, start_pct=0.10, wd=1e-3, show_curve=False)
 21 
 22 learn.save(f'9_{exp_name}_run{run_count}')

8 frames
/content/MEfficientNet_PyTorch/efficientnet_pytorch/utils.py in forward(self, x)
132     def forward(self, x):
133         x = self.static_padding(x)
--> 134         x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
135         return x
136 

RuntimeError: "unfolded2d_copy" not implemented for 'Half'

I am running this in Google Colab.

My Google searches don’t turn anything up.

Thank you!

1 Like

It looks like that particular layer (or op) isn’t implemented for half precision yet, hence the 'Half' in the error. Try turning off fp16 and it should work.
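For example (a minimal sketch, assuming the `learn` and `fit_fc` call from your traceback):

```
# Convert an fp16 learner back to full precision before training...
learn = learn.to_fp32()

# ...or simply leave off the .to_fp16() call when building the learner.
fit_fc(learn, tot_epochs=40, lr=15e-4, start_pct=0.10, wd=1e-3, show_curve=False)
```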

3 Likes

@muellerzr Thanks! It worked!

1 Like

I tried to deploy the model on a CPU-only environment, but I am getting the following problem when I try to do inference from the model:

```
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same
```

From this line:

```
File "/home/vdaita/classifer/MEfficientNet_PyTorch/efficientnet_pytorch/utils.py", line 134, in forward
x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
```

What am I doing wrong?
The traceback doesn’t say which line of my own code causes the issue.

1 Like

Did you remember to convert your model back to full precision before exporting? (i.e. `learn.to_fp32()`)
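For example (a minimal sketch of the export side, assuming the same fastai v1 learner):

```
# Convert back to full precision before exporting, so the saved weights
# aren't half precision when loaded on a CPU-only machine
learn = learn.to_fp32()
learn.export('export.pkl')

# On the CPU-only machine, load the exported learner as usual
learn = load_learner('data/', 'export.pkl')
```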

2 Likes