I’m on lesson two of the course and I have successfully trained and deployed my model to my Hugging Face Space. However, I am unable to interact with the Gradio API that was set up.
First I tried the frontend code provided in the course example, but it gives CORS errors, and adding mode: "cors" to the fetch options doesn’t fix the issue. Furthermore, this seems to be an outdated implementation according to the Gradio docs.
So I tried the implementation from the Gradio docs and wrote it like this:
import { SpaceStatus, client } from "@gradio/client";

const resultSpan = document.getElementById("result") as HTMLSpanElement;
const image = document.getElementById("image") as HTMLInputElement;
const error = document.getElementById("error") as HTMLHeadingElement;

let app: any;
try {
  // Connect to the Space and show its status on the page while it loads
  app = await client("Mekbib/terrain", {
    status_callback: (space_status: SpaceStatus) =>
      (error.textContent = `Hugging Face is: ${space_status}`),
  });
  const appInfo = await app.view_api();
  console.log(appInfo);
} catch (err) {
  console.log(err);
}
image.addEventListener("input", sendFile);

async function sendFile() {
  if (image.files) {
    const file = image.files[0];
    try {
      const response = await app.predict("/predict", [file]);
      if (typeof response == "object" && response) {
        if ("error" in response)
          resultSpan.textContent = response.error as string;
        else if ("data" in response)
          resultSpan.textContent = response.data as string;
      }
    } catch (err) {
      console.log(err);
    }
  }
}
The response I get is a non-descriptive error:
{
  "type": "status",
  "endpoint": "/predict",
  "fn_index": 0,
  "time": "2024-03-19T16:45:55.636Z",
  "queue": true,
  "message": null,
  "stage": "error",
  "success": false
}
But looking at the logs in my Hugging Face Space, there is this error:
pydantic_core._pydantic_core.ValidationError: 1 validation error for FileData
path
  Field required [type=missing, input_value={}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.6/v/missing
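If I read that error right, the server is validating the image payload as a FileData model and what arrived was an empty dict, so the required path field is missing. Here is a minimal sketch of my understanding (assuming the Space runs Gradio 4.x; the import path is my assumption and may differ by version):

# Minimal sketch, assuming Gradio 4.x, where an Image input is parsed into a
# pydantic FileData model (import path is an assumption).
import pydantic
from gradio.data_classes import FileData

# What the /predict endpoint seems to expect for an image input:
ok = FileData(path="some_uploaded_file.png")

# What the log implies actually arrived: an empty dict (input_value={}),
# which is missing the required "path" field.
try:
    broken = FileData(**{})
except pydantic.ValidationError as e:
    print(e)  # 1 validation error for FileData: path Field required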
So it seems that this is a validation error due to the type of file I sent. The parameter accepts a File, a Buffer, or a Blob. I have tried the Buffer type using:
const file = image.files[0];
const reader = new FileReader();
reader.readAsArrayBuffer(file);
reader.onload = async () => {
  try {
    // Send the raw ArrayBuffer instead of the File object
    const response = await app.predict("/predict", [reader.result]);
    if (typeof response == "object" && response) {
      if ("error" in response)
        resultSpan.textContent = response.error as string;
      else if ("data" in response)
        resultSpan.textContent = response.data as string;
    }
  } catch (err) {
    console.log(err);
  }
};
but I get the same error. I have also tried a Blob, using the example from the docs:
const response_0 = await fetch("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png");
const exampleImage = await response_0.blob();

const app = await client("Mekbib/terrain");
const result = await app.predict("/predict", [
  exampleImage, // blob in 'img' Image component
]);
And I still get the same error. I have also checked in the debugger that none of these values were null before they were sent to the API.
Another possibility is that the problem is in the Gradio interface for the model, perhaps with the file types that PILImage expects? Here is the app code on the Space:
from fastai.vision.all import *
import gradio

learn = load_learner('export.pkl')
labels = learn.dls.vocab

def predict(img):
    img = PILImage.create(img)
    pred, pred_idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

image = gradio.Image(height=512, width=512)
label = gradio.Label(num_top_classes=3)
gradio.Interface(fn=predict, inputs=image, outputs=label).launch(share=True)
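For reference, this is a variant of the interface I was thinking of trying, a minimal sketch using the Image component's type parameter (type="pil" should make Gradio hand predict a PIL image directly instead of the default numpy array), in case the input type is what PILImage is tripping over:

# Minimal sketch of an alternative app.py: ask Gradio for a PIL image directly
# via type="pil" (the default for gradio.Image is a numpy array).
from fastai.vision.all import *
import gradio

learn = load_learner('export.pkl')
labels = learn.dls.vocab

def predict(img):
    # img arrives as a PIL.Image.Image, which PILImage.create accepts as-is
    img = PILImage.create(img)
    pred, pred_idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

image = gradio.Image(type="pil", height=512, width=512)
label = gradio.Label(num_top_classes=3)
gradio.Interface(fn=predict, inputs=image, outputs=label).launch(share=True)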