Help wanted!: JavaScript interface to my model API

Hi gang! I’ve created a simple dog/cat classifier here:

I’d like to show off what can be done by using the API from a JavaScript app:

I’m particularly interested in showing what can be done with a static, pure HTML/JS app that just lives on GitHub Pages. Would any of you be interested in trying to build a cool app frontend to this model? Or create a similar model of your own and build a JS frontend for it?

If there are any successful attempts, I’ll show them off at the next class! Here are the interfaces made so far:

13 Likes

I’ve also made one for pet classification (into 37 dog and cat breeds) if that inspires more ideas:

2 Likes

It’s nothing to show off, just a start that:

  • captures an image from the camera
  • displays the predicted label

https://misza222.github.io/hf_api_predict/

Perhaps someone will take it from there :wink:

4 Likes

So Jeremy, I’ve made an attempt on this. I wanted to keep things as minimal as possible, because building JS apps from scratch feels like a lot of work.

Since I already have a food-101 app on HF spaces, I figured I could hack something to work on that.

Everything is contained in a single HTML+JS file (no libs, no frameworks, no old-browser support, etc.). You should be able to save the page, change the apiURL defined in the HTML file to point to your pets app, save it, open the file in another tab, and it should work.

Here’s the URL : https://suvash.github.io/very-basic-gradio-api-app/

And yes, the entirety of the HTML+JS file is available here for anyone who wants to extend/change/remix it :beers: https://github.com/suvash/very-basic-gradio-api-app/blob/main/docs/index.html
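If it helps to see the wire format these single-file apps are speaking, here is a rough Python sketch of a typical Gradio image-classification call. The URL is a placeholder, and the /api/predict/ path, the base64 data-URI payload, and the "confidences" response field are assumptions based on common Gradio Spaces of that era, so check your own Space before relying on them:

```python
import base64
import json
from urllib.request import Request, urlopen

# Placeholder -- point this at your own Space's predict endpoint.
API_URL = "https://your-space.hf.space/api/predict/"

def build_payload(image_bytes, mime="image/jpeg"):
    """Wrap raw image bytes as the base64 data-URI list Gradio expects."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {"data": [f"data:{mime};base64,{encoded}"]}

def parse_response(body):
    """Pull (label, confidence) pairs out of a Gradio label response."""
    result = json.loads(body)["data"][0]
    return [(c["label"], c["confidence"]) for c in result["confidences"]]

def classify(image_bytes):
    """POST an image to the (assumed) Gradio endpoint and parse the reply."""
    req = Request(
        API_URL,
        data=json.dumps(build_payload(image_bytes)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return parse_response(resp.read().decode("utf-8"))
```

The JS versions in the apps above do the same thing with fetch and FileReader; the payload and response shapes are what matter.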

6 Likes

Here’s another starter demo (source), very similar to the ones above:

  • allows uploading a file (desktop) or taking a photo (mobile)
  • displays the predicted label and probability

It could use a lot of styling, and some preloaded examples.

3 Likes

100% accuracy! Love the use of the camera

Thanks gang! Here are two more:

I’ll collect them all in a list in the top post.

2 Likes

Interesting results.

I uploaded my profile image.

The pet classifier says I’m a dog with 96% confidence. Then, there is the breed classifier, which is almost certain I’m a Scottish Terrier (I’m flattered, that’s quite a noble dog breed).
The food classifier says I’m a beignet, but only with 53% confidence. Disappointing: everyone knows I’m a sweet person.

2 Likes

Here’s my sample app: fast.ai + 🤗 API Demo. I’m an HTML/JS novice, but I tried to make it look pretty by incorporating a Bootstrap template. It can do multiple images at once, which I think is a little unique. It should be simple to point at a new model by just changing the API URL, and it should just work as long as the request/response format is the same. I did notice the API sometimes errors out when uploading multiple images. It appears to be an API issue based on the error message, but it could be the web app.

Edit:
I added the ability to select from multiple APIs or enter your own, and added a % next to each class prediction. You may have to ‘Hard Reload’ the page in your browser to clear its cached copy and see the changes.

4 Likes

:sweat_smile: I know what you mean. I’ve been messing around with taking random photos while walking around the neighbourhood, with some very interesting food results. 🫠

1 Like

On iPhone/iOS 15.4.1 it only turns on the front-facing camera in stream mode, and there doesn’t seem to be a way to actually take a picture. Not sure why.

Here’s a nice demo that also has a text Q&A model built in:

https://gettoknowyourpet.com/

I just created another app using the brand-new PyScript library, accessing the dogs/cats Hugging Face API provided by Jeremy. It is hosted on GitHub Pages here:
https://matdmiller.github.io/fastai-huggingface-sample-web-app1/pyscript-classifier.html
and the repo can be found here:

PyScript enables running Python code within your browser, on the client side. This is different from Jupyter, where the Python code is executed on a server. The product was announced yesterday by Anaconda.

The project is still in alpha, so the developer experience can be pretty challenging at times when doing more complex things, but I’m sure it will get better over time. It’s still pretty amazing that this is possible. If you want to try it out yourself, you can find more information here:

https://pyodide.org/en/stable/usage/api/python-api.html

A few things I’ve learned about PyScript and GitHub Pages so far:

  • A wide variety of packages are available in PyScript, but not every package: for example, the requests library does not work due to browser limitations, and PyTorch is not available.
  • When accessing the uploaded files from Python in this app, you need to use the JavaScript FileReader function, which is asynchronous. This forces you to write your Python code in a more JavaScript-y way. I tried creating a Python function to fully encapsulate the asynchronous part by adding a while loop that waits for all of the files to be returned, but that immediately crashed my browser. I also tried adding a simple sleep as a hacky workaround, but that function does not seem to work in PyScript.
  • You must use the PyScript/Pyodide functions, or call the JavaScript fetch function, when making web requests; the Python requests library does not work. For this app, the API requires a POST, which is not supported by the synchronous functions PyScript provides (e.g. open_url), so you need to implement it asynchronously using the pyfetch function. Again, I was unable to figure out how to wrap this in a Python function to make it synchronous.
  • When hosting this on GitHub Pages, I found I was not able to use relative paths for referencing the .py files like I could locally. I had to replace the relative paths with the full GitHub Pages path before pushing the repo back to GitHub to get it to work.
  • You need to run a mini web server on your local machine when testing your build locally; simply opening the .html files in your browser from your file system does not work. You can run a simple HTTP server by navigating to the directory and running python3 -m http.server 8000
  • You can call Python functions directly from within the HTML page when a button is clicked, for example like this: <button type="submit" pys-onClick="my_python_func">Button</button>. Note that instead of the onclick attribute you use pys-onClick. The onchange attribute, which I would have liked to use with the file selector input, does not seem to be implemented in PyScript yet.
  • You can import js in your Python file to access functions available in JavaScript and to access the DOM. For example, js.document.getElementById(element_id) gets you a DOM object inside Python, which you can then manipulate from Python. Many of the JavaScript methods are available on these objects and can be called from your Python script.
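Putting a few of those pieces together, here is a minimal sketch of the pattern (a markup fragment, not a tested page): the pys-onClick wiring, an async pyfetch POST, and DOM access via the js module. The Space URL and payload are placeholders, and the tag/attribute names follow the early-alpha PyScript syntax described above, which may change:

```html
<html>
  <head>
    <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />
    <script defer src="https://pyscript.net/alpha/pyscript.js"></script>
  </head>
  <body>
    <p id="result">No prediction yet.</p>
    <!-- pys-onClick (not onclick) wires the button to a Python function -->
    <button type="submit" pys-onClick="predict">Classify</button>
    <py-script>
import json
import js                        # access to the DOM from Python
from pyodide.http import pyfetch  # the synchronous open_url cannot POST

async def predict(*args):
    # POST to the Gradio endpoint (URL and payload are placeholders)
    resp = await pyfetch(
        "https://your-space.hf.space/api/predict/",
        method="POST",
        headers={"Content-Type": "application/json"},
        body=json.dumps({"data": ["...base64 image data URI..."]}),
    )
    result = await resp.json()
    js.document.getElementById("result").innerText = str(result["data"][0])
    </py-script>
  </body>
</html>
```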

Overall this seems to be a really cool project by Anaconda! I’m excited to see where they take it!

9 Likes

Since I’m not much of a developer, I try to avoid writing extra code if possible. What I liked about HF Spaces + Gradio is that it lets me host a trained model with minimal effort.

I’m trying to understand: in what circumstances would a static HTML/JS app hosted on GitHub, which still needs to talk to the HF+Gradio backend, be preferable? Are there things that Gradio cannot do? I’m thinking maybe it doesn’t allow taking pictures with your camera, and that’s where the extra value is for a JS app?

1 Like

Great question @mike.moloch - we’ll be discussing it in tomorrow’s lesson! :slight_smile:

4 Likes

Really happy to see a pyscript example already - thanks Mat!

4 Likes

That was fast! :clap: :clap: :clap:

2 Likes

Is this still working? I am getting a CORS policy block after loading an image of a dog.

It is not working because of the CORS issue, and I wasn’t able to find out how to fix it. I believe all new Spaces should not have this issue. If you want to access the Gradio app directly, you can here, and it does work: