Exposing DL models as APIs/microservices


(Cedric Chee) #22

I explored this area further while building a real-world data product recently. The design was inspired by Dave’s posts.

Application System Architecture for Data-driven Product

Since our application’s user interface exists to demonstrate what is possible, it needs to be loosely coupled to the trained models that do the core predictive tasks.

To preserve a bright-line separation of concerns, we break the overall application down into several constituent pieces. Here’s an extremely high-level view of the component hierarchy:

  • The job of the prediction service (via the trained models it wraps) is to implement the core predictive tasks and expose them for use (a rough sketch follows this list). The models themselves shouldn’t need to know about the prediction service, which in turn shouldn’t need to know anything about the interface application.
  • The job of the interface backend (API) is to ferry data back and forth between the client browser and the model service, handle web requests, take care of computationally intensive transformations not appropriate for frontend JavaScript, and persist user-entered data to a data store. It shouldn’t need to know much about the interface frontend, but its main job is to relay data for frontend manipulation, so it’s acceptable for this part to be less abstractly generalizable than the prediction service.
  • The job of the interface frontend (UI) is to demonstrate as much value as possible by exposing functionality that the models make possible in an intuitive and attractive format.
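
To make the separation concrete, here’s a minimal sketch of the prediction service. Flask, the pickled PyTorch model, and the JSON contract are my own assumptions for illustration, not part of the design itself:

```python
# Minimal sketch of the prediction service: it wraps a trained model behind
# a small HTTP API and knows nothing about the UI. The model path and the
# JSON contract below are placeholders.
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
model = torch.load("model.pt", map_location="cpu")  # hypothetical artifact
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # The caller sends raw features as JSON; the service only deals with
    # tensors in and predictions out.
    features = request.get_json()["features"]
    with torch.no_grad():
        output = model(torch.tensor([features], dtype=torch.float32))
    return jsonify({"prediction": output.squeeze(0).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The interface backend then only ever talks to that HTTP contract, for example:

```python
# Sketch of the interface backend relaying a request to the prediction
# service. It knows only the service's URL and JSON contract; both the URL
# and the field names here are placeholders.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
PREDICTION_SERVICE_URL = "http://prediction-service:5000/predict"

@app.route("/api/score", methods=["POST"])
def score():
    payload = request.get_json()
    # Any heavy transformation not appropriate for frontend JavaScript
    # would happen here before calling the prediction service.
    resp = requests.post(PREDICTION_SERVICE_URL,
                         json={"features": payload["features"]})
    resp.raise_for_status()
    # Relay the prediction back to the frontend (persistence omitted here).
    return jsonify(resp.json())
```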

Here’s a visual representation of this architecture:


(Even Oldridge) #23

@cedric You should take a look at clipper.ai, which @QWERTY1 recently shared. It’s out of the Berkeley RISE lab and is a very well-thought-out framework for serving models as an API. The website doesn’t really do the framework justice in my mind, and the videos are definitely worth watching. It’s very similar to what you’ve laid out, but has a few more details worked out. It looks like you’ve thought of some other aspects as well, so it may be worthwhile joining forces and contributing your ideas/work.

I’m currently trying to convince my company to adopt it for model serving so that we can work on it and help improve it. So far I’ve been very impressed with what it does and with their roadmap.
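
If it helps anyone get a feel for it before watching the videos, the basic flow from Clipper’s quickstart looks roughly like this (from memory, so treat the exact names as approximate and check their docs):

```python
# Rough sketch of the Clipper quickstart flow: start Clipper on local Docker,
# register an application (a REST endpoint), deploy a Python closure as a
# model, and link the two. Verify names/signatures against the current docs.
from clipper_admin import ClipperConnection, DockerContainerManager
from clipper_admin.deployers import python as python_deployer

clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.start_clipper()

# An application is a query endpoint with an input type, a default output
# for requests that miss the latency objective, and an SLO in microseconds.
clipper_conn.register_application(name="hello-world", input_type="doubles",
                                  default_output="-1.0", slo_micros=100000)

def feature_sum(xs):
    # Toy "model": Clipper sends a batch of inputs and expects one string
    # prediction per input.
    return [str(sum(x)) for x in xs]

# Package the closure into a model container and deploy it.
python_deployer.deploy_python_closure(clipper_conn, name="sum-model",
                                      version=1, input_type="doubles",
                                      func=feature_sum)

# Route queries for the application to this model.
clipper_conn.link_model_to_app(app_name="hello-world", model_name="sum-model")
```

The application is then queried over plain REST, which keeps the serving layer decoupled from the model code, much like the architecture you described.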


(Cedric Chee) #24

Hi Even, thank you for sharing. That sounds very interesting. This is my first time hearing about clipper.ai. I have seen Polyaxon before. I have glanced through clipper.ai’s website and you are right, it’s a bit light on information. With that in mind, I headed over to their repository and took a quick peek at some of the code/Dockerfiles there. So far, it leaves me with the impression that it’s worth looking at. I plan to take a more serious look at it soon and see if I can contribute in some way if time allows.

I see. Good to hear.


(Even Oldridge) #25

Check out the video in the other link. It gives a much more solid overview. Definitely seems worth exploring in detail.


(Ryan Michael Fraser) #26

I found this video, which was presented at the AWS London Summit 2018. It doesn’t have a lot of views, so I decided to share it in this post. I think it will be really useful for anyone trying to deploy their fastai models (on AWS at least):

Building, Training and Deploying Custom Algorithms Such as Fast.ai with Amazon SageMaker:
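
For a rough idea of what the video covers in code: once a custom fastai training/serving image is pushed to ECR, the flow in the SageMaker Python SDK (v1-era parameter names; every identifier below is a placeholder) looks something like this:

```python
# Sketch of the bring-your-own-container flow in the SageMaker Python SDK.
# The image URI, role ARN, and S3 paths below are all placeholders.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_name="123456789012.dkr.ecr.us-east-1.amazonaws.com/fastai-custom:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",
)

# Train against data in S3; the container's own entry point does the work.
estimator.fit("s3://example-bucket/fastai-training-data")

# Stand up a real-time HTTPS endpoint backed by the same image.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m4.xlarge")
```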


#27

Thanks for your great tutorial! I successfully followed your plan and was able to deploy a skin mole detection web app (http://104.248.146.179/) on DigitalOcean. Two things bothered me a little: 1) I used ResNeXt50 and had to copy the model into the app folder to get it to work; 2) remember to check whether OpenCV can be imported properly on DO; I had to install some libs to get it working.
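
On point 2, a quick way to confirm the import works before wiring OpenCV into the app:

```python
# Sanity check that OpenCV imports cleanly on a fresh droplet. If it fails
# with an ImportError about a missing shared object (commonly libSM.so.6 or
# libXext.so.6 on slim images), install the corresponding system libraries
# with the OS package manager; the exact package names depend on the distro.
import cv2
print(cv2.__version__)
```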

P.S. The GitHub Student Pack includes $50 of credit for DigitalOcean.
Web app GitHub: https://github.com/zeochoy/skinapp