Deploy deep learning models, with a production example for both web and mobile. Our example uses fastai, but an exported model file from any other deep learning library, such as Keras or TensorFlow, will work too.
My push to Heroku keeps failing because the slug size is 700 MB while the maximum permitted is 500 MB. This is caused by the large PyTorch wheel (.whl) file. Is there a workaround or a fix for this issue?
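A common workaround (a sketch, not an answer from this thread; the version pins below are illustrative, so substitute the ones your project needs) is to depend on the CPU-only PyTorch wheels, which are a few hundred megabytes smaller than the default CUDA builds and are all a Heroku CPU dyno can use anyway. In `requirements.txt`, something like:

```
# requirements.txt — CPU-only PyTorch wheels (version numbers are illustrative)
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.7.1+cpu
torchvision==0.8.2+cpu
fastai
```

The `+cpu` local version suffix selects the wheel without bundled CUDA libraries, which usually brings the slug back under the 500 MB limit.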
I was following this guide. Under the src/static/js folder I have a file named main.jsx. Inside this file, I have this code block:
(async () => {
  const response = await fetch('/config');
  // const response = await fetch(process.env.PUBLIC_URL + `/config`);
  const body = await response.json();
  window.APP_CONFIG = body;
  // Use ReactDOM.render to show the component in the browser
  ReactDOM.render(
    <App/>,
    rootElement
  );
})();
It seems to work fine in the local environment, and even on Heroku. However, after I deployed to GKE, the code above does not work: it cannot load the config, which comes from a content.yaml file. Please give me advice.
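One way to narrow this down (a sketch, not the guide's code: `loadConfig` and the injected `fetchFn` are names introduced here for illustration) is to check `response.ok` and surface the HTTP status, instead of letting `response.json()` fail on an error page. On GKE the usual culprit is the Ingress or Service not routing `/config` to the backend that reads content.yaml, which shows up here as a 404 or 502:

```javascript
// Sketch: fail loudly when /config is unreachable instead of silently
// breaking the render. fetchFn is injected so it can be tested off-browser.
async function loadConfig(fetchFn, url = '/config') {
  const response = await fetchFn(url);
  if (!response.ok) {
    // A 404/502 here on GKE usually means the Ingress/Service is not
    // routing `url` to the backend that serves content.yaml.
    throw new Error(`Config request to ${url} failed: HTTP ${response.status}`);
  }
  return response.json();
}
```

In main.jsx you would then set `window.APP_CONFIG = await loadConfig(fetch);` before calling `ReactDOM.render`, and check the browser console and the GKE Ingress/Service path rules if the error fires.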