It is important to get something/anything working first.
Please follow mine and other people's previous posts on this site about using !pip list.
Get your app working on your local Anaconda setup first.
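For example, I record the exact versions in the training notebook before writing the requirements.txt (the grep filter below is just illustrative):

```python
# Run in the training notebook to record the exact library versions in use
!pip list | grep -iE "fastai|torch"

# Or equivalently, in plain Python:
import fastai, torch
print(fastai.__version__, torch.__version__)
```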
After completing the above, it works first time on Docker Desktop, render.com, and other sites using Docker.
Below are five of my 20-odd models, built using the above approach and following the instructions in the teddy bear repo on GitHub.
These instructions enabled me to set up Docker to run all these apps locally and on private networks, and made it easier to deploy apps to cloud platforms such as render.com.
Image below: five of my apps running in Docker Desktop.
Hey @anuraggoel, I've deployed my model on render.com, but it keeps showing "analyzing" for a long time and never shows any output.
Please fix it https://github.com/abhinavsp0730/fastai-v3
PS: I've been stuck there for 6 days.
Need help.
I noticed that the version of fastai in your requirements.txt is 1.0.52; last time I checked we were up to version 1.0.58. However, to make your app work I suggest the following.
Basically, the library versions in your requirements.txt must match the library versions on the system you created the model file on. It is normally best to record these as soon as you train the model, as libraries sometimes change while you are creating models.
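For example, if your notebook shows fastai 1.0.58, pin the same versions in your requirements.txt (the torch lines below are placeholders; use whatever your pip list shows):

```
fastai==1.0.58
torch==1.3.0        # placeholder - match your training environment
torchvision==0.4.1  # placeholder - match your training environment
```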
The three most common causes of getting stuck at "analyzing" are:
1. A browser issue (my classifiers don't work on my Mac using an older version of Safari).
2. An issue with your model file.
3. Different library versions between the platform you trained your model on and the platform you are deploying it on.
Can you post the error you have in the console on render.com?
Can you post the requirements.txt file you are using on render.com?
Can you post a !pip list from your development platform and from your render.com app?
You only need to list the libraries that are in the requirements.txt file (pip list lists all libraries, but we are only interested in the ones the app uses).
If you can do this then it will make it easier to solve your problem.
Try your app with a different browser; my classifiers always work in the current versions of Chrome and Firefox.
I am trying to use annoy on render.com, but it appears to be missing a C++ dependency. I have tried several different ways, but I haven't found a way to make it work. Does anyone have any ideas how to fix this?
If I want to show the confidence level along with the result (like "I am 98% sure it's a Tennis Racket"), is there an easy way to do it using the "outputs" from the prediction call?
Also, how do I integrate this into my website?
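To be concrete, this is the sort of thing I'm after (a sketch, assuming fastai v1's learn.predict, which returns the predicted class, its index, and the output tensor; paths are placeholders):

```python
from fastai.vision import load_learner, open_image

# Assumed setup: load the exported model and an example image (paths are placeholders)
learn = load_learner('path/to/export_dir')  # loads export.pkl from that directory
img = open_image('tennis_racket.jpg')

pred_class, pred_idx, outputs = learn.predict(img)

# outputs holds one probability per class; pick out the predicted one
confidence = outputs[pred_idx].item()
print(f"I am {confidence:.0%} sure it's a {pred_class}")
```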
Does your app work in your standalone environment?
This makes a lot of difference.
I haven’t really used the console environment other than looking at errors.
I use Docker on my desktop to build the app, then upload it to render.com. I do all my debugging on the local version.
All the issues are generally resolved by updating the requirements.txt and the Dockerfile.
Once my app is on render.com, if I have a missing library I either add it in the requirements.txt or install it using the Dockerfile, then redeploy the model.
e.g. see the commands in the Dockerfile; all terminal commands can be run within this file using the RUN command, as in the sketch below.
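A minimal sketch of what I mean (the base image, packages, port, and paths here are illustrative; adapt them to your app):

```dockerfile
# Illustrative Dockerfile - base image, packages, and paths are placeholders.
FROM python:3.7-slim

# Any terminal command can run at build time via RUN, e.g. installing
# system packages a Python library needs (a C++ toolchain here):
RUN apt-get update && apt-get install -y build-essential && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app app/
EXPOSE 5000
CMD ["python", "app/server.py", "serve"]
```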
Have you searched this thread for "pip list"? Having built more than 50 model apps, many of the apps' issues have been resolved by using this command to amend the requirements.txt in the deployed model.
Hi mrfabulous1, it works well now. I changed the pytorch and fastai versions in the requirements.txt based on the notebook versions.
The model doesn't work properly though: it always predicts dogs and cats (and other stuff) as a Boxer. But that is another problem; if you have ever heard of this same problem, please let me know.
Hi diegobodner I am glad to hear your model is now working.
There is probably a better thread than this one for discussing your issue.
I haven't seen your particular issue before. However, I am assuming your model is supposed to predict cats and dogs; if this is the case, I would go to the interpretation section of the notebook and check the confusion matrix, as in the sketch below.
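This is the sort of thing I mean (the standard fastai v1 interpretation calls, assuming learn is your trained Learner):

```python
from fastai.vision import ClassificationInterpretation

# Build an interpretation object from the trained Learner (assumed to be `learn`)
interp = ClassificationInterpretation.from_learner(learn)

# Plot the confusion matrix to see which classes get mistaken for Boxer
interp.plot_confusion_matrix()

# List the class pairs the model confuses most often
interp.most_confused(min_val=2)
```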
I would check that my directories contain images of the correct class.
Also, muellerzr has an image cleaner that you could try if that is an issue.
Not sure what you mean by "other stuff", but if you mean it predicts images of other things as a Boxer even if they are not cats and dogs, then this is a known behaviour of classifiers that people are still trying to find a solution to.
See this thread: Handle data that belongs to classes not seen in training or testing.
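One workaround that often comes up is thresholding the predicted probability, so low-confidence predictions are reported as unrecognised (a sketch only, with placeholder paths and an arbitrary threshold):

```python
from fastai.vision import load_learner, open_image

# Assumed setup - paths are placeholders
learn = load_learner('path/to/export_dir')
img = open_image('some_image.jpg')

THRESHOLD = 0.8  # arbitrary value; tune it on a validation set

pred_class, pred_idx, outputs = learn.predict(img)
if outputs[pred_idx].item() < THRESHOLD:
    print("Low confidence: this probably isn't one of the trained classes.")
else:
    print(f"Predicted: {pred_class} ({outputs[pred_idx].item():.0%} confident)")
```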