You can always download the export.pkl model file and build your widget app in your own kernel! It doesn't require much computational power, since training is already finished and inference is usually cheap.
I ended up building a small Flask app and deploying a model I trained in Colab, using Docker on DigitalOcean.
Here's the GitHub project if you want it for reference:
I had to work out which parts of the code were meant for a Jupyter notebook versus a standard Python environment, strip out the notebook-only pieces, and get just the base model running so it could perform inference on a CPU-only machine.
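The stripped-down serving code can be sketched roughly like this. This is a minimal sketch, not my actual repo: the route name, the `predict` stub, and the JSON shape are all assumptions, and in the real app the stub would be replaced by fastai's `load_learner("export.pkl")` plus a call to `learn.predict` on the uploaded image.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


# In the real app you would load the exported fastai model once at startup:
#     from fastai.vision.all import load_learner
#     learn = load_learner("export.pkl")
# Stubbed out here so the sketch runs without the model file or fastai installed.
def predict(image_bytes: bytes) -> dict:
    # With fastai this would call learn.predict(...) on the uploaded image
    # and return the predicted label plus its probability.
    return {"label": "stub", "confidence": 1.0}


@app.route("/predict", methods=["POST"])
def predict_route():
    if "file" not in request.files:
        return jsonify(error="no file uploaded"), 400
    image_bytes = request.files["file"].read()
    return jsonify(predict(image_bytes))
```

Since inference runs once per request and the model is loaded a single time at startup, this stays comfortably fast on a cheap CPU droplet.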
I used WSL2 with Ubuntu Linux as my development environment, because fastai didn't seem to work with Python on Windows, though I may have just failed to set it up properly.
If you just want something quick and don't care about running a full-fledged HTTP server like nginx, Repl.it is a great way to get a small Flask project deployed.
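For that route, the whole deployment can be a single file served by Flask's built-in dev server; the main detail is binding to `0.0.0.0` so the hosting platform can route traffic to your process (the port number here is an assumption, since Repl.it-style hosts generally map whatever port you bind):

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "model server is up"


if __name__ == "__main__":
    # Bind to all interfaces so the hosting platform can reach the server;
    # Flask's built-in dev server is fine for a demo, no nginx needed.
    app.run(host="0.0.0.0", port=8080)
```

For anything beyond a demo you'd swap the dev server for something like gunicorn, but for showing off a model this is enough.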
Unfortunately, the learning curve for hosting Python-based apps is steeper than for something like PHP, which works even on shared hosting. On the plus side, if you're just looking to show off an ML model you've trained, you don't necessarily need to set up a database.