[FIXED] Two changes led to success:
1. Zip the CONTENTS of your application folder, NOT the parent folder (Select All > Zip). Beanstalk acts as if it understands the Dockerfile when you zip the parent folder and upload it, but apparently it doesn’t.
2. Use the Environments > Create a new environment option. If you haven’t used Elastic Beanstalk before, you are taken through a slightly different wizard when creating your environment. I recommend creating a throwaway project just to get into the console proper, then using Environments > Create a new environment to follow the guide. I don’t know whether this was part of the fix for me, since none of the selections appeared materially different, but it may have been.
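The zip-the-contents fix above can be sketched in Python: the archive path (arcname) of each entry must be relative to the application folder itself, so the Dockerfile lands at the root of the bundle rather than under a subdirectory. This is a minimal sketch; zip_contents is a hypothetical helper name, not anything from the guides.

```python
import os
import zipfile

def zip_contents(src_dir: str, out_zip: str) -> None:
    """Zip the CONTENTS of src_dir so files sit at the archive root.

    Beanstalk expects the Dockerfile at the top level of the bundle,
    i.e. as 'Dockerfile', not as 'myapp/Dockerfile'.
    """
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname is relative to src_dir, not to its parent
                zf.write(full, arcname=os.path.relpath(full, src_dir))
```

Zipping the parent folder instead is equivalent to passing paths relative to the parent, which is what buries the Dockerfile one level deep.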
--------------------------------------original post below-----------
In attempting to follow the various deployment guides, I am encountering the issue below. I am successful until the sixth step of the image build, in which ‘server.py’ is run.
I have verified that my export.pkl file is publicly accessible, and I have tried following the guides for Render, Elastic Beanstalk, and building from my own local Docker install. Someone has posted the same issue on GitHub.
Any help is greatly appreciated, and I am happy to provide further information.
Traceback (most recent call last):
  File "app/server.py", line 48, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))[0]
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "app/server.py", line 35, in setup_learner
    learn = load_learner(path, export_file_name)
  File "/usr/local/lib/python3.7/site-packages/fastai/basic_train.py", line 598, in load_learner
    state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 564, in _load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x0a'.
The command '/bin/sh -c python app/server.py' returned a non-zero code: 1
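A note on that error: '\x0a' is a newline byte, so torch.load is reading something that starts with text, not a pickle. That usually means the downloaded "export.pkl" is actually an HTML or text page (for example a file-host interstitial) rather than the model file. A quick sanity check, as a minimal sketch (looks_like_html is a hypothetical helper, not part of fastai):

```python
def looks_like_html(path: str) -> bool:
    """Return True if the file at `path` appears to be text/HTML
    rather than a pickle.

    An UnpicklingError with load key '\\x0a' (newline) is a strong hint
    that the 'pickle' is really a downloaded web page.
    """
    with open(path, "rb") as f:
        head = f.read(16)
    # A real torch checkpoint starts with a pickle opcode (e.g. b'\x80')
    # or a zip header (b'PK'), never with '<' or a leading newline.
    return head.lstrip().startswith(b"<") or head.startswith(b"\n")
```

If this returns True for the file your Dockerfile downloads, the URL is serving a page instead of the raw bytes.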
And if that doesn’t work (just a hunch without seeing your code): if you remove the async loading (which doesn’t look async from the call stack anyway), is it still broken?
Just looked at the guide you’re following. Are you storing your pickle on Google Drive or something similar? If so, are you sure you’re using a downloadable link rather than a sharing link?
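For context on the sharing-link vs. downloadable-link distinction: a Drive sharing URL points at a viewer page, while a direct-download URL serves the raw bytes. A minimal sketch of the conversion, assuming the standard /file/d/&lt;ID&gt;/ sharing format (drive_direct_link is a hypothetical helper):

```python
import re

def drive_direct_link(share_url: str) -> str:
    """Convert a Google Drive sharing link into a direct-download link.

    Sharing links (…/file/d/<ID>/view?usp=sharing) serve an HTML viewer
    page; the uc?export=download form serves the file itself.
    """
    m = re.search(r"/d/([-\w]+)", share_url)
    if not m:
        raise ValueError("not a recognised Drive sharing link")
    return f"https://drive.google.com/uc?export=download&id={m.group(1)}"
```

Fetching the sharing URL inside the Dockerfile would save an HTML page named export.pkl, which matches the UnpicklingError above.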
When running in a local container, the export.pkl file does download.
As a general update, I can now get the application to run in a local container, but I had to modify the requirements to install the latest torch instead of the URL provided. That may not be good practice, but otherwise I was encountering an error importing fastai.vision.
Elastic Beanstalk still fails to build the environment because the server.py command fails.
Jumping on the bandwagon…
a. I tried the AWS way, but the t2.micro instance ran out of space with all the pip installs.
b. I tried locally on Windows; there is an error when doing pip install fastai.
c. GCP expects a service account to be created to access the resources, and I have created one. But when I try gcloud app deploy, I see a 403 Forbidden (resource access) error.