I am trying to deploy a Python app on Google Cloud. The app has a dependency that is only available locally, so to install it with pip I use pip install -r requirements.txt --find-links PATH_TO_DEPENDENCY. Everything works fine this way locally. But when I deploy it on Google Cloud with gcloud app deploy, it internally calls pip install -r requirements.txt, so the local dependency is not installed and the code fails.
Is there a way to modify the internal command that gcloud uses, or a way to tell the server to use that dependency from somewhere?
One option to try would be to use a virtual environment and pip freeze to capture all requirements for your app, including their dependencies:
- create a fresh virtual environment and snapshot its initial package content with pip freeze > requirements.1.txt
- use your local invocation (pip install -r requirements.txt --find-links PATH_TO_DEPENDENCY) to install all the requirements and their dependencies, then take a new snapshot with pip freeze > requirements.2.txt
- build a new requirements.txt containing all packages present in requirements.2.txt but missing from requirements.1.txt (i.e. everything that came either from the original requirements.txt or as one of its dependencies)
Then use this new requirements.txt for your app, which should pull all dependencies during the deployment's pip install -r requirements.txt.
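A rough sketch of that sequence, assuming a bash shell (process substitution is used with comm) and a venv-based virtual environment; file names like requirements.1.txt are just the placeholders from the steps above:

```bash
# Fresh virtual environment; snapshot the packages it ships with
python -m venv venv && source venv/bin/activate
pip freeze > requirements.1.txt

# Install everything the way that works locally, local dependency included
pip install -r requirements.txt --find-links PATH_TO_DEPENDENCY

# Snapshot again, now with every package and its dependencies installed
pip freeze > requirements.2.txt

# Keep only the lines added by the install step (present in .2 but not in .1);
# this overwrites the original requirements.txt, so keep a copy if you still need it
comm -13 <(sort requirements.1.txt) <(sort requirements.2.txt) > requirements.txt
```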
Another option, a bit more tedious but usable even for dependencies not installable via pip, would be to build a custom runtime based on the corresponding Google-supplied docker image, in which you add the additional non-Python dependencies your app requires. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
See also Building Custom Runtimes.
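As a rough illustration of that route (the flexible environment is assumed, and the base image name, the apt-get step and the gunicorn entry point main:app are all assumptions to adapt from the current docs), the deployment would be driven by an app.yaml declaring runtime: custom plus a Dockerfile you control, created along these lines:

```bash
# Sketch: switch the service to a custom runtime with your own Dockerfile

# app.yaml: use a custom runtime on the flexible environment
cat > app.yaml <<'EOF'
runtime: custom
env: flex
EOF

# Dockerfile: start from the Google-supplied Python image and add whatever
# the default runtime lacks (gunicorn itself must be listed in requirements.txt)
cat > Dockerfile <<'EOF'
FROM gcr.io/google-appengine/python
# Extra system-level (non-Python) dependencies would be installed here, e.g.:
# RUN apt-get update && apt-get install -y <package>
ADD . /app
RUN pip install -r /app/requirements.txt
CMD gunicorn -b :$PORT main:app
EOF
```

If your gcloud version supports it, gcloud beta app gen-config --custom can also generate a starting Dockerfile matching the runtime you already use, which you can then customize instead of writing one from scratch.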