I have a few different Python Cloud Dataflow jobs defined in the Google AppEngine Flex Environment. My dependencies are declared in a requirements.txt file, my setup.py file is included, and everything was working just fine. My last deployment was on May 3rd, 2018. Looking through the logs, I see that one of my jobs began failing on May 22nd, 2018. The job fails with a stack trace resulting from a bad import, seen below.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
    work_executor.execute()
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
    op.start()
  File "apache_beam/runners/worker/operations.py", line 294, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10607)
    def start(self):
  File "apache_beam/runners/worker/operations.py", line 295, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:10501)
    with self.scoped_start_state:
  File "apache_beam/runners/worker/operations.py", line 300, in apache_beam.runners.worker.operations.DoOperation.start (apache_beam/runners/worker/operations.c:9702)
    pickler.loads(self.spec.serialized_fn))
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 225, in loads
    return dill.loads(s)
  File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in loads
    return load(file)
  File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
    obj = pik.load()
  File "/usr/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
  File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in find_class
    return StockUnpickler.find_class(self, module, name)
  File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
    __import__(module)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_pipeline/tally_overages.py", line 27, in <module>
    from google.cloud import pubsub
  File "/usr/local/lib/python2.7/dist-packages/google/cloud/pubsub.py", line 17, in <module>
    from google.cloud.pubsub_v1 import PublisherClient
  File "/usr/local/lib/python2.7/dist-packages/google/cloud/pubsub_v1/__init__.py", line 17, in <module>
    from google.cloud.pubsub_v1 import types
  File "/usr/local/lib/python2.7/dist-packages/google/cloud/pubsub_v1/types.py", line 26, in <module>
    from google.iam.v1.logging import audit_data_pb2
ImportError: No module named logging
So the main issue seems to come from the pubsub dependency relying on importing google.iam.v1.logging, which is installed from grpc-google-iam-v1.
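As a quick way to pin down which link in that import chain is broken, here is a small sanity check that can be run on the worker or locally (a minimal sketch; the module names come straight from the traceback above):

import importlib

# Walk the import chain from the stack trace and report where it breaks.
for mod in ("google.cloud.pubsub", "google.iam.v1", "google.iam.v1.logging"):
    try:
        m = importlib.import_module(mod)
        print("{} -> {}".format(mod, getattr(m, "__file__", "<namespace package>")))
    except ImportError as err:
        print("{} -> ImportError: {}".format(mod, err))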
Here is my requirements.txt file:
Flask==0.12.2
apache-beam[gcp]==2.1.1
gunicorn==19.7.1
google-cloud-dataflow==2.1.1
google-cloud-datastore==1.3.0
pytz
google-cloud-pubsub
google-gax
grpc-google-iam-v1
googleapis-common-protos
google-cloud==0.32
six==1.10.0
protobuf
I am able to run everything locally just fine by doing the following from my project directory:
$ virtualenv --no-site-packages .
$ . bin/activate
$ pip install --ignore-installed -r requirements.txt
$ python main.py
No handlers could be found for logger "oauth2client.contrib.multistore_file"
INFO:werkzeug: * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
INFO:werkzeug: * Restarting with stat
No handlers could be found for logger "oauth2client.contrib.multistore_file"
WARNING:werkzeug: * Debugger is active!
INFO:werkzeug: * Debugger PIN: 317-820-645
Specifically, I am able to do the following locally just fine:
$ python
>>> from google.cloud import pubsub
>>> import google.iam.v1.logging
>>> google.iam.v1.logging.__file__
'/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/iam/v1/logging/__init__.pyc'
So I know that the installation of the grpc-google-iam-v1 package is working just fine locally; the required files are there.
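For reference, this is the kind of check I would run in the broken environment to confirm the files really are missing there (a sketch using pkg_resources; the directory path mirrors the local one shown above):

import os
import pkg_resources

# Locate the installed grpc-google-iam-v1 distribution and check whether
# the google/iam/v1/logging directory actually made it onto disk.
dist = pkg_resources.get_distribution("grpc-google-iam-v1")
print("grpc-google-iam-v1 {} at {}".format(dist.version, dist.location))

logging_dir = os.path.join(dist.location, "google", "iam", "v1", "logging")
print("logging package present: {}".format(os.path.isdir(logging_dir)))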
My questions are:
- Why is the install of grpc-google-iam-v1 on the Google AppEngine Flex Environment not installing all of the files correctly? It must be missing the site-packages/google/iam/v1/logging directory.
- Why would this randomly start failing? I didn't do any more deploys; the same code was running and working on May 21st and then broke on May 22nd.
I was able to get the pipeline running again after changing the requirements.txt file to:
Flask==0.12.2
apache-beam[gcp]
google-cloud-dataflow
gunicorn==19.7.1
google-cloud-datastore==1.3.0
pytz
google-cloud-pubsub
google-gax
grpc-google-iam-v1
googleapis-common-protos
google-cloud==0.32
six==1.10.0
protobuf
So simply removing the version requirements from apache-beam[gcp] and google-cloud-dataflow did the trick.
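To make the next silent breakage like this easier to spot, one option (a hedged sketch, not part of my original setup) is to log the resolved package versions at startup, so the App Engine logs record exactly what each deployment installed:

import logging
import pkg_resources

# Log the versions pip actually resolved for the now-unpinned packages,
# so a future failure can be correlated with a version change.
for pkg in ("apache-beam", "google-cloud-dataflow",
            "google-cloud-pubsub", "grpc-google-iam-v1"):
    try:
        version = pkg_resources.get_distribution(pkg).version
        logging.info("%s==%s", pkg, version)
    except pkg_resources.DistributionNotFound:
        logging.warning("%s is not installed", pkg)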