AppEngine Timeout with Task Queues

Posted 2019-03-31 23:34

I'm trying to execute a task in App Engine through the Task Queues, but I still seem to be hitting a 60-second timeout. I'm not sure what I'm doing incorrectly, as I'd expect the limit to be the advertised 10 minutes.

I have a call to urlfetch.fetch() that appears to be the culprit. My call is:

urlfetch.fetch(url, payload=query_data, method=method, deadline=300)

The tail end of my stack trace shows the method that triggers the url fetch call right before the DeadlineExceededError:

File "/base/data/home/apps/s~mips-conversion-scheduler/000-11.371629749593131630/views.py", line 81, in _get_mips_updated_data
policies_changed = InquiryClient().get_changed_policies(company_id, initial=initial).json()

When I look at the task queue information it shows:

Method/URL: POST /tasks/queue-initial-load
Dispatched time (UTC): 2013/11/14 15:18:49
Seconds late: 0.18
Seconds to process task: 59.90
Last http response code: 500
Reason to retry: AppError

My View that processes the task looks like:

class QueueInitialLoad(webapp2.RequestHandler):
    def post(self):
        company = self.request.get("company")
        if company:
            company_id = self.request.get("company")
            queue_policy_load(company_id, queue_name="initialLoad", initial=True)

with the queue_policy_load being the method that triggers the urlfetch call.
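
For reference, here is a minimal sketch of what such a queue_policy_load helper might look like; the real implementation isn't shown in the question, so the URL, payload format, and structure below are assumptions:

from google.appengine.api import urlfetch

def queue_policy_load(company_id, queue_name="initialLoad", initial=False):
    # Hypothetical sketch: build the request to the external policy service.
    # queue_name is accepted to match the handler's call but unused here.
    url = "https://example.com/policies/changed"            # placeholder URL
    query_data = "company_id=%s&initial=%s" % (company_id, initial)
    # Explicit 300-second deadline, as in the question; under automatic scaling
    # the surrounding request can still be cut off at 60 seconds.
    return urlfetch.fetch(url, payload=query_data,
                          method=urlfetch.POST, deadline=300)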

Is there something obvious I'm missing that makes me limited to the 60 second timeout instead of 10 minutes?

3 Answers
Melony? · 2019-03-31 23:42

GAE has evolved since this question was asked, and this answer reflects the current state, in which the idea of "backend" instances is deprecated. A GAE app can instead be configured as services (formerly "modules") that run with a manual scaling policy, which allows you to set longer timeouts. If your app runs with an automatic scaling policy, urlfetch calls are capped at 60 seconds and queued tasks at 10 minutes: https://cloud.google.com/appengine/docs/python/an-overview-of-app-engine
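
As a rough illustration, a worker service configured for manual scaling might look something like this; the service name, handler path, and instance count below are assumptions, not from the original answer:

# app.yaml sketch for a separate, manually scaled service
service: worker          # assumed service name
runtime: python27
api_version: 1
threadsafe: true

manual_scaling:
  instances: 1           # manually scaled instances can handle long-running requests

handlers:
- url: /tasks/.*
  script: main.app       # assumed WSGI application location
  login: admin           # restricts task URLs to admin and App Engine-originated requests

Tasks can then be routed to this service by setting a target on the queue (in queue.yaml) or on the individual task.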

别忘想泡老子 · 2019-03-31 23:58

Task queues have a 10-minute deadline, but a urlfetch call has a 1-minute deadline:

maximum deadline (request handler): 60 seconds

UPDATE: the intended behaviour is a maximum URLFetch deadline of 10 minutes when running in a task queue; see this bug.
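
Given that, one mitigation (a sketch, reusing the handler name from the question and assuming the task queue request itself is allowed up to 10 minutes) is to raise the default urlfetch deadline at the top of the task handler instead of relying on the 60-second default:

import webapp2
from google.appengine.api import urlfetch

class QueueInitialLoad(webapp2.RequestHandler):
    def post(self):
        # Raise the default deadline for every urlfetch call made while
        # handling this task (600 seconds matches the task request limit).
        urlfetch.set_default_fetch_deadline(600)
        # ... perform the long-running fetch here ...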

SAY GOODBYE · 2019-03-31 23:59

This might be a little too general, but here are some thoughts that may help close the loop. There are two kinds of task queues: push queues and pull queues. Push queue tasks execute automatically, and they are only available to your App Engine app. Pull queue tasks, on the other hand, wait to be leased, are available to workers outside the app, and can be batched.

If you want to configure your queue, you can do so in the queue config file: in Java that's queue.xml, and in Python it's queue.yaml (a sample sketch follows the list below). As for push queues specifically, their tasks are processed by handlers (URLs) as POST requests. They:

  1. Are executed ASAP
  2. May cause new instances to be spun up (frontend or backend)
  3. Have a task duration limit of 10 minutes
  4. Have an unlimited duration if they run on a backend instance
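
As mentioned above, queue configuration lives in queue.yaml for Python; here is a small hypothetical example (queue names, rates, and retry settings are made up for illustration):

# queue.yaml -- hypothetical example configuration
queue:
- name: initialLoad          # a push queue, like the one used in the question
  rate: 5/s
  retry_parameters:
    task_retry_limit: 3
- name: pull-work            # pull queues must declare mode: pull
  mode: pull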

Here is a quick Python code example showing how you can add tasks to a named push queue. Have a look at the Google developers page for Task Queues if you need more information: https://developers.google.com/appengine/docs/python/taskqueue/

Adding Tasks to a Named Push Queue:

from google.appengine.api import taskqueue

queue = taskqueue.Queue("Qname")
task = taskqueue.Task(url='/handler', params=args)  # args: a dict of POST parameters
queue.add(task)

On the other hand, let's say that you wanted to use a pull queue. You could add tasks in Python to a pull queue using the following:

queue = taskqueue.Queue("Qname")
task = taskqueue.Task(payload=load, method='PULL')  # load: the task payload, e.g. a serialized string
queue.add(task)

You can then lease these tasks out using the following approach in Python:

queue = taskqueue.Queue("Qname")
tasks = queue.lease_tasks(lease_seconds, max_tasks)  # e.g. queue.lease_tasks(3600, 100)

Remember that, for pull queues, a leased task is not gone for good: if the worker does not finish it and delete it before the lease expires, the task becomes available to be leased again, so the work is effectively retried until it succeeds and the task is explicitly deleted.
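
To make that concrete, here is a minimal lease/process/delete loop (process_task is a placeholder for your own logic):

from google.appengine.api import taskqueue

queue = taskqueue.Queue("Qname")
tasks = queue.lease_tasks(3600, 100)    # lease up to 100 tasks for one hour
for task in tasks:
    process_task(task.payload)          # placeholder: do the actual work
queue.delete_tasks(tasks)               # delete finished tasks so they aren't re-leased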

Hope that helps in terms of providing a general perspective!
