SQLAlchemy QueuePool limit overflow

Posted 2019-02-07 20:43

I have a SQLAlchemy application that is raising a timeout error:

TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30

I read in another post that this happens when the session is not closed, but I don't know whether that applies to my code:

I connect to the database in __init__.py:

from sqlalchemy import create_engine

from .dbmodels import (
    DBSession,
    Base,
)

engine = create_engine("mysql://" + loadConfigVar("user") + ":" + loadConfigVar("password") + "@" + loadConfigVar("host") + "/" + loadConfigVar("schema"))

#Sets the engine to the session and the Base model class
DBSession.configure(bind=engine)
Base.metadata.bind = engine

Then in another Python file I gather some data in two functions, using the DBSession that I initialized in __init__.py:

import transaction
from ckan.plugins import toolkit  # assumed: group_list/group_show are CKAN actions

from .dbmodels import DBSession
from .dbmodels import resourcestatsModel

def getFeaturedGroups(max = 1):

    try:
        #Get the number of downloads per resource
        transaction.commit()
        rescount = DBSession.connection().execute("select resource_id,count(resource_id) as total FROM resourcestats")

        #Move the data to an array
        resources = []
        for row in rescount:
            data = {}
            data["resource_id"] = row.resource_id
            data["total"] = row.total
            resources.append(data)

        #Get the list of groups
        group_list = toolkit.get_action('group_list')({}, {})
        for group in group_list:
            #Get the details of each group
            group_info = toolkit.get_action('group_show')({}, {'id': group})
            #Count the features of the group
            addFesturedCount(resources,group,group_info)

        #Order the FeaturedGroups by total
        FeaturedGroups.sort(key=lambda x: x["total"],reverse=True)

        print(FeaturedGroups)
        #Move the data of the group to the result array.
        result = []
        count = 0
        for group in FeaturedGroups:
            group_info = toolkit.get_action('group_show')({}, {'id': group["group_id"]})
            result.append(group_info)
            count = count +1
            if count == max:
                break

        return result
    except:
        return []


def getResourceStats(resourceID):
    transaction.commit()
    return DBSession.query(resourcestatsModel).filter_by(resource_id = resourceID).count()

The session variables are created like this:

#Basic SQLAlchemy types
from sqlalchemy import (
    Column,
    Text,
    DateTime,
    Integer,
    ForeignKey
    )
# Use SQLAlchemy declarative type
from sqlalchemy.ext.declarative import declarative_base

#
from sqlalchemy.orm import (
    scoped_session,
    sessionmaker,
    )

#Use Zope's SQLAlchemy transaction manager
from zope.sqlalchemy import ZopeTransactionExtension

#Main plugin session
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

Since the session is created in __init__.py and the rest of the code just uses it, at which point do I need to close the session? Or what else do I need to do to manage the pool size?

2 Answers
做自己的国王 · answered 2019-02-07 21:11

Add the following method to your code. It will automatically close unused/hanging connections and prevent a bottleneck in your code, especially if you are using syntax like Model.query.filter_by(attribute=var).first() and relationships / lazy loading.

@app.teardown_appcontext
def shutdown_session(exception=None):
    db.session.remove()

Documentation on this is available here: http://flask.pocoo.org/docs/1.0/appcontext/

Emotional °昔 · answered 2019-02-07 21:18

You can manage the pool size by passing the pool_size and max_overflow parameters to create_engine:

engine = create_engine("mysql://" + loadConfigVar("user") + ":" + loadConfigVar("password") + "@" + loadConfigVar("host") + "/" + loadConfigVar("schema"), 
                        pool_size=20, max_overflow=0)

Reference is here

You don't need to close the session, but the connection should be closed after your transaction is done. Replace:

rescount = DBSession.connection().execute("select resource_id,count(resource_id) as total FROM resourcestats")

with:

connection = DBSession.connection()
try:
    rescount = connection.execute("select resource_id,count(resource_id) as total FROM resourcestats")
    #do something
finally:
    connection.close()

Reference is here
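
As a side note, SQLAlchemy Connection objects also support the with statement, so the try/finally above can be written more compactly. This is just a sketch of the same cleanup, keeping the string-execution style used in the question:

#the connection is closed automatically when the with block exits
with DBSession.connection() as connection:
    rescount = connection.execute("select resource_id,count(resource_id) as total FROM resourcestats")
    #do something with rescount here, before the connection is closed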

Also, note that MySQL closes connections that have been idle for a certain period of time (this period can be configured in MySQL; I don't remember the default value), so you should also pass a pool_recycle value when creating your engine.
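
For example, something like this (the 3600-second value is only an illustration; in practice set it below your MySQL server's wait_timeout):

engine = create_engine("mysql://" + loadConfigVar("user") + ":" + loadConfigVar("password") + "@" + loadConfigVar("host") + "/" + loadConfigVar("schema"),
                        pool_size=20, max_overflow=0,
                        pool_recycle=3600)  #recycle connections older than an hour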
