Python SQL Query Performance

Posted 2019-05-25 00:47

I am using jaydebeapi (on Mac OS X) to query a Netezza database and am doing some quick-and-dirty timing:

t0 = time.time()
curs.execute('''select * from table1;''')
print time.time() - t0
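
For reference, curs above comes from a jaydebeapi connection set up roughly like this (the driver class, jar path, host, and credentials below are placeholders for my actual setup):

import time
import jaydebeapi

# Placeholder connection details -- adjust for the real Netezza host and driver jar.
conn = jaydebeapi.connect(
    'org.netezza.Driver',
    'jdbc:netezza://hostname:5480/MYDB',
    ['username', 'password'],
    '/path/to/nzjdbc.jar',
)
curs = conn.cursor()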

I created the table myself and it contains 650,000 rows and 9 columns (integers and dates).

When I run the above command it takes about 1.3 minutes to complete (averaged over 10 runs).

Then, when I try to fetch the data:

t0 = time.time()
curs.execute('''select * from table1;''')
row = curs.fetchone()
while row is not None:
    row = curs.fetchone()
print time.time() - t0

It takes about 10 minutes to complete (averaged over 10 runs).

Now, when I run the same SQL query using WinSQL (Windows 7, ODBC), it takes about 3 minutes to return the data. I can't seem to figure out why it is taking so much longer in Python and am not sure how or where to start looking.

3 Answers

Evening l夕情丶 · 2019-05-25 01:40

Are you using JayDeBeApi in combination with JPype or together with Jython? Fetching large result sets with the JPype implementation triggers a JNI call for every single cell value, which causes a lot of overhead. You should consider one of the following options:

  1. Minimize the size of your result set. Do aggregations using SQL functions (see the sketch after this list).
  2. Give the newest implementation of JPype1 a try. There have been some performance improvements.
  3. Switch your runtime to Jython (JayDeBeApi works on Jython as well).
  4. Implement the DB queries and data extraction directly in Java and call that logic via JPype, using an interface that does not return a large data set.
  5. Try to improve the JPype and JayDeBeApi code.
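
For option 1, a minimal sketch of what pushing the aggregation into SQL could look like (date_col and amount_col are made-up column names):

# Hypothetical aggregation: let Netezza collapse the 650,000 rows to a handful
# before anything crosses the JDBC/JNI boundary.
curs.execute('''
    select date_col, count(*) as n, sum(amount_col) as total
    from table1
    group by date_col
''')
summary = curs.fetchall()  # small result set, cheap to fetch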

混吃等死 · 2019-05-25 01:40

You might want to use curs.fetchmany() instead of fetchone(). That will cut down, somewhat, on the back-and-forth needed to fetch the rows.

Something like this will even hide the fact that you are fetching many rows at a time:

def fetchYield(cursor):
    # Pull rows from the cursor in batches but yield them one at a time.
    li = []
    while True:
        if not li:
            li = cursor.fetchmany()
            if not li:
                return  # no more rows; ends the generator
        yield li.pop(0)

for row in fetchYield(curs):
   <do something with row>
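
One caveat, assuming standard DB-API defaults: fetchmany() with no argument falls back to cursor.arraysize, which defaults to 1, so it is worth raising that (or passing an explicit size to fetchmany) before looping; 5000 below is an arbitrary guess to tune:

curs.arraysize = 5000  # arbitrary batch size; tune for row width and available memory
for row in fetchYield(curs):
   <do something with row>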

However, I think that if a raw SQL query tool takes 3 minutes to fetch the data, it is not entirely unreasonable for your Python code to take 3x as long.

Viruses. · 2019-05-25 01:42

I had a similar problem, and I observed an improvement by using fetchall() and setting the cursor's arraysize parameter (which defaults to 1), as described in the DB-API documentation on which JayDeBeApi is based.

import pandas as pd

cursor = conn.cursor()
cursor.arraysize = 10000  # defaults to 1; controls how many rows are fetched per batch
cursor.execute("select * from table1")

rows = cursor.fetchall()

# storing data in a pandas DataFrame
df = pd.DataFrame(data=rows, columns=["C1", "C2", "C3"])

cursor.close()

I observed the following performance when fetching 600,000 rows:

arraysize = 10000 --- 509 seconds
arraysize = 1     --- 526 seconds
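
As an alternative sketch (assuming pandas is available), pandas can run the query and build the DataFrame in one call against the same connection, picking up the column names from the result set; whether it is any faster here depends on the driver:

import pandas as pd

# One step: execute the query and construct the DataFrame directly.
df = pd.read_sql('select * from table1', conn)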

Nevertheless, I also observed a much greater fetch time compared to, for instance, a Java-based client using the same JDBC driver. My suggestion, as 9000 was saying, is to spend some time on your SQL query and let the database do the work; it's a faster and much more scalable solution.
