I'm trying to figure out a way to speed up a particularly cumbersome query which aggregates some data by date across a couple of tables. The full (ugly) query is below, along with an EXPLAIN ANALYZE to show just how horrible it is.
If anyone could take a peek and see if they can spot any major issues (which is likely; I'm not a Postgres guy), that would be superb.
So here goes. The query is:
SELECT
    to_char(p.period, 'DD/MM/YY') AS period,
    coalesce(o.value, 0) AS outbound,
    coalesce(i.value, 0) AS inbound
FROM (
    SELECT date '2009-10-01' + s.day AS period
    FROM generate_series(0, date '2009-10-31' - date '2009-10-01') AS s(day)
) AS p
LEFT OUTER JOIN (
    SELECT
        SUM(b.body_size) AS value,
        b.body_time::date AS period
    FROM body AS b
    LEFT JOIN envelope e ON e.message_id = b.message_id
    WHERE
        e.envelope_command = 1
        AND b.body_time BETWEEN '2009-10-01' AND (date '2009-10-31' + INTERVAL '1 DAY')
    GROUP BY period
    ORDER BY period
) AS o ON p.period = o.period
LEFT OUTER JOIN (
    SELECT
        SUM(b.body_size) AS value,
        b.body_time::date AS period
    FROM body AS b
    LEFT JOIN envelope e ON e.message_id = b.message_id
    WHERE
        e.envelope_command = 2
        AND b.body_time BETWEEN '2009-10-01' AND (date '2009-10-31' + INTERVAL '1 DAY')
    GROUP BY period
    ORDER BY period
) AS i ON p.period = i.period
The EXPLAIN ANALYZE output can be found on explain.depesz.com.
Any comments or questions are appreciated.
Cheers
There are always two things to consider when optimising queries: which indexes can be used, and how the query is written.
A few observations:

- You are performing date manipulations before you join your dates. As a general rule this will prevent a query optimiser from using an index even if one exists. You should try to write your expressions in such a way that indexed columns appear unaltered on one side of the comparison.
- Your subqueries filter to the same date range as generate_series. This is a duplication, and it limits the optimiser's ability to choose the most efficient plan. I suspect that may have been written in to improve performance because the optimiser was unable to use an index on the date column (body_time)? NOTE: we would actually very much like to use an index on Body.body_time.
- ORDER BY within the subqueries is at best redundant. At worst it can force the query optimiser to sort the result set before joining, and that is not necessarily good for the query plan. Rather, only apply ordering right at the end, for final display.
- The use of LEFT JOIN in your subqueries is inappropriate. Assuming you're using ANSI conventions for NULL behaviour (and you should be), any outer-joined rows from envelope would have envelope_command = NULL, and these would consequently be excluded by the condition envelope_command = ?.
- Subqueries o and i are almost identical save for the envelope_command value. This forces the optimiser to scan the same underlying tables twice. You can use a pivot-table technique to join to the data once and split the values into two columns.

Try the following, which uses the pivot technique:
EDIT: Added filter suggested by Tom H.
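The pivoted query itself is not reproduced here; a sketch of what it might look like, using the body and envelope tables from the question (the exact join and filter placement below is an assumption, not the original answer's code), is:

SELECT
    to_char(p.period, 'DD/MM/YY') AS period,
    -- pivot: one pass over the data, split into two columns by envelope_command
    COALESCE(SUM(CASE WHEN e.envelope_command = 1 THEN b.body_size END), 0) AS outbound,
    COALESCE(SUM(CASE WHEN e.envelope_command = 2 THEN b.body_size END), 0) AS inbound
FROM (
    SELECT date '2009-10-01' + s.day AS period
    FROM generate_series(0, date '2009-10-31' - date '2009-10-01') AS s(day)
) AS p
LEFT JOIN body AS b
    ON  b.body_time >= p.period
    AND b.body_time <  p.period + 1     -- range comparison keeps body_time unaltered for index use
LEFT JOIN envelope AS e
    ON  e.message_id = b.message_id
    AND e.envelope_command IN (1, 2)    -- filter in the ON clause so the outer join is preserved
GROUP BY p.period
ORDER BY p.period;

Because both joins are outer joins and the command filter sits in the ON clauses, every date from the series survives to the result even when it has no traffic, and body/envelope are scanned only once.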
I uninstalled my PostgreSQL server a couple of days ago, so you'll likely have to play around with this, but hopefully it's a good start for you.
The keys are:
If nothing else, I think that the query below is a bit clearer.
I used a calendar table in my query, but you can replace that with the generate_series as you were using it.
Also, depending on indexing, it might be better to compare body_time with >= and < rather than pulling out the date part and comparing. I don't know enough about PostgreSQL to say how it handles this behind the scenes, so I would try both approaches and see which the server optimises better. In pseudo-code you would be doing: body_time >= date (time = midnight) AND body_time < date + 1 (time = midnight).
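As a rough illustration of the difference (assuming body_time is an indexed timestamp column, as in the question), the two styles of predicate are:

-- Range comparison: body_time is left unaltered, so a plain index on it can be used.
SELECT SUM(b.body_size)
FROM body AS b
WHERE b.body_time >= date '2009-10-01'
  AND b.body_time <  date '2009-10-01' + 1;

-- Casting to date: the column is wrapped in an expression, so a plain index on
-- body_time generally cannot be used (only an expression index on body_time::date could).
SELECT SUM(b.body_size)
FROM body AS b
WHERE b.body_time::date = date '2009-10-01';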
Building on Craig Young's suggestions, here is the amended query, which runs in ~1.8 seconds for the data set I'm working on. That is a slight improvement on the original ~2.0s, and a huge improvement on Craig's version, which took ~22s.