The query:
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
"replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027
If I SET enable_seqscan = off, then it does the fast thing, which is:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
-> Index Scan using replays_playeringame_player_id on replays_playeringame (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.019..2.412 rows=3398 loops=1)
Index Cond: (player_id = 50027)
-> Index Scan using replays_game_pkey on replays_game (cost=0.00..5.41 rows=1 width=72) (actual time=0.017..0.017 rows=1 loops=3398)
Index Cond: (id = replays_playeringame.game_id)
Total runtime: 65.437 ms
But without the dreaded enable_seqscan, it chooses to do a slower thing:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=7330.18..18145.24 rows=3395 width=72) (actual time=92.380..535.422 rows=3398 loops=1)
Hash Cond: (replays_playeringame.game_id = replays_game.id)
-> Index Scan using replays_playeringame_player_id on replays_playeringame (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.020..2.899 rows=3398 loops=1)
Index Cond: (player_id = 50027)
-> Hash (cost=3668.08..3668.08 rows=151208 width=72) (actual time=90.842..90.842 rows=151208 loops=1)
Buckets: 1024 Batches: 32 (originally 16) Memory Usage: 1025kB
-> Seq Scan on replays_game (cost=0.00..3668.08 rows=151208 width=72) (actual time=0.020..29.061 rows=151208 loops=1)
Total runtime: 535.821 ms
Here are the relevant indexes:
Index "public.replays_game_pkey"
Column | Type | Definition
--------+---------+------------
id | integer | id
primary key, btree, for table "public.replays_game"
Index "public.replays_playeringame_player_id"
Column | Type | Definition
-----------+---------+------------
player_id | integer | player_id
btree, for table "public.replays_playeringame"
So my question is, what am I doing wrong that Postgres is mis-estimating the relative costs of the two ways of joining? I see in the cost estimates that it thinks the hash-join will be faster. And its estimate of the cost of the index-join is off by a factor of 500.
How can I give Postgres more of a clue? I did run VACUUM ANALYZE immediately before running all of the above.
Interestingly, if I run this query for a player with a smaller number of games, Postgres chooses to do the index scan + nested loop. So something about the large number of games tickles this undesired behavior, where the relative estimated cost is out of line with the actual cost.
Finally, should I be using Postgres at all? I don't wish to become an expert in database tuning, so I'm looking for a database that will perform reasonably well with a conscientious developer's level of attention, as opposed to a dedicated DBA. I am afraid that if I stick with Postgres I will have a steady stream of issues like this that will force me to become a Postgres expert, and perhaps another DB will be more forgiving of a more casual approach.
A Postgres expert (RhodiumToad) reviewed my full database settings (http://pastebin.com/77QuiQSp) and recommended set cpu_tuple_cost = 0.1. That gave a dramatic speedup: http://pastebin.com/nTHvSHVd
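For anyone wanting to reproduce this, the setting can be tried per session before persisting it in postgresql.conf (a minimal sketch; the query is the one from the top of the question):

-- Session-local experiment with the recommended setting:
SET cpu_tuple_cost = 0.1;

EXPLAIN ANALYZE
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame"
    ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027;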
Alternatively, switching to MySQL also solved the problem pretty nicely. I have a default installation of MySQL and Postgres on my OS X box, and MySQL is 2x faster, comparing queries that are "warmed up" by repeatedly executing the query. On "cold" queries, i.e. the first time a given query is executed, MySQL is 5 to 150 times faster. The performance of cold queries is pretty important for my particular application.
The big question, as far as I'm concerned, is still outstanding -- will Postgres require more fiddling and configuration to run well than MySQL? For example, consider that none of the suggestions offered by the commenters here worked.
This is an old post, but it was quite helpful, as I just encountered a similar issue.
Here is my finding so far. Given that there are 151208 rows in the replays_game table, the average cost of hitting one item is about log(151208) = 12. Since there are 3395 records left in replays_playeringame after filtering, the average cost is 12 * 3395 (roughly 40000), which is rather high. Also, the planner overestimates the page cost: it assumes all rows are randomly distributed, which they are not. If that were true, a seq scan would indeed be much better. So basically, the query plan is trying to avoid the worst-case scenario.

@dsjoerg's problem is that there is no index on replays_playeringame(game_id). An index scan would always be used if there were an index on replays_playeringame(game_id): the cost of scanning the index would become 3395 + 12 (or something close to that).

@Neil suggested an index on (player_id, game_id), which is close but not exact. The right index to have is either (game_id) or (game_id, player_id).
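For concreteness, a minimal sketch of the suggested fix (the index names here are my own choice):

-- Single-column index on the join key:
CREATE INDEX replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- Or a composite index, if queries also filter on player_id within a game:
CREATE INDEX replays_playeringame_game_id_player_id
    ON replays_playeringame (game_id, player_id);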
I ran sayap's testbed code (thanks!) with the following modifications:
After this run, I did the same run, but scaled up tenfold: 1.5M records (30K hard-hitters).
Currently, I am running the same test with a hundred-fold scale-up, but the initialisation is rather slow...
Results: the entries in the cells are the total time in msec, plus a string that denotes the chosen query plan (only a handful of plans occur).
Preliminary conclusion:
"the working set" for the original query is too small: all of it fits in core, resulting in the cost of page fetches to be grossly overestimated. Setting RPC to 2 (or 1) "solves" this problem, but once the query is scaled-up, the page-costs become dominant, and RPC=4 becomes comparable or even better.
Setting work_mem to a lower value is another way to make the optimiser shift to index scans (instead of hash + bitmap scans). The differences I found are smaller than what sayap reported. Maybe I have a larger effective_cache_size, or he forgot to prime the cache?
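As an aside, work_mem is also session-settable, so the plan flip is easy to try (a sketch; the 1MB value is arbitrary for the experiment, not a recommendation):

-- Shrink the memory budget for hash tables and sorts; with too little
-- work_mem the hash join spills to batches and the planner avoids it:
SET work_mem = '1MB';
SHOW work_mem;  -- verify the session value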
You might get a better execution plan using a multi-column (player_id, game_id) index on the replays_playeringame table. This avoids having to use a random page seek to look up the game id(s) for the player id.

My guess is that you are using the default random_page_cost = 4, which is way too high, making the index scan too costly.

I tried to reconstruct the 2 tables with this script:
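(The original script is not reproduced here; the following is my own minimal sketch of what such a reconstruction could look like, with table shapes taken from the question and the data distribution purely assumed.)

-- Hypothetical testbed: two tables with the question's shape and row counts.
CREATE TABLE replays_game (
    id serial PRIMARY KEY
);

CREATE TABLE replays_playeringame (
    player_id integer NOT NULL,
    game_id   integer NOT NULL REFERENCES replays_game (id)
);

-- 151208 games, matching the Seq Scan row count in the question.
INSERT INTO replays_game (id)
SELECT generate_series(1, 151208);

-- Random player/game pairs; ~10 rows per game on average (assumed).
INSERT INTO replays_playeringame (player_id, game_id)
SELECT (random() * 60000)::int + 1,
       (random() * 151207)::int + 1
FROM generate_series(1, 1512080);

CREATE INDEX replays_playeringame_player_id
    ON replays_playeringame (player_id);

VACUUM ANALYZE replays_game;
VACUUM ANALYZE replays_playeringame;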
With the default value of 4:
After lowering it to 2:
If using SSD, I would lower it further to 1.1.
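For reference, a sketch of how the setting can be applied per session and then persisted (ALTER SYSTEM is available from PostgreSQL 9.4 on; otherwise edit postgresql.conf):

-- Per-session experiment:
SET random_page_cost = 2;

-- Persist cluster-wide (PostgreSQL 9.4+), then reload:
-- ALTER SYSTEM SET random_page_cost = 1.1;
-- SELECT pg_reload_conf();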
As for your last question, I really think you should stick with PostgreSQL. I have experience with PostgreSQL and MSSQL, and I needed to put in triple the effort into the latter for it to perform half as well as the former.