optimize mysql count query

Posted 2020-02-26 06:17

Is there a way to optimize this further or should I just be satisfied that it takes 9 seconds to count 11M rows ?

devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "desc record_updates"                                                                    
+--------------+----------+------+-----+---------+-------+
| Field        | Type     | Null | Key | Default | Extra |
+--------------+----------+------+-----+---------+-------+
| record_id    | int(11)  | YES  | MUL | NULL    |       | 
| date_updated | datetime | YES  | MUL | NULL    |       | 
+--------------+----------+------+-----+---------+-------+
devuser@xcmst > date; mysql --user=user --password=pass -D marctoxctransformation -e "select count(*) from record_updates where date_updated > '2009-10-11 15:33:22' "; date                         
Thu Dec  9 11:13:17 EST 2010
+----------+
| count(*) |
+----------+
| 11772117 | 
+----------+
Thu Dec  9 11:13:26 EST 2010
devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "explain select count(*) from record_updates where date_updated > '2009-10-11 15:33:22' "      
+----+-------------+----------------+-------+--------------------------------------------------------+--------------------------------------------------------+---------+------+----------+--------------------------+
| id | select_type | table          | type  | possible_keys                                          | key                                                    | key_len | ref  | rows     | Extra                    |
+----+-------------+----------------+-------+--------------------------------------------------------+--------------------------------------------------------+---------+------+----------+--------------------------+
|  1 | SIMPLE      | record_updates | index | idx_marctoxctransformation_record_updates_date_updated | idx_marctoxctransformation_record_updates_date_updated | 9       | NULL | 11772117 | Using where; Using index | 
+----+-------------+----------------+-------+--------------------------------------------------------+--------------------------------------------------------+---------+------+----------+--------------------------+
devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "show keys from record_updates"
+----------------+------------+--------------------------------------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
| Table          | Non_unique | Key_name                                               | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------------+------------+--------------------------------------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
| record_updates |          1 | idx_marctoxctransformation_record_updates_date_updated |            1 | date_updated | A         |        2416 |     NULL | NULL   | YES  | BTREE      |         | 
| record_updates |          1 | idx_marctoxctransformation_record_updates_record_id    |            1 | record_id    | A         |    11772117 |     NULL | NULL   | YES  | BTREE      |         | 
+----------------+------------+--------------------------------------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+

UPDATE - my solution is here: http://code.google.com/p/xcmetadataservicestoolkit/wiki/ResumptionToken

10 Answers
霸刀☆藐视天下
#2 · 2020-02-26 06:59

If MySQL has to count 11M rows, there really isn't much you can do to speed up a simple count, at least not to sub-second speed. You should rethink how you do your count. A few ideas:

  1. Add an auto-increment field to the table. It looks like you never delete from the table, so you can use simple math to find the record count: select the minimum auto-increment value for the start date and the maximum for the end date, then subtract one from the other. For example:

    SELECT min(incr_id) min_id FROM record_updates WHERE date_updated BETWEEN '2009-10-11 15:33:22' AND '2009-10-12 23:59:59';
    SELECT max(incr_id) max_id FROM record_updates WHERE date_updated > DATE_SUB(NOW(), INTERVAL 2 DAY);
    
  2. Create another table summarizing the record count for each day. Then you can query that table for the total. There would only be 365 records per year. If you need more fine-grained times, query the summary table for the full days and the base table for just the start and end days of the range, then add the counts together.

If the data isn't changing, and it doesn't seem to be, summary tables will be easy to maintain and update, and they will significantly speed things up.
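A minimal sketch of idea 2, assuming a hypothetical daily summary table; the name `record_updates_daily` and its columns are illustrative, not part of the original schema:

```sql
-- Hypothetical daily summary table (names are my own invention).
CREATE TABLE record_updates_daily (
    day       DATE NOT NULL PRIMARY KEY,
    row_count INT  NOT NULL
);

-- Populate it from the existing data (re-run periodically for new days).
INSERT INTO record_updates_daily (day, row_count)
SELECT DATE(date_updated), COUNT(*)
FROM record_updates
GROUP BY DATE(date_updated);

-- Full days come from the summary; the partial start day from the base table.
SELECT (SELECT COALESCE(SUM(row_count), 0)
        FROM record_updates_daily
        WHERE day > '2009-10-11')
     + (SELECT COUNT(*)
        FROM record_updates
        WHERE date_updated > '2009-10-11 15:33:22'
          AND date_updated < '2009-10-12') AS total;
```

The summary lookup touches at most a few hundred rows per year instead of millions, which is where the speedup comes from.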

看我几分像从前
#3 · 2020-02-26 07:01

There is no primary key on your table. It's possible that in this case MySQL always scans the whole table. Having a primary key is never a bad idea.
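Adding one could look like this; the column name `id` is my choice, not from the original table:

```sql
-- Adds a surrogate primary key. On an 11M-row table this rebuilds the
-- table, so run it during a maintenance window.
ALTER TABLE record_updates
    ADD COLUMN id INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
```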

走好不送
#4 · 2020-02-26 07:03

It depends on a few things, but something like this may work for you.

I'm assuming this count never changes, since the cutoff date is in the past, so the result can be cached somehow.

count1 = "select count(*) from record_updates where date_updated <= '2009-10-11 15:33:22'"

count2 = "select table_rows from information_schema.`TABLES` where table_schema = 'marctoxctransformation' and TABLE_NAME = 'record_updates'"

count2 gives you the total count of records in the table. For an InnoDB table this is an approximate value, so BEWARE; it depends on the engine.

Your answer:

result = count2 - count1
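The two pieces can also be combined into one statement. This is just a sketch of the same idea, and the result is approximate on InnoDB because `table_rows` is an estimate:

```sql
-- Estimated total minus the exact count of older rows.
SELECT (SELECT table_rows
        FROM information_schema.tables
        WHERE table_schema = 'marctoxctransformation'
          AND table_name   = 'record_updates')
     - (SELECT COUNT(*)
        FROM record_updates
        WHERE date_updated <= '2009-10-11 15:33:22') AS approx_count;
```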

干净又极端
#5 · 2020-02-26 07:04

If the historical data is not volatile, create a summary table. There are various approaches, the one to choose will depend on how your table is updated, and how often.

For example, assuming old data is rarely or never changed but recent data is, create a monthly summary table, populated for the previous month at the end of each month (e.g. insert January's count at the end of February). Once you have your summary table, you can add up the full months and the partial months at the beginning and end of the range:

select count(*) 
from record_updates 
where date_updated >= '2009-10-11 15:33:22' and date_updated < '2009-11-01';

select count(*) 
from record_updates 
where date_updated >= '2010-12-01';

select sum(row_count) 
from record_updates_summary 
where date_updated >= '2009-11-01' and date_updated < '2010-12-01';

I've left it split out above for clarity but you can do this in one query:

select ( select count(*)
         from record_updates 
         where date_updated >= '2010-12-01'
               or ( date_updated >= '2009-10-11 15:33:22' 
                    and date_updated < '2009-11-01' ) ) +
       ( select sum(row_count) 
         from record_updates_summary 
         where date_updated >= '2009-11-01' and date_updated < '2010-12-01' );

You can adapt this approach to base the summary table on whole weeks or whole days.
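The summary table itself might look like the sketch below. The name `record_updates_summary` and the `row_count`/`date_updated` columns follow the queries above; the population step is one way to do it, assuming closed months never change:

```sql
CREATE TABLE record_updates_summary (
    date_updated DATE NOT NULL PRIMARY KEY,  -- first day of the month
    row_count    INT  NOT NULL
);

-- Populate one closed month, e.g. November 2009, at month end.
INSERT INTO record_updates_summary (date_updated, row_count)
SELECT '2009-11-01', COUNT(*)
FROM record_updates
WHERE date_updated >= '2009-11-01' AND date_updated < '2009-12-01';
```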

一夜七次
#6 · 2020-02-26 07:08

Instead of doing count(*), try doing count(1), like this:

select count(1) from record_updates where date_updated > '2009-10-11 15:33:22'

I took a DB2 class once, and I remember the instructor mentioning that count(1) can be technically faster than count(*) when you just want the number of rows in the table regardless of the data. Let me know if it makes a difference.

NOTE: Here's a link you might be interested to read: http://www.mysqlperformanceblog.com/2007/04/10/count-vs-countcol/

SAY GOODBYE
#7 · 2020-02-26 07:11

MySQL doesn't "optimize" count(*) queries on InnoDB because of versioning. Every entry in the index has to be visited and checked to make sure its version is visible to the current transaction (e.g., not an uncommitted change). Since any of your data can be modified across the database, ranged selects and caching won't work. However, you may be able to get by using triggers. There are two methods to this madness.

The first method risks slowing down your transactions, since none of them can truly run in parallel: use AFTER INSERT and AFTER DELETE triggers to increment and decrement a counter table. The second trick: have those insert/delete triggers call a stored procedure which feeds an external program that similarly adjusts values up and down, or have them act on a non-transactional table. Beware that in the event of a rollback, either variant will leave inaccurate numbers.
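A sketch of the first method, using a hypothetical one-row counter table (the names are my own):

```sql
-- One-row counter table, seeded with the current count.
CREATE TABLE record_updates_counter (n BIGINT NOT NULL);
INSERT INTO record_updates_counter
SELECT COUNT(*) FROM record_updates;

-- Keep it in sync on every insert and delete. Note that every writing
-- transaction now contends for the same counter row, which is exactly
-- the serialization risk described above.
CREATE TRIGGER record_updates_ai AFTER INSERT ON record_updates
FOR EACH ROW UPDATE record_updates_counter SET n = n + 1;

CREATE TRIGGER record_updates_ad AFTER DELETE ON record_updates
FOR EACH ROW UPDATE record_updates_counter SET n = n - 1;
```

Reading the count then becomes `SELECT n FROM record_updates_counter`, a single-row lookup.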

If you don't need an exact number, check out this query:

select table_rows from information_schema.tables
where table_name = 'foo';

Example difference: count(*): 1876668, table_rows: 1899004. The table_rows value is an estimate, and you may get a different number each time even if your database doesn't change.

For my own curiosity: do you need exact numbers that are updated every second? If so, why?
