select count(*) from mytable;
select count(table_id) from mytable; -- table_id is the primary key
Both queries run slowly on a table with 10 million rows.
I am wondering why, since wouldn't it be easy for MySQL to keep a counter that gets updated on every insert, update and delete?
And is there a way to improve these queries? I used EXPLAIN, but it didn't help much.
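To illustrate the kind of bookkeeping I mean, here is a rough sketch of a counter table kept in sync by triggers (the table and trigger names are made up, and this is not something InnoDB does internally):

-- Hypothetical counter table, seeded once and then maintained by triggers.
CREATE TABLE mytable_rowcount (n BIGINT NOT NULL);
INSERT INTO mytable_rowcount SELECT COUNT(*) FROM mytable;

CREATE TRIGGER mytable_ai AFTER INSERT ON mytable
FOR EACH ROW UPDATE mytable_rowcount SET n = n + 1;

CREATE TRIGGER mytable_ad AFTER DELETE ON mytable
FOR EACH ROW UPDATE mytable_rowcount SET n = n - 1;

-- Reading the count becomes a single-row lookup instead of a scan:
SELECT n FROM mytable_rowcount;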
Take a look at the following blog posts:
1) COUNT(*) vs COUNT(col) (a short example follows the list)
2) Easy MySQL Performance Tips
3) Fast count(*) for InnoDB
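To spell out the point of the first link: COUNT(*) counts rows, while COUNT(col) counts only rows where col IS NOT NULL, so on a NOT NULL column such as your primary key table_id they return the same number. A quick illustration (nullable_col is a made-up column name):

-- Both counts match on a primary key; on a nullable column they can differ.
SELECT COUNT(*), COUNT(table_id), COUNT(nullable_col) FROM mytable;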
btw, which engine do you use?
EDITED: About a technique to speed up the count when you only need to know whether a certain number of rows exists. (Sorry, my earlier query was just wrong.) So, when you only need to know whether there are, say, at least 300 rows matching a specific condition, you can try a subquery:
First you shrink the result set, and then you count it. It will still scan the result set, but LIMIT caps how many rows are read (once more, this works when the question to the DB is "are there more or fewer than 300 rows?"), and if the DB contains more than 300 rows satisfying the condition, that query is much faster. A generic version of the check is sketched after the test results below.
Test results (my table has 6.7 million rows):
1)
SELECT count(*) FROM _table_ WHERE START_DATE > '2011-02-01'
returns 4.2 million in 65.4 seconds
2)
SELECT count(*) FROM ( select 1 FROM _table_ WHERE START_DATE > '2011-02-01' LIMIT 100 ) AS result
returns 100 in 0.03 seconds
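When the real question is "are there at least 300 matching rows?", the same trick can be written as a boolean check; this is just a sketch reusing the table and condition from the test above, with the threshold in both the LIMIT and the comparison:

SELECT COUNT(*) >= 300 AS has_at_least_300
FROM ( SELECT 1 FROM _table_ WHERE START_DATE > '2011-02-01' LIMIT 300 ) AS result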
Below is the result of EXPLAIN for these queries, to see what is going on:
As cherouvim pointed out in the comments, it depends on the storage engine.
MyISAM does keep a count of the table rows, and can keep it accurate because the only lock MyISAM supports is a table-level lock.
InnoDB, however, supports transactions and needs to do a table scan to count the rows.
http://www.mysqlperformanceblog.com/2006/12/01/count-for-innodb-tables/
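If an approximate number is good enough for an InnoDB table, you can read the optimizer's estimate instead of scanning; note it is only an estimate and can be off by a sizeable margin ('mydb' below is a placeholder for your schema name):

SHOW TABLE STATUS LIKE 'mytable';  -- the Rows column is an estimate for InnoDB

SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';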