Query stays in “statistics” state for a long time in Go

Published 2019-09-19 17:21

Question:

Some queries stay in the "statistics" state for a long time in my Google Cloud SQL database (MySQL 5.5).

Following the advice in this post, I changed optimizer_search_depth to 0, but some queries still spend a long time in the statistics state.

> select @@optimizer_search_depth;
+--------------------------+
| @@optimizer_search_depth |
+--------------------------+
|                        0 |
+--------------------------+

> show processlist;
+----+------+-----------+------+---------+------+------------+-----------------+
| Id | User | Host      | db   | Command | Time | State      | Info            |
+----+------+-----------+------+---------+------+------------+-----------------+
|  4 | root | localhost | mydb | Query   |   84 | statistics | SELECT * FROM ..|

The table and row count are as below.

> describe mytable;
+----------+---------------+------+-----+---------------------+-----------------------------+
| Field    | Type          | Null | Key | Default             | Extra                       |
+----------+---------------+------+-----+---------------------+-----------------------------+
| col1     | varchar(50)   | NO   | PRI | NULL                |                             |
| col2     | varchar(50)   | NO   | PRI | NULL                |                             |
| col3     | decimal(15,4) | NO   |     | NULL                |                             |
| col4     | decimal(15,4) | NO   |     | NULL                |                             |
| col5     | decimal(15,4) | NO   |     | NULL                |                             |
| col6     | decimal(15,4) | NO   |     | NULL                |                             |
| col7     | varchar(50)   | YES  |     | NULL                |                             |
| col8     | decimal(15,4) | NO   |     | NULL                |                             |
| col9     | decimal(15,4) | NO   |     | NULL                |                             |
| col10    | varchar(8)    | NO   |     | NULL                |                             |
| col11    | varchar(30)   | NO   |     | NULL                |                             |
| col12    | timestamp     | NO   |     | 0000-00-00 00:00:00 |                             |
| col13    | timestamp     | NO   |     | CURRENT_TIMESTAMP   | on update CURRENT_TIMESTAMP |
| col14    | int(11)       | NO   |     | NULL                |                             |
+----------+---------------+------+-----+---------------------+-----------------------------+

> select count(*) from mytable;
+----------+
| count(*) |
+----------+
|   852304 |
+----------+

The query looks like this:

SELECT * FROM mytable WHERE 
((col1 = 'FFP60003' AND col2 = '360' ) OR 
(col1 = 'FIU51001' AND col2 = '210' ) OR 
(col1 = 'FIU51003' AND col2 = '360' ) OR 
(col1 = 'FPC60001' AND col2 = '240' ) OR 
(col1 = 'SLU50006' AND col2 = '360' ) OR 
... (about 2000-3000 more OR'd pairs) ...
(col1 = '89969' AND col2 = '270' ) ) AND col14 > 0

As shown above, the query is very long. I think this is the cause of the long statistics state, but my app needs this type of query.

How can I avoid the long statistics state?

[Update]

SHOW CREATE TABLE and SHOW VARIABLES LIKE '%buffer%' are as follows.

> show create table mytable\G
*************************** 1. row ***************************
       Table: mytable
Create Table: CREATE TABLE `mytable` (
  `col1` varchar(50) NOT NULL COMMENT 'col1',
  `col2` varchar(50) NOT NULL COMMENT 'col2',
  `col3` decimal(15,4) NOT NULL COMMENT 'col3',
  `col4` decimal(15,4) NOT NULL COMMENT 'col4',
  `col5` decimal(15,4) NOT NULL COMMENT 'col5',
  `col6` decimal(15,4) NOT NULL COMMENT 'col6',
  `col7` varchar(50) DEFAULT NULL COMMENT 'col7',
  `col8` decimal(15,4) NOT NULL COMMENT 'col8',
  `col9` decimal(15,4) NOT NULL COMMENT 'col9',
  `col10` varchar(8) NOT NULL COMMENT 'col10',
  `col11` varchar(30) NOT NULL COMMENT 'col11',
  `col12` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' COMMENT 'col12',
  `col13` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'col13',
  `col14` int(11) NOT NULL COMMENT 'col14',
  PRIMARY KEY (`col1`,`col2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

> SHOW VARIABLES LIKE '%buffer%';
+------------------------------+-----------+
| Variable_name                | Value     |
+------------------------------+-----------+
| bulk_insert_buffer_size      | 8388608   |
| innodb_buffer_pool_instances | 1         |
| innodb_buffer_pool_size      | 805306368 |
| innodb_change_buffering      | all       |
| innodb_log_buffer_size       | 8388608   |
| join_buffer_size             | 131072    |
| key_buffer_size              | 8388608   |
| myisam_sort_buffer_size      | 8388608   |
| net_buffer_length            | 16384     |
| preload_buffer_size          | 32768     |
| read_buffer_size             | 131072    |
| read_rnd_buffer_size         | 262144    |
| sort_buffer_size             | 2097152   |
| sql_buffer_result            | OFF       |
+------------------------------+-----------+

Answer 1:

On a server with 1GB of RAM, do not set innodb_buffer_pool_size to more than about 200M. Setting it to 800M will cause swapping. MySQL expects its caches to stay in RAM; when they get swapped to disk, performance suffers terribly.

Your table is probably too big to be cached entirely, so a table scan will evict everything else from the buffer pool, making the cache useless, and the query will run at disk speed. Either find a way to avoid queries like that, or get more RAM.
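As a rough sanity check of that claim, you can estimate whether the table fits in a given buffer pool. The average row size below is an assumption for illustration; the real figure comes from SHOW TABLE STATUS (Data_length + Index_length). A back-of-the-envelope Go sketch:

```go
package main

import "fmt"

// tableFitsInPool is a back-of-the-envelope check: does the whole
// table (rows * average on-disk row size) fit in the buffer pool?
func tableFitsInPool(rows, avgRowBytes, poolBytes int64) bool {
	return rows*avgRowBytes <= poolBytes
}

func main() {
	const (
		rows        = 852304            // from SELECT COUNT(*) above
		avgRowBytes = 300               // assumed; includes InnoDB page/row overhead
		recommended = 200 * 1024 * 1024 // ~200M pool suggested for a 1GB server
	)
	fmt.Printf("estimated table size: ~%d MB\n", int64(rows)*avgRowBytes/(1024*1024))
	fmt.Println("fits in a 200M pool:", tableFitsInPool(rows, avgRowBytes, recommended))
}
```

Under this assumed row size the table is roughly 240 MB, larger than the 200M pool the answer recommends for a 1GB server, which is why a full scan would churn the cache.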