MySQL very slow for alter table query

Posted 2019-01-21 06:44

Why is it taking more than an hour simply to add a column to this table? The table has 15M rows, two secondary indexes, and a single-column primary key. The ALTER TABLE query has been in the "copy to tmp table" state for 1 hour 15 minutes now.

ALTER TABLE `frugg`.`item_catalog_map` 
ADD COLUMN `conversion_url` TEXT NULL DEFAULT NULL;

Table:

mysql> describe item_catalog_map;
+------------------------+---------------+------+-----+---------+-------+
| Field                  | Type          | Null | Key | Default | Extra |
+------------------------+---------------+------+-----+---------+-------+
| catalog_unique_item_id | varchar(255)  | NO   | PRI | NULL    |       |
| catalog_id             | int(11)       | YES  | MUL | NULL    |       |
| item_id                | int(11)       | YES  | MUL | NULL    |       |
| price                  | decimal(10,2) | YES  |     | 0.00    |       |
+------------------------+---------------+------+-----+---------+-------+

mysql> show index from item_catalog_map;
+------------------+------------+----------------------+--------------+------------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table            | Non_unique | Key_name             | Seq_in_index | Column_name            | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------------+------------+----------------------+--------------+------------------------+-----------+-------------+----------+--------+------+------------+---------+
| item_catalog_map |          0 | PRIMARY              |            1 | catalog_unique_item_id | A         |    15485115 |     NULL | NULL   |      | BTREE      |         |
| item_catalog_map |          1 | IDX_ACD6184FCC3C66FC |            1 | catalog_id             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
| item_catalog_map |          1 | IDX_ACD6184F126F525E |            1 | item_id                | A         |    15485115 |     NULL | NULL   | YES  | BTREE      |         |
+------------------+------------+----------------------+--------------+------------------------+-----------+-------------+----------+--------+------+------------+---------+

6 answers
兄弟一词,经得起流年.
Answer 2 · 2019-01-21 07:07

MySQL’s ALTER TABLE performance can become a problem with very large tables. MySQL performs most alterations by making an empty table with the desired new structure, inserting all the data from the old table into the new one, and deleting the old table. This can take a very long time, especially if you’re short on memory and the table is large and has lots of indexes. Many people have experience with ALTER TABLE operations that have taken hours or days to complete.
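If you are on MySQL 5.6 or later with InnoDB, it may be worth explicitly asking the server for an online, in-place operation. This is a sketch under that version assumption; with these clauses the server raises an error instead of silently falling back to a full table copy:

```sql
-- Assumption: MySQL 5.6+ with InnoDB online DDL support.
-- ALGORITHM=INPLACE avoids the copy-to-tmp-table rebuild where possible;
-- LOCK=NONE keeps the table readable and writable during the change.
ALTER TABLE frugg.item_catalog_map
  ADD COLUMN conversion_url TEXT NULL DEFAULT NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
```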

If you do need to proceed with the ALTER TABLE anyway, online schema change tools may help.

Melony?
Answer 3 · 2019-01-21 07:08

If you alter several columns at once it might take more time than you expect. Instead, try altering one column at a time. I can't explain the technical reason, but I ran into this problem and this got me through it.
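As a sketch of this suggestion (table and column names here are hypothetical examples, not from the question):

```sql
-- Instead of one combined statement:
--   ALTER TABLE t ADD COLUMN a TEXT, ADD COLUMN b TEXT;
-- the answer above reports better results doing one column per statement:
ALTER TABLE t ADD COLUMN a TEXT NULL;
ALTER TABLE t ADD COLUMN b TEXT NULL;
```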

Anthone
Answer 4 · 2019-01-21 07:20

To minimize locking of the large table I want to alter, I do the following:

  • Create a new empty table based on the existing table, and alter this new empty table.
  • Take a mysqldump of the large table with one complete INSERT statement per row (switches -c and --skip-extended-insert).
  • Import this dump into a different (empty) database, into an empty copy of the large table created there, then rename that copy so a dump of it will target the new table name.
  • Take a mysqldump of this renamed table from the other database and import it into the original database, where it fills the already-altered new table.
  • Swap the old and new tables with RENAME TABLE in the original database.

    # 1. Create an empty, altered copy of the table in the original database.
    mysql> create table DATABASE_NAME.LARGE_TABLE_NEW like DATABASE_NAME.LARGE_TABLE;
    mysql> alter table DATABASE_NAME.LARGE_TABLE_NEW add column NEW_COLUMN_NAME COL_DATA_TYPE(SIZE) default null;
    
    # 2. Dump the data, one complete INSERT statement per row.
    $ mysqldump -c --no-create-info --skip-extended-insert --no-create-db -u root -p DATABASE_NAME LARGE_TABLE > LARGE_TABLE.sql
    
    # 3. Load the dump into a scratch database, then rename the table there
    #    so a dump of it will reference LARGE_TABLE_NEW.
    mysql> create table test.LARGE_TABLE like DATABASE_NAME.LARGE_TABLE;
    $ mysql -u root -p -D test < LARGE_TABLE.sql
    mysql> rename table test.LARGE_TABLE to test.LARGE_TABLE_NEW;
    
    # 4. Dump the scratch copy and load it into the altered table.
    $ mysqldump -c --no-create-info --skip-extended-insert --no-create-db -u root -p test LARGE_TABLE_NEW > LARGE_TABLE_NEW.sql
    $ mysql -u root -p -D DATABASE_NAME < LARGE_TABLE_NEW.sql
    
    # 5. Swap the old and new tables.
    mysql> rename table DATABASE_NAME.LARGE_TABLE to DATABASE_NAME.LARGE_TABLE_OLD, DATABASE_NAME.LARGE_TABLE_NEW to DATABASE_NAME.LARGE_TABLE;
    
    
神经病院院长
Answer 5 · 2019-01-21 07:24

The Percona tools are a lifesaver for this kind of work with big tables.

http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html

The tool basically does the following:

  1. create duplicate table
  2. create trigger to sync tables
  3. bulk copy data
  4. verify
  5. swap tables

It takes a while, but that hardly matters, because it means you can change columns without downtime.
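For the table in the question, an invocation might look like the sketch below; the connection defaults are assumptions, and `--dry-run` should be swapped for `--execute` only once the dry run looks sane:

```shell
pt-online-schema-change \
  --alter "ADD COLUMN conversion_url TEXT NULL DEFAULT NULL" \
  D=frugg,t=item_catalog_map \
  --dry-run
```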

Summer. ? 凉城
Answer 6 · 2019-01-21 07:24

Your table has 15 million rows, which is substantial. The ALTER TABLE involves copying all the data from the table and rebuilding the indexes. As a first estimate, try copying the table's data file (item_catalog_map.MYD if it's MyISAM) at the filesystem level and see how long that takes: the ALTER TABLE will take at least that long.
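To put a number on that, you can time a raw file copy. The real path would be something like /var/lib/mysql/frugg/item_catalog_map.MYD (MyISAM under the default datadir — both assumptions); the sketch below uses a synthetic file so it is self-contained:

```shell
# Stand-in for the real .MYD file so the demo runs anywhere.
dd if=/dev/zero of=/tmp/fake.MYD bs=1M count=8 2>/dev/null
# `time` prints the copy duration to stderr; for the real data file this
# gives a rough lower bound on how long the ALTER TABLE will take.
time cp /tmp/fake.MYD /tmp/fake_copy.MYD
rm /tmp/fake.MYD /tmp/fake_copy.MYD
```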

"骚年 ilove
Answer 7 · 2019-01-21 07:25

If you don't care about downtime, my suggestion is to use three separate ALTER TABLE statements. The first statement drops all existing secondary indexes. The second statement applies all column-related changes. The last statement re-adds the dropped secondary indexes and applies any other index changes.

Two more tips:

  1. Before applying the index changes, execute the two following statements, and set the values back to 1 after the index changes finish.

    SET unique_checks=0;
    SET foreign_key_checks=0;
    
  2. When creating multiple secondary indexes, put them in one ALTER TABLE statement rather than in multiple separate ALTER TABLE statements.
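Put together, the three-statement approach for the question's table might look like this sketch (the index definitions are reconstructed from the SHOW INDEX output above, so treat the exact DDL as an assumption):

```sql
-- 1. Drop the secondary indexes (names from SHOW INDEX above).
ALTER TABLE frugg.item_catalog_map
  DROP INDEX IDX_ACD6184FCC3C66FC,
  DROP INDEX IDX_ACD6184F126F525E;

-- 2. Apply the column change on the slimmed-down table.
ALTER TABLE frugg.item_catalog_map
  ADD COLUMN conversion_url TEXT NULL DEFAULT NULL;

-- 3. Re-create all secondary indexes in a single ALTER TABLE,
--    with the consistency checks temporarily disabled.
SET unique_checks = 0;
SET foreign_key_checks = 0;
ALTER TABLE frugg.item_catalog_map
  ADD INDEX IDX_ACD6184FCC3C66FC (catalog_id),
  ADD INDEX IDX_ACD6184F126F525E (item_id);
SET unique_checks = 1;
SET foreign_key_checks = 1;
```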

A benchmark chart (not reproduced here) showed the performance difference: approach 1 was your approach and approach 2 was mine. For a 50M-row table, approach 2 took about 3.47% of the time of approach 1. This solution only works with MySQL (>= 5.5) and the InnoDB engine.
