How to optimize MySQL to insert millions of rows?

Published 2019-01-22 04:43

Question:

I need to insert millions of rows into a MySQL database (InnoDB engine). Inserts become very slow once the tables get large; almost all of the time is spent on the INSERT queries. Does anybody know how to optimize this?

Answer 1:

To import a large amount of data into InnoDB:

  1. Set in the MySQL configuration (these values trade crash safety for import speed, so restore the defaults once the import finishes; see the my.cnf sketch after this list):

    • innodb_doublewrite = 0
    • innodb_buffer_pool_size = 50%+ of system memory
    • innodb_log_file_size = 512M
    • disable the binary log (comment out any log-bin line)
    • innodb_support_xa = 0 (deprecated in MySQL 5.7.10 and removed in 8.0; skip it on newer versions)
    • innodb_flush_log_at_trx_commit = 0
  2. Add right after the transaction starts (the full session is sketched after this list):

    SET FOREIGN_KEY_CHECKS = 0;

    SET UNIQUE_CHECKS = 0;

    SET AUTOCOMMIT = 0;

  3. Set these back right before the transaction ends:

    SET UNIQUE_CHECKS = 1;

    SET FOREIGN_KEY_CHECKS = 1;
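
A my.cnf sketch for step 1 (the 4G buffer pool here assumes a dedicated server with roughly 8 GB of RAM; adjust to your hardware):

    [mysqld]
    # These trade crash safety for bulk-load speed; revert them after the import.
    innodb_doublewrite = 0
    innodb_flush_log_at_trx_commit = 0
    # A large buffer pool and redo log reduce disk I/O during the load.
    innodb_buffer_pool_size = 4G
    innodb_log_file_size = 512M
    # Disable binary logging: comment out any log-bin line
    # (MySQL 8.0 also accepts skip-log-bin).
    # innodb_support_xa = 0   # only on versions before 5.7.10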

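Putting steps 2 and 3 together, a bulk-load session might look like the following sketch (my_table and its columns are placeholders for your own schema):

    SET FOREIGN_KEY_CHECKS = 0;
    SET UNIQUE_CHECKS = 0;
    SET AUTOCOMMIT = 0;   -- everything below now runs in one transaction

    -- Bulk INSERTs go here; multi-row VALUES lists keep the statement count down.
    INSERT INTO my_table (id, payload) VALUES (1, 'a'), (2, 'b'), (3, 'c');
    -- ... millions more rows ...

    SET UNIQUE_CHECKS = 1;
    SET FOREIGN_KEY_CHECKS = 1;
    COMMIT;   -- required because autocommit is off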


Answer 2:

If you're talking about a large number of individual INSERT statements, look into transactions. Most (if not all) languages that talk to SQL databases support them, and they speed up anything that writes to the DB. As an added bonus, if something goes wrong you can roll back the changes.
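
For example, wrapping a batch of inserts in a single transaction (a minimal sketch; my_table and its columns are placeholder names):

    START TRANSACTION;
    INSERT INTO my_table (id, payload) VALUES (1, 'a');
    INSERT INTO my_table (id, payload) VALUES (2, 'b');
    -- ... the rest of the batch ...
    COMMIT;   -- the log is flushed once per COMMIT instead of once per row

With the default settings, every autocommitted INSERT forces its own log flush, so grouping thousands of rows per COMMIT can speed up the load dramatically.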