MySQL data import was interrupted, how to resume?

Published 2019-06-08 13:12

Question:

I was importing a large MySQL database using

mysql -uroot -ppassword dbName

Partway through, the import failed with "MySQL server has gone away", possibly due to a timeout after a few days...
Is there a way to resume it, or am I out of luck and do I need to delete the existing database and reimport?

Answer 1:

It might help to use the "--ignore" option on the command line to "resume" the import.

The semantics are that it skips any rows that duplicate an already-imported row on a unique key, and only imports what's not yet there. Note that this option belongs to mysqlimport, which loads per-table data files rather than SQL dump files.

Here's the MySQL documentation for the ignore option: http://dev.mysql.com/doc/refman/5.0/en/mysqlimport.html#option_mysqlimport_ignore



Answer 2:

I rewrite the dump to use INSERT IGNORE INTO on the fly, by stream editing (sed) my dump file like this:

nice gunzip < dumpfile.sql.gz | sed -e "s|^INSERT INTO |INSERT IGNORE INTO |g" | nice mysql -uroot -p"password" DBName
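A quick, server-free way to sanity-check that sed rewrite (the file name here is hypothetical): only lines beginning with "INSERT INTO" are touched, so DDL statements pass through unchanged.

```shell
# Feed the sed expression a two-line sample: one INSERT, one CREATE.
printf 'INSERT INTO t VALUES (1);\nCREATE TABLE t (x INT);\n' \
  | sed -e 's|^INSERT INTO |INSERT IGNORE INTO |g' > rewritten.sql
cat rewritten.sql
# The INSERT line gains IGNORE; the CREATE TABLE line is unchanged.
```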


Answer 3:

If you know the last insertion point the import reached, split your mysqldump file just before that point and replace INSERT with INSERT IGNORE in the remainder. You probably don't want to INSERT IGNORE the whole dump, as every row is still attempted (and rejected on duplicate keys), which slows the restore.
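A toy sketch of that split-and-rewrite, with hypothetical names: dump.sql stands in for the real dump, and line 2 for the point where the failed run stopped.

```shell
# Stand-in dump: pretend the first INSERT completed before the crash.
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\nINSERT INTO t VALUES (3);\n' > dump.sql

# Keep only the tail and rewrite INSERT -> INSERT IGNORE there, so any
# rows that did make it in are skipped instead of aborting the load.
tail -n +2 dump.sql | sed 's/^INSERT INTO /INSERT IGNORE INTO /' > resume.sql
cat resume.sql
```

Note that INSERT IGNORE only skips rows that collide on a primary or unique key; if the table has no such key, replaying the tail will duplicate any rows that were already inserted.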

Also, "MySQL server has gone away" can be indicative of a statement exceeding the max_allowed_packet size.
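If max_allowed_packet is the culprit, raising it before retrying may help. A minimal sketch as a my.cnf fragment; 256M is an arbitrary example value, and the config file's location varies by platform:

```ini
# my.cnf fragment (server side): raise the limit so large multi-row
# INSERT statements in the dump are not rejected. 256M is illustrative.
[mysqld]
max_allowed_packet = 256M
```

The same variable can also be changed at runtime with SET GLOBAL max_allowed_packet = ... by a suitably privileged user; a restart is only needed for the config-file route.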



Answer 4:

"MySQL server has gone away" is usually indicative of the server crashing. Check your MySQL logs, typically /var/log/mysqld.log, or if they are not there, find the log location with:

SHOW GLOBAL VARIABLES LIKE 'log_error';

I've never had a client disconnect, even in week-long runs over the network. It looks like you're connecting locally, so a disconnect is very unlikely.

If you want to resume, you can do the following:

  • Check the error log to see the cause of the error, and fix that first
  • Grep the dump file for table boundaries, with line numbers: grep -in 'DROP TABLE' database.sql
  • Compare the tables already restored against the grep results; note the line number of the last match
  • Create a new file starting from the last matched table (inclusive): tail --lines=+10000 database.sql > resume.sql (10000 being that line number)
  • OR, as someone else stated, use the --ignore-lines option in mysqlimport
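The grep/tail steps above can be sketched end-to-end on a toy dump; database.sql and the line numbers here are stand-ins for the real file.

```shell
# Build a tiny stand-in for the real dump: two tables, t1 and t2.
printf 'DROP TABLE IF EXISTS `t1`;\nINSERT INTO t1 VALUES (1);\nDROP TABLE IF EXISTS `t2`;\nINSERT INTO t2 VALUES (2);\n' > database.sql

# Find each table's starting line; suppose t1 was fully restored,
# so we resume from the last match (line 3, inclusive).
grep -n 'DROP TABLE' database.sql

# Everything from that line onward becomes the resume file.
tail --lines=+3 database.sql > resume.sql
cat resume.sql
# Replay only the remainder (requires a running server):
#   mysql -uroot -p dbName < resume.sql
```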

Hope this helps