Question:
This is a pretty simple problem. Inserting data into the table normally works fine, except that occasionally an insert query takes a few seconds. (I am not trying to bulk insert data.) So I set up a simulation of the insert process to find out why the insert query occasionally takes more than 2 seconds to run. Joshua suggested that the index file may be being adjusted; I removed the id (primary key field), but the delay still happens.
I have a MyISAM table: daniel_test_insert
(this table starts completely empty):
create table if not exists daniel_test_insert (
    id int unsigned auto_increment not null,
    value_str varchar(255) not null default '',
    value_int int unsigned default 0 not null,
    primary key (id)
);
I insert data into it, and sometimes an insert query takes > 2 seconds to run. There are no reads on this table - only writes, in serial, by a single-threaded program.
I ran the exact same query 100,000 times to find out why the query occasionally takes a long time. So far, it appears to be a random occurrence.
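In case anyone wants to reproduce a similar repeated-insert loop directly from the mysql client, here is a rough sketch using a stored procedure. This is not the original test program (which was a separate single-threaded client); daniel_insert_loop is just an illustrative name.
-- Hypothetical reproduction sketch, assuming MySQL 5.0+ stored procedures.
drop procedure if exists daniel_insert_loop;
delimiter //
create procedure daniel_insert_loop(in n int unsigned)
begin
    declare i int unsigned default 0;
    while i < n do
        insert into daniel_test_insert
            set value_int = 12345,
                value_str = 'afjdaldjsf aljsdfl ajsdfljadfjalsdj fajd as f';
        set i = i + 1;
    end while;
end //
delimiter ;
call daniel_insert_loop(100000);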
This query for example took 4.194 seconds (a very long time for an insert):
Query: INSERT INTO daniel_test_insert SET value_int=12345, value_str='afjdaldjsf aljsdfl ajsdfljadfjalsdj fajd as f' - ran for 4.194 seconds
status | duration | cpu_user | cpu_system | context_voluntary | context_involuntary | page_faults_minor
starting | 0.000042 | 0.000000 | 0.000000 | 0 | 0 | 0
checking permissions | 0.000024 | 0.000000 | 0.000000 | 0 | 0 | 0
Opening tables | 0.000024 | 0.001000 | 0.000000 | 0 | 0 | 0
System lock | 0.000022 | 0.000000 | 0.000000 | 0 | 0 | 0
Table lock | 0.000020 | 0.000000 | 0.000000 | 0 | 0 | 0
init | 0.000029 | 0.000000 | 0.000000 | 1 | 0 | 0
update | 4.067331 | 12.151152 | 5.298194 | 204894 | 18806 | 477995
end | 0.000094 | 0.000000 | 0.000000 | 8 | 0 | 0
query end | 0.000033 | 0.000000 | 0.000000 | 1 | 0 | 0
freeing items | 0.000030 | 0.000000 | 0.000000 | 1 | 0 | 0
closing tables | 0.125736 | 0.278958 | 0.072989 | 4294 | 604 | 2301
logging slow query | 0.000099 | 0.000000 | 0.000000 | 1 | 0 | 0
logging slow query | 0.000102 | 0.000000 | 0.000000 | 7 | 0 | 0
cleaning up | 0.000035 | 0.000000 | 0.000000 | 7 | 0 | 0
(This is an abbreviated version of the SHOW PROFILE output; I threw out the columns that were all zero.)
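For anyone who wants to reproduce this kind of per-stage breakdown, here is a rough sketch of the profiling commands that produce output like the above (assuming MySQL 5.0.37+ and the mysql command-line client; the query number 1 is illustrative):
set profiling = 1;
insert into daniel_test_insert set value_int = 12345, value_str = 'test row';
show profiles;  -- lists recent statements with their query ids
show profile cpu, context switches, page faults for query 1;
set profiling = 0;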
Notice that the update step has an enormous number of context switches and minor page faults. Opened_tables increases by about 1 every 10 seconds on this database, so we are not running out of table_cache space.
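For reference, the Opened_tables figure and the table cache limit can be checked with something like this (table_cache is the 5.0-era variable name; later versions call it table_open_cache):
show global status like 'Opened_tables';
show variables like 'table_cache';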
Stats:
- MySQL 5.0.89
- Hardware: 32 GB of RAM / 8 cores @ 2.66 GHz; RAID 10 SCSI hard disks (SCSI II???)
- I have had the hard drives and RAID controller queried: no errors are being reported.
- CPUs are about 50% idle.
- iostat -x 5 reports less than 10% utilization for the hard disks.
- top reports a 1-minute load average of about 10 (normal for our DB machine).
- Swap space has 156k used (32 GB of RAM).
I'm at a loss to find out what is causing this performance lag. This does NOT happen on our low-load slaves, only on our high load master. This also happens with memory and innodb tables. Does anyone have any suggestions? (This is a production system, so nothing exotic!)
Answer 1:
I have noticed the same phenomenon on my systems. Queries which normally take a millisecond will suddenly take 1-2 seconds. All of my cases are simple, single-table INSERT/UPDATE/REPLACE statements --- never on SELECTs. No load, locking, or thread build-up is evident.
I had suspected that it's due to clearing out dirty pages, flushing changes to disk, or some hidden mutex, but I have yet to narrow it down.
Also Ruled Out
- Server load -- no correlation with high load
- Engine -- happens with InnoDB/MyISAM/Memory
- MySQL Query Cache -- happens whether it's on or off
- Log rotations -- no correlation in events
The only other observation I have at this point is derived from the fact that I'm running the same DB on multiple machines. I have a heavy read application, so I'm using an environment with replication -- most of the load is on the slaves. I've noticed that even though there is minimal load on the master, the phenomenon occurs more there. Even though I see no locking issues, maybe it's InnoDB/MySQL having trouble with (thread) concurrency? Recall that the updates on the slave will be single threaded.
MySQL Version 5.1.48
Update
I think I have a lead on the problem in my case. On some of my servers, I noticed this phenomenon more than on the others. Looking at what was different between the servers, and tweaking things around, I was led to the MySQL InnoDB system variable innodb_flush_log_at_trx_commit.
I found the doc a bit awkward to read, but innodb_flush_log_at_trx_commit can take the values 1, 2, or 0:
- For 1, the log buffer is flushed to the log file at every commit, and the log file is flushed to disk at every commit.
- For 2, the log buffer is flushed to the log file at every commit, and the log file is flushed to disk approximately every 1-2 seconds.
- For 0, the log buffer is flushed to the log file once per second, and the log file is flushed to disk once per second.
Effectively, in the order (1, 2, 0), as reported and documented, you are supposed to get increasing performance in trade for increased risk.
Having said that, I found that the servers with innodb_flush_log_at_trx_commit=0 were performing worse (i.e. having 10-100 times more "long updates") than the servers with innodb_flush_log_at_trx_commit=2. Moreover, things immediately improved on the bad instances when I switched it to 2 (note you can change it on the fly).
So, my question is, what is yours set to? Note that I'm not blaming this parameter, but rather highlighting that its context is related to this issue.
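For anyone who wants to check or try this, here is a minimal sketch of inspecting the variable and changing it on the fly (assumes a 5.1-era server and the SUPER privilege):
show variables like 'innodb_flush_log_at_trx_commit';
set global innodb_flush_log_at_trx_commit = 2;  -- dynamic, takes effect immediately
-- Also set it in my.cnf if you want the change to survive a restart.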
Answer 2:
I had this problem using InnoDB tables (and InnoDB indexes are even slower to rewrite than MyISAM's).
I suppose you are running multiple other queries on some other tables, so the problem would be that MySQL has to handle disk writes in files that keep getting larger and needs to allocate additional space to those files.
If you use MyISAM tables, I strongly suggest using the
LOAD DATA INFILE 'file-on-disk' INTO TABLE `tablename`
command; MyISAM is sensationally fast with this (even with primary keys), the file can be formatted as CSV, and you can specify the column names (or put NULL as the value for the auto-increment field).
See the MySQL documentation on LOAD DATA INFILE.
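As a rough sketch of what that could look like for the table in the question (the path /tmp/rows.csv is hypothetical, and the MySQL user needs the FILE privilege to read server-side files):
load data infile '/tmp/rows.csv'
into table daniel_test_insert
fields terminated by ','
lines terminated by '\n'
(value_str, value_int);
-- id is omitted from the column list, so auto_increment fills it in;
-- alternatively include it in the file as \N (NULL), as noted above.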
Answer 3:
The first tip I would give you is to disable the autocommit functionality and then commit manually.
LOCK TABLES a WRITE;
-- ... do the INSERTs here ...
UNLOCK TABLES;
This benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally, there would be as many index buffer flushes as there are INSERT statements.
But probably the best you can do, if it is possible in your application, is a bulk insert with one single statement.
This is done via vector binding and it's the fastest way you can go.
Instead of:
"INSERT INTO tableName VALUES ()"
do:
"INSERT INTO tableName VALUES (), (), (), ... , (n)"
But consider this option only if parameter vector binding is possible with the MySQL driver you're using.
Otherwise I would tend toward the first possibility and lock the table for every 1000 inserts. Don't lock it for 100k inserts, because you'll get a buffer overflow.
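As a concrete sketch of both options above (the batch size of 1000 is just the number suggested here, not a tuned value):
-- Option 1: batch the inserts under a table lock (MyISAM).
lock tables daniel_test_insert write;
insert into daniel_test_insert (value_str, value_int) values ('a', 1);
insert into daniel_test_insert (value_str, value_int) values ('b', 2);
-- ... up to ~1000 inserts per locked batch ...
unlock tables;
-- Option 2: one multi-row INSERT statement.
insert into daniel_test_insert (value_str, value_int)
values ('a', 1), ('b', 2), ('c', 3);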
Answer 4:
Can you create one more table with 400 (not null) columns and run your test again? If the number of slow inserts becomes higher, this could indicate that MySQL is wasting time writing your records. (I don't know how it works internally, but it may be allocating more blocks, or moving something around to avoid fragmentation... I really don't know.)
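A rough sketch of what such a test table could look like (the column names are illustrative; extend the pattern out to ~400 columns):
create table if not exists daniel_test_insert_wide (
    id int unsigned auto_increment not null,
    col_001 int unsigned not null default 0,
    col_002 int unsigned not null default 0,
    -- ... repeat the pattern up to col_400 ...
    col_400 int unsigned not null default 0,
    primary key (id)
);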
Answer 5:
We hit exactly the same issue and reported it here:
http://bugs.mysql.com/bug.php?id=62381
We are using 5.1.52 and don't have a solution yet. We may need to turn the query cache (QC) off to avoid this performance hit.
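For anyone who wants to try the same workaround, the query cache can be switched off at runtime with something like this (assumes the SUPER privilege; add the same settings to my.cnf to make them permanent):
show variables like 'query_cache%';
set global query_cache_size = 0;  -- frees the cache memory and disables caching
set global query_cache_type = 0;  -- stops statements from using the cache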
Answer 6:
Read this on MyISAM performance:
http://adminlinux.blogspot.com/2010/05/mysql-allocating-memory-for-caches.html
Search for:
'The MyISAM key block size The key block size is important' (minus the single quotes); this could be what's going on. I think they fixed some of these types of issues in 5.1.
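If you want to check the key cache settings that article discusses, something like this should show them (variable and status names as in 5.0/5.1):
show variables like 'key_buffer_size';
show variables like 'key_cache_block_size';
show global status like 'Key_%';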
Answer 7:
Can you check the stats on the disk subsystem? Is the I/O saturated? This sounds like internal DB work flushing stuff to disk/log.
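One rough way to see whether background flushing coincides with the slow inserts is to watch counters like these while the test runs (InnoDB counters shown; for the MyISAM table in the question, the iostat numbers above are the main disk-level evidence):
show global status like 'Innodb_buffer_pool_pages_dirty';
show global status like 'Innodb_data_pending_fsyncs';
show global status like 'Innodb_os_log_pending_fsyncs';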
Answer 8:
To check whether your disk is behaving badly, if you're on Windows, you can create a batch cmd file that creates 10,000 files:
@echo OFF
FOR /L %%G IN (1, 1, 10000) DO TIME /T > out%%G.txt
Save it in a temp dir as something like test.cmd.
Enable command extensions by running CMD with the /E:ON parameter:
CMD.exe /E:ON
Then run your batch file and see whether the time between the first and the last out file differs by seconds or minutes.
On Unix/Linux you can write a similar shell script.
Answer 9:
By any chance is there an SSD drive in the server? Some SSD drives suffer from 'stutter', which could cause your symptom.
In any case, I would try to find out if the delay is occurring in MySQL or in the disk subsystem.
What OS is your server, and what file system is the MySQL data on?
Answer 10:
We upgraded to MySQL 5.1, and during this event the query cache became an issue with a lot of "Freeing items?" thread states. We then removed the query cache.
Either the upgrade to MySQL 5.1 or removing the query cache resolved this issue.
FYI, to future readers.
-daniel
Answer 11:
If you are doing multiple inserts in one go using a for loop, then take a break after every loop iteration using PHP's sleep() function (time in seconds).