Insert random data into MySQL with PHP for benchmarking

Published 2020-07-29 23:59

Question:

I was wondering if there is a (free) tool for MySQL/PHP benchmarking.

In particular, I would like to insert thousands of rows into the MySQL database and test the application with concurrent queries to see if it holds up. That is, to test the application under worst-case conditions.

I saw some paid tools, but no free or customizable ones.

Any suggestions, or any scripts?

Thanks

Answer 1:

Insert one record into the table.

Then do:

INSERT IGNORE INTO table SELECT FLOOR(RAND()*100000) FROM table;

Then run that line several times. Each run doubles the number of rows in the table (and doubling grows VERY fast). This is a LOT faster than generating the data in PHP or other code. You can change which columns get RAND() values and what range the numbers fall in. It's possible to randomly generate text too, but that is more work.

You can run this code from several terminals at once to test concurrent inserts. The IGNORE will ignore any primary key collisions.
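
For concreteness, here is a minimal sketch of the doubling trick driven from PHP with mysqli; the connection details and the one-column table bench are assumptions made up for the example.

<?php
// Minimal sketch of the doubling trick. The credentials and the
// one-column table `bench` are assumptions, not part of the answer above.
$db = new mysqli('localhost', 'user', 'password', 'test');

// An INT primary key is what lets INSERT IGNORE skip duplicates.
$db->query('CREATE TABLE IF NOT EXISTS bench (n INT PRIMARY KEY)');
$db->query('INSERT IGNORE INTO bench VALUES (FLOOR(RAND()*100000))');

// Each pass roughly doubles the row count. Since duplicate keys are
// dropped, the table can never exceed 100000 distinct values; widen the
// RAND() multiplier if you need more rows.
for ($i = 0; $i < 20; $i++) {
    $db->query('INSERT IGNORE INTO bench SELECT FLOOR(RAND()*100000) FROM bench');
}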



Answer 2:

Make a loop (probably infinite) that keeps inserting data into the database, and take your testing from there.
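
A rough sketch of such a loop, bounded by time rather than truly infinite; the credentials and the one-column table testing are assumptions:

<?php
// Hammer the table for 60 seconds and report the insert rate.
// Credentials and the one-column table `testing` are assumptions.
$db = new mysqli('localhost', 'user', 'password', 'test');
$i = 0;
$stmt = $db->prepare('INSERT INTO testing VALUES (?)');
$stmt->bind_param('i', $i);   // bound by reference, re-read on every execute

$start = microtime(true);
while (microtime(true) - $start < 60) {
    $i++;
    $stmt->execute();
}
printf("%d rows inserted (%.0f rows/s)\n", $i, $i / 60);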



Answer 3:

// insert 1000 sequential rows (the mysql_* functions are removed in PHP 7+; prefer mysqli or PDO)
for ($i = 1; $i <= 1000; $i++) {
    mysql_query("INSERT INTO testing VALUES ('" . $i . "')");
    // do some other testing
}


Answer 4:

for ($i = 1; $i < 5000; $i++) {
    $query = mysql_query("INSERT INTO something VALUES ($i)");
}

Replace something with your table name ;D



Answer 5:

If you want to test concurrency, you will have to run your insert/update statements in parallel.
An easy and very simple way (without going into fork/threads and all that jazz) is to do it in bash, as follows:
1. Create an executable PHP script

#!/usr/bin/php -q
<?php
/*your php code to insert/update/whatever you want to test for concurrency*/
?>

2. Call it within a for loop, appending & so each run goes to the background.

#!/bin/bash
for ((i=0; i<100; i++))
do
    /path/to/my/php/script.sh &   # the & backgrounds each run so they overlap
done
wait   # block until all the background inserts have finished

You can always extend this by creating multiple PHP scripts with various insert/update/select queries and running them through the for loop. Remember to raise i<100 if you want more load, and don't forget to add the & after you call your script. (Of course, you will need to chmod +x myscript.sh first.)
Edit: Added the wait statement; below it you can add any other commands you want to run after flooding your MySQL db.



Answer 6:

A quick search turned up the following page in the MySQL documentation: http://dev.mysql.com/doc/refman/5.0/en/custom-benchmarks.html. It contains the following interesting pointers:

The Open Source Database Benchmark, available at http://osdb.sourceforge.net/.

For example, you can try benchmarking packages such as SysBench and DBT2, available at http://sourceforge.net/projects/sysbench/ and http://osdldbt.sourceforge.net/#dbt2. These packages can bring a system to its knees, so be sure to use them only on your development systems.

For MySQL to be fast you should look into Memcached or Redis to cache your queries. I like Redis a lot, and you can get a free (small) instance thanks to http://redistogo.com. Most of the time it is the READS that are killing your server, not the WRITES, which are usually less frequent. And when WRITES are frequent, losing a little data is often not a big deal: sites with big WRITE rates include Twitter and Facebook, and it is hardly the end of the world if a tweet or a Facebook wall post gets lost. As pointed out above, the READ load is easy to fix with Memcached or Redis.
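
As a rough sketch of that caching pattern, assuming the phpredis extension, a local Redis server, and a made-up users table:

<?php
// Cache-aside sketch: serve the query from Redis, fall back to MySQL on a miss.
// The connection details and the `users` table are assumptions for the example.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$db = new mysqli('localhost', 'user', 'password', 'test');

$key = 'users:all';
$json = $redis->get($key);
if ($json === false) {              // cache miss: hit MySQL once
    $result = $db->query('SELECT id, name FROM users');
    $json = json_encode($result->fetch_all(MYSQLI_ASSOC));
    $redis->setex($key, 60, $json); // keep the cached copy for 60 seconds
}
$rows = json_decode($json, true);   // every READ inside the TTL skips MySQL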

If the WRITES are killing you could look into bulk insert if possible, transactional insert, delayed inserts when not using InnoDB or partitioning. If data is not really critical you could put the queries in memory first and then do bulk insert periodically. This way when you do read from MySQL you would return stale data(could be problem). But then again when you use redis you could easily store all your data in memory, but when your server crashes you can lose data, which could be big problem.