How to implement background/asynchronous write-behind caching in PHP?

Posted 2020-07-10 10:06

I have a particular PHP page that, for various reasons, needs to save ~200 fields to a database: 200 separate insert and/or update statements. The obvious thing to do would be to reduce this number but, for reasons I won't go into, I can't.

I wasn't expecting this problem. Selects seem reasonably performant in MySQL but inserts/updates aren't (it takes about 15-20 seconds to do this update, which is naturally unacceptable). I've written Java/Oracle systems that can happily do thousands of inserts/updates in the same time (in both cases running local databases; MySQL 5 vs OracleXE).

Now in something like Java or .Net I could quite easily do one of the following:

  1. Write the data to an in-memory write-behind cache (i.e. it would know how to persist to the database and could do so asynchronously);
  2. Write the data to an in-memory cache and use the PaaS (Persistence as a Service) model, i.e. a listener on the cache would persist the fields; or
  3. Simply start a background process that could persist the data.

The minimal solution is to have a cache that I can simply update, and which will separately update the database in its own time (i.e. it returns immediately after updating the in-memory cache). This could be either a global cache or a session cache (although a global shared cache does appeal in other ways).
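The closest native equivalent I've found in PHP is to flush the response to the browser and keep executing afterwards. A minimal sketch, assuming PHP-FPM (fastcgi_finish_request() doesn't exist under other SAPIs) and a hypothetical save_fields() helper that runs the ~200 statements:

<?php
$fields = $_POST;       // the ~200 fields to persist

echo 'Saved.';          // whatever the page normally outputs

// PHP-FPM only: send the response and close the connection now;
// the script keeps running afterwards.
fastcgi_finish_request();

save_fields($fields);   // hypothetical helper; runs after the user has their page

That gets the latency off the page, but it isn't a real cache and the work still runs inside the web process.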

Any other solutions to this kind of problem?

10 Answers
The star
#2 · 2020-07-10 10:23

mysql_query('INSERT INTO tableName VALUES(...),(...),(...),(...)');

The multi-row statement above is the better form, but there is another way to improve insert performance. Follow these steps:
1. Create a CSV (comma-separated) or plain text file and write out all the data you want to insert, using your language's file-writing mechanism (like the FileOutputStream class in Java).
2. Use this command:

LOAD DATA INFILE 'data.txt' INTO TABLE table2
  FIELDS TERMINATED BY '\t';

3. The field terminator must match your file: '\t' above is for tab-delimited data; use ',' for a CSV. If the command is still unclear, see the MySQL documentation for LOAD DATA INFILE.
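A rough sketch of the whole round trip in PHP, keeping the thread's mysql_* style (the /tmp path, the assumed columns, and the server's FILE privilege are all assumptions, and the sketch doesn't escape tabs or newlines inside the data):

<?php
// 1. Write the rows out as a tab-delimited file.
$rows = array(/* arrays of column values */);
$fh = fopen('/tmp/data.txt', 'w');
foreach ($rows as $row) {
    fwrite($fh, implode("\t", $row) . "\n");
}
fclose($fh);

// 2. Bulk-load the file in a single statement.
mysql_query("LOAD DATA INFILE '/tmp/data.txt' INTO TABLE table2
             FIELDS TERMINATED BY '\\t'");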

疯言疯语
#3 · 2020-07-10 10:23

Consider this:

mysql_query('START TRANSACTION');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('COMMIT');

With autocommit on, every INSERT is its own transaction and gets flushed to disk individually; wrapping them all in one transaction means a single flush at COMMIT.
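For what it's worth, the same pattern with PDO and a prepared statement (the DSN, credentials, and column names here are placeholders):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
$stmt = $pdo->prepare('INSERT INTO tableName (col1, col2) VALUES (?, ?)');
foreach ($rows as $row) {
    $stmt->execute(array($row['col1'], $row['col2']));  // each row reuses the parsed statement
}
$pdo->commit();  // one flush here instead of one per INSERT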
我只想做你的唯一
#4 · 2020-07-10 10:27

You can update your local cache (hopefully memcached) and then push the write requests through beanstalkd.
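A minimal sketch of that split, assuming the pecl memcached extension and the classic Pheanstalk client library (the 'db-writes' tube and the job format are made up, and a separate worker process is assumed to reserve jobs from the tube and run the actual SQL):

<?php
require_once 'pheanstalk_init.php';  // Pheanstalk's classic loader

// 1. Update the cache so reads see the new values immediately.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
foreach ($fields as $name => $value) {
    $cache->set('field:' . $name, $value);
}

// 2. Queue the writes for a background worker and return at once.
$queue = new Pheanstalk('127.0.0.1');
$queue->useTube('db-writes')->put(json_encode($fields));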

【Aperson】
#5 · 2020-07-10 10:27

You can use cURL with PHP to do asynchronous database manipulations.

One possible approach is to fork each query into a separate thread, but PHP does not support threads. The PCNTL functions can fork a process, though I find them a bit tricky to use. I prefer this other approach to forking off asynchronous operations.

See this post:

http://gonzalo123.wordpress.com/2010/10/11/speed-up-php-scripts-with-asynchronous-database-queries/
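The gist of the trick, as a sketch rather than the exact code from that post (worker.php is a hypothetical endpoint that performs the actual writes):

<?php
// Fire-and-forget: kick off the worker script over HTTP and stop waiting
// almost immediately. worker.php should call ignore_user_abort(true) so it
// keeps running after we hang up on it.
$ch = curl_init('http://localhost/worker.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOSIGNAL, true);   // required for sub-second timeouts
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 100);  // give up after 100 ms
curl_exec($ch);                             // the timeout "error" is expected
curl_close($ch);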

在下西门庆
#6 · 2020-07-10 10:32

How are you doing the inserts? Are you doing one insert per record:

mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');
mysql_query('INSERT INTO tableName VALUES(...)');

Or are you using a single query:

mysql_query('INSERT INTO tableName VALUES(...),(...),(...),(...)');

The latter of the two is substantially faster; from experience, the first option takes much longer because PHP must wait for each query to finish before issuing the next.
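A sketch of building the second form from an array (the two-column table shape is an assumption; the old mysql_* API shown in this thread needs manual escaping):

<?php
$values = array();
foreach ($rows as $row) {
    $values[] = sprintf("('%s','%s')",
        mysql_real_escape_string($row['col1']),
        mysql_real_escape_string($row['col2']));
}
// One round trip and one statement for all the rows.
mysql_query('INSERT INTO tableName (col1, col2) VALUES ' . implode(',', $values));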

爱情/是我丢掉的垃圾
#7 · 2020-07-10 10:33

You should be able to do 200 inserts relatively quickly, but it will depend on lots of factors. If you are using a transactional engine and doing each one in its own transaction, don't - that creates way too much I/O.

If you are using a non-transactional engine, it's a bit trickier. Using a single multi-row insert is likely to be better as the flushing policy of MySQL means that it won't need to flush its changes after each row.

You really want to reproduce this on a production-spec development box and analyse exactly why it's happening. It should not be difficult to fix.

Of course, another possibility is that your inserts are slow because of extremely large tables or a large number of indexes - in which case you should scale your database server appropriately. Inserting lots of rows into a table whose indexes don't fit into RAM (or where RAM isn't correctly configured to cache those indexes) generally gets pretty smelly.

BUT don't look for a way of complicating your application when there is a way of simply tuning it instead, keeping the current algorithm.
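For example, two InnoDB settings that commonly govern insert throughput (a my.cnf sketch; the values are illustrative, not recommendations):

[mysqld]
# Big enough that the table's indexes stay cached in RAM.
innodb_buffer_pool_size = 1G
# 2 = write the log at each commit but fsync it only about once a second;
# trades up to ~1s of durability for much cheaper small transactions.
innodb_flush_log_at_trx_commit = 2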
