I want to run multiple cURL tasks in the background using PHP on Ubuntu. There are a few ways to do this, but I'm not sure which one to choose.
Way 1: use the OS's cURL binary
<?php
require_once('database.php');
$db = new Database; // SQLite3 database
$query = $db->query("SELECT * FROM users");
while ($user = $query->fetchArray(SQLITE3_ASSOC)) {
// Redirect output so exec() returns immediately instead of waiting for curl
exec("nohup curl --url ".escapeshellarg("http://example.com/?id=".$user['id'])." > /dev/null 2>&1 &");
}
?>
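One caveat with Way 1 (my note, not from the linked pages): exec() only returns immediately when the command's output is redirected; otherwise PHP waits for the process to finish even with nohup and &. A minimal sketch to check this, assuming a Unix shell where `sleep 2` stands in for a slow curl call:

```php
<?php
// Demonstrates that exec() with redirected output and & returns
// right away instead of blocking for the 2 seconds the command runs.
$start = microtime(true);
exec("nohup sleep 2 > /dev/null 2>&1 &");
$elapsed = microtime(true) - $start;
// $elapsed is a few milliseconds, not ~2 seconds, because the shell
// backgrounded the process and exec() had no output stream to drain.
echo $elapsed < 1 ? "returned immediately\n" : "blocked\n";
?>
```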
Way 2: asynchronous cURL requests with a tiny timeout (http://www.paul-norman.co.uk/2009/06/asynchronous-curl-requests)
<?php
require_once('database.php');
$db = new Database; // SQLite3 database
$query = $db->query("SELECT * FROM users");
while ($user = $query->fetchArray(SQLITE3_ASSOC)) {
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/?id=".$user['id']);
curl_setopt($ch, CURLOPT_FRESH_CONNECT, true);
curl_setopt($ch, CURLOPT_NOSIGNAL, true); // required for sub-second timeouts on some builds
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 1);  // abort almost immediately ("fire and forget")
curl_exec($ch);
curl_close($ch);
}
?>
I don't understand how Way 2 works — can someone explain it? Which way should I choose if I have 100 000 – 500 000 users? Note that each cURL request takes about 2 – 8 seconds to finish. I'm not sure Way 2 will work: if a request needs a few seconds and the timeout is set to 1 ms, won't the connection be aborted before the job is done?
EDIT: Way 2 doesn't work for me because a higher timeout is needed, and Way 1 can slow the machine down considerably. What I did instead: when I need to hit all the user IDs, I don't issue thousands of requests from this machine — I send the SQLite database to my other server, and it looks up the IDs there.
Your first code will run the curl requests in the background: the loop runs N times, issuing N curl requests, and the PHP script completes regardless of whether those requests have finished.
The second option will issue a cURL request that the client aborts after the 1 ms timeout (curl reports a timeout error). If the request actually reached the server, the remote script may still run to completion in the background — it just stops sending data back to the client.
Still, I'm not sure about the 1 ms timeout: connection setup alone can take longer than that, so the request may be aborted before it is even sent.
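For 100 000 – 500 000 users, a third approach worth considering (not mentioned in the question) is curl_multi, which runs a bounded batch of requests concurrently inside one PHP process — no process-per-user forking as in Way 1, and no half-open 1 ms requests as in Way 2. A sketch, assuming the same http://example.com/?id=N style URLs:

```php
<?php
// Run many URLs through curl_multi in bounded batches, so at most
// $concurrency transfers are in flight at once. Returns body per URL.
function fetch_batch(array $urls, int $concurrency = 50): array
{
    $results = [];
    foreach (array_chunk($urls, $concurrency) as $chunk) {
        $mh = curl_multi_init();
        $handles = [];
        foreach ($chunk as $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // keep body, don't echo it
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // a real timeout, not 1 ms
            curl_multi_add_handle($mh, $ch);
            $handles[$url] = $ch;
        }
        // Drive the whole batch to completion.
        do {
            $status = curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh); // block until there is socket activity
            }
        } while ($running && $status === CURLM_OK);
        foreach ($handles as $url => $ch) {
            $results[$url] = curl_multi_getcontent($ch);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
    }
    return $results;
}
?>
```

With 500 000 users this still means thousands of batches, so running the script in slices from cron (or queuing the IDs) keeps the run time and memory bounded.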