PHP CURL timing out but CLI CURL works

Published 2020-04-14 12:10

Question:

I am seeing a very bizarre problem with a PHP application I am building.

I have two virtual hosts on my development server (Windows 7 64-bit): sometestsite.com and endpoint.sometestsite.com.

In my hosts file, I configured sometestsite.com and endpoint.sometestsite.com to point to 127.0.0.1.

Everything worked when the server was running Apache 2.4.2 with PHP 5.4.9 as an FCGI module.

I then removed Apache and installed nginx 1.2.5 (Windows build). I got php-cgi.exe running as a service, and everything seemed to work fine.

The problem is that a cURL call from sometestsite.com to endpoint.sometestsite.com that previously worked now times out.

I then moved that piece of code by itself to a small PHP file for testing:

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://endpoint.sometestsite.com/test');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, array(
    'provider' => urlencode('provider'),
    'key'      => urlencode('asdf'),
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

//Execute and get the data back
$result = curl_exec($ch);

var_dump($result);

This is what I receive in the PHP logs:

PHP Fatal error:  Maximum execution time of 30 seconds exceeded in D:\www\test5.php on line 22
PHP Stack trace:
PHP   1. {main}() D:\www\test5.php:0

However, if I run the same request using CLI CURL (via Git Bash), it works fine:

$ curl -X POST 'http://endpoint.sometestsite.com/test' -d'provider=provider&key=asdf'
{"test": "OK"}

This is quite strange, as PHP is exactly the same version and configuration as when Apache was used.

I am not yet sure whether this is a web server configuration issue or a problem with PHP's cURL.

Can anyone provide some insight/past experiences as to why this is happening?

Answer 1:

Nginx does not spawn your php-cgi.exe processes for you. If you came from Apache like me and used mod_fcgid, you will be used to having many php-cgi.exe processes running on the system.

Because Nginx does not spawn the PHP process for you, you will need to start the process yourself. In my case, I have php-cgi.exe -b 127.0.0.1:9000 running as a service automatically. Nginx then pushes all requests for PHP to the PHP handler and receives a response.
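For reference, that pairing might look roughly like this in nginx (the location pattern and fastcgi parameters are illustrative, not taken from the original setup; only the 127.0.0.1:9000 address comes from the answer):

```nginx
# php-cgi.exe -b 127.0.0.1:9000 is started separately (e.g. as a service);
# nginx only forwards .php requests to that one worker.
location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```

Note that with this config there is exactly one backend worker, which is the root of the problem described below.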

Problem: PHP-FPM does not work on Windows (as of 5.4.9). FPM is a neat little process manager that sits in the background and manages spawning and killing PHP processes to handle requests.

Because this is not possible on Windows, we can only serve one request at a time, similar to the problem experienced here.

In my case, the following happens: a call to a page in my application on sometestsite.com goes to php-cgi.exe on 127.0.0.1:9000. Inside that request, a cURL call requests a page on endpoint.sometestsite.com. However, no new PHP process can be spawned to serve this second request; the original php-cgi.exe is blocked serving the request that is running the cURL call. So we have a deadlock, and everything times out.
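This single-worker deadlock is not specific to PHP. Here is a minimal, self-contained Python sketch (the port number and the /outer and /inner paths are invented for the demo) in which a single-threaded HTTP server's handler calls back into its own server, just like the PHP page calling its own endpoint, and the inner request times out:

```python
import http.server
import threading
import urllib.request

PORT = 8763  # arbitrary free port, chosen for this demo only


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/outer":
            # Like the PHP page making the cURL call: this handler now
            # issues a second request to the same (already busy) server.
            try:
                urllib.request.urlopen(
                    f"http://127.0.0.1:{PORT}/inner", timeout=2)
                body = b"no deadlock"
            except OSError:  # covers URLError and socket timeouts
                body = b"deadlocked: inner request timed out"
        else:
            body = b"inner ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


# HTTPServer (not ThreadingHTTPServer) handles one request at a time,
# mirroring a single php-cgi.exe worker.
server = http.server.HTTPServer(("127.0.0.1", PORT), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

result = urllib.request.urlopen(
    f"http://127.0.0.1:{PORT}/outer", timeout=10).read().decode()
print(result)  # the busy worker can never serve the inner request
server.shutdown()
```

Swapping in ThreadingHTTPServer (the analogue of having multiple php-cgi.exe workers) makes the inner request succeed, which is exactly the fix described next.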

The solution I used (it is pretty much a hack) is to use this Python script to spawn 10 PHP processes.

You then use an upstream block in nginx (as per the docs for the script) to tell nginx that there are 10 processes available.
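Such an upstream block could look roughly like this (the port numbers are an assumption; the script's own docs define the actual ones), with one php-cgi.exe process listening on each port:

```nginx
# One php-cgi.exe process is bound to each of these ports by the spawner.
upstream php_pool {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    # ... one entry per spawned process, up to 10
}

location ~ \.php$ {
    fastcgi_pass   php_pool;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```

nginx then round-robins PHP requests across the pool, so the cURL call from one worker can be served by another.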

Things then worked perfectly.

Having said that, please do not ever use this in production (you are probably better off running nginx and PHP-FPM on Linux anyway). If you have a busy site, 10 processes may not be enough, and it can be hard to know how many processes you need.

However, if you do insist on running nginx with PHP on Windows, consider running PHP-FPM within Cygwin, as per this tutorial.



Answer 2:

Be sure that you run the script on the console as the same user that is used to run the CGI process. If they are not the same, they may have different permissions. For me, the problem was a firewall rule that disallowed outbound connections for the owner of the CGI process.