PHP and mod_fcgid: ap_pass_brigade failed in handle_request_ipc function

Posted 2020-05-19 06:14

Question:

This has been asked and answered before (https://stackoverflow.com/a/12686252/219116), but the solution there is not working for me.

mod_fcgid config

<IfModule mod_fcgid.c>
  AddHandler    fcgid-script .fcgi
  FcgidIPCDir /var/run/mod_fcgid/
  FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm

  FcgidIdleTimeout 60
  FcgidProcessLifeTime 120
  FcgidMaxRequestsPerProcess 500
  FcgidMaxProcesses 150
  FcgidMaxProcessesPerClass 144
  FcgidMinProcessesPerClass 0
  FcgidConnectTimeout 30
  FcgidIOTimeout 600
  FcgidIdleScanInterval 10
  FcgidMaxRequestLen 269484032

</IfModule>

php-cgi script

#!/bin/bash
# Point PHP at this vhost's php.ini directory and let each php-cgi
# process serve 5000 requests before exiting.
export PHPRC=/var/www/vhosts/example.com/etc/
export PHP_FCGI_MAX_REQUESTS=5000
exec /usr/bin/php-cgi

System details

  • CentOS Linux release 7.1.1503 (Core)
  • httpd-2.4.6-31.el7.centos.x86_64
  • mod_fcgid-2.3.9-4.el7.x86_64
  • php56u-cli-5.6.12-1.ius.centos7.x86_64

So my FcgidMaxRequestsPerProcess is set to 500 and my PHP_FCGI_MAX_REQUESTS to 10x that, as suggested in the previous answers and the Apache documentation. And yet I still get these errors:

[Thu Nov 19 18:16:48.197238 2015] [fcgid:warn] [pid 6468:tid 139726677858048]
(32)Broken pipe: [client X.X.X.X:41098] mod_fcgid: ap_pass_brigade failed in handle_request_ipc function

Answer 1:

The warning has nothing to do with any of the Fcgidxxx options and is simply caused by clients closing their side of the connection before the server gets a chance to respond.

From the actual source:

/* Now pass any remaining response body data to output filters */
if ((rv = ap_pass_brigade(r->output_filters, brigade_stdout)) != APR_SUCCESS) {
    if (!APR_STATUS_IS_ECONNABORTED(rv)) {
        ap_log_rerror(APLOG_MARK, APLOG_WARNING, rv, r,
                      "mod_fcgid: ap_pass_brigade failed in "
                      "handle_request_ipc function");
    }

    return HTTP_INTERNAL_SERVER_ERROR;
}

Credit goes to Avian's Blog, which tracked this down.
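
If these warnings are confirmed to be harmless client aborts and are cluttering the error log, Apache 2.4's per-module LogLevel can quiet them. A minimal sketch, assuming Apache 2.4 or later (the module tag fcgid is visible in the log line above):

# Keep the server-wide threshold at warn, but log only error and above
# from mod_fcgid, which hides these client-abort warnings
LogLevel warn fcgid:error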



Answer 2:

I ran into the same problem about a year ago. I tried many things and, after reading the documentation and some trial and error, the problem went away. First, the important directives to set:

FcgidBusyTimeout     300 [default]
FcgidBusyScanInterval    120 [default]

The purpose of this directive is to terminate hung applications. The default timeout may need to be increased for applications that take longer to process a request. Because the check is performed at the interval defined by FcgidBusyScanInterval, request handling may be allowed to proceed for a longer period of time.
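
For example, a sketch with illustrative values for an application whose slowest legitimate requests take several minutes:

# Allow up to 10 minutes per request; hung processes are only detected
# every 120 seconds, so in the worst case a request runs ~12 minutes
# before its process is killed
FcgidBusyTimeout      600
FcgidBusyScanInterval 120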

FcgidProcessLifeTime     3600 [default]

Idle application processes which have existed for longer than this time will be terminated, if the number of processes for the class exceeds FcgidMinProcessesPerClass.

This process lifetime check is performed at the frequency of the configured FcgidIdleScanInterval.

FcgidZombieScanInterval   3 [seconds default]

The module checks for exited FastCGI applications at this interval. During this period of time, the application may exist in the process table as a zombie (on Unix).

Note: decrease or increase the options above according to your application's processing time and needs, or apply them to a specific vhost, as sketched below.
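
Pulled together, a sketch with illustrative values (example.com is a placeholder; the scan-interval directives are server-wide, so they stay in the global configuration):

# Global (server config): how often the process manager scans
FcgidBusyScanInterval   120
FcgidIdleScanInterval   10
FcgidZombieScanInterval 3

<VirtualHost *:80>
  ServerName example.com
  # Per-vhost tuning for a slower application
  FcgidBusyTimeout     600
  FcgidProcessLifeTime 3600
</VirtualHost>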

The options above tuned my server, but after some time the errors started coming back. What really resolved the error was this:

 FcgidOutputBufferSize   65536 [default]

I changed it to

 FcgidOutputBufferSize   0

This is the maximum amount of response data the module will read from the FastCGI application before flushing the data to the client. Setting it to 0 flushes data to the client immediately instead of waiting for 64 KB to accumulate, which really helped my processes hand off their output and finish faster.

Other issues I ran into

If the 500 error is coming from Nginx timing out, the fix is in:

/etc/nginx/nginx.conf

keepalive_timeout  125;
proxy_read_timeout 125;
proxy_connect_timeout 125;
fastcgi_read_timeout 125;

Intermittently I would get the MySQL "MySQL server has gone away" error, which required one more tweak: /etc/my.cnf

wait_timeout = 120

Then, just for funsies, I went ahead and upped my PHP memory limit, just in case: /etc/php.ini

memory_limit = 256M

Using SuExec

mod_fastcgi doesn't work at all under SuExec on Apache 2.x. I had nothing but trouble from it (it also had numerous other issues in our testing). The real cause of your problem is SuExec.

In my case it showed up right at startup: when I started Apache, mod_fcgid spawned exactly 5 processes for each vhost. Then, when a simple upload script received a file larger than 4-8 KB, all of those child processes were killed at once for the specific vhost the script was executed on.
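
If the failures coincide with uploads above a small size threshold, FcgidMaxRequestLen is worth checking: since mod_fcgid 2.3.6 its default is only 131072 bytes (128 KB), and larger request bodies are rejected. A sketch, with an arbitrary example limit:

# Default dropped to 131072 bytes in mod_fcgid 2.3.6; raise it to cover
# the largest request body (e.g. a file upload) you expect, here ~256 MB
FcgidMaxRequestLen 268435456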

It might be possible to make a debug build of mod_fcgid or crank up its logging, which might give a clue.
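
On Apache 2.4, logging can be turned up for mod_fcgid alone, the reverse of the sketch in answer 1:

# Verbose output from mod_fcgid only; the rest of the server stays at warn
LogLevel warn fcgid:debug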

I have since run mod_fastcgi for a year, and like many others I can say that SuExec is nothing but trouble and never runs smoothly.



Answer 3:

This error can occur when a website uses asynchronous (AJAX) requests. These do not show up directly as errors on a web page, but they trigger the execution of PHP scripts, and if those scripts fail during execution and never return a result, this or similarly strange errors are logged. What you need to do is identify the JavaScript (AJAX) calls to your PHP scripts and find out why their execution is failing.

As noted in answer 1, clients close the connection before waiting for the server to respond. The client is indeed closing it, but it does so because it never receives a response to the AJAX call, and that in turn is caused by a faulty script on the website.
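
To identify the failing scripts, it helps to make sure PHP actually records its errors. A minimal sketch for /etc/php.ini (the log path is an assumption; any file writable by the web server works):

; Log all errors instead of discarding them silently
log_errors = On
; Assumed path; must be writable by the httpd user
error_log = /var/log/php_errors.log
; Keep error details out of production responses
display_errors = Off

Once this is in place, the failing AJAX endpoints show up in the log with the script name and line number.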