Question:
Is there an accepted maximum allowed size for HTTP headers? If so, what is it? If not, is this something that's server specific, or is the accepted standard to allow headers of any size?
Answer 1:
No, HTTP does not define any limit. However, most web servers do limit the size of the headers they accept. For example, in Apache the default limit is 8KB, and in IIS it's 16KB. The server will return a 413 Request Entity Too Large error if the header size exceeds that limit.
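If you administer the server, these defaults can usually be raised. A minimal sketch for Apache, assuming the limits in play are the LimitRequestLine, LimitRequestFieldSize, and LimitRequestFields directives (the 16384 values below are illustrative, not recommendations):
# httpd.conf (values are illustrative)
LimitRequestLine      16384    # max request-line size; default 8190 bytes
LimitRequestFieldSize 16384    # max size of any one header field; default 8190 bytes
LimitRequestFields    100      # max number of header fields; default 100
Note that LimitRequestFieldSize caps each individual field, so the total header block can still be several times larger.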
Related question: How big can a user agent string get?
Answer 2:
As vartec says above, the HTTP spec does not define a limit, but many servers do by default. This means that, practically speaking, the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).
- Apache 2.0, 2.2: 8K
- nginx: 4K - 8K
- IIS: varies by version, 8K - 16K
- Tomcat: varies by version, 8K - 48K (?!)
It's worth noting that nginx uses the system page size by default, which is 4K on most systems. You can check with this tiny program:
pagesize.c:
#include <unistd.h>
#include <stdio.h>

int main() {
    int pageSize = getpagesize();  /* system memory page size in bytes */
    printf("Page size on your system = %i bytes\n", pageSize);
    return 0;
}
Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server from Linode dutifully informs me the answer is 4K.
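If you need more headroom, nginx exposes this through the client_header_buffer_size and large_client_header_buffers directives; a request whose headers overflow the large buffers is rejected (414 for an oversized request line, 400 for an oversized header field). A minimal sketch, with illustrative values:
# nginx.conf, inside the http {} block (values are illustrative)
client_header_buffer_size 1k;        # initial buffer; default 1k
large_client_header_buffers 4 16k;   # spill-over buffers; default is 4 8k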
Answer 3:
The HTTP specification itself doesn't restrict header size; the limits come from server implementations. From the spec:
HTTP does not place a predefined limit on the length of each header field or on the length of the header section as a whole, as described in Section 2.5. Various ad hoc limitations on individual header field length are found in practice, often depending on the specific field semantics.
A server that receives a request header field, or set of fields, larger than it wishes to process MUST respond with an appropriate 4xx (Client Error) status code. Ignoring such header fields would increase the server's vulnerability to request smuggling attacks (Section 9.5).
Most servers will return a 413 Request Entity Too Large or another appropriate 4xx error when this happens.
A client MAY discard or truncate received header fields that are larger than the client wishes to process if the field semantics are such that the dropped value(s) can be safely ignored without changing the message framing or response semantics.
An uncapped HTTP header size leaves the server exposed to attacks and can reduce its capacity to serve organic traffic.
Source: RFC 7230, Section 3.2.5 (Field Limits)
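To see this behavior in practice, you can send a deliberately oversized header and inspect the status line that comes back. Below is a minimal sketch in C, assuming a server listening on localhost port 80; the X-Probe header name and the 16 KB default are arbitrary choices for illustration:
/* probe.c: send one oversized request header, print the response status line.
   A sketch, not a hardened client: assumes a server on localhost:80,
   and the X-Probe header name is arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(int argc, char **argv) {
    /* Header value size in bytes; defaults to 16 KB, past most server limits. */
    size_t n = argc > 1 ? (size_t)atol(argv[1]) : 16384;

    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("localhost", "80", &hints, &res) != 0) {
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("connect");
        return 1;
    }

    /* Build one giant X-Probe header value of n 'a' characters. */
    char *value = malloc(n + 1);
    memset(value, 'a', n);
    value[n] = '\0';
    dprintf(fd, "GET / HTTP/1.1\r\nHost: localhost\r\n"
                "Connection: close\r\nX-Probe: %s\r\n\r\n", value);

    /* Print just the status line of the response. */
    char buf[512];
    ssize_t got = read(fd, buf, sizeof(buf) - 1);
    if (got > 0) {
        buf[got] = '\0';
        char *eol = strstr(buf, "\r\n");
        if (eol) *eol = '\0';
        puts(buf);
    }
    free(value);
    close(fd);
    freeaddrinfo(res);
    return 0;
}
Against a stock Apache, ./probe 100 should print the server's normal status line, while ./probe 16384 should produce a 4xx.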
Answer 4:
I also found that in some cases the reason for a 502/400 response can be the number of headers rather than their size. From the HAProxy docs:
tune.http.maxhdr
Sets the maximum number of headers in a request. When a request comes with a number of headers greater than this value (including the first line), it is rejected with a "400 Bad Request" status code. Similarly, too large responses are blocked with "502 Bad Gateway". The default value is 101, which is enough for all usages, considering that the widely deployed Apache server uses the same limit. It can be useful to push this limit further to temporarily allow a buggy application to work by the time it gets fixed. Keep in mind that each new header consumes 32bits of memory for each session, so don't push this limit too high.
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr
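In haproxy.cfg this is a global tuning parameter; a minimal sketch (128 is an illustrative value, not a recommendation):
# haproxy.cfg
global
    tune.http.maxhdr 128   # default is 101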