After reading about the Cache-Control field of the HTTP header, I understand that Cache-Control in the HTTP response header (server to client) specifies directives for intermediate proxy servers and the client browser on how to handle the response, by sending different values for the Cache-Control field: private, public, no-cache, or no-store.

But I don't get why we need to send the Cache-Control header in the request header (client to server)?
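For example, this is roughly what I mean by the server-side directive (a minimal sketch of my own using Python's built-in http.server; the handler and values are just an illustration):

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tell the browser and any intermediate proxies how to cache this response.
        self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"cacheable for one hour\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()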
In addition to the above answer:

There might be a setup where cache chaining is implemented. In that case, if the request reaches the first cache and is not satisfied there, it is passed on to the next chained cache, and so on. To ensure the response always comes from the origin server rather than from any of these caches, we include Cache-Control in the request headers.
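As a rough sketch of what that looks like from the client side (the URL is hypothetical, and this assumes the third-party requests package):

import requests

# Adding Cache-Control to the *request* tells any chained caches on the
# path to revalidate with the origin instead of answering from a stored copy.
resp = requests.get(
    "https://example.com/resource",
    headers={"Cache-Control": "no-cache"},
)
print(resp.status_code, resp.headers.get("Age"))  # an Age near 0 suggests a fresh fetch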
A client can send a Cache-Control header in a request in order to ask for specific caching behavior, such as revalidation, from the origin server and any intermediate proxy servers along the request path.

Cache-Control: no-cache is generally used in a request header (sent from the web browser to the server) to force validation of the resource by the intermediate proxies. If the client doesn't send this header, intermediate proxies will return their copy of the content if it is still fresh (has not expired according to the Expires or max-age fields). Cache-Control: no-cache directs these proxies to revalidate the copy with the origin even if it is fresh.
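As a sketch, assuming a hypothetical URL and using only the Python standard library, the difference is just whether the request carries the header:

from urllib.request import Request, urlopen

# Without the header, an intermediate proxy may answer from its cache
# while its copy is still fresh; with "no-cache" it must revalidate first.
plain = Request("https://example.com/")
revalidated = Request(
    "https://example.com/",
    headers={"Cache-Control": "no-cache"},
)

for req in (plain, revalidated):
    with urlopen(req) as resp:
        print(resp.status, resp.headers.get("Cache-Control"))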