Amazon CloudFront Latency

Posted 2020-01-22 12:34

I am experimenting with AWS S3 and CloudFront for a web application that I am developing.

In the app, users upload files to an S3 bucket (using the AWS SDK), and the files are then served via the CloudFront CDN. The issue is that even after a file is uploaded and available in the S3 bucket, it takes a minute or two before it is available at the CloudFront URL. Is this normal?

3 Answers
Melony?
#2 · 2020-01-22 13:00

CloudFront attempts to fetch uncached content from the origin server in real time. There is no "replication delay" or similar issue because CloudFront is a pull-through CDN. Each CloudFront edge location knows only about your site's existence and configuration; it doesn't know about your content until it receives requests for it. When that happens, the CloudFront edge fetches the requested content from the origin server, and caches it as appropriate, for serving subsequent requests.

The issue that's occurring here is related to a concept sometimes called "negative caching" -- caching the fact that a request won't work -- which is typically done to avoid hammering the origin of whatever's being cached with requests that are likely to fail anyway.

By default, when your origin returns an HTTP 4xx or 5xx status code, CloudFront caches these error responses for five minutes and then submits the next request for the object to your origin to see whether the problem that caused the error has been resolved and the requested object is now available.

— http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html

If the browser, or anything else, tries to download the file from that particular CloudFront edge before the upload into S3 is complete, S3 will return an error, and CloudFront -- at that edge location -- will cache that error and remember, for the next 5 minutes, not to bother trying again.

Not to worry, though -- this timer is configurable, so if the browser is doing this under the hood and outside your control, you should still be able to fix it.
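The interaction can be illustrated with a toy pull-through edge (plain Python, purely a sketch of the concept, not CloudFront's actual implementation): once a fetch fails, the error is remembered for `error_ttl` seconds, and during that window the origin is never re-contacted even if the object has since appeared.

```python
import time

class NegativeCachingEdge:
    """Toy model of a pull-through edge that negatively caches origin errors."""

    def __init__(self, origin, error_ttl=300.0, clock=time.monotonic):
        self.origin = origin          # callable: key -> (status, body)
        self.error_ttl = error_ttl    # seconds to remember a failed fetch
        self.clock = clock
        self._error_until = {}        # key -> timestamp until which the error is cached

    def get(self, key):
        until = self._error_until.get(key)
        if until is not None and self.clock() < until:
            return (404, None)        # served from the negative cache; origin not contacted
        status, body = self.origin(key)
        if status >= 400:
            self._error_until[key] = self.clock() + self.error_ttl
        return (status, body)
```

With `error_ttl=300` this reproduces the behavior in the question: a request that races ahead of the upload poisons that key for five minutes, while setting `error_ttl=0` makes every request go back to the origin.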

You can specify the error-caching duration—the Error Caching Minimum TTL—for each 4xx and 5xx status code that CloudFront caches. For a procedure, see Configuring Error Response Behavior.

— http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html


To configure this in the console:

  • When viewing the distribution configuration, click the Error Pages tab.

  • For each error where you want to customize the timing, begin by clicking Create Custom Error Response.

  • Choose the error code you want to modify from the drop-down list, such as 403 (Forbidden) or 404 (Not Found) -- your bucket configuration determines which code S3 returns for missing objects, so if you aren't sure, change 403 then repeat the process and change 404.

  • Set Error Caching Minimum TTL (seconds) to 0

  • Leave Customize Error Response set to No (If set to Yes, this option enables custom response content on errors, which is not what you want. Activating this option is outside the scope of this question.)

  • Click Create. This takes you back to the previous view, where you'll see Error Caching Minimum TTL for the code you just defined.

Repeat these steps for each HTTP response code you want to change from the default behavior (which is the 300 second hold time, discussed above).

When you've made all the changes you want, return to the main CloudFront console screen where the distributions are listed. Wait for the distribution state to change from In Progress to Deployed (typically about 20 minutes for the changes to be pushed out to all the edges) and test.
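The console steps above map onto the CustomErrorResponses section of the distribution config. A minimal sketch of that fragment, assuming boto3; in practice you would fetch the full current config and its ETag with `get_distribution_config`, merge this fragment in, and send it back with `update_distribution`:

```python
def error_ttl_overrides(codes=(403, 404), min_ttl=0):
    """Build the CustomErrorResponses fragment that sets Error Caching
    Minimum TTL for the given status codes. ResponsePagePath/ResponseCode
    are omitted, which leaves 'Customize Error Response' off."""
    items = [
        {"ErrorCode": code, "ErrorCachingMinTTL": min_ttl}
        for code in codes
    ]
    return {"Quantity": len(items), "Items": items}

# This fragment would be merged into the DistributionConfig returned by
# cloudfront.get_distribution_config(Id=...), then applied with
# cloudfront.update_distribution(Id=..., IfMatch=etag, DistributionConfig=cfg).
```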

孤傲高冷的网名
#3 · 2020-01-22 13:00

As observed in your comment, it seems that Google Chrome is interfering with your upload/preview strategy:

  1. Chrome requests the URL before the content exists.
  2. CloudFront caches the error response from that request.
  3. You upload the file to S3.
  4. When you preview the uploaded file, CloudFront answers with the cached error response from step 2.
  5. After the CloudFront cache entry expires, CloudFront hits the origin again and the problem can no longer be reproduced.
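One way to sidestep this sequence entirely (my own suggestion, not part of the answer above) is to upload each file under a key that no browser could possibly have requested yet, so no edge can be holding a negatively cached error for it:

```python
import posixpath
import uuid

def fresh_upload_key(filename, prefix="uploads"):
    """Return an object key that has never existed before, so no CloudFront
    edge can have negatively cached an error response for it."""
    return posixpath.join(prefix, uuid.uuid4().hex, filename)

# e.g. s3.upload_file(local_path, bucket, fresh_upload_key("photo.jpg")),
# handing the CloudFront URL for that key to the browser only after the
# upload call has returned.
```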
我欲成王,谁敢阻挡
#4 · 2020-01-22 13:17

Are these new files being written to S3 for the first time, or are they updates to existing files? S3 provides read-after-write consistency for new objects (provided the key was not requested before the object was created), and given CloudFront's pull model you should not be seeing this issue with brand-new files. If you are, I would open a ticket with AWS.

If these are updates to existing files, then you have both S3's eventual consistency and CloudFront's cache expiration to deal with, either of which could cause this sort of behavior.
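For the updated-file case, one option (an assumption on my part, not something the answer above prescribes, and note that invalidations beyond the free tier are billed) is to invalidate the changed paths so edges re-fetch from the origin. A sketch of the request body for boto3's `create_invalidation`:

```python
import time

def invalidation_batch(paths):
    """Build the InvalidationBatch for cloudfront.create_invalidation.
    Paths must start with '/'; CallerReference must be unique per request."""
    items = [p if p.startswith("/") else "/" + p for p in paths]
    return {
        "Paths": {"Quantity": len(items), "Items": items},
        "CallerReference": "invalidate-%d" % int(time.time() * 1000),
    }

# cloudfront.create_invalidation(DistributionId=dist_id,
#                                InvalidationBatch=invalidation_batch(["images/logo.png"]))
```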
