Streaming large files in a Java servlet

Published 2019-01-03 02:00

I am building a java server that needs to scale. One of the servlets will be serving images stored in Amazon S3.

Recently, under load, my VM ran out of memory. This happened after I added the code to serve the images, so I'm fairly sure that streaming larger servlet responses is causing my trouble.

My question is: is there a best practice for coding a Java servlet to stream a large (>200k) response back to a browser when the data is read from a database or other cloud storage?

I've considered writing the file to a local temp drive and then spawning another thread to handle the streaming so that the Tomcat servlet thread can be re-used. This seems like it would be I/O heavy.

Any thoughts would be appreciated. Thanks.

Tags: java java-io
8 answers
疯言疯语
#2 · 2019-01-03 02:44

I've seen a lot of code like john-vasilef's (currently accepted) answer, a tight while loop reading chunks from one stream and writing them to the other stream.

The argument I'd make is against needless code duplication and in favor of using Apache's IOUtils. If you are already using it elsewhere, or if another library or framework you're using already depends on it, it's a single line that is known and well-tested.

In the following code, I'm streaming an object from Amazon S3 to the client in a servlet.

import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;

InputStream in = null;
OutputStream out = null;

try {
    // Stream the S3 object's content straight through to the client
    in = object.getObjectContent();
    out = response.getOutputStream();
    IOUtils.copy(in, out);
} finally {
    // closeQuietly swallows any exceptions thrown on close
    IOUtils.closeQuietly(in);
    IOUtils.closeQuietly(out);
}

Six lines of a well-defined pattern with proper stream closing seems pretty solid.

We Are One
#3 · 2019-01-03 02:44

In addition to what John suggested, you should repeatedly flush the output stream. Depending on your web container, it is possible that it caches parts or even all of your output and flushes it all at once (for example, to calculate the Content-Length header). That would burn quite a bit of memory.
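For example, here is a minimal sketch of such a copy loop (copyWithFlush and the 64 KB threshold are illustrative choices, not something from the original answer):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copy 'in' to 'out', flushing periodically so the container cannot
// buffer the entire response in memory before sending it.
static void copyWithFlush(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[4096];
    long sinceFlush = 0;
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead);
        sinceFlush += bytesRead;
        if (sinceFlush >= 64 * 1024) { // flush roughly every 64 KB
            out.flush();
            sinceFlush = 0;
        }
    }
    out.flush(); // push out whatever remains buffered
}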

聊天终结者
#4 · 2019-01-03 02:44

If you can structure your files so that the static files are separate and in their own bucket, the fastest performance today can likely be achieved by putting Amazon's CDN, CloudFront, in front of S3.

老娘就宠你
#5 · 2019-01-03 02:46

I agree strongly with both toby and John Vasileff: S3 is great for offloading large media objects if you can tolerate the associated issues. (An instance of our own app does that for 10-1000MB FLVs and MP4s.) For example, there is no support for partial requests (the byte range header), so you have to handle that 'manually'; there is occasional downtime; etc.

If that is not an option, John's code looks good. I have found that a 2k FILEBUFFERSIZE is the most efficient in microbenchmarks. Another option might be a shared FileChannel; FileChannels are thread-safe. See the sketch below.
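For instance, a minimal sketch of serving a local file through a FileChannel (the sendFile helper and its parameters are hypothetical, and this assumes the data already sits on local disk):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Stream a file to the response using FileChannel.transferTo(), which
// can use zero-copy I/O on some platforms.
static void sendFile(String path, OutputStream responseStream) throws IOException {
    try (FileChannel channel = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
        WritableByteChannel out = Channels.newChannel(responseStream);
        long position = 0;
        long size = channel.size();
        while (position < size) {
            // transferTo may move fewer bytes than requested, so loop until done
            position += channel.transferTo(position, size - position, out);
        }
    }
}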

That said, I'd also add that guessing at what caused an out-of-memory error is a classic optimization mistake. You would improve your chances of success by working with hard metrics.

  1. Place -XX:+HeapDumpOnOutOfMemoryError into your JVM startup parameters, just in case
  2. Use jmap on the running JVM (jmap -histo <pid>) under load
  3. Analyze the metrics (the jmap -histo output, or have jhat look at your heap dump). It may well be that your out-of-memory error is coming from somewhere unexpected.

There are of course other tools out there, but jmap and jhat come with Java 5+ 'out of the box'.

I've considered writing the file to a local temp drive and then spawning another thread to handle the streaming so that the Tomcat servlet thread can be re-used. This seems like it would be I/O heavy.

Ah, I don't think you can do that. And even if you could, it sounds dubious. The Tomcat thread that is managing the connection needs to stay in control. If you are experiencing thread starvation, then increase the number of available threads in ./conf/server.xml. Again, metrics are the way to detect this; don't just guess.

Question: are you also running on EC2? What are your Tomcat JVM startup parameters?

淡お忘
#6 · 2019-01-03 02:52

Why wouldn't you just point them to the S3 URL? Taking an artifact from S3 and then streaming it through your own server defeats the purpose of using S3, which is to offload the bandwidth and the processing of serving the images to Amazon.

一夜七次
#7 · 2019-01-03 02:57

When possible, you should not store the entire contents of a file to be served in memory. Instead, acquire an InputStream for the data and copy it to the servlet OutputStream in pieces. For example:

ServletOutputStream out = response.getOutputStream();
InputStream in = [ code to get source input stream ];
String mimeType = [ code to get mimetype of data to be served ];
byte[] bytes = new byte[FILEBUFFERSIZE];
int bytesRead;

response.setContentType(mimeType);

try {
    // Copy one buffer at a time so the whole file is never held in memory
    while ((bytesRead = in.read(bytes)) != -1) {
        out.write(bytes, 0, bytesRead);
    }
} finally {
    // Close both streams even if the copy fails partway through
    try { in.close(); } finally { out.close(); }
}
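On Java 7 and later, the same loop can be written with try-with-resources so both streams are closed automatically (getSourceInputStream() is a hypothetical stand-in for however you obtain the data):

// getSourceInputStream() is a placeholder for your data source
// (S3 object content, database blob, etc.).
try (InputStream in = getSourceInputStream();
     OutputStream out = response.getOutputStream()) {
    byte[] bytes = new byte[FILEBUFFERSIZE];
    int bytesRead;
    while ((bytesRead = in.read(bytes)) != -1) {
        out.write(bytes, 0, bytesRead);
    }
}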

I do agree with toby: you should instead "point them to the S3 URL."

As for the OOM exception, are you sure it has to do with serving the image data? Let's say your JVM has 256MB of "extra" memory to use for serving image data. With Google's help, "256MB / 200KB" = 1310. With 2GB of "extra" memory (a very reasonable amount these days), over 10,000 simultaneous clients could be supported. Even so, 1300 simultaneous clients is a pretty large number. Is this the type of load you experienced? If not, you may need to look elsewhere for the cause of the OOM exception.

Edit - Regarding:

In this use case the images can contain sensitive data...

When I read through the S3 documentation a few weeks ago, I noticed that you can generate time-expiring keys that can be attached to S3 URLs. So, you would not have to open up the files on S3 to the public. My understanding of the technique is:

  1. Initial HTML page has download links to your webapp
  2. User clicks on a download link
  3. Your webapp generates an S3 URL that includes a key that expires in, let's say, 5 minutes.
  4. Send an HTTP redirect to the client with the URL from step 3.
  5. The user downloads the file from S3. This works even if the download takes more than 5 minutes; once a download starts, it can continue through completion.
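For illustration, here is a minimal sketch of step 3 using the AWS SDK for Java v1 (the bucket name and object key are hypothetical, and response is the servlet's HttpServletResponse):

import java.net.URL;
import java.util.Date;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

// Expire the link 5 minutes from now (step 3).
Date expiration = new Date(System.currentTimeMillis() + 5 * 60 * 1000);

GeneratePresignedUrlRequest request =
        new GeneratePresignedUrlRequest("my-bucket", "images/photo.jpg") // hypothetical names
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration);
URL url = s3.generatePresignedUrl(request);

// Step 4: redirect the client to the time-limited URL.
response.sendRedirect(url.toString());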