I'm investigating a performance problem with Jetty 6.1.26. Jetty appears to use Transfer-Encoding: chunked, and depending on the buffer size used, this can be very slow when transferring locally.
I've created a small Jetty test application with a single servlet that demonstrates the issue.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.jetty.Server;
import org.mortbay.jetty.nio.SelectChannelConnector;
import org.mortbay.jetty.servlet.Context;

public class TestServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final int bufferSize = 65536;
        resp.setBufferSize(bufferSize);
        OutputStream outStream = resp.getOutputStream();
        FileInputStream stream = null;
        try {
            stream = new FileInputStream(new File("test.data"));
            // Stream the file to the client; the flush() after each write
            // pushes every buffer out as its own chunk.
            int bytesRead;
            byte[] buffer = new byte[bufferSize];
            while ((bytesRead = stream.read(buffer, 0, bufferSize)) > 0) {
                outStream.write(buffer, 0, bytesRead);
                outStream.flush();
            }
        } finally {
            if (stream != null)
                stream.close();
            outStream.close();
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // Plain NIO connector listening on all interfaces, port 8080.
        SelectChannelConnector connector = new SelectChannelConnector();
        connector.setLowResourceMaxIdleTime(10000);
        connector.setAcceptQueueSize(128);
        connector.setResolveNames(false);
        connector.setUseDirectBuffers(false);
        connector.setHost("0.0.0.0");
        connector.setPort(8080);
        server.addConnector(connector);

        Context context = new Context();
        context.setDisplayName("WebAppsContext");
        context.setContextPath("/");
        server.addHandler(context);
        context.addServlet(TestServlet.class, "/test");

        server.start();
    }
}
In my experiment, I'm using a 128MB test file that the servlet returns to a client connecting over localhost. Downloading this data with a simple test client written in Java (using URLConnection) takes 3.8 seconds, which is very slow (yes, that's 33MB/s, which doesn't sound slow, except that this is purely local and the input file was cached; it should be much faster).
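For reference, the test client is essentially the following sketch (a reconstruction; the class name, client-side buffer size, and timing code are illustrative rather than my exact code):

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class TestClient {
    public static void main(String[] args) throws Exception {
        URLConnection conn = new URL("http://localhost:8080/test").openConnection();
        InputStream in = conn.getInputStream();
        try {
            byte[] buffer = new byte[65536]; // client-side read buffer (illustrative)
            long total = 0;
            long start = System.currentTimeMillis();
            int n;
            while ((n = in.read(buffer)) > 0) {
                total += n; // discard the data, just measure throughput
            }
            System.out.println(total + " bytes in "
                    + (System.currentTimeMillis() - start) + " ms");
        } finally {
            in.close();
        }
    }
}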
Now here's where it gets strange. If I download the data with wget, which is an HTTP/1.0 client and therefore doesn't support chunked transfer encoding, it only takes 0.1 seconds. That's a much better figure.
Now when I change bufferSize to 4096, the Java client takes 0.3 seconds.
If I remove the call to resp.setBufferSize entirely (which appears to use a 24KB chunk size), the Java client now takes 7.1 seconds, and wget is suddenly equally slow!
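To see the chunk sizes Jetty actually emits, one can bypass URLConnection and speak raw HTTP/1.1 over a socket. The probe below is a quick sketch I put together for this question (the class name and parsing approach are mine, and it assumes a well-formed chunked response):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;

public class ChunkSizeProbe {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 8080);
        try {
            OutputStream out = socket.getOutputStream();
            out.write(("GET /test HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "Connection: close\r\n\r\n").getBytes("US-ASCII"));
            out.flush();
            // ISO-8859-1 maps every byte to exactly one char, so
            // Reader.skip() can step over chunk bodies byte-for-byte.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "ISO-8859-1"));
            String line;
            // Skip the response headers.
            while ((line = in.readLine()) != null && line.length() > 0) {
            }
            // Each chunk starts with its size in hex, optionally followed
            // by ";extensions"; a size of zero marks the end of the body.
            while ((line = in.readLine()) != null) {
                if (line.length() == 0)
                    continue; // CRLF trailing the previous chunk body
                long size = Long.parseLong(line.split(";")[0].trim(), 16);
                System.out.println("chunk: " + size + " bytes");
                if (size == 0)
                    break;
                for (long remaining = size; remaining > 0;) {
                    long skipped = in.skip(remaining);
                    if (skipped <= 0)
                        break; // unexpected EOF
                    remaining -= skipped;
                }
            }
        } finally {
            socket.close();
        }
    }
}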
Please note I'm not in any way a Jetty expert. I stumbled across this problem while diagnosing a performance problem in Hadoop 0.20.203.0 with reduce task shuffling, which transfers files using Jetty in a manner much like the sample code above (reduced from the Hadoop code), with a 64KB buffer size.
The problem reproduces both on our Linux (Debian) servers and on my Windows machine, and with both Java 1.6 and 1.7, so it appears to depend solely on Jetty.
Does anyone have any idea what could be causing this, and whether there's something I can do about it?
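One variation I'm considering (untested here; it assumes the response length is known up front, which is true for a file, and it also drops the per-buffer flush) is to declare Content-Length so the response isn't chunked at all:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    File file = new File("test.data");
    // Declaring the length up front lets Jetty send a Content-Length
    // header instead of falling back to chunked encoding.
    resp.setContentLength((int) file.length());
    OutputStream outStream = resp.getOutputStream();
    FileInputStream stream = null;
    try {
        stream = new FileInputStream(file);
        byte[] buffer = new byte[65536];
        int bytesRead;
        while ((bytesRead = stream.read(buffer, 0, buffer.length)) > 0) {
            outStream.write(buffer, 0, bytesRead);
        }
    } finally {
        if (stream != null)
            stream.close();
        outStream.close();
    }
}

That sidesteps chunking in my toy example, but it wouldn't explain the underlying behavior, and I can't easily change how Hadoop uses Jetty.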