I'm using a Java socket, connected to a server.
If I send a HEAD HTTP request, how can I measure the response time from the server? Must I use one of Java's timers, or is there an easier way?
I'm looking for a short answer; I don't want to use other protocols, etc. Nor do I want a solution that ties my application to a specific OS. Please, people: in-code solutions only.
I would say it depends on which exact interval you are trying to measure: the time from the last byte of the request you send until the first byte of the response you receive? Until the entire response has been received? Or are you trying to measure the server-side time only?
If you're trying to measure the server-side processing time only, you're going to have a hard time factoring out the time spent in network transit for your request to arrive and for the response to return. Otherwise, since you're managing the request yourself through a Socket, you can measure the elapsed time between any two moments by reading the system timer and computing the difference. For example:
public void sendHttpRequest(byte[] requestData, Socket connection) throws IOException {
    long startTime = System.nanoTime();

    writeYourRequestData(connection.getOutputStream(), requestData);
    byte[] responseData = readYourResponseData(connection.getInputStream());

    long elapsedTime = System.nanoTime() - startTime;
    System.out.println("Total elapsed HTTP request/response time in nanoseconds: " + elapsedTime);
}
This code would measure the time from when you begin writing out your request to when you finish receiving the response, and print the result (assuming you have your specific read/write methods implemented).
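The two helper methods above are placeholders; a minimal sketch of what they might look like for a HEAD request over a raw socket (assuming the request bytes already contain a complete HTTP request with Connection: close, so the response can simply be read until end-of-stream) is:

private void writeYourRequestData(OutputStream out, byte[] requestData) throws IOException {
    // e.g. requestData = "HEAD / HTTP/1.1\r\nHost: server\r\nConnection: close\r\n\r\n" in US-ASCII
    out.write(requestData);
    out.flush();
}

private byte[] readYourResponseData(InputStream in) throws IOException {
    // Read until the server closes the connection; fine for Connection: close responses
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    int n;
    while ((n = in.read(chunk)) != -1) {
        buffer.write(chunk, 0, n);
    }
    return buffer.toByteArray();
}

(These use java.io.InputStream, OutputStream, ByteArrayOutputStream and IOException.)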
You can use curl (optionally combined with time) on the command line. The -w "%{time_total}\n" option makes curl print the total time, and the -I argument instructs it to only request the headers:

curl -s -w "%{time_total}\n" -o /dev/null http://server:3000

time curl -I 'http://server:3000'
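Since the question asks for an in-code solution, here is a rough Java equivalent of the curl approach; this is only a sketch using the standard HttpURLConnection, with http://server:3000/ as a placeholder URL:

import java.net.HttpURLConnection;
import java.net.URL;

public class HeadTimer {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://server:3000/"); // placeholder URL
        long start = System.nanoTime();

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("HEAD");        // only request the headers
        int status = connection.getResponseCode();  // blocks until the response has arrived

        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("HEAD " + url + " -> " + status + " in " + elapsedMillis + " ms");
        connection.disconnect();
    }
}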
Something like this might do the trick:
import java.io.IOException;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpMethod;
import org.apache.commons.httpclient.URIException;
import org.apache.commons.httpclient.methods.HeadMethod;
import org.apache.commons.lang.time.StopWatch;
//import org.apache.commons.lang3.time.StopWatch;

public class Main {
    public static void main(String[] args) throws URIException {
        StopWatch watch = new StopWatch();
        HttpClient client = new HttpClient();
        HttpMethod method = new HeadMethod("http://stackoverflow.com/");

        try {
            // Time only the HEAD request/response round trip
            watch.start();
            client.executeMethod(method);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            watch.stop();
        }

        System.out.println(String.format("%s %s %d: %s",
                method.getName(), method.getURI(), method.getStatusCode(), watch.toString()));
    }
}
which prints something like:

HEAD http://stackoverflow.com/ 200: 0:00:00.404
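Commons HttpClient 3.x is quite old; on Java 11 or newer the same measurement can be done with the built-in java.net.http.HttpClient. A sketch (same URL, timing with System.nanoTime instead of StopWatch):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HeadTimerJdk {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://stackoverflow.com/"))
                .method("HEAD", HttpRequest.BodyPublishers.noBody())
                .timeout(Duration.ofSeconds(10))
                .build();

        long start = System.nanoTime();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("HEAD " + request.uri() + " " + response.statusCode() + ": " + elapsedMillis + " ms");
    }
}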
Maybe I'm missing something, but why don't you just use:
// open your connection
long start = System.currentTimeMillis();
// send request, wait for response (the simple socket calls are all blocking)
long end = System.currentTimeMillis();
System.out.println("Round trip response time = " + (end-start) + " millis");
Use AOP to intercept the service calls that perform the request and measure the response time. For example, with Spring AOP:
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;

@Aspect
@Profile("performance")
@Component
public class MethodsExecutionPerformance {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    // Matches every method of every class in the service package
    @Pointcut("execution(* it.test.microservice.myService.service.*.*(..))")
    public void serviceMethods() {
    }

    @Around("serviceMethods()")
    public Object monitorPerformance(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        StopWatch stopWatch = new StopWatch(getClass().getName());
        stopWatch.start();
        Object output = proceedingJoinPoint.proceed();
        stopWatch.stop();
        logger.info("Method execution time\n{}", stopWatch.prettyPrint());
        return output;
    }
}
This way, you measure the real processing time of your service, independent of network speed.
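For this aspect to fire, AspectJ auto-proxying must be enabled (for example via the spring-boot-starter-aop dependency) and the performance profile must be active (e.g. -Dspring.profiles.active=performance). The package name in the pointcut is just the answer's example; a minimal service it would match might look like:

package it.test.microservice.myService.service;

import org.springframework.stereotype.Service;

@Service
public class PingService {

    // Matched by the serviceMethods() pointcut above, so its execution time is logged
    public String ping() {
        return "pong";
    }
}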