Precision vs. accuracy of System.nanoTime()

Posted 2019-01-07 20:54

Question:

The documentation for System.nanoTime() says the following (emphasis mine).

This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). *This method provides nanosecond precision, but not necessarily nanosecond accuracy.* No guarantees are made about how frequently values change.

As I see it, this can be interpreted in two different ways:

  1. The sentence in bold above refers to individual return values. Then precision and accuracy are to be understood in the numerical sense: precision refers to the number of significant digits (the position of truncation), and accuracy to whether the number is the correct one (as described in the top answer to "What is the difference between 'precision' and 'accuracy'?").

  2. The sentence in bold above refers to the capability of the method itself. Then precision and accuracy are to be understood as illustrated by the dartboard analogy ( http://en.wikipedia.org/wiki/Precision_vs._accuracy#Accuracy_versus_precision:_the_target_analogy ). Low accuracy with high precision means the wrong value is hit repeatedly and consistently: imagining that physical time stands still, consecutive calls to nanoTime() return the same numerical value, but it is off from the actual time elapsed since the reference point by some constant offset.

Which interpretation is the correct one? My point is that interpretation 2 would mean a time-difference measurement made with nanoTime() (by subtracting two return values) is correct to the nanosecond, since the constant error/offset cancels out in the subtraction, whereas interpretation 1 gives no such guarantee of agreement between measurements and thus does not necessarily imply that time-difference measurements are accurate to the nanosecond.


Updated 4/15/13: The Java 7 documentation for System.nanoTime() has been updated to address the possible confusion with the previous wording.

Returns the current value of the running Java Virtual Machine's high-resolution time source, in nanoseconds.

This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). The same origin is used by all invocations of this method in an instance of a Java virtual machine; other virtual machine instances are likely to use a different origin.

This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis().

Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not correctly compute elapsed time due to numerical overflow.

The values returned by this method become meaningful only when the difference between two such values, obtained within the same instance of a Java virtual machine, is computed.
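
To make the updated wording concrete, here is a minimal sketch (my own illustration, not from the documentation) of the only supported usage: take two readings within the same JVM and work only with their difference, comparing differences rather than absolute values so that the arbitrary origin and potential overflow do not matter. The timeout and the Thread.sleep() are stand-ins for whatever is actually being measured.

import java.util.concurrent.TimeUnit;

public class ElapsedTimeSketch {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        long deadline = start + TimeUnit.SECONDS.toNanos(2); // hypothetical timeout

        Thread.sleep(50); // stand-in for the work being measured

        long elapsedNanos = System.nanoTime() - start; // only the difference is meaningful

        // Because the origin is arbitrary (and the counter can wrap), compare
        // differences against zero rather than comparing absolute values.
        boolean pastDeadline = System.nanoTime() - deadline >= 0;

        System.out.printf("elapsed: %d ns, past deadline: %b%n", elapsedNanos, pastDeadline);
    }
}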

Answer 1:

The first interpretation is correct. On most systems the three least-significant digits will always be zero. This in effect gives microsecond accuracy, but reports it at the fixed precision level of a nanosecond.

In fact, now that I look at it again, your second interpretation is also a valid description of what is going on, maybe even more so. Imagining frozen time, the report will always be the same number: wrong as a count of nanoseconds, but correct if understood as an integer number of microseconds.
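
A quick way to see this on a given machine is to sample raw values and look at their least-significant digits. This is a minimal sketch of my own, and the result is platform- and JVM-dependent; on some systems the trailing digits will not all be zero.

public class TrailingDigitsSketch {
    public static void main(String[] args) {
        int samples = 20;
        int multiplesOf1000 = 0;
        for (int i = 0; i < samples; i++) {
            long t = System.nanoTime();
            if (t % 1000 == 0) {
                multiplesOf1000++;
            }
            System.out.println(t); // inspect the trailing digits directly
        }
        System.out.println(multiplesOf1000 + " of " + samples + " values were multiples of 1000");
    }
}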



Answer 2:

At the Clojure command line, I get:

user=> (- (System/nanoTime) (System/nanoTime))
0
user=> (- (System/nanoTime) (System/nanoTime))
0
user=> (- (System/nanoTime) (System/nanoTime))
-641
user=> (- (System/nanoTime) (System/nanoTime))
0
user=> (- (System/nanoTime) (System/nanoTime))
-642
user=> (- (System/nanoTime) (System/nanoTime))
-641
user=> (- (System/nanoTime) (System/nanoTime))
-641

So essentially, nanoTime doesn't get updated every nanosecond, contrary to what one might intuitively expect from its precision. On Windows systems it uses the QueryPerformanceCounter API under the hood (according to this article), which in practice seems to give about 640 ns resolution (on my system!).

Note that nanoTime can't, by itself, have any accuracy at all, since its absolute value is arbitrary. Only the difference between successive nanoTime calls is meaningful. The (in)accuracy of that difference is in the ballpark of 1 microsecond.
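
A rough Java equivalent of the experiment above (a sketch of my own; the numbers will differ per system) is to spin until the returned value changes and record the smallest increment seen. That increment is the effective resolution of the underlying timer.

public class NanoTimeResolutionSketch {
    public static void main(String[] args) {
        long smallestStep = Long.MAX_VALUE;
        for (int i = 0; i < 1_000_000; i++) {
            long before = System.nanoTime();
            long after;
            do {
                after = System.nanoTime();   // spin until the clock actually ticks
            } while (after == before);
            smallestStep = Math.min(smallestStep, after - before);
        }
        System.out.println("smallest observed increment: " + smallestStep + " ns");
    }
}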



Answer 3:

One quite interesting difference between System.currentTimeMillis() and System.nanoTime() is that System.nanoTime() does NOT move with the wall clock. I run code on a Windows virtual machine that has heavy time drift, and System.currentTimeMillis() can jump back or forward by 1-2 seconds each time NTP corrects that drift, making accurate timestamps meaningless (Windows 2003 and 2008 VPS editions).

System.nanoTime() is not, however, affected by changes to the wall-clock time, so you can take a time retrieved over NTP, apply a correction based on how much System.nanoTime() has advanced since the last NTP check, and get a far more accurate time than System.currentTimeMillis() under adverse wall-clock conditions.

This is of course counter-intuitive, but useful to know.
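
A minimal sketch of that correction idea (my own illustration; fetchNtpTimeMillis() is a hypothetical stand-in for however the NTP timestamp is obtained):

public class NtpCorrectedClock {
    private final long ntpTimeMillis;   // wall-clock time obtained from NTP at sync
    private final long nanoTimeAtSync;  // monotonic reading taken at the same moment

    public NtpCorrectedClock(long ntpTimeMillis) {
        this.ntpTimeMillis = ntpTimeMillis;
        this.nanoTimeAtSync = System.nanoTime();
    }

    // Current wall-clock estimate: NTP time plus the monotonic elapsed time since sync.
    public long currentTimeMillis() {
        long elapsedNanos = System.nanoTime() - nanoTimeAtSync;
        return ntpTimeMillis + elapsedNanos / 1_000_000;
    }
}

Used as new NtpCorrectedClock(fetchNtpTimeMillis()).currentTimeMillis(), the returned value advances with the monotonic clock and is unaffected by NTP stepping the system clock; you would re-sync periodically to bound the drift of the hardware timer itself.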



Answer 4:

If, like me, you keep reading this question again and again and still only kind of understand it, here is a simpler (I hope) explanation.

Precision is about how many digits you retain. Each of the values

long start = System.nanoTime();
long end   = System.nanoTime();

is going to be a precise number (lots of digits).

Since accuracy is measured only relative to something, an individual call to System.nanoTime makes no sense on its own: its value is quite arbitrary and does not correspond to anything we can measure. The only way to assess its accuracy is to compare two different calls, thus:

 long howMuch = end - start;

is not going to have nanosecond accuracy. In fact, on my machine the difference is 0.2-0.3 microseconds.
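
To reproduce that number on your own machine, here is a minimal sketch (my own) that averages the back-to-back difference over many trials; the result reflects both the timer's resolution and the cost of the call itself.

public class BackToBackSketch {
    public static void main(String[] args) {
        int trials = 1_000_000;
        long total = 0;
        for (int i = 0; i < trials; i++) {
            long start = System.nanoTime();
            long end = System.nanoTime();
            total += end - start;
        }
        System.out.println("average back-to-back difference: " + (total / trials) + " ns");
    }
}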



Tags: java nanotime