TimeSpan.FromSeconds takes a double, and a TimeSpan can represent values down to 100 nanoseconds (one tick), yet this method inexplicably rounds the time to whole milliseconds.

Given that I've just spent half an hour pinpointing this (documented!) behaviour, knowing why it might be the case would make the wasted time easier to put up with.

Can anyone suggest why this seemingly counter-productive behaviour was implemented?
Console.WriteLine(TimeSpan.FromSeconds(0.12345678).TotalSeconds);
// 0.123

Console.WriteLine(TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * 0.12345678)).TotalSeconds);
// 0.1234567
I think the explanation is here: "TimeSpan structure incorrectly handles values close to min and max value".

And it looks like it's not going to change any time soon :-)
Purely as speculation...

TimeSpan.MaxValue.TotalMilliseconds is equal to 922337203685477, a 15-digit number. A double is precise to about 15 significant digits. TimeSpan.FromSeconds, TimeSpan.FromMinutes, etc. all go through a conversion to milliseconds expressed as a double (then to ticks, then to a TimeSpan, which is not interesting here). So when you create a TimeSpan that is close to TimeSpan.MaxValue (or MinValue), the conversion can only be precise to whole milliseconds.

So the probable answer to the question "why" is: to have the same precision at all times.

A further thing to think about is whether the job could have been done better by first converting the value to ticks expressed as a long.
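The arithmetic can be sanity-checked with a sketch of mine (not from the original answer): at the magnitude of TimeSpan.MaxValue expressed in milliseconds, the gap between adjacent representable doubles is already an eighth of a millisecond.

double maxMillis = TimeSpan.MaxValue.TotalMilliseconds;
Console.WriteLine(maxMillis);
// 922337203685477

// distance to the next representable double at this magnitude
double next = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(maxMillis) + 1);
Console.WriteLine(next - maxMillis);
// 0.125

// anything below half of that gap is silently absorbed
Console.WriteLine(maxMillis + 0.01 == maxMillis);
// True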
FromSeconds uses the private method Interval, passing the constant 0x3e8 == 1000 (milliseconds per second). The Interval method multiplies the value by that constant and then casts it to long (see the last line of the sketch below). As a result we get precision to 3 decimal places (×1000). Use Reflector to investigate.
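For reference, the decompiled code looks roughly like this (paraphrased from the .NET Framework reference source; exact details vary between versions):

using System;

static class TimeSpanDecompiled
{
    // TimeSpan.FromSeconds forwards to Interval with scale = 1000 (0x3e8)
    public static TimeSpan FromSeconds(double value)
    {
        return Interval(value, 1000);
    }

    private static TimeSpan Interval(double value, int scale)
    {
        if (double.IsNaN(value))
            throw new ArgumentException("value cannot be NaN");
        double tmp = value * scale;                      // seconds -> milliseconds
        double millis = tmp + (value >= 0 ? 0.5 : -0.5); // round half away from zero
        if (millis > long.MaxValue / TimeSpan.TicksPerMillisecond ||
            millis < long.MinValue / TimeSpan.TicksPerMillisecond)
            throw new OverflowException("TimeSpan overflowed");
        // the cast to long happens while the value is still in milliseconds,
        // so everything finer than a millisecond is discarded before the
        // scale-up to ticks
        return new TimeSpan((long)millis * TimeSpan.TicksPerMillisecond);
    }
}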
As you've found out yourself, it's a documented feature. It's described in the documentation of TimeSpan.

The reason for this is probably that a double is not all that accurate. It is always a good idea to do some rounding when comparing doubles, because the value might be a very tiny bit larger or smaller than you'd expect. That behaviour could actually give you some unexpected nanoseconds when you try to put in whole milliseconds. I think that is why they chose to round the value to whole milliseconds and discard the smaller digits.
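To illustrate with an example of my own (not from the original answer): plain double arithmetic already carries this kind of noise, and the rounding hides it in the resulting TimeSpan.

Console.WriteLine(0.1 + 0.2 == 0.3);
// False (the sum is actually 0.30000000000000004)

// thanks to the rounding, both of these come out as exactly 300 ms
Console.WriteLine(TimeSpan.FromSeconds(0.1 + 0.2) == TimeSpan.FromSeconds(0.3));
// True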
Imagine you're the developer responsible for designing the TimeSpan type. You've got all the basic functionality in place; it all seems to be working great. Then one day some beta tester comes along and shows you this code:
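(The snippet isn't reproduced here, so the following is a reconstruction of my own: it simulates FromSeconds without the rounding by going through ticks, and uses values chosen to stay inside TimeSpan's actual range.)

double x = 100000000000; // a large number of seconds
double y = 0.5;

// "FromSeconds without rounding", built directly from ticks
TimeSpan t1 = TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * (x + y)));
TimeSpan t2 = TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * x))
            + TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * y));

Console.WriteLine(t1 == t2);
// False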
Why does that output False? the tester asks you. Even though you understand why this happened (the loss of precision in adding together x and y), you have to admit it does seem a bit strange from a client perspective. Then he throws this one at you:
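(Again a reconstruction along the same lines; this time an entire tick's worth of seconds vanishes in the double before the conversion even starts.)

TimeSpan t3 = TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * x));
TimeSpan t4 = TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * (x + 0.0000001)));

Console.WriteLine(t3 == t4);
// True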
That one outputs True! The tester is understandably skeptical.

At this point you have a decision to make. Either you can allow an arithmetic operation between TimeSpan values that have been constructed from double values to yield a result whose precision exceeds the accuracy of the double type itself (e.g., 100000000000000.5, which has 16 significant figures), or you can, you know, not allow that.

So you decide, you know what, I'll just make it so that any method that uses a double to construct a TimeSpan rounds to the nearest millisecond. That way, it is explicitly documented that converting from a double to a TimeSpan is a lossy operation, which absolves me in cases where a client sees weird behavior like this after converting from a double to a TimeSpan and hoping for an accurate result.

I'm not necessarily arguing that this is the "right" decision here; clearly, this approach causes some confusion on its own. I'm just saying that a decision needed to be made one way or the other, and this is what was apparently decided.