Regarding Unix (POSIX) time, Wikipedia says:
Due to its handling of leap seconds, it is neither a linear representation of time nor a true representation of UTC.
But the Unix date command does not actually seem to be aware of them:
$ date -d '@867715199' --utc
Mon Jun 30 23:59:59 UTC 1997
$ date -d '@867715200' --utc
Tue Jul 1 00:00:00 UTC 1997
While there should be a leap second there at Mon Jun 30 23:59:60 UTC 1997.
Does this mean that only the date command ignores leap seconds, while the concept of Unix time doesn't?
Unix time is easy to work with, but some timestamps are not real times, and some timestamps are not unique times.
That is, there are some duplicate timestamps representing two different real-world seconds, because in Unix time the sixtieth second of a minute sometimes has to repeat itself (as there can't be a sixty-first second). In theory there could also be gaps in the future, because the sixtieth second doesn't have to exist, although no negative (skipped) leap second has been issued so far.
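To see this concretely, here is a small Python sketch of my own (standard library only); the timestamps match the ones in the question, and there is simply no value left over for 23:59:60:

from datetime import datetime, timezone

# The two instants from the question, one Unix second apart:
before = datetime(1997, 6, 30, 23, 59, 59, tzinfo=timezone.utc).timestamp()
after = datetime(1997, 7, 1, 0, 0, 0, tzinfo=timezone.utc).timestamp()
print(before, after)   # 867715199.0 867715200.0 -- nothing in between

# The leap second itself can't even be spelled in POSIX-style APIs:
# datetime(1997, 6, 30, 23, 59, 60, tzinfo=timezone.utc)  raises ValueError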
Rationale for Unix time: it's defined so that it's easy to work with. Adding support for leap seconds to the standard libraries is very tricky. For example, suppose you want to represent 1 Jan 2050 in a database. No one on earth knows how many seconds away that date is in UTC! The date can't be stored as a count of elapsed UTC seconds, because the IERS doesn't know how many leap seconds we'll have to add over the next decades (they're as good as random). So how can a programmer do date arithmetic when the length of time that will elapse between any two future dates isn't known until a year or two beforehand? Unix time is simple: we know the timestamp of 1 Jan 2050 already (namely, 80 years' worth of fixed 86400-second days after the epoch). UTC is extremely hard to work with all year round, whereas Unix time is only hard to work with in the instant a leap second occurs.
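To make that arithmetic concrete, here is a quick Python illustration of my own (not part of the rationale above): because every Unix day is exactly 86400 seconds, the timestamp of 1 Jan 2050 is already known today.

from datetime import datetime, timezone

# Already fixed, leap seconds or not:
t2050 = int(datetime(2050, 1, 1, tzinfo=timezone.utc).timestamp())
print(t2050)                       # 2524608000

# Sanity check by hand: 80 years of 365 days plus the 20 leap days 1972..2048.
print((80 * 365 + 20) * 86400)     # 2524608000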
For what it's worth, I've never met a programmer who agrees with leap seconds. They should clearly be abolished.
The number of seconds per day is fixed with Unix timestamps, so they cannot represent leap seconds. The OS will slow down (slew) the clock to accommodate for this. Leap seconds simply do not exist as far as Unix timestamps are concerned.
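As a small Python illustration (mine, not the answer's) of what "fixed number of seconds per day" means in practice: even across the 30 June 1997 leap-second day, consecutive UTC midnights are exactly 86400 Unix seconds apart.

from datetime import datetime, timezone

def midnight_ts(year, month, day):
    # Unix timestamp of 00:00:00 UTC on the given calendar date
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

# 30 Jun 1997 really lasted 86401 SI seconds (it ended with a leap second),
# but in Unix time the day is still exactly 86400 timestamps wide:
print(midnight_ts(1997, 7, 1) - midnight_ts(1997, 6, 30))   # 86400
print(midnight_ts(1997, 7, 1))                              # 867715200, a multiple of 86400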
Since both of the other answers contain lots of misleading information, I'll throw this in.
Thomas is right that the number of Unix Epoch timestamp seconds per day is fixed. What this means is that on days when there is a leap second, the second right before midnight (the 61st second of the UTC minute before midnight) is given the same timestamp as the previous second.
That timestamp is "replayed", if you will. So the same Unix timestamp is used for two real-world seconds. This also means that if you're working with fractional Unix timestamps, a whole second's worth of values will repeat. For example, the timestamps might run:
X86399.0, X86399.5, X86400.0, X86400.5, X86400.0, X86400.5, then X86401.0.

So unix time can't unambiguously represent leap seconds - the leap second timestamp is also the timestamp for the previous real-world second.
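If it helps, here is a hypothetical Python walk-through of that sequence for the 30 Jun 1997 leap second (so X86400 corresponds to 867715200), assuming the clock is stepped back when the inserted second ends; real systems differ in exactly how the kernel or NTP handles this.

# Real-world UTC instants around the leap second, paired with the Unix
# timestamps a clock showing the "replay" described above would report:
replay = [
    ("23:59:59.0", 867715199.0),
    ("23:59:59.5", 867715199.5),
    ("23:59:60.0", 867715200.0),   # the inserted leap second...
    ("23:59:60.5", 867715200.5),
    ("00:00:00.0", 867715200.0),   # ...and the following midnight reuse the same values
    ("00:00:00.5", 867715200.5),
    ("00:00:01.0", 867715201.0),
]
for utc, unix_ts in replay:
    print(utc, unix_ts)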