I searched on a Linux box and saw it being typedef'd as
typedef __time_t time_t;
but I could not find the definition of __time_t.
Standards
William Brendel quoted Wikipedia, but I prefer it from the horse's mouth.
C99 N1256 standard draft 7.23.1/3 "Components of time" declares clock_t and time_t as "arithmetic types capable of representing times", and 6.2.5/18 "Types" says that "Integer and floating types are collectively called arithmetic types."
POSIX 7 sys_types.h says that time_t "shall be an integer type", where the [CX] marking on that sentence means it is an extension to the ISO C standard. It is an extension because it makes a stronger guarantee: floating-point types are ruled out.
gcc one-liner
No need to create a file as mentioned by Quassnoi:
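A one-liner along these lines should do it (reconstructed here from the flag breakdown below; the grep filter is my own addition):
echo | gcc -E -xc -include 'time.h' - | grep time_t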
On Ubuntu 15.10 GCC 5.2 the top two lines are:
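I have not re-verified that exact release, but on an x86_64 glibc system the relevant output lines should look like:
typedef long int __time_t;
typedef __time_t time_t;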
Command breakdown, with some quotes from man gcc:
-E: "Stop after the preprocessing stage; do not run the compiler proper."
-xc: specify the C language, since the input comes from stdin, which has no file extension.
-include file: "Process file as if "#include "file"" appeared as the first line of the primary source file."
-: read the input from stdin

On Windows, time_t is just a typedef for an 8-byte type (long long / __int64), which all current compilers and OS versions understand. Back in the day it was just long int (4 bytes), but not any more. If you look at time_t in crtdefs.h you will find both implementations, but the OS will use long long.

Robust code does not care what the type is.
C specifies time_t to be a real type like double, long long, int64_t, int, etc. It could even be unsigned, since the error return value from many time functions is specified as (time_t)(-1) rather than -1. That implementation choice is uncommon, though.

The point is that "needing to know" the type is rare. Code should be written to avoid the need.

Yet a common "need-to-know" occurs when code wants to print the raw time_t. Casting to the widest integer type will accommodate most modern cases.
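A minimal sketch of that cast (and, for comparison, the double cast discussed next); the variable name now is mine and is assumed to hold a value returned by time():

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    /* Cast to the widest signed integer type and print with %jd. */
    printf("%jd\n", (intmax_t) now);
    /* Casting to double also works, but the decimal output may be inexact. */
    printf("%.16e\n", (double) now);
    return 0;
}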
Casting to a double or long double will work too (the second printf above), yet it could produce inexact decimal output.

The answer is definitely implementation-specific. To find out definitively for your platform/compiler, just add this output somewhere in your code:
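For example, something along these lines (the exact message text is mine):

#include <stdio.h>
#include <time.h>

int main(void) {
    /* Report how many bytes time_t occupies on this platform. */
    printf("sizeof(time_t) = %zu\n", sizeof(time_t));
    return 0;
}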
If the answer is 4 (32 bits) and your data is meant to go beyond 2038, then you have 25 years to migrate your code.
Your data will be fine if you store it as a string, even if it's something simple like:
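A rough sketch of that idea (the file name, the text format, and the long long read-back type are my choices):

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);

    /* Write the epoch value as plain decimal text. */
    FILE *out = fopen("timestamp.txt", "w");
    if (out) {
        fprintf(out, "%lld\n", (long long) now);
        fclose(out);
    }

    /* Read it back the same way into a wide integer. */
    FILE *in = fopen("timestamp.txt", "r");
    if (in) {
        long long stored = -1;
        if (fscanf(in, "%lld", &stored) != 1) {
            /* handle a malformed file here */
        }
        fclose(in);
    }
    return 0;
}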
Then just read it back the same way (fread, fscanf, etc. into an int), and you have your epoch offset time. A similar workaround exists in .Net. I pass 64-bit epoch numbers between Win and Linux systems with no problem (over a communications channel). That brings up byte-ordering issues, but that's another subject.
To answer paxdiablo's query, I'd say that it printed "19100" because the program was written this way (and I admit I did this myself back in the '80s):
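A reconstruction of that kind of program (the surrounding code and the tm variable name are mine):

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *tm = localtime(&now);
    /* Bug: tm_year is "years since 1900", so in 2000 this prints "Year is: 19100". */
    printf("Year is: 19%02d\n", tm->tm_year);
    return 0;
}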
The printf statement prints the fixed string "Year is: 19" followed by a zero-padded rendering of the "years since 1900" value (the definition of tm->tm_year). In 2000, that value is 100, obviously. "%02d" pads to two digits with zeros, but it does not truncate values longer than two digits. The correct way is (a change to the last line only):
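That last line would become something like:
printf("Year is: %d\n", tm->tm_year + 1900);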
New question: What's the rationale for that thinking?
Typically you will find the underlying implementation-specific typedefs for gcc in the bits or asm header directory. For me, it is /usr/include/x86_64-linux-gnu/bits/types.h. You can just grep, or use a preprocessor invocation like the one Quassnoi suggests, to see which specific header it is.
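For example, a grep along these lines (the pattern and scope are my choice):
grep -rn __time_t /usr/include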
time_t is of type long int on 64-bit machines; otherwise it is long long int. You can verify this in these header files:
time.h: /usr/include
types.h and typesizes.h: /usr/include/x86_64-linux-gnu/bits
(The statements below do not appear one after another; each can be found in the respective header file with a Ctrl+F search.)
1) In time.h:
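The line quoted at the top of this question:
typedef __time_t time_t;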
2) In types.h:
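Roughly the following (paraphrased; the exact macro plumbing differs between glibc versions):
#define __STD_TYPE      typedef
__STD_TYPE __TIME_T_TYPE __time_t;  /* Seconds since the Epoch.  */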
3) In typesizes.h:
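Roughly (again paraphrased from the x86_64 bits/typesizes.h):
#define __TIME_T_TYPE   __SYSCALL_SLONG_TYPE
#if defined __x86_64__ && defined __ILP32__
# define __SYSCALL_SLONG_TYPE   __SQUAD_TYPE
#else
# define __SYSCALL_SLONG_TYPE   __SLONGWORD_TYPE
#endif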
4) Again in types.h:
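Roughly:
#define __SLONGWORD_TYPE   long int
#if __WORDSIZE == 32
# define __SQUAD_TYPE      __quad_t   /* long long int here */
#elif __WORDSIZE == 64
# define __SQUAD_TYPE      long int
#endif
Following that chain, a 64-bit build ends up with __time_t as long int, while the __SQUAD_TYPE path ends up with long long int.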