#include <stdio.h>

static inline unsigned long long tick()
{
    unsigned long long d;
    __asm__ __volatile__ ("rdtsc" : "=A" (d));
    return d;
}

int main()
{
    long long res;
    res = tick();
    res = tick() - res;
    printf("%d", res);
    return 0;
}
I have compiled this code with gcc at the -O0, -O1, -O2 and -O3 optimization levels, and I always get 2000-2500 cycles. Can anyone explain the reason for this result? Where do these cycles go?
The first "tick" function is wrong. This other version of "tick" is right:
static __inline__ unsigned long long tick()
{
    unsigned hi, lo;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)lo) | (((unsigned long long)hi) << 32);
}
This is the assembly code generated at -O3:
.file "rdtsc.c"
.section .rodata.str1.1,"aMS",@progbits,1
.LC0:
.string "%d"
.text
.p2align 4,,15
.globl main
.type main, @function
main:
leal 4(%esp), %ecx
andl $-16, %esp
pushl -4(%ecx)
pushl %ebp
movl %esp, %ebp
subl $40, %esp
movl %ecx, -16(%ebp)
movl %ebx, -12(%ebp)
movl %esi, -8(%ebp)
movl %edi, -4(%ebp)
#APP
# 6 "rdtsc.c" 1
rdtsc
# 0 "" 2
#NO_APP
movl %edx, %edi
movl %eax, %esi
#APP
# 6 "rdtsc.c" 1
rdtsc
# 0 "" 2
#NO_APP
movl %eax, %ecx
movl %edx, %ebx
subl %esi, %ecx
sbbl %edi, %ebx
movl %ecx, 4(%esp)
movl %ebx, 8(%esp)
movl $.LC0, (%esp)
call printf
movl -16(%ebp), %ecx
xorl %eax, %eax
movl -12(%ebp), %ebx
movl -8(%ebp), %esi
movl -4(%ebp), %edi
movl %ebp, %esp
popl %ebp
leal -4(%ecx), %esp
ret
.size main, .-main
.ident "GCC: (Debian 4.3.2-1.1) 4.3.2"
.section .note.GNU-stack,"",@progbits
This is the CPU:
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 3.00GHz
stepping : 3
cpu MHz : 3000.105
cache size : 2048 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss constant_tsc up pebs bts pni
bogomips : 6036.62
clflush size : 64
I've tried your code on several Linux distros running on different Intel CPUs (admittedly all more recent than the Pentium 4 HT 630 you appear to be using). In all those tests I got values between 25 and 50 cycles.
My only hypothesis that's consistent with all the evidence is that you're running your operating system inside a virtual machine rather than on bare metal, and TSC is getting virtualized.
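One quick way to test that hypothesis is to ask CPUID whether a hypervisor announces itself. This is only a sketch (it assumes GCC's <cpuid.h> and an x86 target): on leaf 1, ECX bit 31 is left clear on bare metal, and most, though not all, hypervisors set it.

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 31 is the "hypervisor present" bit.
       Bare metal leaves it clear; most hypervisors set it. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("hypervisor bit: %s\n", (ecx & (1u << 31)) ? "set" : "clear");
    return 0;
}

If the bit is set, a trapped or emulated RDTSC would easily explain readings in the thousands of cycles.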
There are any number of reasons why you could get a large value:
- The OS did a context switch, and your process got put to sleep.
- A disk seek occurred, and your process got put to sleep.
- …any of a slew of reasons as to why your process might get ignored.
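One way to filter most of those events out (a sketch of my own, not part of the original measurement) is to repeat the back-to-back reading many times and keep the smallest difference; interruptions only ever make the delta larger, so the minimum is close to the true cost. REPS is an arbitrary repetition count chosen here, and the tick() function from the question is assumed to be defined above.

#include <stdio.h>

#define REPS 1000   /* arbitrary number of repetitions */

/* assumes the tick() function from the question is defined above */
int main(void)
{
    unsigned long long best = ~0ULL;

    for (int i = 0; i < REPS; i++) {
        unsigned long long t0 = tick();
        unsigned long long t1 = tick();
        if (t1 - t0 < best)
            best = t1 - t0;            /* keep the minimum back-to-back delta */
    }
    printf("min delta: %llu cycles\n", best);
    return 0;
}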
Note that rdtsc is not particularly reliable for timing without extra work, because:
- Processor speeds can change, and thus, the length of a cycle (when measured in seconds) changes.
- Different processors may have different values for the TSC for a given instant in time.
Most operating systems have a high-precision clock or timing method. Use clock_gettime on Linux, for example, particularly the monotonic clocks. (Understand too the difference between a wall clock and a monotonic clock: a wall clock can move backwards, even in UTC.) On Windows, I think the recommendation is QueryPerformanceCounter. Typically these clocks provide more than enough accuracy for most needs.
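For example, a minimal elapsed-time measurement with the monotonic clock might look like this (a sketch; older glibc versions need -lrt at link time):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec a, b;

    clock_gettime(CLOCK_MONOTONIC, &a);
    /* ... code under test ... */
    clock_gettime(CLOCK_MONOTONIC, &b);

    long long ns = (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
    printf("elapsed: %lld ns\n", ns);
    return 0;
}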
Also, looking at the assembly, it looks like you're only getting 32 bits of the answer: I don't see %edx getting saved after rdtsc.
Running your code, I get timings of 120-150 ns for clock_gettime using CLOCK_MONOTONIC, and 70-90 cycles for rdtsc (~20 ns at full speed, but I suspect the processor is clocked down, so it is really about 50 ns). (On a desktop (I first wrote laptop; darn SSH, forgot which machine I was on!) that is at about a constant 20% CPU use.) Are you sure your machine isn't bogged down?
It looks like your OS has disabled execution of RDTSC in user space, so your application has to switch into the kernel and back, which takes a lot of cycles.
This is from the Intel Software Developer’s Manual:
When in protected or virtual 8086 mode, the time stamp disable (TSD) flag in
register CR4 restricts the use of the RDTSC instruction as follows. When the TSD flag
is clear, the RDTSC instruction can be executed at any privilege level; when the flag
is set, the instruction can only be executed at privilege level 0. (When in real-address
mode, the RDTSC instruction is always enabled.)
Edit:
In answer to aix's comment, let me explain why TSD is most likely the reason here.
I know of only these ways a program can end up taking much longer than usual on a single instruction:
- running under some emulator,
- using self-modifying code,
- a context switch,
- a kernel switch.
The first two usually cannot delay execution by more than a few hundred cycles; 2000-2500 cycles is more typical of a context switch or a kernel switch. But it is practically impossible to catch a context switch several times at the same spot, so it must be a kernel switch, which means that either the program is running under a debugger or RDTSC is not allowed in user mode.
The most likely reason for an OS to disable RDTSC is security: there have been attempts to use RDTSC timing to crack encryption programs.
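On Linux, you can at least query the per-process RDTSC setting with prctl (a sketch; PR_GET_TSC exists since kernel 2.6.26 on x86, and it only reflects the prctl flag, not CR4.TSD set by other means):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
    int tsc_state = 0;

    /* PR_TSC_ENABLE: rdtsc allowed in user mode;
       PR_TSC_SIGSEGV: rdtsc raises SIGSEGV in user mode. */
    if (prctl(PR_GET_TSC, &tsc_state) == 0)
        printf("rdtsc in user mode: %s\n",
               tsc_state == PR_TSC_ENABLE ? "enabled" : "disabled");
    return 0;
}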
An instruction cache miss? (This is my guess.)
Also, possibly:
- a switch to the hypervisor in a virtualized system?
- remnants of program bootstrap (including network activity on the same CPU)?
To Thanatos: On systems more recent than 2008, rdtsc() is a wall clock and does not vary with frequency steps.
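Whether a given CPU has that constant (invariant) TSC can be checked from CPUID (a sketch using GCC's <cpuid.h>; extended leaf 0x80000007, EDX bit 8 is the invariant-TSC flag):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 0x80000007: EDX bit 8 = invariant TSC
       (the counter runs at a constant rate regardless of frequency steps). */
    if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
        printf("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
    return 0;
}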
Can you try this little code?
#include <stdio.h>

// Uses the corrected tick() defined above.
int main()
{
    long long res;
    fflush(stdout); // change the exact timing of stdout, in case there is something to write over an ssh connection, together with its interrupts
    for (int pass = 0; pass < 2; pass++)
    {
        res = tick();
        res = tick() - res;
    }
    printf("%lld", res); // ignore the result of the first pass, display the result of the second pass
    return 0;
}
Just an idea: maybe these two rdtsc instructions are executed on different cores? rdtsc values may vary slightly across cores.
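One way to rule that out (a Linux-specific sketch) is to pin the process to a single CPU before taking the two readings:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                          /* run only on CPU 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* ... take both tick() readings here, now guaranteed to run on one core ... */
    return 0;
}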