Weird program latency behavior on VM

Posted 2019-07-16 07:21

Question:

I wrote a program that walks a 256 KB array; one run should take about 1 ms. The program is pretty simple and attached below. However, when I run it in a VM on Xen, I find that the latency is not stable. It shows the following pattern (the time unit is ms):

    #totalCycle CyclePerLine  totalms
    22583885 5513 6.452539
    3474342 848 0.992669
    3208486 783 0.916710
    25848572 6310 7.385306
    3225768 787 0.921648
    3210487 783 0.917282
    25974700 6341 7.421343
    3244891 792 0.927112
    3276027 799 0.936008
    25641513 6260 7.326147
    3531084 862 1.008881
    3233687 789 0.923911
    22397733 5468 6.399352
    3523403 860 1.006687
    3586178 875 1.024622
    26094384 6370 7.455538
    3540329 864 1.011523
    3812086 930 1.089167
    25907966 6325 7.402276

I'm thinking some process is doing something periodically, like an event-driven process. Has anyone encountered this before? Or can anyone point out the potential process/service that could make this happen?

Below is my program. I run it 1000 times; each run produces one line of the output above.

#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <ctime>
#include <algorithm>  // std::swap

using namespace std;

#if defined(__i386__)
// Read the 64-bit time-stamp counter on 32-bit x86 ("=A" returns EDX:EAX).
static __inline__ unsigned long long rdtsc(void)
{
    unsigned long long int x;
    __asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));  // raw RDTSC opcode bytes
    return x;
}
#elif defined(__x86_64__)
// Read the 64-bit time-stamp counter on x86-64 (combine EDX:EAX by hand).
static __inline__ unsigned long long rdtsc(void)
{
    unsigned hi, lo;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)lo) | (((unsigned long long)hi) << 32);
}
#endif

#define CACHE_LINE_SIZE 64

#define WSS 24567 /* working-set size in KB (~24 MB); not used below */
#define NUM_VARS (WSS * 1024 / sizeof(long)) /* not used below */

#define KHZ 3500000 /* assumed TSC frequency in kHz (3.5 GHz), for the cycles-to-ms conversion */

// Usage: ./a.out <iterations>  (argv[1] is the outer repeat count, not a memory size)
int main(int argc, char** argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <iterations>\n", argv[0]);
        return 1;
    }
    unsigned long wcet = atol(argv[1]);             // outer repeat count
    unsigned long mem_size_KB = 256;                // working set in KB
    unsigned long mem_size_B  = mem_size_KB * 1024; // working set in bytes
    unsigned long count       = mem_size_B / sizeof(long);      // longs in the buffer
    unsigned long row         = mem_size_B / CACHE_LINE_SIZE;   // number of cache lines
    int           col         = CACHE_LINE_SIZE / sizeof(long); // longs per cache line

    unsigned long long start, finish, dur1;
    unsigned long temp;

    long *buffer;
    buffer = new long[count];

    // init array
    for (unsigned long i = 0; i < count; ++i)
        buffer[i] = i;

    // Shuffle the cache lines into one random cycle (Sattolo's algorithm)
    // so the walk below chases a pointer from line to line.
    for (unsigned long i = row-1; i > 0; --i) {
        temp = rand() % i;
        swap(buffer[i*col], buffer[temp*col]);
    }

    // Warm the cache: walk the whole pointer chain once.
    temp = buffer[0];
    for (unsigned long i = 0; i < row-1; ++i) {
        temp = buffer[temp];
    }

    // Restart the walk at the head of the chain; the data should now be cache-resident.
    temp = buffer[0];
    start = rdtsc();
    int sum = 0;
    for (unsigned long wcet_i = 0; wcet_i < wcet; wcet_i++)
    {
        for (int j = 0; j < 21; j++)
        {
            for (unsigned long i = 0; i < row-1; ++i) {
                if (i%2 == 0) sum += buffer[temp];
                else sum -= buffer[temp];
                temp = buffer[temp];  // data-dependent load: jump to the next line in the chain
            }
        }
    }
    finish = rdtsc();
    dur1 = finish - start;

    // Keep 'sum' observable so the compiler cannot optimize the timed loop away.
    volatile int sink = sum; (void)sink;

    // Result: total cycles, cycles per cache line, elapsed ms (given the assumed KHZ).
    printf("%llu %llu %.6f\n", dur1, dur1/row, dur1*1.0/KHZ);
    delete[] buffer;
    return 0;
}

Answer 1:

The use of the RDTSC instruction in a virtual machine is complicated. It is likely that the hypervisor (Xen) is emulating the RDTSC instruction by trapping it. Your fastest runs show around 800 cycles per cache line, which is very, very slow... the only explanation is that RDTSC results in a trap that is handled by the hypervisor, and that overhead is the performance bottleneck. I'm not sure about the even longer times that you see periodically, but given that RDTSC is being trapped, all timing bets are off.
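
A quick way to check this is to time RDTSC itself with back-to-back reads. Here is a minimal sketch of my own (assuming GCC/Clang on x86-64; the loop count N is arbitrary): on bare metal one RDTSC costs a few tens of cycles, while a trapped-and-emulated RDTSC costs thousands.

#include <cstdio>

// Read the 64-bit time-stamp counter (x86-64).
static inline unsigned long long rdtsc(void)
{
    unsigned hi, lo;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)lo) | (((unsigned long long)hi) << 32);
}

int main()
{
    const int N = 100000;
    unsigned long long start = rdtsc();
    for (int i = 0; i < N; ++i)
        (void)rdtsc();          // nothing in the loop but RDTSC itself
    unsigned long long finish = rdtsc();
    // Average cost of one RDTSC; thousands of cycles point at trap-and-emulate.
    printf("avg cycles per RDTSC: %.1f\n", (double)(finish - start) / N);
    return 0;
}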

You can read more about it here:

http://xenbits.xen.org/docs/4.2-testing/misc/tscmode.txt

    Instructions in the rdtsc family are non-privileged, but privileged
    software may set a cpuid bit to cause all rdtsc family instructions
    to trap. This trap can be detected by Xen, which can then
    transparently "emulate" the results of the rdtsc instruction and
    return control to the code following the rdtsc instruction.

By the way, that article is wrong on one detail: the hypervisor doesn't set a cpuid bit to make RDTSC trap; it sets bit #2, the TSD flag, in Control Register 4 (CR4.TSD):

http://en.wikipedia.org/wiki/Control_register#CR4
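
If trapping turns out to be the cause, the same tscmode.txt document describes a per-domain tsc_mode setting that controls whether Xen emulates RDTSC. As a hedged sketch (verify the exact syntax against your Xen version's xl.cfg documentation), asking for "never emulate" in the guest config executes RDTSC natively and removes the trap overhead, at the cost of the correctness guarantees emulation provides:

    # Xen domU config excerpt (per tscmode.txt; check your xl.cfg man page).
    # tsc_mode = 2 means "never emulate": RDTSC runs natively, no trap.
    tsc_mode = 2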



Tags: c++ linux kvm xen