Why is the MSVC compiler weirdly slower than GCC on Linux?

Posted 2019-05-31 22:05

I can't figure out why the execution time of the following code snippet varies so much between a Windows (MSVC++) virtual machine, a Linux (GCC) virtual machine, and a Mac (Xcode) physical machine.

#include <iostream>
#include <cstdio>   // getchar
#include <ctime>
#include <ratio>
#include <chrono>

using namespace std;
using namespace std::chrono;


int main()
{
    const int TIMES = 100;
    const int STARS = 1000;

    steady_clock::time_point t1;   // start of one timed run
    steady_clock::time_point t2;   // end of one timed run
    int totalCountMicro = 0;
    int totalCountMilli = 0;

    for(int i = 0; i < TIMES; i++) {
        t1 = steady_clock::now();
        for (int j = 0; j < STARS; j++) cout << "*";   // one stream call per star
        t2 = steady_clock::now();
        cout << endl;
        totalCountMilli += duration_cast<duration<int, milli>>(t2 - t1).count();
        totalCountMicro += duration_cast<duration<int, micro>>(t2 - t1).count();
    }

    cout << "printing out " << STARS << " stars " << TIMES << " times..." << endl;
    cout << "takes " << (totalCountMilli / TIMES) << " milliseconds on average." << endl;
    cout << "takes " << (totalCountMicro / TIMES) << " microseconds on average." << endl;

    getchar();

    return 0;
}

The code above prints 1000 stars 100 times and computes the average time it takes to print the 1000 stars.
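To help narrow down where the time goes, here is a minimal sketch (not part of the original program) that times the same star-printing loop against an in-memory std::ostringstream as well as std::cout, so that string-formatting cost and console/terminal cost can be compared separately:

// Sketch: compare formatting-only time (in-memory stream) with console time.
#include <chrono>
#include <iostream>
#include <sstream>

using namespace std::chrono;

template <typename Stream>
long long timeStars(Stream& out, int stars)
{
    auto t1 = steady_clock::now();
    for (int j = 0; j < stars; j++) out << "*";
    out << "\n";
    auto t2 = steady_clock::now();
    return duration_cast<microseconds>(t2 - t1).count();
}

int main()
{
    const int STARS = 1000;

    std::ostringstream buffer;                        // no console involved
    long long memoryTime  = timeStars(buffer, STARS); // formatting only
    long long consoleTime = timeStars(std::cout, STARS);

    std::cout << "in-memory stream: " << memoryTime  << " us\n";
    std::cout << "console stream:   " << consoleTime << " us\n";
    return 0;
}

If the in-memory numbers are similar on all machines while the console numbers diverge, the difference is in the terminal/console path rather than in the compiled code itself.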

The results are:

Windows virtual machine:

  • MSVC: 33554 microseconds
  • GCC: 40787 microseconds

Linux virtual machine:

  • GCC: 39 microseconds

OSX physical machine:

  • Xcode: 173 microseconds

My first thought was that the virtual machine itself was the problem, but since the Linux virtual machine handles it quickly, I suspect there is some other reason that I'm not aware of.
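One variant I could try (a sketch, assuming that per-call console overhead rather than formatting is the bottleneck) is to build the whole row of stars once and write it with a single operator<< call, then compare the timing with the character-by-character version above:

// Sketch: write each row with one call instead of 1000 separate calls.
#include <chrono>
#include <iostream>
#include <string>

using namespace std::chrono;

int main()
{
    const int TIMES = 100;
    const int STARS = 1000;

    const std::string row(STARS, '*');   // pre-built row of 1000 stars
    long long totalMicro = 0;

    for (int i = 0; i < TIMES; i++) {
        auto t1 = steady_clock::now();
        std::cout << row << '\n';        // single write per row
        auto t2 = steady_clock::now();
        totalMicro += duration_cast<microseconds>(t2 - t1).count();
    }

    std::cout << "single-write version: " << (totalMicro / TIMES)
              << " microseconds on average.\n";
    return 0;
}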

Any thoughts or comments will be highly appreciated!
