Performance difference of iostream console output

Posted 2019-02-23 15:29

Given the following very simple for loop:

#include <iostream>

int main (void) {
    for (int i = 0; i < 1000000; i++) {
        std::cout << i << std::endl;
    }
}

Running this code on a clean Windows 8 Professional installation with Microsoft Visual Studio 2012 takes about 15 seconds for every 100k prints.

On Mac OS X, on the same computer, it takes barely 3 seconds for Xcode to output 1 million lines.

I'm almost 100% sure that it has nothing to do with the machine's performance and is just related to the output mechanics.

Can someone confirm this? Just so I know that my Windows and Visual Studio installations are fine.

3 Answers
淡お忘
#2 · 2019-02-23 15:43

This depends on external factors, such as the terminal application being used. For example, on OS X and Linux, you can bypass the terminal and run it with:

./program > /dev/null

It completes in about 0.2 seconds.

I/O in standard C++ is a blocking operation. That means the program "freezes" while it waits for the OS to process the output. In the end, if the terminal application isn't that fast, this will result in the program being "frozen" in a waiting state quite a lot.
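
One way to see how much those per-line blocking writes cost is to build the output in memory first and hand it to the OS in a single write; a rough sketch could look like this:

#include <iostream>
#include <sstream>

int main () {
    std::ostringstream buffer;               // accumulate all output in memory
    for (int i = 0; i < 1000000; i++) {
        buffer << i << '\n';                 // no system call, no flush
    }
    std::cout << buffer.str();               // one large blocking write at the end
}

The loop then runs at memory speed; the terminal is only involved once, in the final write.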

在下西门庆
#3 · 2019-02-23 15:44

std::endl flushes the stream, which is quite expensive.

Try this instead:

std::cout << i << '\n';

In most other usual interactive I/O scenarios, std::endl is redundant when used with std::cout because any input from std::cin, output to std::cerr, or program termination forces a call to std::cout.flush().

Use of std::endl in place of '\n', encouraged by some sources, may significantly degrade output performance.

Source
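
Putting both points together, a rough sketch of the rewritten loop could look like this (the std::ios::sync_with_stdio(false) call is an optional extra, not something mentioned above):

#include <iostream>

int main () {
    std::ios::sync_with_stdio(false);   // optional: detach C++ streams from C stdio
    for (int i = 0; i < 1000000; i++) {
        std::cout << i << '\n';         // '\n' does not force a flush
    }
}                                       // the buffer is flushed once, at program exit

With synchronisation disabled, std::cout keeps its own buffer and only flushes when it fills up or the program exits.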


EDIT: Output operations are costly and depend on external factors; this is why it is slow here. For example, the terminal application being used can be a factor in some performance issues.

You can avoid that by redirecting the output to /dev/null:

./a.out > /dev/null

On output performance, you can read this: http://codeforces.com/blog/entry/5217

仙女界的扛把子
#4 · 2019-02-23 15:56

Note that this is more speculation on my part, but still:

What I suspect is that the difference (between Windows and OS X) in overall runtime of your little test program has nothing to do with the code produced by the respective compilers.

From my experience with console output on windows, I strongly suspect the "bottleneck" here is shoveling the character data from your program to the Windows Console and cmd.exe displaying it.

It could simply be that the console/shell/bash on OSX is much faster accepting the output of the program than the Windows Console.

What you can try is to redirect the output of this program to a file (by using redirection when starting it on the command line: test.exe > output.txt) and see whether you measure any difference this way.
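
If you also want a number that does not depend on how fast the console repaints, a rough sketch is to time the loop inside the program itself with std::chrono:

#include <chrono>
#include <iostream>

int main () {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000000; i++) {
        std::cout << i << '\n';
    }
    auto end = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = end - start;
    std::cerr << "elapsed: " << elapsed.count() << " s\n";   // report on stderr so it survives stdout redirection
}

Printing the elapsed time on std::cerr keeps the measurement visible even when stdout is redirected to a file or /dev/null.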
