Is there a significant overhead associated with calling OutputDebugString in a release build?
Measured: 10M calls take about 50 seconds. I think that's significant overhead for unused functionality.
Using a macro can help get rid of this in release builds:
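A minimal sketch of such a macro (the name LOG_DEBUG is illustrative, not from the original answer):

    #include <windows.h>

    // Debug builds forward to OutputDebugString; release builds expand to
    // nothing, so the call, its argument evaluation and the string literal
    // all vanish from the binary.
    #ifdef _DEBUG
    #define LOG_DEBUG(text) OutputDebugString(text)
    #else
    #define LOG_DEBUG(text) ((void)0)
    #endif

Usage: LOG_DEBUG(TEXT("entering Foo()\n"));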
This not only removes the calls, but also the evaluation of the parameters; the text strings are removed entirely, so you won't see them in the binary file.
Why not measure it yourself? Compile the code below, run it and time it. Then remove the call to OutputDebugString, recompile and rerun. It should take about three minutes of your time.
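The snippet this answer refers to wasn't preserved here; the following is a minimal timing harness in the same spirit (the iteration count and message text are assumptions):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        const int kIterations = 10 * 1000 * 1000;  // 10M calls, matching the measurement above
        DWORD start = GetTickCount();
        for (int i = 0; i < kIterations; ++i)
            OutputDebugString(TEXT("test message\n"));  // remove this line for the second run
        DWORD elapsed = GetTickCount() - start;
        printf("%lu ms for %d calls\n", elapsed, kIterations);
        return 0;
    }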
I had read in an article that OutputDebugString internally does a few interesting things: it takes a named mutex and signals shared kernel objects (the DBWIN_* buffer and events) to hand the string over to any listener.
Even if no debugger is attached (as in a release build), there is significant cost involved in using OutputDebugString because of those kernel objects.
The performance hit is very evident if you write a sample program and test it.
I've not seen a problem in dozens of server-side release mode apps over the years, all of which have built-in metrics. You can get the impression that it's slow because most of the debug-catcher applications you can find (DBWIN32 et al) are pretty slow at throwing the data up onto the screen, which gives the impression of lag.
Of course all of our applications have this output disabled by default, but it is useful to be able to turn it on in the field, since you can then view debug output from several applications, serialised in something like DBWin32. This can be a very useful debugging technique for bugs which involve communicating applications.
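A sketch of what such a field-toggleable wrapper can look like, assuming a simple global flag (the names here are hypothetical, not from the original answer):

    #include <windows.h>

    // Debug output is disabled by default and switched on in the field
    // (e.g. from a config setting) when serialised tracing is needed.
    static bool g_debugOutputEnabled = false;

    void EnableDebugOutput(bool enable)  // hypothetical toggle
    {
        g_debugOutputEnabled = enable;
    }

    void DebugTrace(const TCHAR* text)
    {
        if (g_debugOutputEnabled)
            OutputDebugString(text);  // picked up by DBWin32/DebugView if one is running
    }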
I was curious about this topic so I did some research.
I've posted the results, the source code and the project files so that you can repeat the tests for your setup. The tests cover running a release mode app with nothing monitoring OutputDebugString, and then with Visual Studio 6, Visual Studio 2005 and Visual Studio 2010 monitoring OutputDebugString, to see what performance differences there are for each version of Visual Studio.
Interesting results: Visual Studio 2010 processes OutputDebugString information up to 7x slower than Visual Studio 6 does.
Full article here: What's the cost of OutputDebugString?
I'm writing this long after this question has been answered, but the given answers miss a certain aspect:
OutputDebugString can be quite fast when no one is listening to its output. However, having a listener running in the background (be it DbgView, DBWin32, Visual Studio etc.) can make it more than 10 times slower (much more in a multithreaded environment). The reason is that those listeners hook the report event, and their handling of the event takes place within the scope of the OutputDebugString call. Moreover, if several threads call OutputDebugString concurrently, they will be synchronized. For more, see Watch out: DebugView (OutputDebugString) & Performance.
As a side note, I think that unless you're running a real-time application, you should not be that worried about a facility that takes 50 seconds to run 10M calls. If your log contains 10M entries, the 50 seconds wasted are the least of your problems, now that you have to somehow analyze the beast. A 10K log sounds much more reasonable, and creating that will take only 0.05 seconds as per sharptooth's measurement.
So, if your output is of a reasonable size, using OutputDebugString should not hurt you much. However, keep in mind that a slowdown will occur once someone on the system starts listening to this output.