As Jeff Atwood asked: "What's your logging philosophy? Should all code be littered with .logthis() and .logthat() calls? Or do you inject logging after the fact somehow?"
I work with safety-critical real-time systems, and logging is often the only way to catch rare bugs that only turn up every 53rd Tuesday when it's a full moon, if you catch my drift. This kind of makes you obsessive about the subject, so I'll apologise now if I start to froth at the mouth.
I design systems which are capable of logging pretty much everything, but I don't turn everything on by default. The debug information is sent to a hidden debug dialog which timestamps it and outputs it to a listbox (limited to around 500 lines before deletion), and the dialog allows me to stop it, save it to a log file automatically, or divert it to an attached debugger such as DBWin32. That diversion allows me to see the debug output from multiple applications all neatly serialized, which can be a life saver sometimes. The log files are automatically purged every N days.
I used to use numeric logging levels (the higher you set the level, the more you capture), but this is too inflexible - as you work your way towards a bug it's much more efficient to be able to focus logging in on exactly what you need without having to wade through tons of detritus, and it may be one particular kind of transaction or operation that causes the error. If that requires you to turn everything on, you're just making your own job harder. You need something finer-grained.
So now I'm in the process of switching to logging based on a flag system. Everything that gets logged has a flag detailing what kind of operation it is, and there's a set of checkboxes allowing me to define what gets logged - typically errors, exceptions, state changes, and a flag per major subsystem, along the lines of the sketch below.
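A minimal sketch of what such a flag system might look like in C++ (the subsystem flag names and the log_msg helper are illustrative assumptions, not the actual implementation):

```cpp
#include <cstdint>
#include <cstdio>

// Each kind of operation gets its own bit; the set of enabled flags can be
// driven by checkboxes in a debug dialog, a config file, or the registry.
enum LogFlag : std::uint32_t {
    LOG_ERROR        = 1u << 0,
    LOG_BASIC        = 1u << 1,
    LOG_STATE_CHANGE = 1u << 2,
    LOG_EXCEPTION    = 1u << 3,
    LOG_SQL          = 1u << 4,   // hypothetical subsystem flags
    LOG_COMMS        = 1u << 5,
};

// Default field configuration: errors, basic flow, state changes, exceptions.
static std::uint32_t g_enabled_flags =
    LOG_ERROR | LOG_BASIC | LOG_STATE_CHANGE | LOG_EXCEPTION;

// Only emit the message if its flag is currently enabled.
void log_msg(std::uint32_t flag, const char* text)
{
    if (g_enabled_flags & flag)
        std::fprintf(stderr, "[%08x] %s\n", static_cast<unsigned>(flag), text);
}

// Usage: log_msg(LOG_STATE_CHANGE, "pump 2: IDLE -> RUNNING");
```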
This logging system ships with the release build, turned on and saving to file by default. It's too late to find out you should have been logging AFTER the bug has occurred, if that bug only occurs once every six months on average and you have no way of reproducing it.
The software typically ships with ERROR, BASIC, STATE_CHANGE and EXCEPTION turned on, but this can be changed in the field via the debug dialog (or a registry/ini/cfg setting, where these things get saved).
Oh and one thing - my debug system generates one file per day. Your requirements may be different. But make sure your debug code starts every file with the date, the version of the code you're running, and if possible some marker for the customer ID, the location of the system, or whatever. You can get a mish-mash of log files coming in from the field, and you need some record of what came from where and what version of the system they were running embedded in the data itself - you can't trust the customer/field engineer to tell you what version they've got; they may just tell you what version they THINK they've got. Worse, they may report the exe version that's on the disk while the old version is still running because they forgot to reboot after replacing it. Have your code tell you itself.
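For illustration, a header along those lines might be written like this when a new day's file is opened (the version string and site identifier here are placeholders I've made up):

```cpp
#include <cstdio>
#include <ctime>

// Write an identifying header at the top of each new log file, so the file
// itself records when it was opened, which build produced it, and which
// site it came from -- independent of what anyone tells you later.
void write_log_header(std::FILE* f, const char* app_version, const char* site_id)
{
    std::time_t now = std::time(nullptr);
    char stamp[32];
    std::strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", std::localtime(&now));
    std::fprintf(f, "=== Log opened %s ===\n", stamp);
    std::fprintf(f, "Build version  : %s\n", app_version); // the running exe, not the one on disk
    std::fprintf(f, "Site / customer: %s\n", site_id);
    std::fflush(f);
}
```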
That's my brain dumped...
My logging philosophy is pretty easily summarized in four parts:
Auditing, or business logic logging
Program logging
Performance logging
Security logging
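One way I'd illustrate keeping those four concerns separate is to give each its own named channel, so each can be filtered, routed and retained independently (the channel names and the LogChannel type below are just a sketch, not any particular library):

```cpp
#include <cstdio>
#include <string>

// One named channel per concern, so audit, program, performance and
// security output can be filtered, routed and retained independently.
struct LogChannel {
    std::string name;
    void log(const std::string& msg) const {
        std::printf("[%s] %s\n", name.c_str(), msg.c_str());
    }
};

static const LogChannel audit_log    {"AUDIT"};
static const LogChannel program_log  {"PROGRAM"};
static const LogChannel perf_log     {"PERF"};
static const LogChannel security_log {"SECURITY"};

// Usage:
//   audit_log.log("invoice 1042 approved by user jdoe");
//   security_log.log("3 failed login attempts for user jdoe");
```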
I agree with Adam, but I would also consider logging things of interest, or things you can point to as achievements, as a kind of proof that they happened.
If you're writing a program that will be used by many people, it's best to have some kind of mechanism to choose what will be logged and what won't. One argument in favor of .logthis() functions is that they can be an excellent replacement for inline comments in some instances (if done properly).
Plus, it helps you narrow down EXACTLY where an error is occurring.
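As a sketch of how a log call can pinpoint its own location, a macro that captures the file and line works well in C/C++ (LOG_HERE and log_at are names I've invented for illustration):

```cpp
#include <cstdio>

// Records the exact source location along with the message, so the log
// doubles as a breadcrumb trail showing precisely where execution went.
void log_at(const char* file, int line, const char* msg)
{
    std::fprintf(stderr, "%s:%d: %s\n", file, line, msg);
}

#define LOG_HERE(msg) log_at(__FILE__, __LINE__, (msg))

// Usage: LOG_HERE("about to parse configuration");
```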
I take what I consider a traditional approach; some logging, surrounded by conditional defines. For production builds, I turn off the defines.
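A minimal sketch of that approach, assuming a project-specific ENABLE_LOGGING define (the macro names are my own, not from any particular framework); in production builds the calls compile away entirely:

```cpp
#include <cstdio>

// Define ENABLE_LOGGING for debug/diagnostic builds (e.g. -DENABLE_LOGGING);
// leave it undefined for production so the logging calls vanish at compile time.
#ifdef ENABLE_LOGGING
    #define DBG_LOG(...) std::fprintf(stderr, __VA_ARGS__)
#else
    #define DBG_LOG(...) ((void)0)
#endif

int main()
{
    DBG_LOG("starting up\n");   // emitted only when ENABLE_LOGGING is defined
    return 0;
}
```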
I use logging as a way to narrow down issues that do not reproduce in our unit tests, or even when we repeat the exact steps provided by the user: those rare glitches that only show up on some very remote hardware (and sometimes, albeit very rarely, caused by a driver or third-party library glitch outside of our control).
I agree with the comment that this should all be caught by our testing procedure, but it's difficult for a million-plus LOC codebase that demands a lot of very low-level, performance-critical code to ever meet that standard. I don't work in mission-critical software, but I do work in the graphics industry, where we're often doing everything from implementing memory allocators to writing GPU code to using SIMD intrinsics.
Even with very modular, loosely-coupled or even completely decoupled code, the system interactions can lead to very complex inputs and outputs with behavior varying between platforms, where occasionally we have that rogue edge case which eludes our tests. Modular black boxes can be very simple, but the interactions between them can get very complex and lead to the occasional unanticipated edge case.
As an example of a case where logging saved my butt, one time I had an odd user with a prototype Intel machine that was crashing. We listed minimum requirements for machines which should support SSE 4, but this particular machine met those minimum requirements and still did not support Streaming SIMD Extensions past SSE 3, in spite of being a 16-core machine. Discovering that quickly was made possible by looking at his log, which showed precisely the line number where the SSE 4 instructions were used. None of us on the team could reproduce the issue, nor could any other user who participated in verifying the report. Ideally we should have written code paths for older SIMD versions, or at least done some branching and checking to make sure the hardware supported the minimum requirements, but we wanted to make a firm assumption, communicated through the minimum hardware requirements, for simplicity and economy. Here, perhaps, it's arguable that it was our minimum system requirements that had the "glitch".
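For what it's worth, the check we skipped is cheap; with GCC or Clang on x86 it might look something like this (the error handling is just an illustration of the idea):

```cpp
#include <cstdio>
#include <cstdlib>

// Verify at startup that the CPU actually provides the SIMD level the build
// assumes, instead of crashing later on an illegal instruction.
// __builtin_cpu_supports is a GCC/Clang builtin; MSVC would need __cpuid instead.
bool cpu_meets_requirements()
{
    __builtin_cpu_init();
    return __builtin_cpu_supports("sse4.1") && __builtin_cpu_supports("sse4.2");
}

int main()
{
    if (!cpu_meets_requirements()) {
        std::fprintf(stderr, "FATAL: this build requires SSE 4.x support\n");
        return EXIT_FAILURE;
    }
    std::puts("CPU supports SSE 4.x");
    return 0;
}
```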
Given the way I use logging here, we tend to get fairly large logs. However, the goal is not readability -- what's typically important is the last line of a log sent in with a report when the user experiences a crash of some sort that none of us on the team (let alone few other users in the world) can reproduce.
Nevertheless, one trick I employ regularly to avoid excessive log spamming is to assume that a piece of code which executes once successfully will also do so subsequently (not a hard guarantee, but often a reasonable assumption). So I often employ a log_once kind of function for granular functions, to avoid paying the cost of logging every time they are called; a sketch is included below.
I don't sprinkle log outputs all over the place (I might if I had the time). Typically I reserve them for the areas which seem the most dangerous: code invoking GLSL shaders (GPU vendors vary wildly here in terms of capability and even how they compile the code), code using SIMD intrinsics, very low-level code, code that inevitably has to rely on OS-specific behavior, low-level code making assumptions about the representation of PODs (ex: code that assumes 8 bits to a byte) -- the kind of cases where we would likewise sprinkle a lot of assertions and sanity checks as well as write the most unit tests. Typically this is enough, and logging has saved my butt many times when I would otherwise have been left with an unreproducible issue and had to take blind stabs at the problem, requiring many iterations of bouncing attempted fixes off the one user in the world who could reproduce it.
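A sketch of that log_once idea (the LOG_ONCE macro and the underlying log_message helper are illustrative, not the actual implementation):

```cpp
#include <atomic>
#include <cstdio>

// Hypothetical underlying logger.
inline void log_message(const char* file, int line, const char* msg)
{
    std::fprintf(stderr, "%s:%d: %s\n", file, line, msg);
}

// Logs the message the first time this call site is reached, then never again,
// on the assumption that code which executed correctly once will keep doing so.
// The static atomic is per call site, and the exchange makes it thread-safe.
#define LOG_ONCE(msg)                                        \
    do {                                                     \
        static std::atomic<bool> already_logged{false};      \
        if (!already_logged.exchange(true))                  \
            log_message(__FILE__, __LINE__, (msg));          \
    } while (0)

// Usage, e.g. inside a per-frame shader setup routine:
//   LOG_ONCE("compiling GLSL shader 'deferred_lighting'");
```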