Question:
Edit: Since it appears that there's either no solution, or I'm doing something so non-standard that nobody knows - I'll revise my question to also ask: What is the best way to accomplish logging when a Python app is making a lot of system calls?
My app has two modes. In interactive mode, I want all output to go to the screen as well as to a log file, including output from any system calls. In daemon mode, all output goes to the log. Daemon mode works great using os.dup2(). I can't find a way to "tee" all output to a log in interactive mode, without modifying each and every system call.
In other words, I want the functionality of the command-line 'tee' for any output generated by a Python app, including system call output.
To clarify:
To redirect all output I do something like this, and it works great:
# open our log file
so = se = open("%s.log" % self.name, 'w', 0)

# re-open stdout without buffering
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

# redirect stdout and stderr to the log file opened above
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
The nice thing about this is that it requires no special print calls from the rest of the code. The code also runs some shell commands, so it's nice not having to deal with each of their output individually as well.
Simply, I want to do the same, except duplicating instead of redirecting.
At first, I thought that simply reversing the dup2's should work. Why doesn't it? Here's my test:
import os, sys

### my broken solution:
so = se = open("a.log", 'w', 0)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

os.dup2(sys.stdout.fileno(), so.fileno())
os.dup2(sys.stderr.fileno(), se.fileno())
###

print("foo bar")
os.spawnve(os.P_WAIT, "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)
The file "a.log" should be identical to what was displayed on the screen.
Answer 1:
Since you're comfortable spawning external processes from your code, you could use tee itself. I don't know of any Unix system calls that do exactly what tee does.
import subprocess, os, sys

# Unbuffer output
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

tee = subprocess.Popen(["tee", "log.txt"], stdin=subprocess.PIPE)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
os.dup2(tee.stdin.fileno(), sys.stderr.fileno())

print "\nstdout"
print >>sys.stderr, "stderr"
os.spawnve(os.P_WAIT, "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)
You could also emulate tee using the multiprocessing package (or use processing if you're using Python 2.5 or earlier).
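For instance, a rough sketch of that idea (not part of the original answer; it assumes Unix with the default fork start method, and the helper names start_tee and _tee_worker are made up): a worker process reads from an os.pipe() and copies every chunk to the real terminal and to a log file, while the parent dup2()s the pipe's write end over its own stdout and stderr so that spawned commands inherit it too.

import os
import sys
from multiprocessing import Process

def _tee_worker(read_fd, write_fd, terminal_fd, logname):
    os.close(write_fd)                      # keep only the read end in this process
    with open(logname, 'wb') as log:
        while True:
            chunk = os.read(read_fd, 4096)  # returns b'' once every write end is closed
            if not chunk:
                break
            os.write(terminal_fd, chunk)    # copy to the real terminal...
            log.write(chunk)                # ...and to the log file
            log.flush()

def start_tee(logname='log.txt'):
    read_fd, write_fd = os.pipe()
    terminal_fd = os.dup(sys.stdout.fileno())   # remember where the terminal really is
    worker = Process(target=_tee_worker, args=(read_fd, write_fd, terminal_fd, logname))
    worker.start()
    os.close(read_fd)                           # the parent only writes into the pipe
    os.dup2(write_fd, sys.stdout.fileno())      # stdout and stderr (and anything spawned
    os.dup2(write_fd, sys.stderr.fileno())      # from here on) now feed the tee worker
    return worker

As with the external tee above, output from shell commands is captured too, because the redirection happens at the file-descriptor level.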
Answer 2:
I had this same issue before and found this snippet very useful:
import sys

class Tee(object):
    def __init__(self, name, mode):
        self.file = open(name, mode)
        self.stdout = sys.stdout
        sys.stdout = self

    def __del__(self):
        sys.stdout = self.stdout
        self.file.close()

    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)

    def flush(self):
        self.file.flush()
from: http://mail.python.org/pipermail/python-list/2007-May/438106.html
Answer 3:
The print statement will call the write() method of any object you assign to sys.stdout.
I would spin up a small class to write to two places at once...
import sys

class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = open("log.dat", "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

sys.stdout = Logger()
Now the print statement will both echo to the screen and append to your log file:
# prints "1 2" to <stdout> AND log.dat
print "%d %d" % (1, 2)
This is obviously quick-and-dirty. Some notes:
- You probably ought to parameterize the log filename.
- You should probably revert sys.stdout to <stdout> if you won't be logging for the duration of the program.
- You may want the ability to write to multiple log files at once, or handle different log levels, etc.
These are all straightforward enough that I'm comfortable leaving them as exercises for the reader. The key insight here is that print just calls a "file-like object" that's assigned to sys.stdout.
Answer 4:
What you really want is the logging module from the standard library. Create a logger and attach two handlers: one writes to a file and the other to stdout or stderr.
See Logging to multiple destinations for details.
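A minimal sketch of that setup (the logger name, format, and file name here are my own choices, not from the linked docs):

import logging
import sys

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# One handler writes to the log file...
file_handler = logging.FileHandler("myapp.log")
# ...and the other echoes to the console.
console_handler = logging.StreamHandler(sys.stdout)

formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
file_handler.setFormatter(formatter)
console_handler.setFormatter(formatter)

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.info("goes to both myapp.log and the screen")

Unlike the dup2()-based approaches, this only captures messages routed through the logger; raw output from spawned system calls still bypasses it.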
Answer 5:
Here is another solution, which is more general than the others -- it supports splitting output (written to sys.stdout) to any number of file-like objects. There's no requirement that __stdout__ itself is included.
import sys

class multifile(object):
    def __init__(self, files):
        self._files = files

    def __getattr__(self, attr, *args):
        return self._wrap(attr, *args)

    def _wrap(self, attr, *args):
        def g(*a, **kw):
            for f in self._files:
                res = getattr(f, attr, *args)(*a, **kw)
            return res
        return g

# for a tee-like behavior, use like this:
sys.stdout = multifile([ sys.stdout, open('myfile.txt', 'w') ])

# all these forms work:
print 'abc'
print >>sys.stdout, 'line2'
sys.stdout.write('line3\n')
NOTE: This is a proof-of-concept. The implementation here is not complete, as it only wraps methods of the file-like objects (e.g. write), leaving out members/properties/setattr, etc. However, it is probably good enough for most people as it currently stands.
What I like about it, other than its generality, is that it is clean in the sense it doesn't make any direct calls to write, flush, os.dup2, etc.
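If you do need plain attributes such as encoding to work as well, one possible extension of that proof-of-concept (my own sketch, not part of the original answer; here named multifile2) is to forward non-callable attributes from the first file and only broadcast the callable ones:

import sys

class multifile2(object):
    def __init__(self, files):
        self._files = files

    def __getattr__(self, attr):
        # Look the attribute up on the first file; plain attributes
        # (encoding, mode, ...) are returned as-is, methods are wrapped
        # so the call is repeated on every file.
        first = getattr(self._files[0], attr)
        if not callable(first):
            return first
        def broadcast(*args, **kwargs):
            result = None
            for f in self._files:
                result = getattr(f, attr)(*args, **kwargs)
            return result
        return broadcast

sys.stdout = multifile2([sys.__stdout__, open('myfile.txt', 'w')])
print('still echoed to the terminal and to myfile.txt')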
Answer 6:
As described elsewhere, perhaps the best solution is to use the logging module directly:
import logging
logging.basicConfig(level=logging.DEBUG, filename='mylog.log')
logging.info('this should write to the log file')
However, there are some (rare) occasions where you really want to redirect stdout. I had this situation when I was extending django's runserver command, which uses print: I didn't want to hack the django source but needed the print statements to go to a file.
This is a way of redirecting stdout and stderr away from the shell using the logging module:
import logging, sys

class LogFile(object):
    """File-like object to log text using the `logging` module."""

    def __init__(self, name=None):
        self.logger = logging.getLogger(name)

    def write(self, msg, level=logging.INFO):
        self.logger.log(level, msg)

    def flush(self):
        for handler in self.logger.handlers:
            handler.flush()

logging.basicConfig(level=logging.DEBUG, filename='mylog.log')

# Redirect stdout and stderr
sys.stdout = LogFile('stdout')
sys.stderr = LogFile('stderr')

print 'this should write to the log file'
You should only use this LogFile implementation if you really cannot use the logging module directly.
Answer 7:
I wrote a tee() implementation in Python that should work for most cases, and it works on Windows also.
https://github.com/pycontribs/tendo
Also, you can use it in combination with the logging module from Python if you want.
Answer 8:
(Ah, just re-read your question and see that this doesn\'t quite apply.)
Here is a sample program that makes use of the Python logging module. The logging module has been in the standard library since version 2.3. In this sample the logging is configurable by command-line options.
In quiet mode it will only log to a file; in normal mode it will log to both a file and the console.
import os
import sys
import logging
from optparse import OptionParser

def initialize_logging(options):
    """ Log information based upon users options"""
    logger = logging.getLogger('project')
    formatter = logging.Formatter('%(asctime)s %(levelname)s\t%(message)s')
    level = logging.__dict__.get(options.loglevel.upper(), logging.DEBUG)
    logger.setLevel(level)

    # Output logging information to screen
    if not options.quiet:
        hdlr = logging.StreamHandler(sys.stderr)
        hdlr.setFormatter(formatter)
        logger.addHandler(hdlr)

    # Output logging information to file
    logfile = os.path.join(options.logdir, "project.log")
    if options.clean and os.path.isfile(logfile):
        os.remove(logfile)
    hdlr2 = logging.FileHandler(logfile)
    hdlr2.setFormatter(formatter)
    logger.addHandler(hdlr2)

    return logger

def main(argv=None):
    if argv is None:
        argv = sys.argv[1:]

    # Setup command line options
    parser = OptionParser("usage: %prog [options]")
    parser.add_option("-l", "--logdir", dest="logdir", default=".", help="log DIRECTORY (default ./)")
    parser.add_option("-v", "--loglevel", dest="loglevel", default="debug", help="logging level (debug, info, error)")
    parser.add_option("-q", "--quiet", action="store_true", dest="quiet", help="do not log to console")
    parser.add_option("-c", "--clean", dest="clean", action="store_true", default=False, help="remove old log file")

    # Process command line options
    (options, args) = parser.parse_args(argv)

    # Setup logger format and output locations
    logger = initialize_logging(options)

    # Examples
    logger.error("This is an error message.")
    logger.info("This is an info message.")
    logger.debug("This is a debug message.")

if __name__ == "__main__":
    sys.exit(main())
Answer 9:
To complete John T's answer: https://stackoverflow.com/a/616686/395687
I added __enter__ and __exit__ methods to use it as a context manager with the with keyword, which gives this code:
import sys

class Tee(object):
    def __init__(self, name, mode):
        self.file = open(name, mode)
        self.stdout = sys.stdout
        sys.stdout = self

    def __del__(self):
        sys.stdout = self.stdout
        self.file.close()

    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)

    def __enter__(self):
        pass

    def __exit__(self, _type, _value, _traceback):
        pass
It can then be used as
with Tee('outfile.log', 'w'):
    print('I am written to both stdout and outfile.log')
Answer 10:
I know this question has been answered repeatedly, but for this I've taken the main code from John T's answer and modified it so it contains the suggested flush, and followed its linked revised version. I've also added __enter__ and __exit__ as mentioned in cladmi's answer, for use with the with statement. In addition, the documentation mentions flushing files using os.fsync(), so I've added that as well. I don't know if you really need that, but it's there.
import sys, os

class Logger(object):
    "Lumberjack class - duplicates sys.stdout to a log file and it's okay"
    # source: https://stackoverflow.com/q/616645
    def __init__(self, filename="Red.Wood", mode="a", buff=0):
        self.stdout = sys.stdout
        self.file = open(filename, mode, buff)
        sys.stdout = self

    def __del__(self):
        self.close()

    def __enter__(self):
        pass

    def __exit__(self, *args):
        self.close()

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        self.stdout.flush()
        self.file.flush()
        os.fsync(self.file.fileno())

    def close(self):
        if self.stdout is not None:
            sys.stdout = self.stdout
            self.stdout = None
        if self.file is not None:
            self.file.close()
            self.file = None
You can then use it
with Logger('My_best_girlie_by_my.side'):
    print("we'd sing sing sing")
or
Log = Logger('Sleeps_all.night')
print('works all day')
Log.close()
Answer 11:
Another solution using the logging module:
import logging
import sys

log = logging.getLogger('stdxxx')

class StreamLogger(object):

    def __init__(self, stream, prefix=''):
        self.stream = stream
        self.prefix = prefix
        self.data = ''

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()

        self.data += data
        tmp = str(self.data)
        if '\x0a' in tmp or '\x0d' in tmp:
            tmp = tmp.rstrip('\x0a\x0d')
            log.info('%s%s' % (self.prefix, tmp))
            self.data = ''

logging.basicConfig(level=logging.INFO,
                    filename='text.log',
                    filemode='a')

sys.stdout = StreamLogger(sys.stdout, '[stdout] ')

print 'test for stdout'
Answer 12:
None of the answers above really seems to answer the problem posed. I know this is an old thread, but I think this problem is a lot simpler than everyone is making it:
import sys

class tee_err(object):

    def __init__(self):
        self.errout = sys.stderr
        sys.stderr = self
        self.log = 'logfile.log'
        log = open(self.log, 'w')
        log.close()

    def write(self, line):
        log = open(self.log, 'a')
        log.write(line)
        log.close()
        self.errout.write(line)
Now this will repeat everything to the normal sys.stderr handler and your file. Create another class tee_out for sys.stdout.
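A matching tee_out could look like this (my own sketch, mirroring the answer's tee_err; not part of the original post):

import sys

class tee_out(object):

    def __init__(self):
        self.stdout = sys.stdout          # remember the real stdout
        sys.stdout = self
        self.log = 'logfile.log'
        log = open(self.log, 'w')         # truncate the log once, up front;
        log.close()                       # drop this if tee_err already did it

    def write(self, line):
        log = open(self.log, 'a')         # reopen in append mode for every write
        log.write(line)
        log.close()
        self.stdout.write(line)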
Answer 13:
As per a request by @user5359531 in the comments under @John T's answer, here is a copy of the referenced post (the revised version from the discussion linked in that answer):
Issue of redirecting the stdout to both file and screen
Gabriel Genellina gagsl-py2 at yahoo.com.ar
Mon May 28 12:45:51 CEST 2007
On Mon, 28 May 2007 06:17:39 -0300, 人言落日是天涯,望极天涯不见家 <kelvin.you at gmail.com> wrote:
> I wanna print the log to both the screen and file, so I simulatered a
> 'tee'
>
> class Tee(file):
>
>     def __init__(self, name, mode):
>         file.__init__(self, name, mode)
>         self.stdout = sys.stdout
>         sys.stdout = self
>
>     def __del__(self):
>         sys.stdout = self.stdout
>         self.close()
>
>     def write(self, data):
>         file.write(self, data)
>         self.stdout.write(data)
>
> Tee('logfile', 'w')
> print >>sys.stdout, 'abcdefg'
>
> I found that it only output to the file, nothing to screen. Why?
> It seems the 'write' function was not called when I *print* something.
You create a Tee instance and it is immediately garbage collected. I'd restore sys.stdout on Tee.close, not __del__ (you forgot to call the inherited __del__ method, btw).
Mmm, doesn't work. I think there is an optimization somewhere: if it looks like a real file object, it uses the original file write method, not yours.
The trick would be to use an object that does NOT inherit from file:
import sys

class TeeNoFile(object):
    def __init__(self, name, mode):
        self.file = open(name, mode)
        self.stdout = sys.stdout
        sys.stdout = self

    def close(self):
        if self.stdout is not None:
            sys.stdout = self.stdout
            self.stdout = None
        if self.file is not None:
            self.file.close()
            self.file = None

    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)

    def flush(self):
        self.file.flush()
        self.stdout.flush()

    def __del__(self):
        self.close()

tee = TeeNoFile('logfile', 'w')
print 'abcdefg'
print 'another line'
tee.close()
print 'screen only'
del tee # should do nothing
--
Gabriel Genellina
Answer 14:
I'm writing a script to run command-line scripts. (Because in some cases there just is no viable substitute for a Linux command -- such as the case of rsync.)
What I really wanted was to use the default Python logging mechanism in every case where it was possible to do so, but to still capture any error when something went wrong that was unanticipated.
This code seems to do the trick. It may not be particularly elegant or efficient (although it doesn't use string += string, so at least it doesn't have that particular potential bottleneck). I'm posting it in case it gives someone else any useful ideas.
import logging
import os, sys
import datetime

# Get name of module, use as application name
try:
    ME = os.path.split(__file__)[-1].split('.')[0]
except:
    ME = 'pyExec_'

LOG_IDENTIFIER = "uuu___( o O )___uuu "
LOG_IDR_LENGTH = len(LOG_IDENTIFIER)

class PyExec(object):

    # Use this to capture all possible error / output to log
    class SuperTee(object):
        # Original reference: http://mail.python.org/pipermail/python-list/2007-May/442737.html
        def __init__(self, name, mode):
            self.fl = open(name, mode)
            self.fl.write('\n')
            self.stdout = sys.stdout
            self.stdout.write('\n')
            self.stderr = sys.stderr

            sys.stdout = self
            sys.stderr = self

        def __del__(self):
            self.fl.write('\n')
            self.fl.flush()
            sys.stderr = self.stderr
            sys.stdout = self.stdout
            self.fl.close()

        def write(self, data):
            # If the data to write includes the log identifier prefix, then it is already formatted
            if data[0:LOG_IDR_LENGTH] == LOG_IDENTIFIER:
                self.fl.write("%s\n" % data[LOG_IDR_LENGTH:])
                self.stdout.write(data[LOG_IDR_LENGTH:])

            # Otherwise, we can give it a timestamp
            else:
                timestamp = str(datetime.datetime.now())
                if 'Traceback' == data[0:9]:
                    data = '%s: %s' % (timestamp, data)
                    self.fl.write(data)
                else:
                    self.fl.write(data)

                self.stdout.write(data)

    def __init__(self, aName, aCmd, logFileName='', outFileName=''):

        # Using name for 'logger' (context?), which is separate from the module or the function
        baseFormatter = logging.Formatter("%(asctime)s \t %(levelname)s \t %(name)s:%(module)s:%(lineno)d \t %(message)s")
        errorFormatter = logging.Formatter(LOG_IDENTIFIER + "%(asctime)s \t %(levelname)s \t %(name)s:%(module)s:%(lineno)d \t %(message)s")

        if logFileName:
            # open passed filename as append
            fl = logging.FileHandler("%s.log" % aName)
        else:
            # otherwise, use log filename as a one-time use file
            fl = logging.FileHandler("%s.log" % aName, 'w')

        fl.setLevel(logging.DEBUG)
        fl.setFormatter(baseFormatter)

        # This will capture stdout and CRITICAL and beyond errors
        if outFileName:
            teeFile = PyExec.SuperTee("%s_out.log" % aName, 'a')  # append to the existing output log
        else:
            teeFile = PyExec.SuperTee("%s_out.log" % aName, 'w')

        fl_out = logging.StreamHandler(teeFile)
        fl_out.setLevel(logging.CRITICAL)
        fl_out.setFormatter(errorFormatter)

        # Set up logging
        self.log = logging.getLogger('pyExec_main')
        log = self.log
        log.addHandler(fl)
        log.addHandler(fl_out)

        print "Test print statement."

        log.setLevel(logging.DEBUG)

        log.info("Starting %s", ME)
        log.critical("Critical.")

        # Caught exception
        try:
            raise Exception('Exception test.')
        except Exception, e:
            log.exception(str(e))

        # Uncaught exception
        a = 2/0


PyExec('test_pyExec', None)
Obviously, if you're not as subject to whimsy as I am, replace LOG_IDENTIFIER with another string that you're not likely to ever see someone write to a log.
Answer 15:
If you wish to log all output to a text file AND still print it to the console, you can do the following. It's a bit hacky, but it works:
import logging

debug = input("Debug or not")

if debug == "1":
    logging.basicConfig(level=logging.DEBUG, filename='./OUT.txt')
    old_print = print
    def print(string):
        old_print(string)
        logging.info(string)

print("OMG it works!")
EDIT: Note that this does not log errors unless you redirect sys.stderr to sys.stdout.
EDIT2: A second issue is that you have to pass exactly one argument, unlike with the builtin function (a variant that keeps the builtin signature is sketched after the code below).
EDIT3: See the code below to write stdin and stdout to the console and a file, with stderr only going to the file.
import logging, sys

debug = input("Debug or not")

if debug == "1":
    old_input = input
    sys.stderr.write = logging.info
    def input(string=""):
        string_in = old_input(string)
        logging.info("STRING IN " + string_in)
        return string_in
    logging.basicConfig(level=logging.DEBUG, filename='./OUT.txt')
    old_print = print
    def print(string="", string2=""):
        old_print(string, string2)
        logging.info(string)
        logging.info(string2)

print("OMG")
b = input()
print(a) ## Deliberate error for testing
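A minimal variant (my own sketch, assuming Python 3) that keeps the builtin signature by accepting *args, so existing print() calls keep working unchanged while still being logged:

import builtins
import logging

logging.basicConfig(level=logging.DEBUG, filename='./OUT.txt')

def print(*args, sep=' ', end='\n', **kwargs):
    # Echo to the console exactly like the builtin, then log the same text.
    builtins.print(*args, sep=sep, end=end, **kwargs)
    logging.info(sep.join(str(a) for a in args))

print("OMG", "it", "still", "works!")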
Answer 16:
I wrote a full replacement for sys.stderr and just duplicated the code, renaming stderr to stdout, to make it also available to replace sys.stdout.
To do this I create the same object type as the current stderr and stdout, and forward all methods to the original system stderr and stdout:
import os
import sys
import logging
class StdErrReplament(object):
\"\"\"
How to redirect stdout and stderr to logger in Python
https://stackoverflow.com/questions/19425736/how-to-redirect-stdout-and-stderr-to-logger-in-python
Set a Read-Only Attribute in Python?
https://stackoverflow.com/questions/24497316/set-a-read-only-attribute-in-python
\"\"\"
is_active = False
@classmethod
def lock(cls, logger):
\"\"\"
Attach this singleton logger to the `sys.stderr` permanently.
\"\"\"
global _stderr_singleton
global _stderr_default
global _stderr_default_class_type
# On Sublime Text, `sys.__stderr__` is set to None, because they already replaced `sys.stderr`
# by some `_LogWriter()` class, then just save the current one over there.
if not sys.__stderr__:
sys.__stderr__ = sys.stderr
try:
_stderr_default
_stderr_default_class_type
except NameError:
_stderr_default = sys.stderr
_stderr_default_class_type = type( _stderr_default )
# Recreate the sys.stderr logger when it was reset by `unlock()`
if not cls.is_active:
cls.is_active = True
_stderr_write = _stderr_default.write
logger_call = logger.debug
clean_formatter = logger.clean_formatter
global _sys_stderr_write
global _sys_stderr_write_hidden
if sys.version_info <= (3,2):
logger.file_handler.terminator = \'\\n\'
# Always recreate/override the internal write function used by `_sys_stderr_write`
def _sys_stderr_write_hidden(*args, **kwargs):
\"\"\"
Suppress newline in Python logging module
https://stackoverflow.com/questions/7168790/suppress-newline-in-python-logging-module
\"\"\"
try:
_stderr_write( *args, **kwargs )
file_handler = logger.file_handler
formatter = file_handler.formatter
terminator = file_handler.terminator
file_handler.formatter = clean_formatter
file_handler.terminator = \"\"
kwargs[\'extra\'] = {\'_duplicated_from_file\': True}
logger_call( *args, **kwargs )
file_handler.formatter = formatter
file_handler.terminator = terminator
except Exception:
logger.exception( \"Could not write to the file_handler: %s(%s)\", file_handler, logger )
cls.unlock()
# Only create one `_sys_stderr_write` function pointer ever
try:
_sys_stderr_write
except NameError:
def _sys_stderr_write(*args, **kwargs):
\"\"\"
Hides the actual function pointer. This allow the external function pointer to
be cached while the internal written can be exchanged between the standard
`sys.stderr.write` and our custom wrapper around it.
\"\"\"
_sys_stderr_write_hidden( *args, **kwargs )
try:
# Only create one singleton instance ever
_stderr_singleton
except NameError:
class StdErrReplamentHidden(_stderr_default_class_type):
\"\"\"
Which special methods bypasses __getattribute__ in Python?
https://stackoverflow.com/questions/12872695/which-special-methods-bypasses-getattribute-in-python
\"\"\"
if hasattr( _stderr_default, \"__abstractmethods__\" ):
__abstractmethods__ = _stderr_default.__abstractmethods__
if hasattr( _stderr_default, \"__base__\" ):
__base__ = _stderr_default.__base__
if hasattr( _stderr_default, \"__bases__\" ):
__bases__ = _stderr_default.__bases__
if hasattr( _stderr_default, \"__basicsize__\" ):
__basicsize__ = _stderr_default.__basicsize__
if hasattr( _stderr_default, \"__call__\" ):
__call__ = _stderr_default.__call__
if hasattr( _stderr_default, \"__class__\" ):
__class__ = _stderr_default.__class__
if hasattr( _stderr_default, \"__delattr__\" ):
__delattr__ = _stderr_default.__delattr__
if hasattr( _stderr_default, \"__dict__\" ):
__dict__ = _stderr_default.__dict__
if hasattr( _stderr_default, \"__dictoffset__\" ):
__dictoffset__ = _stderr_default.__dictoffset__
if hasattr( _stderr_default, \"__dir__\" ):
__dir__ = _stderr_default.__dir__
if hasattr( _stderr_default, \"__doc__\" ):
__doc__ = _stderr_default.__doc__
if hasattr( _stderr_default, \"__eq__\" ):
__eq__ = _stderr_default.__eq__
if hasattr( _stderr_default, \"__flags__\" ):
__flags__ = _stderr_default.__flags__
if hasattr( _stderr_default, \"__format__\" ):
__format__ = _stderr_default.__format__
if hasattr( _stderr_default, \"__ge__\" ):
__ge__ = _stderr_default.__ge__
if hasattr( _stderr_default, \"__getattribute__\" ):
__getattribute__ = _stderr_default.__getattribute__
if hasattr( _stderr_default, \"__gt__\" ):
__gt__ = _stderr_default.__gt__
if hasattr( _stderr_default, \"__hash__\" ):
__hash__ = _stderr_default.__hash__
if hasattr( _stderr_default, \"__init__\" ):
__init__ = _stderr_default.__init__
if hasattr( _stderr_default, \"__init_subclass__\" ):
__init_subclass__ = _stderr_default.__init_subclass__
if hasattr( _stderr_default, \"__instancecheck__\" ):
__instancecheck__ = _stderr_default.__instancecheck__
if hasattr( _stderr_default, \"__itemsize__\" ):
__itemsize__ = _stderr_default.__itemsize__
if hasattr( _stderr_default, \"__le__\" ):
__le__ = _stderr_default.__le__
if hasattr( _stderr_default, \"__lt__\" ):
__lt__ = _stderr_default.__lt__
if hasattr( _stderr_default, \"__module__\" ):
__module__ = _stderr_default.__module__
if hasattr( _stderr_default, \"__mro__\" ):
__mro__ = _stderr_default.__mro__
if hasattr( _stderr_default, \"__name__\" ):
__name__ = _stderr_default.__name__
if hasattr( _stderr_default, \"__ne__\" ):
__ne__ = _stderr_default.__ne__
if hasattr( _stderr_default, \"__new__\" ):
__new__ = _stderr_default.__new__
if hasattr( _stderr_default, \"__prepare__\" ):
__prepare__ = _stderr_default.__prepare__
if hasattr( _stderr_default, \"__qualname__\" ):
__qualname__ = _stderr_default.__qualname__
if hasattr( _stderr_default, \"__reduce__\" ):
__reduce__ = _stderr_default.__reduce__
if hasattr( _stderr_default, \"__reduce_ex__\" ):
__reduce_ex__ = _stderr_default.__reduce_ex__
if hasattr( _stderr_default, \"__repr__\" ):
__repr__ = _stderr_default.__repr__
if hasattr( _stderr_default, \"__setattr__\" ):
__setattr__ = _stderr_default.__setattr__
if hasattr( _stderr_default, \"__sizeof__\" ):
__sizeof__ = _stderr_default.__sizeof__
if hasattr( _stderr_default, \"__str__\" ):
__str__ = _stderr_default.__str__
if hasattr( _stderr_default, \"__subclasscheck__\" ):
__subclasscheck__ = _stderr_default.__subclasscheck__
if hasattr( _stderr_default, \"__subclasses__\" ):
__subclasses__ = _stderr_default.__subclasses__
if hasattr( _stderr_default, \"__subclasshook__\" ):
__subclasshook__ = _stderr_default.__subclasshook__
if hasattr( _stderr_default, \"__text_signature__\" ):
__text_signature__ = _stderr_default.__text_signature__
if hasattr( _stderr_default, \"__weakrefoffset__\" ):
__weakrefoffset__ = _stderr_default.__weakrefoffset__
if hasattr( _stderr_default, \"mro\" ):
mro = _stderr_default.mro
def __init__(self):
\"\"\"
Override any super class `type( _stderr_default )` constructor, so we can
instantiate any kind of `sys.stderr` replacement object, in case it was already
replaced by something else like on Sublime Text with `_LogWriter()`.
Assures all attributes were statically replaced just above. This should happen in case
some new attribute is added to the python language.
This also ignores the only two methods which are not equal, `__init__()` and `__getattribute__()`.
\"\"\"
different_methods = (\"__init__\", \"__getattribute__\")
attributes_to_check = set( dir( object ) + dir( type ) )
for attribute in attributes_to_check:
if attribute not in different_methods \\
and hasattr( _stderr_default, attribute ):
base_class_attribute = super( _stderr_default_class_type, self ).__getattribute__( attribute )
target_class_attribute = _stderr_default.__getattribute__( attribute )
if base_class_attribute != target_class_attribute:
sys.stderr.write( \" The base class attribute `%s` is different from the target class:\\n%s\\n%s\\n\\n\" % (
attribute, base_class_attribute, target_class_attribute ) )
def __getattribute__(self, item):
if item == \'write\':
return _sys_stderr_write
try:
return _stderr_default.__getattribute__( item )
except AttributeError:
return super( _stderr_default_class_type, _stderr_default ).__getattribute__( item )
_stderr_singleton = StdErrReplamentHidden()
sys.stderr = _stderr_singleton
return cls
@classmethod
def unlock(cls):
\"\"\"
Detach this `stderr` writer from `sys.stderr` and allow the next call to `lock()` create
a new writer for the stderr.
\"\"\"
if cls.is_active:
global _sys_stderr_write_hidden
cls.is_active = False
_sys_stderr_write_hidden = _stderr_default.write
class StdOutReplament(object):
\"\"\"
How to redirect stdout and stderr to logger in Python
https://stackoverflow.com/questions/19425736/how-to-redirect-stdout-and-stderr-to-logger-in-python
Set a Read-Only Attribute in Python?
https://stackoverflow.com/questions/24497316/set-a-read-only-attribute-in-python
\"\"\"
is_active = False
@classmethod
def lock(cls, logger):
\"\"\"
Attach this singleton logger to the `sys.stdout` permanently.
\"\"\"
global _stdout_singleton
global _stdout_default
global _stdout_default_class_type
# On Sublime Text, `sys.__stdout__` is set to None, because they already replaced `sys.stdout`
# by some `_LogWriter()` class, then just save the current one over there.
if not sys.__stdout__:
sys.__stdout__ = sys.stdout
try:
_stdout_default
_stdout_default_class_type
except NameError:
_stdout_default = sys.stdout
_stdout_default_class_type = type( _stdout_default )
# Recreate the sys.stdout logger when it was reset by `unlock()`
if not cls.is_active:
cls.is_active = True
_stdout_write = _stdout_default.write
logger_call = logger.debug
clean_formatter = logger.clean_formatter
global _sys_stdout_write
global _sys_stdout_write_hidden
if sys.version_info <= (3,2):
logger.file_handler.terminator = \'\\n\'
# Always recreate/override the internal write function used by `_sys_stdout_write`
def _sys_stdout_write_hidden(*args, **kwargs):
\"\"\"
Suppress newline in Python logging module
https://stackoverflow.com/questions/7168790/suppress-newline-in-python-logging-module
\"\"\"
try:
_stdout_write( *args, **kwargs )
file_handler = logger.file_handler
formatter = file_handler.formatter
terminator = file_handler.terminator
file_handler.formatter = clean_formatter
file_handler.terminator = \"\"
kwargs[\'extra\'] = {\'_duplicated_from_file\': True}
logger_call( *args, **kwargs )
file_handler.formatter = formatter
file_handler.terminator = terminator
except Exception:
logger.exception( \"Could not write to the file_handler: %s(%s)\", file_handler, logger )
cls.unlock()
# Only create one `_sys_stdout_write` function pointer ever
try:
_sys_stdout_write
except NameError:
def _sys_stdout_write(*args, **kwargs):
\"\"\"
Hides the actual function pointer. This allow the external function pointer to
be cached while the internal written can be exchanged between the standard
`sys.stdout.write` and our custom wrapper around it.
\"\"\"
_sys_stdout_write_hidden( *args, **kwargs )
try:
# Only create one singleton instance ever
_stdout_singleton
except NameError:
class StdOutReplamentHidden(_stdout_default_class_type):
\"\"\"
Which special methods bypasses __getattribute__ in Python?
https://stackoverflow.com/questions/12872695/which-special-methods-bypasses-getattribute-in-python
\"\"\"
if hasattr( _stdout_default, \"__abstractmethods__\" ):
__abstractmethods__ = _stdout_default.__abstractmethods__
if hasattr( _stdout_default, \"__base__\" ):
__base__ = _stdout_default.__base__
if hasattr( _stdout_default, \"__bases__\" ):
__bases__ = _stdout_default.__bases__
if hasattr( _stdout_default, \"__basicsize__\" ):
__basicsize__ = _stdout_default.__basicsize__
if hasattr( _stdout_default, \"__call__\" ):
__call__ = _stdout_default.__call__
if hasattr( _stdout_default, \"__class__\" ):
__class__ = _stdout_default.__class__
if hasattr( _stdout_default, \"__delattr__\" ):
__delattr__ = _stdout_default.__delattr__
if hasattr( _stdout_default, \"__dict__\" ):
__dict__ = _stdout_default.__dict__
if hasattr( _stdout_default, \"__dictoffset__\" ):
__dictoffset__ = _stdout_default.__dictoffset__
if hasattr( _stdout_default, \"__dir__\" ):
__dir__ = _stdout_default.__dir__
if hasattr( _stdout_default, \"__doc__\" ):
__doc__ = _stdout_default.__doc__
if hasattr( _stdout_default, \"__eq__\" ):
__eq__ = _stdout_default.__eq__
if hasattr( _stdout_default, \"__flags__\" ):
__flags__ = _stdout_default.__flags__
if hasattr( _stdout_default, \"__format__\" ):
__format__ = _stdout_default.__format__
if hasattr( _stdout_default, \"__ge__\" ):
__ge__ = _stdout_default.__ge__
if hasattr( _stdout_default, \"__getattribute__\" ):
__getattribute__ = _stdout_default.__getattribute__
if hasattr( _stdout_default, \"__gt__\" ):
__gt__ = _stdout_default.__gt__
if hasattr( _stdout_default, \"__hash__\" ):
__hash__ = _stdout_default.__hash__
if hasattr( _stdout_default, \"__init__\" ):
__init__ = _stdout_default.__init__
if hasattr( _stdout_default, \"__init_subclass__\" ):
__init_subclass__ = _stdout_default.__init_subclass__
if hasattr( _stdout_default, \"__instancecheck__\" ):
__instancecheck__ = _stdout_default.__instancecheck__
if hasattr( _stdout_default, \"__itemsize__\" ):
__itemsize__ = _stdout_default.__itemsize__
if hasattr( _stdout_default, \"__le__\" ):
__le__ = _stdout_default.__le__
if hasattr( _stdout_default, \"__lt__\" ):
__lt__ = _stdout_default.__lt__
if hasattr( _stdout_default, \"__module__\" ):
__module__ = _stdout_default.__module__
if hasattr( _stdout_default, \"__mro__\" ):
__mro__ = _stdout_default.__mro__
if hasattr( _stdout_default, \"__name__\" ):
__name__ = _stdout_default.__name__
if hasattr( _stdout_default, \"__ne__\" ):
__ne__ = _stdout_default.__ne__
if hasattr( _stdout_default, \"__new__\" ):
__new__ = _stdout_default.__new__
if hasattr( _stdout_default, \"__prepare__\" ):
__prepare__ = _stdout_default.__prepare__
if hasattr( _stdout_default, \"__qualname__\" ):
__qualname__ = _stdout_default.__qualname__
if hasattr( _stdout_default, \"__reduce__\" ):
__reduce__ = _stdout_default.__reduce__
if hasattr( _stdout_default, \"__reduce_ex__\" ):
__reduce_ex__ = _stdout_default.__reduce_ex__
if hasattr( _stdout_default, \"__repr__\" ):
__repr__ = _stdout_default.__repr__
if hasattr( _stdout_default, \"__setattr__\" ):
__setattr__ = _stdout_default.__setattr__
if hasattr( _stdout_default, \"__sizeof__\" ):
__sizeof__ = _stdout_default.__sizeof__
if hasattr( _stdout_default, \"__str__\" ):
__str__ = _stdout_default.__str__
if hasattr( _stdout_default, \"__subclasscheck__\" ):
__subclasscheck__ = _stdout_default.__subclasscheck__
if hasattr( _stdout_default, \"__subclasses__\" ):
__subclasses__ = _stdout_default.__subclasses__
if hasattr( _stdout_default, \"__subclasshook__\" ):
__subclasshook__ = _stdout_default.__subclasshook__
if hasattr( _stdout_default, \"__text_signature__\" ):
__text_signature__ = _stdout_default.__text_signature__
if hasattr( _stdout_default, \"__weakrefoffset__\" ):
__weakrefoffset__ = _stdout_default.__weakrefoffset__
if hasattr( _stdout_default, \"mro\" ):
mro = _stdout_default.mro
def __init__(self):
\"\"\"
Override any super class `type( _stdout_default )` constructor, so we can
instantiate any kind of `sys.stdout` replacement object, in case it was already
replaced by something else like on Sublime Text with `_LogWriter()`.
Assures all attributes were statically replaced just above. This should happen in case
some new attribute is added to the python language.
This also ignores the only two methods which are not equal, `__init__()` and `__getattribute__()`.
\"\"\"
different_methods = (\"__init__\", \"__getattribute__\")
attributes_to_check = set( dir( object ) + dir( type ) )
for attribute in attributes_to_check:
if attribute not in different_methods \\
and hasattr( _stdout_default, attribute ):
base_class_attribute = super( _stdout_default_class_type, self ).__getattribute__( attribute )
target_class_attribute = _stdout_default.__getattribute__( attribute )
if base_class_attribute != target_class_attribute:
sys.stdout.write( \" The base class attribute `%s` is different from the target class:\\n%s\\n%s\\n\\n\" % (
attribute, base_class_attribute, target_class_attribute ) )
def __getattribute__(self, item):
if item == \'write\':
return _sys_stdout_write
try:
return _stdout_default.__getattribute__( item )
except AttributeError:
return super( _stdout_default_class_type, _stdout_default ).__getattribute__( item )
_stdout_singleton = StdOutReplamentHidden()
sys.stdout = _stdout_singleton
return cls
@classmethod
def unlock(cls):
\"\"\"
Detach this `stdout` writer from `sys.stdout` and allow the next call to `lock()` create
a new writer for the stdout.
\"\"\"
if cls.is_active:
global _sys_stdout_write_hidden
cls.is_active = False
_sys_stdout_write_hidden = _stdout_default.write
To use this you can just call StdErrReplament::lock(logger) and StdOutReplament::lock(logger), passing the logger you want to use to send the output text. For example:
import os
import sys
import logging

current_folder = os.path.dirname( os.path.realpath( __file__ ) )
log_file_path = os.path.join( current_folder, "my_log_file.txt" )

file_handler = logging.FileHandler( log_file_path, 'a' )
file_handler.formatter = logging.Formatter( "%(asctime)s %(name)s %(levelname)s - %(message)s", "%Y-%m-%d %H:%M:%S" )

log = logging.getLogger( __name__ )
log.setLevel( "DEBUG" )
log.addHandler( file_handler )

log.file_handler = file_handler
log.clean_formatter = logging.Formatter( "", "" )

StdOutReplament.lock( log )
StdErrReplament.lock( log )

log.debug( "I am doing usual logging debug..." )
sys.stderr.write( "Tests 1...\n" )
sys.stdout.write( "Tests 2...\n" )
Running this code, you will see on the screen:
And on the file contents:
If you would like to also see the contents of the log.debug calls on the screen, you will need to add a stream handler to your logger. In this case it would be like this:
import os
import sys
import logging

class ContextFilter(logging.Filter):
    """ This filter avoids duplicated information to be displayed to the StreamHandler log. """
    def filter(self, record):
        return not "_duplicated_from_file" in record.__dict__

current_folder = os.path.dirname( os.path.realpath( __file__ ) )
log_file_path = os.path.join( current_folder, "my_log_file.txt" )

stream_handler = logging.StreamHandler()
file_handler = logging.FileHandler( log_file_path, 'a' )

formatter = logging.Formatter( "%(asctime)s %(name)s %(levelname)s - %(message)s", "%Y-%m-%d %H:%M:%S" )
file_handler.formatter = formatter
stream_handler.formatter = formatter
stream_handler.addFilter( ContextFilter() )

log = logging.getLogger( __name__ )
log.setLevel( "DEBUG" )
log.addHandler( file_handler )
log.addHandler( stream_handler )

log.file_handler = file_handler
log.stream_handler = stream_handler
log.clean_formatter = logging.Formatter( "", "" )

StdOutReplament.lock( log )
StdErrReplament.lock( log )

log.debug( "I am doing usual logging debug..." )
sys.stderr.write( "Tests 1...\n" )
sys.stdout.write( "Tests 2...\n" )
This would produce output like the following when run:
while still saving it to the file my_log_file.txt:
When disabling this with StdErrReplament:unlock(), it will only restore the standard behavior of the stderr stream; the attached logger can never be detached, because someone else may still hold a reference to its older version. This is why it is a global singleton that can never die. Therefore, in case of reloading this module with imp or something else, it will never recapture the current sys.stderr, as it was already injected into it and saved internally.
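For example, using the lock()/unlock() classmethods defined above (a usage sketch, not from the original answer):

StdErrReplament.unlock()                    # sys.stderr.write() goes only to the console again
sys.stderr.write( "console only...\n" )
StdErrReplament.lock( log )                 # re-attach the logger; writes are duplicated once more
sys.stderr.write( "console and my_log_file.txt...\n" )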