In Python, what is the best way to write to a UTF-8 encoded file with platform-dependent newlines? The solution should ideally work transparently in a program that does a lot of printing in Python 2. (Information about Python 3 is welcome too!)
In fact, the standard way of writing to a UTF-8 file seems to be codecs.open('name.txt', 'w'). However, the documentation indicates that
(…) no automatic conversion of '\n' is done on reading and writing.
because the file is actually opened in binary mode. So, how does one write to a UTF-8 file with proper platform-dependent newlines?
Note: The 't' mode seems to actually do the job (codecs.open('name.txt', 'wt')) with Python 2.6 on Windows XP, but is this documented and guaranteed to work?
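Here is a minimal check of what I am seeing (name.txt is just a throwaway file): with the default 'w' mode, the newline is written as a bare '\n' even on Windows.

```
import codecs

f = codecs.open('name.txt', 'w', encoding='utf-8')
f.write(u'Hello, world\n')
f.close()

# Look at the raw bytes: the line ending stays '\n'; it is not translated
# to the platform terminator.
print repr(open('name.txt', 'rb').read())
```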
Presuming Python 2.7.1 (those are the docs you quoted): the 'wt' mode is not documented (the ONLY mode documented is 'r') and does not work -- the codecs module appends 'b' to the mode, which causes the open to fail:
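A quick way to see this (a sketch, assuming Python 2.7 on Windows; name.txt is a placeholder): with an encoding given, codecs.open() turns 'wt' into 'wtb', and the underlying built-in open() refuses that mode there.

```
import codecs

try:
    f = codecs.open('name.txt', 'wt', encoding='utf-8')
    f.close()
except (ValueError, IOError) as e:
    # On 2.7/Windows the forced 'wtb' mode is rejected by the built-in open().
    print 'codecs.open failed:', e
```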
Avoid the codecs module and DIY:
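A minimal sketch of such a DIY, assuming Python 2 and unicode input (name.txt and the sample text are placeholders): open the file in ordinary text mode so the platform newline translation applies, and encode to UTF-8 by hand.

```
with open('name.txt', 'w') as f:                  # text mode: '\n' -> platform newline
    f.write(u'Hello, world\n'.encode('utf-8'))    # encode to UTF-8 by hand, write bytes
```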
Update about Python 3.x:
It appears that codecs.open() has the same deficiency (it won't write the platform-specific line terminator). However, the built-in open(), which has an encoding argument, is happy to do it:
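For example (a sketch, assuming Python 3; name.txt is a placeholder): with the default newline=None, text written through open() gets its '\n' translated to the platform terminator.

```
with open('name.txt', 'w', encoding='utf-8') as f:
    f.write('Hello, world\n')        # '\n' is translated to the platform newline

# Read the raw bytes back to confirm which terminator was written.
with open('name.txt', 'rb') as f:
    print(f.read())
```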
Update about Python 2.6:

The docs say the same as the 2.7 docs. The difference is that the "bludgeon into binary mode" hack of appending 'b' to the mode arg failed in 2.6: 'wtb' wasn't detected as an invalid mode, so the file was opened in text mode and appears to work as you wanted, not as documented:
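A sketch of that accidental 2.6 behaviour, not something to rely on (name.txt is a placeholder): the 'wt' call goes through, the file ends up in text mode, and the raw bytes show the platform line ending.

```
import codecs

f = codecs.open('name.txt', 'wt', encoding='utf-8')   # accidentally text mode on 2.6
f.write(u'Hello, world\n')
f.close()

print repr(open('name.txt', 'rb').read())   # shows '\r\n' on Windows 2.6
```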
Are you looking for os.linesep? http://www.python.org/doc//current/library/os.html#os.linesep
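A sketch of how that could be combined with codecs.open(), assuming the file is kept in binary mode so the terminator isn't translated a second time (name.txt is a placeholder):

```
import codecs
import os

f = codecs.open('name.txt', 'wb', encoding='utf-8')
f.write(u'Hello, world' + os.linesep)   # append the platform terminator by hand
f.close()
```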
In Python 2, why not encode explicitly? Both embedded newlines and those emitted by print will be converted to the appropriate platform newline, as in the sketch below.
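A sketch of that approach, assuming Python 2 and that text is a unicode object (name.txt is a placeholder): the file is opened in text mode, so both the '\n' inside the encoded bytes and the newline appended by print get translated.

```
text = u'Hello, world'

f = open('name.txt', 'w')            # text mode: platform newline translation applies
print >> f, text.encode('utf-8')     # encode explicitly; print appends the newline
f.close()
```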