There is a trend of discouraging setting sys.setdefaultencoding('utf-8') in Python 2. Can anybody list real examples of problems with it? Arguments like "it is harmful" or "it hides bugs" don't sound very convincing.
UPDATE: Please note that this question is only about UTF-8; it is not about changing the default encoding "in the general case".
Please give some examples with code if you can.
Please give some examples with code if you can.
Real-world example #1
It doesn't work in unit tests.
The test runner (nose, py.test, ...) initializes sys first, and only then discovers and imports your modules. By that time it's too late to change the default encoding. By the same virtue, it doesn't work if someone runs your code as a module, as their initialisation comes first.
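The usual workaround looks like this (a sketch; it only helps if it runs before anything else touches strings, which is exactly what a test runner prevents):

```python
import sys
reload(sys)  # site.py deletes sys.setdefaultencoding at startup; reload restores it
sys.setdefaultencoding('utf-8')
```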
And yes, mixing str and unicode and relying on implicit conversion only pushes the problem further down the line, because you don't always want to have your strings automatically decoded to Unicode, or for that matter your Unicode objects automatically encoded to bytes. Since you are asking for a concrete example, here is one:
Take a WSGI web application; you are building a response by adding the product of an external process to a list, in a loop, and that external process gives you UTF-8 encoded bytes:
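A sketch of such a handler; some_iterable, some_process_that_produces_utf8 and the WSGI plumbing are placeholder names:

```python
def application(environ, start_response):
    # Build the response body from UTF-8 byte chunks, counting bytes as we go
    results = []
    content_length = 0

    for somevar in some_iterable:
        output = some_process_that_produces_utf8(somevar)  # UTF-8 encoded bytes
        content_length += len(output)                      # len() of a str counts bytes
        results.append(output)

    headers = [('Content-Length', str(content_length)),
               ('Content-Type', 'text/html; charset=utf-8')]
    start_response('200 OK', headers)
    return results
```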
That's great and fine and works. But then your co-worker comes along and adds a new feature; you are now providing labels too, and these are localised:
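The loop might now look like this; the hypothetical translations.get_label() call is the only change:

```python
    for somevar in some_iterable:
        label = translations.get_label(somevar)  # looks like just another string
        output = some_process_that_produces_utf8(somevar)
        content_length += len(label) + len(output) + 1
        results.append(label + '\n')
        results.append(output)
```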
You tested this in English and everything still works, great!
However, the translations.get_label() library call actually returns Unicode values, and when you switch locale the labels contain non-ASCII characters. The WSGI library writes those results out to the socket, and all the Unicode values get auto-encoded for you, since you set setdefaultencoding() to UTF-8, but the length you calculated is entirely wrong. It will be too short, as UTF-8 encodes everything outside of the ASCII range with more than one byte. All this ignores the possibility that you are actually working with data in a different codec; you could be writing out Latin-1 + Unicode, and then you have an incorrect length header and a mix of data encodings.
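The undercount is easy to demonstrate in isolation; the label value here is a made-up example:

```python
# coding: utf-8
label = u'Présentation'
print len(label)                  # 12 - code points, what went into the header
print len(label.encode('utf-8'))  # 13 - bytes actually written to the socket
```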
Had you not used sys.setdefaultencoding(), an exception would have been raised and you would have known you had a bug; but now your clients are complaining about incomplete responses: there are bytes missing at the end of the page and you don't quite know how that happened.

Note that this scenario doesn't even involve 3rd-party libraries that may or may not depend on the default still being ASCII. The sys.setdefaultencoding() setting is global, applying to all code running in the interpreter. How sure are you that there are no issues in those libraries involving implicit encoding or decoding?

That Python 2 encodes and decodes between str and unicode implicitly can be helpful and safe when you are dealing with ASCII data only. But you really need to know when you are mixing Unicode and byte string data accidentally, rather than plaster over it with a global brush and hope for the best.

The original poster asked for code which demonstrates that the switch is harmful, except that it "hides" bugs unrelated to the switch.
Summary of conclusions
Based on both experience and evidence I've collected, here are the conclusions I've arrived at.
Setting the default encoding to UTF-8 nowadays is safe, except for specialised applications handling files from non-Unicode-ready systems.
The "official" rejection of the switch is based on reasons no longer relevant for a vast majority of end users (not library providers), so we should stop discouraging users to set it.
Working in a model that handles Unicode properly by default is far better suited to applications for inter-system communication than working with the unicode APIs manually.
Effectively, modifying the default encoding avoids a number of user headaches in the vast majority of use cases. Yes, there are situations in which programs dealing with multiple encodings will silently misbehave, but since this switch can be enabled piecemeal, this is not a problem in end-user code.
More importantly, enabling this flag is a real advantage in users' code, both by reducing the overhead of manual Unicode conversions, which clutter the code and make it less readable, and by avoiding potential bugs when the programmer fails to handle conversions properly in all cases.
Since these claims are pretty much the exact opposite of Python's official line of communication, I think an explanation for these conclusions is warranted.
Examples of successfully using a modified defaultencoding in the wild
Dave Malcolm of Fedora believed it is always right. He proposed, after investigating the risks, to change the distribution-wide def.enc. to UTF-8 for all Fedora users.
The only hard fact presented for why Python would break is the hashing behaviour I listed, which is never picked up by any other opponent within the core community as a reason to worry about, nor even by the same person when working on user tickets.
Outcome in Fedora: admittedly, the change itself was described as "wildly unpopular" with the core developers, and it was accused of being inconsistent with previous versions.
There are 3000 projects on OpenHub alone doing it. OpenHub has a slow search frontend, but scanning over the results, I estimate 98% are using UTF-8. Nothing was found about nasty surprises.
There are 18000(!) GitHub master branches with it changed.
While the change is "unpopular" in the core community, it's pretty popular in the user base. Though this could be disregarded, since users are known to use hacky solutions, I don't think that objection is relevant, due to my next point.
There are only about 150 bug reports in total on GitHub due to this. At a rate of effectively 100%, the change seems to be positive, not negative.
To summarize the existing issues people have run into, I've scanned through all of the aforementioned tickets.
Changing def.enc. to UTF-8 is typically introduced, but not removed, in the issue-closing process, most often as a solution. Some bigger projects excuse it as a temporary fix, considering the "bad press" it has, but far more bug reporters are just glad about the fix.
A few (1-5?) projects modified their code, doing the type conversions manually, so that they no longer needed to change the default.
In two instances I saw claims that setting def.enc. to UTF-8 leads to a complete lack of output, without the test setup being explained. I could not verify the claim; I tested one of them and found the opposite to be true.
One claims his "system" might depend on not changing it, but we do not learn why.
One (and only one) had a real reason to avoid it: ipython either uses a 3rd-party module or the test runner modified their process in an uncontrolled way (it is never disputed that a def.enc. change is advocated by its proponents only at interpreter start-up time, i.e. when "owning" the process).
I found zero indication that the different hashes of 'é' and u'é' cause problems in real-world code.
Python does not "break"
After changing the setting to UTF-8, no feature of Python covered by unit tests works any differently than without the switch. The switch itself, though, is not tested at all.
It is advised to frustrated users on bugs.python.org
Examples here, here or here (often connected with the official line of warning)
The first one demonstrates how established the switch is in Asia (compare also with the GitHub argument).
Ian Bicking published his support for always enabling this behavior.
Martijn Faassen, while refuting Ian, admitted that ASCII might have been wrong in the first place.
In Python 3, they don't "practice what they preach"
While opposing any def.enc. change so harshly because of environment-dependent code or implicitness, a discussion here revolves around Python 3's problems with its "unicode sandwich" paradigm and the corresponding required implicit assumptions.
Further, they created the possibility of writing valid Python 3 code like:
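The snippet that stood here is gone; as a reconstruction of the kind of thing meant, Python 3 accepts non-ASCII identifiers, so this is perfectly valid code:

```python
# Python 3: identifiers may consist of arbitrary Unicode letters
def 倍増(値):
    return 値 * 2

π = 3.14159
print(倍増(π))  # 6.28318
```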
First of all: many opponents of changing the default encoding argue that it's dumb because it even changes ASCII comparisons. I think it's fair to make clear that, in line with the original question, I see nobody advocating anything other than deviating from ASCII to UTF-8. The setdefaultencoding('utf-16') example seems to be brought forward only by those who oppose changing it ;-)
With m = {'a': 1, 'é': 2} in a file 'out.py', run both with and without the switch:
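The snippet and results table that originally followed are missing; here is a minimal reconstruction of the experiment, with the expected results under each setting noted in comments:

```python
# coding: utf-8
m = {'a': 1, 'é': 2}   # the 'é' key is the two UTF-8 bytes '\xc3\xa9'

print 'é' == u'é'      # [*] False without the switch (plus a UnicodeWarning),
                       #     True with def.enc. = utf-8
print u'é' in m        # False in BOTH cases: dict lookup goes by hash, and
                       # hash('é') != hash(u'é') regardless of the default encoding
print len('é')         # 2 - two UTF-8 bytes, in both cases
print len(u'é')        # 1 - one code point, in both cases
```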
[*]: The result assumes the same é; see below on that.
Looking at those operations, changing the default encoding in your program might not look too bad; it gives you results 'closer' to having ASCII-only data.
Regarding the hashing (in) and len() behaviour, you get the same results as with ASCII (more on the results below). Those operations also show that there are significant differences between unicode and byte strings, which might cause logical errors if ignored.
As noted already: it is a process-wide option, so you have just one shot to choose it. That is the reason why library developers should really never ever do it, but should instead get their internals in order so that they do not need to rely on Python's implicit conversions. They also need to clearly document what they expect and return, and to deny input they did not write the library for (like the normalize function; see below).
=> Writing programs with that setting on makes it risky for others to use your program's modules in their own code, at least without filtering input.
Note: some opponents claim that def.enc. is even a system-wide option (via sitecustomize.py), but at the latest in times of software containerisation (Docker) every process can be started in its own perfect environment without overhead.
Regarding the hashing and len() behaviour:
It tells you that even with a modified def.enc., you still can't be ignorant about the types of the strings you process in your program. u'' and '' are different sequences of bytes in memory - not always, but in general.
So when testing, make sure your program behaves correctly with non-ASCII data as well.
Some say the fact that hashes can be unequal although the '==' comparisons succeed (due to implicit conversions) is an argument against changing def.enc.
I personally don't share that view, since the hashing behaviour remains exactly the same as without the change. I have yet to see a convincing example of undesired behaviour due to that setting in a process I 'own'.
All in all, regarding setdefaultencoding("utf-8"): the answer to whether it's dumb or not should be more balanced.
It depends. While it does avoid crashes, e.g. at str() operations in a log statement, the price is a higher chance of unexpected results later, since wrong types make it further into code whose correct functioning depends on a certain type.
In no case should it be an alternative to learning the difference between byte strings and unicode strings for your own code.
Lastly, setting the default encoding away from ASCII does not make your life any easier for common text operations like len(), slicing and comparisons, should you assume that (byte)stringifying everything with UTF-8 on resolves problems here. Unfortunately it doesn't, in general.
The '==' and len() results are a far more complex problem than one might think, even with the same type on both sides.
Without the def.enc. change, '==' always fails for non-ASCII, as shown above. With it, it works - sometimes:
Unicode standardised around a million symbols of the world and gave each a number, but there is unfortunately NOT a 1:1 bijection between the glyphs displayed on output devices and the symbols they are generated from.
To motivate you to research this: take two files, j1 and j2, written with the same program using the same encoding, both containing user input:
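Reading them back and comparing looks roughly like this in Python 2 (j1 and j2 both hold the visible name 'José'):

```python
import sys
u1, u2 = open('j1').read(), open('j2').read()
print sys.version.split()[0], u1, u2, u1 == u2
```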
Result: 2.7.9 José José False (!)
Using print as a function in Py2, you see the reason: unfortunately, there are TWO ways to encode the same character, the accented 'e':
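Printing the values as a tuple exposes the raw bytes; the expected output is shown as a comment:

```python
print(sys.version.split()[0], u1, u2, u1 == u2)
# ('2.7.9', 'Jos\xc3\xa9', 'Jose\xcc\x81', False)
# u1 holds the precomposed é (U+00E9); u2 holds 'e' plus U+0301 COMBINING ACUTE ACCENT
```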
"What a stupid codec", you might say, but it is not the fault of the codec. It's a problem in Unicode as such.
So even in Py3:
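Roughly the same experiment in Python 3 (text mode decodes the files, assuming a UTF-8 locale, but the two strings still differ):

```python
import sys
u1, u2 = open('j1').read(), open('j2').read()
print(sys.version.split()[0], u1, u2, u1 == u2)
```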
Result: 3.4.2 José José False (!)
=> Independent of Py2 and Py3, and actually independent of any computing language you use: to write quality software, you probably have to "normalise" all user input. The Unicode standard standardised normalisation. In Python 2 and 3, the unicodedata.normalize function is your friend.
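A minimal sketch of that, working the same in Python 2 and 3:

```python
import unicodedata

s1 = u'Jos\xe9'     # precomposed: é is the single code point U+00E9
s2 = u'Jose\u0301'  # decomposed: 'e' followed by U+0301 COMBINING ACUTE ACCENT

print(s1 == s2)     # False - different code point sequences, same glyphs
print(unicodedata.normalize('NFC', s1) ==
      unicodedata.normalize('NFC', s2))  # True
```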
One thing we should know is that Python 2 uses the default encoding (sys.getdefaultencoding(), 'ascii' out of the box) for all implicit conversions between str and unicode; so if we change the default encoding, there will be all kinds of compatibility issues. E.g.:
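For instance, the result of equality comparisons flips (a sketch; run as a Python 2 script saved as UTF-8):

```python
# coding: utf-8
import sys

print "你好" == u"你好"  # False: the bytes can't be decoded as ASCII
                         # (a UnicodeWarning is emitted)

reload(sys)              # restore setdefaultencoding, which site.py deletes
sys.setdefaultencoding('utf-8')

print "你好" == u"你好"  # True: the bytes now decode implicitly as UTF-8
```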
More examples:
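One classic of this kind (an assumption about what was listed here): calling encode() on a byte string first decodes it with the default codec, so with the stock ASCII default it blows up on non-ASCII input:

```python
# coding: utf-8
s = 'é'                    # the UTF-8 bytes '\xc3\xa9'
try:
    s.encode('utf-8')      # encode() on a str implicitly DECODES it first...
except UnicodeDecodeError:
    print 'boom'           # ...which fails under the default ASCII codec
print repr(u'é'.encode('utf-8'))  # fine: '\xc3\xa9'
```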
That said, I remember there is some blog post suggesting to use unicode whenever possible, and byte strings only when dealing with I/O. I think if you follow this convention, life will be much easier. More solutions can be found: