Change Python 2's encoding to use the system locale
Summary
Make Fedora's C implementation of Python 2 use a locale-aware default string encoding (generally "UTF-8"), rather than hardcoding "ascii", thus avoiding exceptions of the form
UnicodeEncodeError: 'ascii' codec can't encode characters in position ...: ordinal not in range(128)
when running scripts in shell pipelines and cron jobs.
Owner
- Name: Dave Malcolm
- Email: <dmalcolm@redhat.com>
Current status
- Targeted release: Fedora 13
- Last updated: 2010-01-20
- Percentage of completion: withdrawn by owner
The upstream python community has requested that I not make this change, so I'm withdrawing this feature proposal. It's not clear to me how to do that through our feature process; the only available exit-states seem to be "Complete" and "Incomplete".
(Unfortunately the python-dev list archives for that period seem corrupted; the gmane archive of the thread is at http://thread.gmane.org/gmane.comp.python.devel/109914 .)
Detailed Description
This was originally requested as bug 243541.
Python's site.py includes this fragment of code:
def setencoding():
    """Set the string encoding used by the Unicode implementation.  The
    default is 'ascii', but if you're willing to experiment, you can
    change this."""
    encoding = "ascii" # Default value set by _PyUnicode_Init()
    if 0:
        # Enable to support locale aware default string encodings.
        import locale
        loc = locale.getdefaultlocale()
        if loc[1]:
            encoding = loc[1]
    if 0:
        # Enable to switch off string to Unicode coercion and implicit
        # Unicode to string conversion.
        encoding = "undefined"
    if encoding != "ascii":
        # On Non-Unicode builds this will raise an AttributeError...
        sys.setdefaultencoding(encoding) # Needs Python Unicode build !
It is proposed to change the first conditional to if 1: in our CPython 2 build, so that Fedora's Python by default reads the locale from the environment and uses that encoding. This will generally mean UTF-8 is used, rather than ascii.
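For clarity, after the proposed one-byte change the first conditional in the fragment quoted above would read (a sketch based on that upstream code; the surrounding lines are unchanged):

    encoding = "ascii" # Default value set by _PyUnicode_Init()
    if 1:
        # Enable to support locale aware default string encodings.
        import locale
        loc = locale.getdefaultlocale()
        if loc[1]:
            encoding = loc[1]

With LANG set to en_US.utf8, loc[1] is 'UTF8', so site.py ends up calling sys.setdefaultencoding('UTF8') at startup; with LANG=C, loc[1] is None and the default stays "ascii".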
Background
CPython's "default encoding"
The C implementation of Python 2 has two ways it can represent text strings:
- the classic legacy str object, in which each character is represented as a single byte in an undefined character set. This is represented internally as a struct PyStringObject.
- unicode objects, where each character is represented as either a 16-bit or 32-bit word in the Unicode character set (UCS). This is represented internally as a struct PyUnicodeObject. We use UCS4 (32-bit) in Fedora's builds of Python.
Python 2 will encode and decode between unicode objects and str objects based on what Python believes the character set and character encoding are for the str object.
CPython 2's implementation has an internal read-only variable called unicode_default_encoding, which is returned by sys.getdefaultencoding() (for brevity's sake I'm going to refer to this variable as default_encoding). Whenever Python passes a string to or receives a string from an external API (e.g. any string ultimately passed to a C function whose binding has not explicitly specified its encode/decode requirements), Python consults the unicode_default_encoding variable to decide how to encode or decode that string. That means any time you print a string, open a file, or call a function in a CPython binding, it is subject to the default encoding.
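A concrete illustration of this implicit coercion, using stock Python 2 with the unchanged ascii default (a small sketch; the exact traceback text may vary between versions):

>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> u"abc" + "\xc3\xa9"     # the str operand is implicitly decoded with the default encoding
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
>>> str(u"\u03b1")          # the unicode operand is implicitly encoded with the default encoding
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u03b1' in position 0: ordinal not in range(128)

With a locale-aware (UTF-8) default, both operations would succeed for UTF-8 data.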
(In Python 3, the str object became a struct PyUnicodeObject, and struct PyStringObject became a bytes object.)
The unicode_default_encoding is set in site.py to ascii for historical reasons. site.py then makes the default encoding effectively read-only by deleting sys.setdefaultencoding() from the sys module namespace. This means you cannot call sys.setdefaultencoding() without generating an exception, and Python's default encoding is therefore locked to ascii.
The reason for this appears to be an optimization within CPython: at the C level a struct PyUnicodeObject actually carries two copies of the string:

- its UCS-{2,4} representation (this is the Py_UNICODE *str field), and
- its encoded representation after encoding it according to the value in the global unicode_default_encoding variable; this is the PyObject *defenc field.
Think of this as a cached value of the string in the default encoding. The first time a unicode object is subject to encode/decode, it caches the encoded value of the string to avoid having to re-encode every time the unicode object needs to be accessed in its encoded form. This cached value is invalidated when the unicode string's content changes, but there is no mechanism to invalidate it when the default encoding changes (hence, I believe, the restrictions on changing the default encoding, and the possibility that any struct PyUnicodeObject instances created prior to the modification of the default encoding may exhibit incorrect behavior with respect to encoding).
In Python 3, the default value of unicode_default_encoding is "utf-8" (this has been in the py3k branch of CPython's implementation since revision 55108); we do not plan to touch site.py for python3.
The system locale's encoding
In Fedora there is the notion of the "locale", embodying various localization parameters for the whole operating system. From the perspective of the operating system locale there is an "encoding", separate from that of the CPython runtime. From this perspective, Fedora's default encoding is UTF-8. This is normally set via login scripts in /etc/profile.d. Users may override the system default if they wish.
In both cases the default language and encoding are exported via an environment variable:
[david@brick ~]$ echo $LANG
en_US.utf8
It's possible to query this locale information from Python using the locale module:
>>> import locale
>>> print locale.getdefaultlocale()
('en_US', 'UTF8')
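Since this information comes purely from the environment, the result follows whatever LANG (or LC_*) the process was started with. A small sketch, assuming no other LC_* variables are set in the shell; the exact normalization of the returned tuple may vary slightly between Python versions:

$ LANG=en_US.utf8 python -c 'import locale; print locale.getdefaultlocale()'
('en_US', 'UTF8')
$ LANG=C python -c 'import locale; print locale.getdefaultlocale()'
(None, None)

Under the proposed change, the first case would give a UTF8 default encoding at interpreter startup, while the C/POSIX locale would leave the default as ascii (loc[1] is None in the site.py fragment quoted earlier).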
The encoding of stdout/stderr/stdin varies with TTY-connectivity
To add to the confusion, Py_InitializeEx sets up the encoding of each of stdout, stderr, and stdin to the default locale encoding (typically UTF-8), provided they are connected to a tty:
#0  PyFile_SetEncodingAndErrors (f=0xb7fc5020, enc=0x80edc28 "UTF-8", errors=0x0) at Objects/fileobject.c:458
#1  0x04fbdd49 in Py_InitializeEx (install_sigs=<value optimized out>) at Python/pythonrun.c:322
#2  0x04fbe29e in Py_Initialize () at Python/pythonrun.c:359
#3  0x04fc9886 in Py_Main (argc=<value optimized out>, argv=<value optimized out>) at Modules/main.c:512
#4  0x080485c7 in main (argc=<value optimized out>, argv=<value optimized out>) at Modules/python.c:23
so that the python interpreter run interactively from a terminal uses UTF-8 for the standard streams:
>>> sys.getdefaultencoding()
'ascii'
>>> sys.stdin.encoding
'UTF-8'
>>> sys.stdout.encoding
'UTF-8'
>>> sys.stderr.encoding
'UTF-8'
This means that a simple case (printing the lower-case Greek letters alpha, beta, gamma) works when run directly:
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"'
αβγ
...but fails if the output is redirected to a file or piped into "less", despite the fact that the system locale is UTF-8 and thus "less" expects UTF-8 data:
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' > foo.txt
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' | less
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
PyGTK and Pango
A significant "gotcha" here is that the pango Python module forces the global default encoding variable to be 'utf-8'. It can do this because it's implemented in C, where the restrictions imposed by site.py don't apply; it directly calls PyUnicode_SetDefaultEncoding:
/* set the default python encoding to utf-8 */
PyUnicode_SetDefaultEncoding("utf-8");
Let's take a little test drive and see things in action for ourselves:
$ python
Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51)
[GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.setdefaultencoding('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'setdefaultencoding'
>>> import pango
>>> sys.getdefaultencoding()
'utf-8'
This hidden global side-effect can be particularly confusing, since the module is typically imported implicitly by other modules (e.g. by the gtk module).
This was first introduced in pygtk in a 2000-10-25 commit, and was moved from the pygtk module to the pango module in a 2006-04-01 commit in response to https://bugzilla.gnome.org/show_bug.cgi?id=328031
site.py
Looking over the source history in upstream's Subversion:
- the site.py hook to set the default encoding from the locale was added on June 7th 2000 in rev 15634:
'Added support to set the default encoding of strings at startup time to the values defined by the C locale...'
- the code was disabled by default 5 weeks later on July 15th 2000 in rev 16374 by effbot (Fredrik Lundh):
-- changed default encoding to "ascii". you can still change the default via site.py...:
- and the code was optimized two months later on Sept 18th 2000 in rev 17513, to only set it if it's changed:
Looking over upstream mailing list archives for this period:
- Python-Dev changing the locale.py interface?: Fredrik Lundh <effbot@telia.com>
- followed by: "ascii default encoding": http://mail.python.org/pipermail/python-dev/2000-July/006724.html
(unfortunately side-tracked into a debate of "deprecated" vs "depreciated"); I may have missed some of the discussion though.
sys.setdefaultencoding
The function sys.setdefaultencoding is defined in Python/sysmodule.c; it calls PyUnicode_SetDefaultEncoding(encoding) on the supplied encoding string.
PyUnicode_SetDefaultEncoding is defined in Objects/unicodeobject.c; it has this code:
/* Make sure the encoding is valid. As side effect, this also loads the
   encoding into the codec registry cache. */
v = _PyCodec_Lookup(encoding);
It then copies the encoding into the unicode_default_encoding buffer; this buffer supplies the return value of PyUnicode_GetDefaultEncoding(), which is used in many places inside the unicode implementation, as well as in bytearrayobject.c (bytearray_decode()) and in stringobject.c (PyString_AsDecodedObject() and PyString_AsEncodedObject()), so it would seem that there is at least some risk in changing this setting.
ASCII vs UTF-8
UTF-8 is identical by design to ASCII when the set of characters is composed only from the ASCII character set: code points 0-127 are all represented in UTF-8 as bytes 0-127, identical to ASCII. So any string which was encodable in "ascii" will also be encodable in "utf-8", and the encodings will be byte-for-byte identical. Data containing bytes in the range 128-255 were not valid "ascii", and attempts to decode them to unicode would have failed.
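This compatibility can be checked directly from the interpreter (a small illustrative session using stock Python 2):

>>> u"hello".encode("ascii") == u"hello".encode("utf-8")   # pure-ASCII text encodes identically
True
>>> "\xc3\xa9".decode("ascii")                             # a byte above 127 was never valid ascii
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
>>> "\xc3\xa9".decode("utf-8")
u'\xe9'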
An internationalized application is highly likely to store and emit characters outside of code points 0-127. With the current setting, scripts that do so will work when run directly at a TTY (since sys.stdout then has UTF-8 encoding), but will fail with a UnicodeEncodeError when run as a cron job or as part of a shell pipeline.
Applications which used i18n unicode strings could previously only have worked correctly if they were manually encoding to UTF-8 on every output call; they should see no regression. Applications which load unicode strings from translation catalogs would never have worked correctly, and will now work.
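Today such code typically has to apply the encoding by hand on every write; with the proposed change the explicit .encode() becomes unnecessary. A minimal sketch of the current workaround:

# Current workaround: encode explicitly before every write, so that the
# ascii default encoding is never consulted.
greek = u"\u03b1\u03b2\u03b3"
print greek.encode("utf-8")

# With a locale-aware (UTF-8) default encoding, the plain form would also
# work in pipelines and cron jobs:
# print greek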
Note: the only ways existing applications could have worked correctly are:

- They load unicode strings and manually convert to UTF-8 on output. Fixing the default encoding will remove the need for manual conversion on every output call.
- They load their i18n strings from a message catalog in UTF-8 format. This is typically specified as the codeset parameter in gettext.bind_textdomain_codeset() or gettext.install(). In this case the strings loaded from the catalog are not <unicode> instances, but are normal python <str> instances. When gettext is told to return strings via _() using the UTF-8 codeset, python represents them as 'str' not 'unicode'; in other words they are sequences of octets. On output the default encoding is not applied, because they are not unicode strings but vanilla strings. Thus output works in our environment because their entire lifetime in python is as UTF-8.
- They imported pango at a suitably early point during the running of the script, which internally rewrote the default encoding to be UTF-8.
However, there are many good reasons to work with i18n strings as <unicode> instances rather than as byte sequences within <str> instances which happen to be UTF-8 encoded (e.g. you can't count the number of characters correctly, can't safely concatenate them with unicode strings, etc.). Thus applications should be able to represent their i18n strings as unicode (internally as UCS-4) and have correct translation to UTF-8 automatically applied by python on output, rather than manually.
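A small illustration of the difference (character-level operations only behave correctly on <unicode> instances):

>>> s = "\xce\xb1\xce\xb2\xce\xb3"   # "αβγ" held as UTF-8 bytes in a str
>>> len(s)                           # counts bytes, not characters
6
>>> u = s.decode("utf-8")
>>> len(u)                           # counts characters
3
>>> u[1]
u'\u03b2'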
(adapted from jdennis's comments on https://bugzilla.redhat.com/show_bug.cgi?id=243541)
The PyArg_ and Py_BuildValue APIs
There are numerous Python modules which wrap libraries, some modules provided as part of the core python package, and some from add-on rpms.
In order to wrap the libraries, the module implementations must convert data between struct PyObject instances and the data types that the libraries use.
The standard way to convert from a struct PyObject to a "native" data type is the PyArg_ API:
- The "s", "s#", and "s*" formats (and the "z" variants) will handle a struct PyUnicodeObject as input by encoding the data using the default encoding and generating a C-style NUL-terminated string. By changing from "ascii" to "UTF-8" we convert cases that would fail before, and make them work.
- The "u" variants work on unicode and UCS-4 data, or require the caller to specify an encoding.
- "et" passes the data from PyStringObject instances without recoding; I don't see how changing from "ascii" to "UTF-8" can cause a problem here.
The Py_BuildValue API works the other way, taking "native" types and converting back to struct PyObject instances. In each case, I believe that it is safe to change the default encoding from ascii to UTF-8.
Benefit to Fedora
With this change, developers will find it significantly easier to use Fedora to write Python scripts: scripts will behave the same way when run within shell pipelines or during cron jobs as when the script is invoked directly from a terminal - a source of mysterious errors will go away.
Scope
(I plan to raise this on the upstream Python development list)
In theory this is just a one-byte change in the site.py shipped in the python rpm.
We do not plan to make the change in the python3 rpm, although it has the same code in its site.py; the existing python3 implementation already defaults to UTF-8, which matches our defaults.
How To Test
Given that this one-line change makes a deep and subtle change to the internals of Python, the best way of testing this is to get it into Rawhide ASAP and for people to test their Python code on a version of Python with the change.
If anyone encounters a regression related to this change, please file a bug immediately, and let dmalcolm@redhat.com know.
I have been testing with this change on my main development box and have not yet seen any regressions. John Dennis has also tested this and reports no regressions.
Smoketest
- Run python -c "import sys; print(sys.getdefaultencoding())"
- It should report UTF8, not ascii (assuming that LANG ends with "utf8")
- The same test should be runnable with python3, and report utf-8
Shell pipelines
The following shell pipeline should display the first 3 letters of the Greek alphabet (alpha, beta, gamma) within "less"
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' | less
It should no longer exhibit a UnicodeEncodeError like this one:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
User Experience
Most users should notice no change. People maintaining Python scripts should find that mysterious errors which only occur when their scripts run inside shell pipelines or during cron jobs go away, and that such scripts now behave as they do when run manually from a terminal.
If anyone encounters a regression related to this change, please file a bug immediately, and let dmalcolm@redhat.com know.
Dependencies
None: this is a one-line change in our python rpm.
Contingency Plan
In theory this is a one-line change in the site.py file shipped in our python rpm, and so it can be backed out by reverting that one line change.
(It may be that Python applications develop a dependency on our Python having made this change and so would be broken by reverting)
Documentation
- Extensive information on this can be found at Features/PythonEncodingUsesSystemLocale.
Release Notes
- Python 2's site.py has been changed so that the default encoding now respects the encoding from the LANG environment variable, typically using UTF-8, rather than defaulting to ASCII. This should eliminate a common source of UnicodeEncodeError problems seen when running Python within shell pipelines.