
= Change Python 2's encoding to use the system locale =

== Summary ==

Make Fedora's C implementation of Python 2 use a locale-aware default string encoding (generally "UTF-8"), rather than hardcoding "ascii", thus avoiding UnicodeEncodeError exceptions when running scripts in shell pipelines and cron jobs.

== Owner ==

 * Name: Dave Malcolm
 * Email: 

== Current status ==

 * Targeted release: Fedora 13
 * Last updated: 2010-01-20
 * Percentage of completion: withdrawn by owner

The upstream python community has requested that I not make this change, so I'm withdrawing this feature proposal. It's not clear to me how to do that through our feature process; the only available exit-states seem to be "Complete" and "Incomplete".

(unfortunately the python-dev list archives for that period seem corrupted; the gmane archive for the thread is here).

== Detailed Description ==
This was originally requested as bug 243541.

Python's site.py includes this fragment of code:
<pre>
def setencoding():
    """Set the string encoding used by the Unicode implementation.  The
    default is 'ascii', but if you're willing to experiment, you can
    change this."""
    encoding = "ascii" # Default value set by _PyUnicode_Init()
    if 0:
        # Enable to support locale aware default string encodings.
        import locale
        loc = locale.getdefaultlocale()
        if loc[1]:
            encoding = loc[1]
    if 0:
        # Enable to switch off string to Unicode coercion and implicit
        # Unicode to string conversion.
        encoding = "undefined"
    if encoding != "ascii":
        # On Non-Unicode builds this will raise an AttributeError...
        sys.setdefaultencoding(encoding) # Needs Python Unicode build !
</pre>

It is proposed to change the first conditional from <code>if 0:</code> to <code>if 1:</code> in our CPython 2 build, so that Fedora's Python by default reads the locale from the environment and uses that encoding. This will generally mean "utf-8" is used, rather than "ascii".

== CPython's "default encoding" ==
The C implementation of Python 2 has two ways it can represent text strings:
 * the classic legacy <code>str</code> object, in which each character is represented as a single byte in an undefined character set.  This is represented internally as a <code>PyStringObject</code>
 * <code>unicode</code> objects, where each character is represented as either a 16-bit or 32-bit word in the Unicode character set (UCS). This is represented internally as a <code>PyUnicodeObject</code>.  We use UCS4 (32-bit) in Fedora's builds of Python.

Python 2 will encode and decode between unicode objects and str objects based on what Python believes the character set and character encoding are for the str object.

CPython 2's implementation has an internal read-only variable called <code>unicode_default_encoding</code>, which is returned by <code>sys.getdefaultencoding()</code> (for brevity's sake I'm going to refer to this variable as default_encoding). Whenever Python passes a string to an external API or receives a string from an external API - e.g. any string ultimately passed to a C function where the C binding has not explicitly specified its encode/decode requirements - Python consults the unicode_default_encoding variable to decide how to encode/decode that string. That means any time you print a string, open a file, or call a function in a CPython binding, it is subject to the default encoding.
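The coercion described above can be made visible with explicit calls. A minimal Python 3 sketch, added here purely for illustration (Python 3's <code>bytes</code> plays the role of Python 2's <code>str</code>, and encoding is always explicit):

```python
# Explicit encode/decode between the two string types.  In Python 2 the
# same conversion happened implicitly, via the default encoding, whenever
# a unicode object met a byte string or crossed a C API boundary.
text = u"caf\u00e9"                  # a text (Python 2: unicode) object

as_ascii_fails = False
try:
    text.encode("ascii")             # what a hardcoded "ascii" default does
except UnicodeEncodeError:
    as_ascii_fails = True

as_utf8 = text.encode("utf-8")       # what a locale-aware default would do
assert as_ascii_fails
assert as_utf8 == b"caf\xc3\xa9"
assert as_utf8.decode("utf-8") == text
```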

(In Python 3, the <code>str</code> object became <code>bytes</code>, and <code>unicode</code> became the <code>str</code> object.)

The default encoding is set in _PyUnicode_Init() to "ascii" for historical reasons. Then site.py makes the default encoding effectively read-only, by removing <code>sys.setdefaultencoding</code> from the <code>sys</code> module namespace. This means you cannot call <code>sys.setdefaultencoding()</code> without generating an AttributeError. This also means Python's default encoding is locked to "ascii".

The reason for this appears to be an optimization within CPython: at the C level a <code>PyUnicodeObject</code> actually carries two copies of the string:
 * its UCS-{2,4} representation (this is the <code>str</code> field), and
 * its encoded representation after encoding it according to the value in the global <code>unicode_default_encoding</code> variable; this is the <code>defenc</code> field.

Think of this as a cached value of the string in the default encoding. The first time a unicode object is subject to encode/decode, it caches the encoded value of the string, to avoid having to encode/decode every time the unicode object needs to be accessed in its encoded form. This cached value is invalidated when the unicode string content changes, but there is no mechanism to invalidate it when the default encoding changes (hence, I believe, the restrictions on changing the default encoding, and the possibility that any <code>unicode</code> instances created prior to the modification of the default encoding may exhibit incorrect behavior with respect to encoding).

In Python 3, the default value of the default encoding is "utf-8" (this has been the case in the py3k branch of CPython's implementation since revision 55108); we do not plan to touch site.py for python3.

== The system locale's encoding ==
In Fedora there is the notion of the "locale", embodying various localization parameters for the whole operating system. From the perspective of the operating system locale there is an "encoding", separate from that of the CPython runtime. From this perspective, our default encoding is UTF-8. This is normally set via login scripts. The user, if they wish, may choose to override the system default. In both instances the default language and encoding are exported via an environment variable:
<pre>
[david@brick ~]$ echo $LANG
en_US.utf8
</pre>

It's possible to query this locale information from Python using the <code>locale</code> module:
<pre>
>>> import locale
>>> print locale.getdefaultlocale()
('en_US', 'UTF8')
</pre>
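The same query still works in Python 3; a small sketch, added here for illustration (note that <code>locale.getdefaultlocale()</code> is deprecated in recent Python 3 releases, and the environment is forced to a known value so the result is deterministic):

```python
import os
import locale

# Force a known environment; the locale module consults LC_ALL,
# LC_CTYPE, LANG (in that order) to derive the default locale.
os.environ["LC_ALL"] = "en_US.UTF-8"
os.environ["LANG"] = "en_US.UTF-8"

lang, enc = locale.getdefaultlocale()
assert lang == "en_US"
assert enc == "UTF-8"
```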

== The encoding of stdout/stderr/stdin varies with TTY-connectivity ==
To add to the confusion, Py_InitializeEx sets up the encoding of each of stdout, stderr and stdin to the default locale encoding (typically UTF-8), _provided_ they are connected to a tty:
<pre>
#0  PyFile_SetEncodingAndErrors (f=0xb7fc5020, enc=0x80edc28 "UTF-8",
    errors=0x0) at Objects/fileobject.c:458
#1  0x04fbdd49 in Py_InitializeEx (install_sigs=<value optimized out>) at
    Python/pythonrun.c:322
#2  0x04fbe29e in Py_Initialize () at Python/pythonrun.c:359
#3  0x04fc9886 in Py_Main (argc=<value optimized out>,
    argv=<value optimized out>) at Modules/main.c:512
#4  main () at Modules/python.c:23
</pre>
so that the python interpreter run interactively from a terminal uses UTF-8 for the standard streams:
<pre>
>>> sys.getdefaultencoding()
'ascii'
>>> sys.stdin.encoding
'UTF-8'
>>> sys.stdout.encoding
'UTF-8'
>>> sys.stderr.encoding
'UTF-8'
</pre>
When the streams are not connected to a tty, however (e.g. when redirected to a file, or used within a shell pipeline), the default encoding is used instead, and printing non-ASCII unicode fails:
<pre>
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' > foo.txt
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' | less
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
</pre>

== PyGTK and Pango ==
A significant "gotcha" here is that the <code>pango</code> Python module forces the global default encoding variable to be 'utf-8'. It can do this because it's implemented in C, where the restrictions imposed by site.py do not apply; it directly calls PyUnicode_SetDefaultEncoding:
<pre>
    /* set the default python encoding to utf-8 */
    PyUnicode_SetDefaultEncoding("utf-8");
</pre>

Let's take a little test drive and see things in action for ourselves:
<pre>
$ python
Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51)
[GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.setdefaultencoding('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'setdefaultencoding'
>>> import pango
>>> sys.getdefaultencoding()
'utf-8'
</pre>

This hidden global side-effect can be particularly confusing, since the <code>pango</code> module is typically imported implicitly by other modules (e.g. by the <code>gtk</code> module).

This was first introduced in pygtk in a 2000-10-25 commit, and was moved from the pygtk module to the pango module in a 2006-04-01 commit in response to https://bugzilla.gnome.org/show_bug.cgi?id=328031

== site.py ==
Looking over the source history in upstream's Subversion:
 * the site.py hook to set the default encoding from the locale was added on June 7th 2000 in rev 15634: "Added support to set the default encoding of strings at startup time to the values defined by the C locale..."
 * the code was disabled by default 5 weeks later on July 15th 2000 in rev 16374 by effbot (Fredrik Lundh): "changed default encoding to 'ascii'. you can still change the default via site.py..."
 * and the code was optimized two months later on Sept 18th 2000 in rev 17513, to only set it if it's changed

Looking over upstream mailing list archives for this period:
 * Python-Dev "changing the locale.py interface?": Fredrik Lundh
 * followed by: "ascii default encoding": http://mail.python.org/pipermail/python-dev/2000-July/006724.html

(unfortunately side-tracked into a debate of "deprecated" vs "depreciated"); I may have missed some of the discussion though.

== sys.setdefaultencoding ==
The function <code>sys.setdefaultencoding</code> is defined in Python/sysmodule.c; it calls PyUnicode_SetDefaultEncoding(encoding) on the string "encoding".

PyUnicode_SetDefaultEncoding is defined in Objects/unicodeobject.c; it has this code:
<pre>
    /* Make sure the encoding is valid. As side effect, this also loads the
       encoding into the codec registry cache. */
    v = _PyCodec_Lookup(encoding);
</pre>
It then copies the encoding into the buffer <code>unicode_default_encoding</code>. This buffer supplies the return value for PyUnicode_GetDefaultEncoding(), which is used in many places inside the unicode implementation, plus in bytearrayobject.c (bytearray_decode) and in stringobject.c (PyString_AsDecodedObject and PyString_AsEncodedObject), so it would seem that there's at least some risk in changing this setting.
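The validation step performed by _PyCodec_Lookup() has a Python-level counterpart in <code>codecs.lookup()</code>; a small sketch, added here for illustration:

```python
import codecs

# codecs.lookup() is the Python-level face of _PyCodec_Lookup(): it
# validates the encoding name and, as a side effect, caches it in the
# codec registry.
info = codecs.lookup("utf-8")
assert info.name == "utf-8"

# An unknown encoding is rejected with a LookupError, just as
# sys.setdefaultencoding() would reject it in Python 2.
try:
    codecs.lookup("no-such-encoding")
    raise AssertionError("lookup unexpectedly succeeded")
except LookupError:
    pass
```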

== ASCII vs UTF-8 ==
UTF-8 is identical by design to ASCII when the set of characters is composed only from the ASCII character set: code points 0-127 are all represented in UTF-8 as bytes 0-127, identical to ASCII. So any string which was encodable in "ascii" will also be encodable in "utf-8", and the encodings will be byte-for-byte identical. Data containing bytes in the range 128-255 were not valid "ascii", and attempts to decode them to unicode would have failed.
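Both halves of this claim can be checked directly; a short Python 3 sketch, added here for illustration:

```python
# Code points 0-127 encode to the same single byte in ASCII and UTF-8.
for cp in range(128):
    ch = chr(cp)
    assert ch.encode("ascii") == ch.encode("utf-8") == bytes([cp])

# Bytes in the range 128-255 are not valid ASCII: decoding fails...
try:
    b"\xc3\xa9".decode("ascii")
    raise AssertionError("decode unexpectedly succeeded")
except UnicodeDecodeError:
    pass

# ...but the same bytes are a valid UTF-8 sequence (U+00E9, 'é').
assert b"\xc3\xa9".decode("utf-8") == u"\u00e9"
```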

An internationalized application is highly likely to store and emit characters outside of code points 0-127. With the current setting, scripts that do so will work when run directly at a TTY (since sys.stdout then has UTF-8 encoding), but will fail with a UnicodeEncodeError when run as a cronjob, or as part of a shell pipeline.

Applications which used i18n unicode strings previously could only have worked correctly if they were manually encoding to UTF-8 on every output call; they should see no regression. Applications which load unicode strings from translation catalogs would never have worked correctly, and will now work.

Note, the only way existing applications could have worked correctly is:

 * 1) They load unicode strings and manually convert to UTF-8 on output.  Fixing the default encoding will remove the need for manual conversion on every output call.
 * 2) They load their i18n strings from a message catalog in UTF-8 format. This is typically specified as the codeset parameter in <code>gettext.install()</code> or <code>gettext.translation()</code>. In this case the strings loaded from the catalog are not <code>unicode</code> instances, but are normal python <code>str</code> instances.  When gettext is told to return strings via _() using the UTF-8 codeset, python represents them as 'str' not 'unicode'; in other words they are sequences of octets. On output the default encoding is not applied, because they are not unicode strings, rather they are vanilla strings. Thus output works in our environment because their entire lifetime in python is as UTF-8.
 * 3) They imported pango at a suitably early place during the running of the script, which internally rewrote the default encoding to be UTF-8.

However, there are many good reasons to work with i18n strings as <code>unicode</code> instances, not as byte sequences within <code>str</code> instances which happen to be represented as UTF-8 (e.g. you can't count the number of characters, can't safely concatenate, etc.). Thus applications should be able to represent their i18n strings as unicode (internally as UCS-4) and output them correctly, with the translation to UTF-8 automatically applied by python, not manually.
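The practical difference shows up immediately in a short Python 3 sketch, added here for illustration (<code>bytes</code> stands in for Python 2's <code>str</code>):

```python
# The Greek letters alpha, beta, gamma as text vs. as UTF-8 octets.
s = u"\u03b1\u03b2\u03b3"
b = s.encode("utf-8")

assert len(s) == 3            # the text type counts characters
assert len(b) == 6            # the byte type counts octets (2 per letter)
assert s[:1] == u"\u03b1"     # slicing text respects character boundaries
assert b[:1] != b"\xce\xb1"   # slicing bytes can split a character mid-sequence
```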

(adapted from jdennis's comments on https://bugzilla.redhat.com/show_bug.cgi?id=243541)

== The PyArg_ and Py_BuildValue APIs ==
There are numerous Python modules which wrap libraries, some modules provided as part of the core python package, and some from add-on rpms.

In order to wrap the libraries, the module implementations must convert data between Python string (<code>str</code> and <code>unicode</code>) instances and the data types that the libraries use.

The standard way to convert from a Python string to a "native" data type is the PyArg_ API:
 * The "s", "s#", and "s*" formats (and the "z" variants) will handle a <code>unicode</code> as input by encoding the data using the default encoding and generating a C-style NUL-terminated string.  By changing from "ascii" to "UTF-8" we convert cases that would fail before, and make them work.
 * The "u" variants work on unicode and UCS-4 data, or require the caller to specify an encoding.
 * "et" passes the data from PyStringObject instances without recoding; I don't see how changes from "ascii" to "UTF-8" can cause a problem here.

The Py_BuildValue API works the other way, taking "native" types and converting back to Python string instances. In each case, I believe that it is safe to change the default encoding from ascii to UTF-8.

== Benefit to Fedora ==
With this change, developers will find it significantly easier to use Fedora to write Python scripts: scripts will behave the same way when run within shell pipelines or during cron jobs as when the script is invoked directly from a terminal - a source of mysterious errors will go away.

== Scope ==
(I plan to raise this on the upstream Python development list)

In theory this is just a one-byte change (<code>if 0:</code> becoming <code>if 1:</code>) in the <code>site.py</code> shipped in the <code>python</code> rpm.

We do not plan to make the change in the <code>python3</code> rpm, although it has the same code in its <code>site.py</code>; the existing implementation defaults to UTF-8, which matches our defaults.

== How To Test ==
Given that this one-line change makes a deep and subtle change to the internals of Python, the best way of testing this is to get it into Rawhide ASAP and for people to test their Python code on a version of Python with the change.

If anyone encounters a regression related to this change, please file a bug immediately, and let dmalcolm@redhat.com know.

I have been testing with this change on my main development box and have not yet seen any regressions. John Dennis has also tested this and reports no regressions.

== Smoketest ==

 * Run <code>python -c "import sys; print sys.getdefaultencoding()"</code>
 * It should report <code>utf-8</code>, not <code>ascii</code> (assuming that LANG ends with "utf8")
 * The same test should be runnable with python3, and report <code>utf-8</code>

== Shell pipelines ==
The following shell pipeline should display the first 3 letters of the Greek alphabet (alpha, beta, gamma) within "less":
<pre>
[david@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' | less
</pre>

It should no longer exhibit a UnicodeEncodeError like this one:
<pre>
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
</pre>

== User Experience ==
Most users should notice no change. People maintaining Python scripts should find that mysterious errors for scripts that only occur when inside shell pipelines or during cron jobs go away, and that they now work as they do when running the script manually.


== Dependencies ==
None: this is a one-line change in our python rpm.

== Contingency Plan ==
In theory this is a one-line change in the site.py file shipped in our python rpm, and so it can be backed out by reverting that one line change.

(It may be that Python applications develop a dependency on our Python having made this change and so would be broken by reverting)

== Documentation ==

 * Extensive information on this can be found at Features/PythonEncodingUsesSystemLocale.

== Release Notes ==

 * Python 2's <code>site.py</code> has been changed so that Python 2's default encoding now respects the encoding from the <code>LANG</code> environment variable, typically using UTF-8, rather than defaulting to ASCII.  This should eliminate a common source of <code>UnicodeEncodeError</code> problems seen when running Python within shell pipelines.

== Comments and Discussion ==

 * See Talk:Features/PythonEncodingUsesSystemLocale