Draft
This is a raw log of everything that was discussed at the Python Guidelines workshop. I'll apply some formatting to this and move it to a new area to finish it off.
Latest revision as of 20:45, 27 August 2013

At Flock 2013 in Charleston, SC we met to discuss various ways in which the Python Guidelines should be updated in light of the changes happening to upstream packaging standards, tools, and the increasing push to use python3. These are the notes from that discussion.

Wheel: the new upstream distribution format

Wheels have more metadata, so it becomes more feasible to automatically generate spec files from upstream releases. In Fedora we'd use wheels like this:

  • Use the tarball from pypi, not the wheel.
  • In %prep, unpack the tarball
  • In %build create a wheel with something like pip wheel --no-deps $(pwd).
    • This may create a .whl file or an unpacked wheel. Either one can be used in the next step.
  • In %install, use something like pip install wheel --installdir to install the wheel. It gets installed onto the system in different FHS compliant dirs:
    • datadir
    • scriptdir
    • platlib
    • purelib
    • docsdir
    • These dirs are encoded in a pip (or python3 stdlib) config file.
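Put together, the steps above might look like this in a spec file. This is a rough sketch, not settled guidelines: the exact pip invocations (notably how --root interacts with the FHS dirs) were still open questions in these notes.

```spec
%prep
%setup -q                        # unpack the pypi tarball, not a wheel

%build
# may produce a .whl file or an unpacked wheel; either works for %%install
pip wheel --no-deps $(pwd)

%install
# hypothetical invocation -- pip's --root support is the piece that
# python-wheel's own tool might lack
pip install --root %{buildroot} --no-deps *.whl
```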
python-wheel is not as good as pip
The wheel command from python-wheel might not have an equivalent to --root (to install into the buildroot), but pip does, so we'd need to use pip to install.
Built package format
Wheels are intended as a built package format. Upstream advice is that we do not package from wheels if we want our packages to be "from source". Instead, we should favour sdists or checkouts from upstream's SCM.

Installing wheels creates a "metadata store" (distinfo directory) so we would want to install using the wheel package that we build so that this directory is fully installed. This way pip knows about everything that's installed via system packages.
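These notes predate it, but the distinfo directories described above are exactly what python3's importlib.metadata (python3.8+) now reads; a quick sketch of querying the metadata store:

```python
from importlib import metadata

# Every .dist-info (distinfo) directory on sys.path shows up as a
# Distribution. This is the "metadata store" the notes describe: it is what
# lets pip see what was installed via system packages.
dists = list(metadata.distributions())
names = sorted({d.metadata["Name"] for d in dists if d.metadata["Name"]})
```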

  • setup.py install => will only play nice with the distinfo data in certain cases. So most of the time we want to convert to wheel building.
    • If the package can't be built as a wheel, distinfo will still be created if setuptools is used in setup.py and a special command line flag is passed; if it's not, it likely will not be.
  • pip always uses setuptools to install (even if distutils is used in the setup.py) so it will always create distinfo metadata.
  • With pip wheel we can use a single directory. No need to copy to a second directory anymore.
    • pip wheel (build) will clean the build artifacts automatically.
  • We will no longer need egginfo files and dirs (if distinfo is installed)

pip-1.5 is due out by the end of the year. (?Not sure why this was important... it brought a new feature but I don't remember what that was?)

Upgrading to Metadata 2.0 will be an automatic thing if we build and install from wheels. METADATA-2.0 will be able to answer "This package installs these python modules". The timeframe for this is pip-1.6 which is due out the middle of next year. (Hopefully f22).

pyp2rpm from Slavek may be able to use Metadata 2.0 to generate nearly complete spec files.

Should we depend on both pip and setuptools explicitly?

In the guidelines, BuildRequire both, because upstream pip may make setuptools an optional feature and we may or may not put that requirement into the pip package.
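In spec-file terms the recommendation is simply (package names as they exist at the time of these notes):

```spec
BuildRequires:  python-pip
BuildRequires:  python-setuptools
```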

Metadata 2.0 for non-wheels

For automake and other ways of creating packages, we want to install the distinfo directory. Currently, the upstream may be generating and installing egg-info. If so, this could just be updated to provide distinfo instead. If the upstream doesn't provide egg-info now, we aren't losing anything by not generating distinfo (ie: things that didn't work before (because they lacked metadata) will simply continue not to work).

It might be nice to get generation of the metadata into upstream automake itself but someone would have to commit to doing that. We probably don't need to get generation of wheels into upstream automake because wheels are a distribution format, not an install format.

Shebang lines

We agree that we want to convert shebang lines to /usr/bin/python2 and /usr/bin/python3 (and drop usage of /usr/bin/python).

An FPC ticket has already been opened -- an implementation is being hashed out on that ticket. Something that may help: if we change the shebang line on pip itself to /usr/bin/python2, that should affect everything it installs. (Need to check this)
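The audit could be as simple as checking the first line of each installed script. A minimal sketch (helper name and regex are ours, not from the FPC ticket):

```python
import re
import tempfile

# Flag scripts whose shebang is the ambiguous /usr/bin/python or
# /usr/bin/env python rather than an explicit /usr/bin/python2 or python3.
AMBIGUOUS = re.compile(rb'#!\s*(?:/usr/bin/env\s+python|/usr/bin/python)\s*$')

def needs_shebang_fix(path):
    with open(path, 'rb') as f:
        first = f.readline().rstrip(b'\r\n')
    return bool(AMBIGUOUS.match(first))

# demo on a throwaway script with a bare "python" shebang
with tempfile.NamedTemporaryFile('wb', suffix='.py', delete=False) as f:
    f.write(b'#!/usr/bin/python\nprint("hello")\n')
flagged = needs_shebang_fix(f.name)
```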

  • May need to use some pip command line option to have scripts installed the setup.py script target install (?not sure what this note was meant to mean?)
python3-pip
The pip script for python3 is named python3-pip which follows the guidelines recommendation to have a "python3-" prefix when a package provides both python2 and python3 scripts. We discussed changing this (either the specific pip package or the general guidelines) and decided that this was fine. (python3-pip actually provides both python3-pip and pip-python3)


Parallel Python2 and Python3 stack

Notes to packagers who need to port

Packagers can help upstreams port their code to python3. Here are some hints to help them:

Explicitly saying from __future__ import unicode_literals is almost certainly a bad thing for several reasons:

  • Some things should be the native string type. Attribute names on objects, for instance.
  • If you are in the frame of mind that you are reading python2 code, then you may be surprised when a bare literal string returns unicode. The from __future__ import unicode_literals occurs at the top of the file while the strings themselves are spread throughout. When you get a traceback and go to look at the code you will almost certainly jump down to the line the traceback is on and may well miss the unicode_literals line at the top.
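A quick illustration of the hazard described above: python3 (like python2 with byte/unicode promotion disabled) refuses to mix bytes and text, so explicit prefixes at the use site are easier to audit than a module-wide unicode_literals import hidden at the top of the file.

```python
# In python2, bytes + text works via implicit coercion; in python3 it is a
# TypeError, so the code must say which type it means at the point of use.
data = b"header:"
try:
    data + "value"                         # legal in python2, error in python3
    mixed_ok = True
except TypeError:
    mixed_ok = False
combined = data + "value".encode("utf-8")  # explicit; works on python2 and 3
```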

Some programs and command line switches help migrate:

  • The python-unicodenazi package provides a module that helps catch mixing of byte strings and unicode strings. These mixtures are almost certainly illegal in python3.
  • python2 -b -- turns off automatic conversion between byte strings and unicode strings so that you get a warning or an error when you mix them.
  • python-modernize -- attempts to convert your code to a subset of python2 that runs on python3.
  • 2to3 -- (when run in non-overwrite mode, it will simply tell you what things need to be changed).
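To see what the -b style flags catch, python3's stricter -bb variant can be run on a one-liner; it turns bytes/str comparisons into errors, which is exactly the kind of mixing that breaks when porting.

```python
import subprocess
import sys

# Run a child interpreter with -bb: comparing bytes with str raises
# BytesWarning instead of silently evaluating to False.
proc = subprocess.run(
    [sys.executable, "-bb", "-c", "b'a' == 'a'"],
    capture_output=True,
    text=True,
)
```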


Python3 by default

We decided on the mailing lists to switch over when PEP394 changes its recommendation. 2015 is the earliest that upstream is likely to change this and it may be later depending on what the ecosystem of python2 and python3 looks like at that time.

To get ready for that eventuality, we need to change shebang lines from /usr/bin/python to /usr/bin/python2. Since we are moving to pip as the means to install scripts, we should audit shebangs after the pip migration and change any that the pip conversion did not take care of.

We also discussed whether to convert scripts from /usr/bin/env python to /usr/bin/pythonX. In the past, there was a perceived cost, as this would deviate from upstream. Now, however, we will have to maintain patches to convert "python" to "python2" anyway, so we could consider banning /usr/bin/env as well. env is not good in the shebang line for several reasons:

  • Will always ignore virtualenv. So scripts run in a virtualenv that use /usr/bin/env will use the system python instead of the virtualenv's python.
  • If a sysadmin installs another python interpreter on the path (for instance, in /usr/local) for their use on their systems, that python interpreter may also end up being used by scripts which use /usr/bin/env to find the interpreter. This might break rpm installed scripts.
  • python3.4 will bundle a version of pip (via get_pip) which users of upstream releases can use to bootstrap an updated pip package from pypi. In Fedora we can have python-libs Requires: python-pip and use a symlink or something to replace the bundled version.
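The PATH-shadowing drawback above is easy to simulate: /usr/bin/env takes whichever "python" appears first on PATH.

```python
import os
import shutil
import tempfile

# Two directories stand in for /usr/local/bin and /usr/bin; the one earlier
# on PATH shadows the system interpreter, so env shebangs may not run the
# python the rpm was built against.
local_bin = tempfile.mkdtemp()    # sysadmin's /usr/local/bin
system_bin = tempfile.mkdtemp()   # system /usr/bin
for d in (local_bin, system_bin):
    exe = os.path.join(d, "python")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(exe, 0o755)

found = shutil.which("python", path=os.pathsep.join([local_bin, system_bin]))
```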

Naming of python modules and subpackages

We have three potential package names:

  • python2-setuptools
  • python3-setuptools
  • python-setuptools

These can be real toplevel packages (directly made from an srpm name) or a subpackage. There are several reasons that separate packages are better than subpackages:

  • It allows the packager to tell when to abandon the python2 version. If they orphan the python2 version and no one picks it up, then it is no longer important enough for anyone to use. With subpackages, the maintainer would remove the python2 version from their spec file. Then they'd get a bug report asking them to put it back if someone was still using it (or people would stop using Fedora because it no longer provided the python2 modules they needed).
  • It allows the python2 and python3 packages to develop independently. With subpackages, a bug in one version of the package prevents the build from succeeding in either. This can stop package updates to either version even though the issue only exists in one or the other.
  • The spec file is cleaner in that there are no conditionals for disabling python2 or python3 builds.

Separate packages have the following drawbacks:

  • A packager that cares about both python2 and python3 has to review and build two separate packages.
  • We suspect that with two packages, many python modules will only be built for python2, because no one will care enough to do the extra work of building the python3 version.

On first discussing this, we came up with the following plan:

  • New packages -- Two separate packages
  • Old packages -- grandfathered in but if the reasons make sense to the packager then you can split them into separate packages

After further discussion, and deciding to put more weight on wanting python3 packages built, we decided that we'd stay closer to the current guidelines, proposing slight guidelines changes so that the rationale for subpackages vs dual packages is clearer and the two approaches are on a more equal footing.

Module naming

We decided that even though spec files would get uglier, it would make sense to have python-MODULE packages with python2-MODULE and python3-MODULE subpackages. Packages which had separate srpms for these would simply have separately named python2-MODULE and python3-MODULE toplevel packages. The result is that Bugzilla users may have a problem in their python2-MODULE install and have to look up both python2-MODULE and python-MODULE in order to find what component to file the bug under. This may cause extra work but it won't be outright confusing (ie: no python3-MODULE bug will need to be filed under python2-MODULE or vice versa).

For the subpackages, we can add with_python2 conditionals to make building python2 optional on some Fedora versions. There are currently no Fedora or RHEL versions that would disable python2 module building.
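A sketch of such a conditional (the module name is hypothetical); defining with_python2 per distro version would let a future release drop the python2 half without restructuring the spec:

```spec
%global with_python2 1

%if 0%{?with_python2}
%package -n python2-example
Summary:  Python 2 build of the example module
%endif
```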

pypy

We wondered how we should (or if we should) package modules for pypy. Problems with pypy:

  • Realistically, if you're using C dependencies you shouldn't be using pypy (pypy doesn't do ref counting natively, so it has to be emulated for the C API. This can cause problems: bugs in an extension's refcounting may cause problems in the emulation where they would be hidden in CPython.)
    • Some of platlib will work using the emulated C API.
  • The byte compiled files will differ
    • At the source level you could share purelib
    • python3.2(3?) added a different directory to save the CPython byte compiled files but this won't help with python2
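The python3 change referenced above is PEP 3147: byte-compiled files carry an interpreter tag and live in __pycache__, so CPython's and pypy's caches can coexist, while python2 writes untagged .pyc files next to the source.

```python
import importlib.util
import sys

# Each python3 interpreter has its own cache tag, and byte-compiled files go
# into a shared __pycache__ directory named with that tag.
tag = sys.implementation.cache_tag                     # e.g. "cpython-312"
pyc_path = importlib.util.cache_from_source("pkg/mod.py")
```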

After some tired discussion (this was at the end of the day and end of the discussion) we decided it would be worthwhile to try this:

  • It could be worth a try to have pypy use the system site-packages that python has.
    • pypy would use the site-packages via a symlink in pypy's tree to the system site-packages. We would release note it as:

This is a technical preview -- many things may not work and we reserve the right for this to go away in the future. The implementation of how pypy gets access to site-packages may well change in the future.

We also tried to decide whether we only wanted to build up a pypy module stack or if we also wanted to allow applications we ship to use pypy. At first we thought that it might be better not to rely on pypy. But someone brought up the skeinforge package. skeinforge runs 4x faster when it uses pypy than when it uses cpython. (skeinforge slices 3d models for 3d printers to print.) So there is a desire to be able to use it.

We tentatively decided that packages should be able to use pypy at maintainer discretion. This may need more thought on how to limit it in some way for now (esp. because we may change how pypy site-packages works).

Post-Flock discussion
I talked with Alex Gaynor, one of the pypy upstream developers, after Flock and he didn't think that a symlink was very clean. He didn't know of any problems off hand, but he didn't think it was a very good idea. I think we may want to explore a multi-stack approach (similar to how we package for python2 and python3) instead. He also noted that compiled extensions cannot be shared, as the ABI is different.

Tangent: SCL - Collections

  • Use it to create a parallel stack.

What is the advantage over virtualenv

With virtualenv, to find out what's on your system you have to consult both rpm and pip; an SCL can tell you from a single system (rpm). If you build SCLs from existing rpms then you may know more about what rpms are installed. Otherwise you just have a blob, but even the blob has useful information:

  • You do have knowledge of what files are on the filesystem in the rpm database so that allows rpm -ql and rpm -qf to work
  • virtualenv doesn't integrate with people's current tools to deal with rpms (createrepo, yum, etc)
  • It's better to have a one-to-one relationship between what's in the SCL and system packages (no bundling).