Add -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer to default compilation flags

Summary

Fedora will add -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer to the default C/C++ compilation flags, which will improve the effectiveness of profiling and debugging tools.

This Change will be implemented for Fedora Linux 38, and the Change authors and FESCo will evaluate whether to retain it by Fedora Linux 40. The Change will be implemented via a %_include_frame_pointers macro so that packages can trivially opt out of retaining frame pointers during compilation, if needed, by undefining the macro. The Change owners kindly request that packagers track opt-outs in Bugzilla and block them against our tracking bug so that regressions can be appropriately investigated (and hopefully resolved).

Owner

Current status

  • Accepted as a System-Wide Change for Fedora Linux 38
  • devel thread: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/OOJDAKTJB5WGMOZRXTUX7FTPFBF3H7WE/
  • FESCo issue: #2923 (https://pagure.io/fesco/issue/2923), formerly #2817
  • Tracker bug: #2158246 (https://bugzilla.redhat.com/show_bug.cgi?id=2158246)
  • Release notes tracker: #947 (https://pagure.io/fedora-docs/release-notes/issue/947)

Detailed Description

Why perform full system profiling in production?

Credits to Mirek Klimos (myreggg@gmail.com), whose internal note on stacktrace unwinding formed the basis for this description.

Generally, when implementing optimizations after receiving a report on a performance issue, there are three hurdles a developer must overcome:

  • They have to recompile their program with sufficient debugging information to enable accurate and reliable profiling. Frame pointers are an example of such information.
  • They have to reproduce the scenario under which the software performed poorly.
  • They have to gather the necessary profiling data by running the recompiled program in the reproduced scenario.

After gathering the profiling data, the developer can use that data to guide possible optimizations. Usually, this ends up being an iterative process, where a possible optimization is implemented and the scenario is rerun with the recompiled program to measure the effect on performance.

When dealing with a single program without dependencies, recompiling the software, reproducing the scenario and gathering the profiling data might not be terribly hard to achieve. However, when dealing with a large program with many dependencies, either in the form of shared libraries or via IPC, recompiling all of these dependencies with debugging information, reproducing the exact scenario under which the performance issue occurs, and gathering all the profiling data from all the dependencies becomes a complicated exercise.

An interesting approach to avoid the above hurdles is to make sure we can do profiling of the entire system directly in production. This approach means we don't have to recompile our software, don't need to reproduce the scenario under which the software performs poorly, and gives us a single unified approach to gather profiling data for all the applications we're interested in. Naturally, this approach depends on being able to profile the entire system efficiently so that there's no noticeable impact on any running services.

Another requirement (unrelated to this proposal, but interesting nonetheless) is that we need logic to only enable profiling when it's interesting to do so. There are a few different options:

  • On demand profiling: Only start profiling when we receive an explicit request to do so.
  • Interval based continuous profiling: Profile for a specific amount of time every X seconds/minutes/hours/...
  • Trigger based profiling: Start profiling based on some predefined conditions, such as high CPU or memory usage.

If we agree that being able to do full system profiling in production is useful, the next section explains why we need frame pointers in all software running on the system to be able to do effective full system profiling.

How to do full system profiling

Probably the most prominent way to do full system profiling on Linux with low overhead is by using the perf sampling profiler. Sampling profilers like perf operate by statistical profiling: they take a sample every N events (e.g. CPU cycles) to understand the statistical breakdown of time spent in functions or function call stacks executing on the CPU. perf is backed by a kernel subsystem (perf_events) that allows it to take samples every N events with very low overhead. A perf sample can include all kinds of information, but for profiling, what we're typically interested in is the call stack of the programs that are currently executing.

To record samples for specific hardware/software events using the perf subsystem on Linux, developers can use the perf_event_open() system call. To have the recorded samples include the call stack, the PERF_SAMPLE_CALLCHAIN flag can be set in the perf_event_attr struct passed to perf_event_open(). For userspace stacks, the call stack can only be sampled in kernelspace if the userspace program and its dependencies are built with frame pointers. If frame pointers are not available, PERF_SAMPLE_STACK_USER can be used to copy the raw stack contents with each sample, so that the stack can be unwound later in userspace using e.g. DWARF debugging information.
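
As an illustration, here is a minimal sketch (not part of the proposal) of opening a perf event that samples CPU cycles and asks the kernel to record the call chain with each sample; the helper name open_cycles_sampler() and the sampling frequency are arbitrary choices, and consuming the samples from the mmap'd ring buffer is left out:

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* glibc provides no wrapper for perf_event_open(), so call it via syscall(2). */
    static long perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                                int group_fd, unsigned long flags)
    {
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int open_cycles_sampler(void)
    {
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;      /* sample on CPU cycles */
        attr.freq = 1;
        attr.sample_freq = 99;                       /* ~99 samples per second */
        /* Record instruction pointer, tid and the call chain with every sample;
         * the userspace part of the call chain relies on frame pointers. */
        attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID | PERF_SAMPLE_CALLCHAIN;

        /* Monitor the calling process on any CPU; samples are then read from
         * the ring buffer set up by mmap()ing the returned fd. */
        return perf_event_open(&attr, 0, -1, -1, 0);
    }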

The perf subsystem also has support for attaching BPF programs to a perf event fd. The program is called every time an event is sampled and is provided with the perf event data. This can be used to attach arbitrary logic to perf sampling, and makes it possible to implement custom logic on top of perf's sampling without having to leave kernelspace. To get access to the userspace stack from BPF, BPF provides the bpf_get_stackid() helper function. Similar to PERF_SAMPLE_CALLCHAIN, this helper depends on the userspace program and its dependencies having been compiled with frame pointers in order to traverse the call stack.
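
For example, here is a minimal libbpf-style sketch (hypothetical, not part of the proposal) of a BPF program that could be attached to such a perf event fd and that counts how often each userspace call chain is sampled via bpf_get_stackid():

    /* count_user_stacks.bpf.c - attach with e.g. libbpf's bpf_program__attach_perf_event() */
    #include <linux/bpf.h>
    #include <linux/bpf_perf_event.h>
    #include <bpf/bpf_helpers.h>

    #define MAX_STACK_DEPTH 127

    struct {
        __uint(type, BPF_MAP_TYPE_STACK_TRACE);
        __uint(max_entries, 16384);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, MAX_STACK_DEPTH * sizeof(__u64));
    } stack_traces SEC(".maps");

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 16384);
        __type(key, int);      /* stack id */
        __type(value, __u64);  /* number of samples with this stack */
    } counts SEC(".maps");

    SEC("perf_event")
    int count_user_stacks(struct bpf_perf_event_data *ctx)
    {
        __u64 one = 1, *val;
        /* Walk the sampled task's userspace stack (via frame pointers) and
         * intern it in stack_traces; returns an id identifying the call chain. */
        int stack_id = bpf_get_stackid(ctx, &stack_traces, BPF_F_USER_STACK);

        if (stack_id < 0)
            return 0;

        val = bpf_map_lookup_elem(&counts, &stack_id);
        if (val)
            __sync_fetch_and_add(val, 1);
        else
            bpf_map_update_elem(&counts, &stack_id, &one, BPF_ANY);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";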

To get accurate profiling results, we want to be able to sample events at a relatively high sampling rate. This means that we want to do the minimal amount of work every time we sample an event to avoid overhead. Traversing a stack using frame pointers is cheap, since we only have to traverse the frame pointers until we reach the top of the stack. In comparison, to unwind using DWARF, we first have to copy the full stack from kernelspace to userspace, and then unwind the stack using DWARF debugging info, which is relatively slow (see https://fzn.fr/projects/frdwarf/frdwarf-oopsla19.pdf). Because of this, to make full system profiling using perf work effectively, it's imperative that all software running on the system is compiled with frame pointers so that the call stack can be unwound in kernelspace for minimum overall overhead.

Frame pointers for debugging and tracing with BPF

The above profiling example was just one use case where we benefit from having access to the frame pointer in BPF. Since frame pointers enable BPF to unwind every userspace stack, we can get an accurate call stack from every BPF program we can think of. This makes certain kinds of debugging much easier, especially with tools where we want to investigate who is calling a specific function.

A good example is the ustack() function in bpftrace. bpftrace is a high level tracing language for BPF that can easily hook into system calls, function calls, kernel tracepoints, and more. And since frame pointers guarantee that bpftrace's ustack() helper function works all the time, we're able to log the full callstack from every bpftrace script.

As another example, the bcc tools directory has around ten BPF tools with a -U option to print the userspace stack when tracing some event, such as tracing calls to cap_capable() for security capability checks, or just tracing slow function calls in general with the funcslower script. All these tools can only work reliably when all software running on the system is built with frame pointers.

All the above tooling enables in-depth debugging of applications without needing to modify the source code of the applications themselves. BPF can be used to attach to function calls, kernel tracepoints, kernel functions, and system calls, and all of this is presented in an easy to use fashion via bcc and bpftrace.

To summarize, BPF tooling that works with or benefits from stack trace information will work much more reliably when all software is built with frame pointers. As a result, implementing this change proposal will make the BPF tracing ecosystem much more useful on Fedora in general, whereas currently many of the existing BPF tools are hamstrung by the lack of frame pointers.

Unwinding

How does the profiler get the list of function names? There are two parts to it:

  1. Unwinding the stack - getting a list of virtual addresses pointing to the executable code
  2. Symbolization - translating virtual addresses into human-readable information, like function name, inlined functions at the address, or file name and line number.

Unwinding is what we're interested in for the purpose of this proposal. The important things are:

  • Data on stack is split into frames, each frame belonging to one function.
  • Right before each function call, the return address is put on the stack. This is the instruction address in the caller to which we will eventually return — and that's what we care about.
  • One register, called the "frame pointer" or "base pointer" register (RBP), is traditionally used to point to the beginning of the current frame. Every function should back up RBP onto the stack and set it properly at the very beginning.

The “frame pointer” part is achieved by adding push %rbp, mov %rsp,%rbp to the beginning of every function and by adding pop %rbp before returning. Using this knowledge, stack unwinding boils down to traversing a linked list:

(Diagram: unwinding the stack by following the chain of saved RBP values, https://i.imgur.com/P6pFdPD.png)
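
To make the linked-list traversal concrete, here is a minimal sketch (illustrative only; x86-64, GCC/Clang) of collecting a backtrace by following the frame-pointer chain. It assumes every frame on the stack was produced by code built with -fno-omit-frame-pointer; a real unwinder would add bounds and sanity checks:

    #include <stddef.h>

    /* Layout of one stack frame when the frame pointer is preserved: at RBP we
     * find the caller's saved RBP, and right above it the return address. */
    struct frame {
        struct frame *next;    /* saved RBP of the caller */
        void *return_address;  /* where execution resumes in the caller */
    };

    size_t collect_backtrace(void **out, size_t max_frames)
    {
        size_t n = 0;
        /* Start from this function's own frame pointer. */
        struct frame *fp = __builtin_frame_address(0);

        /* Unwinding is just walking a linked list of frames. */
        while (fp && n < max_frames) {
            out[n++] = fp->return_address;
            fp = fp->next;
        }
        return n;
    }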

Where’s the catch?

The frame pointer register is not necessary to run a compiled binary. It makes it easy to unwind the stack, and some debugging tools rely on frame pointers, but the compiler knows how much data it put on the stack, so it can generate code that doesn't need the RBP. Not using the frame pointer register can make a program more efficient:

  • We don’t need to back up the value of the register onto the stack, which saves 3 instructions per function.
  • We can treat the RBP as a general-purpose register and use it for something else.

Whether the compiler sets up the frame pointer or not is controlled by the -fomit-frame-pointer flag, and at the optimization levels used to build packages the default is to omit it, meaning we can't use this method of stack unwinding by default.

To make it possible to rely on the frame pointer being available, we'll add -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer to the default C/C++ compilation flags. This will instruct the compiler to make sure the frame pointer is always available, which in turn will allow profiling tools to provide accurate performance data that can drive performance improvements in core libraries and executables. It will also make stack traces from BPF tooling more reliable, since a stack trace will always be obtainable via frame pointers.
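
As a small illustration of what the flag changes (a sketch; the exact code generation depends on compiler version, target and optimization level), consider a trivial function and the prologue/epilogue it gains when frame pointers are kept:

    /* example.c
     *   gcc -O2 -c example.c                           frame pointer omitted (default)
     *   gcc -O2 -fno-omit-frame-pointer -c example.c   frame pointer preserved
     */
    long add(long a, long b)
    {
        return a + b;
    }

    /* With -fno-omit-frame-pointer the generated x86-64 code keeps the frame
     * chain intact (illustrative, not exact compiler output):
     *
     *   push %rbp            ; save the caller's frame pointer
     *   mov  %rsp,%rbp       ; establish this function's frame
     *   lea  (%rdi,%rsi),%rax
     *   pop  %rbp            ; restore the caller's frame pointer
     *   ret
     *
     * Without the flag, the push/mov/pop disappear and %rbp becomes a free
     * general-purpose register. */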

Feedback

Potential performance impact

  • Meta builds all its libraries and executables with -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer by default. Internal benchmarks did not show a significant performance difference between keeping and omitting the frame pointer for two of our most performance-intensive applications.
  • From https://hal.inria.fr/hal-02297690/document, a paper on DWARF unwinding, we find that Google also compiles all its internal critical software with frame pointers to ensure fast and reliable backtraces.
  • Given that the kernel on Fedora already uses the ORC debuginfo format and this works well, we'll keep compiling the kernel without frame pointers, since there are no profiling or debugging benefits to be gained by compiling the kernel with frame pointers. This prevents any regressions in kernel performance such as those reported in https://lore.kernel.org/all/20170602104048.jkkzssljsompjdwy@suse.de/T/#u.
  • Brendan Gregg from Netflix advocates making -fno-omit-frame-pointer the default in GCC (https://www.brendangregg.com/Slides/SCALE2015_Linux_perf_profiling.pdf)

Should individual libraries or executables show a significant performance degradation caused by including the frame pointer, those packages can opt out on an individual basis as described in https://docs.fedoraproject.org/en-US/packaging-guidelines/#_compiler_flags.
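
In practice, an opt-out boils down to undefining the macro in the package's spec file (a sketch with a placeholder package name; please also track the opt-out in Bugzilla against the IncludeFramePointers tracking bug):

    # somepackage.spec
    # Build this package without frame pointers, e.g. because of a measured,
    # unacceptable performance regression.
    %undefine _include_frame_pointers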

Benchmarking of the performance impact

To verify the performance impact of the proposed change, we compared a number of benchmarks on a Fedora 37 system where every package is built with frame pointers against the same benchmarks on a regular Fedora 37 system. The source code for the benchmarks can be found in the fpbench repository on GitHub. We used copr to build the packages required to run the benchmarks (and their dependencies) with frame pointers. Then, using mkosi, we built one Fedora 37 container with frame pointers and one Fedora 37 container without frame pointers and ran the various benchmarks inside these containers.

The full benchmark results can be found in the readme of the fpbench repository.

Summarizing the results:

  • Compiling the kernel with GCC is 2.4% slower when GCC is built with frame pointers
  • Running Blender to render a frame is 2% slower on our specific testcase
  • openssl/botan/zstd do not seem to be affected significantly when built with frame pointers
  • The impact on CPython benchmarks can be anywhere from 1-10% depending on the specific benchmark
  • Redis benchmarks do not seem to be significantly impacted when built with frame pointers

Aside from the pyperformance benchmarks, the impact of building with frame pointers is limited on the benchmarks we performed. Our findings on the impact on the Python benchmarks when CPython is built with frame pointers are discussed in a comment on the FESCo issue (https://pagure.io/fesco/issue/2817#comment-826636).

Alternatives to frame pointers

There are a few alternative ways to unwind stacks instead of using the frame pointer:

  • DWARF data - The compiler can emit extra information that allows us to find the beginning of the frame without the frame pointer, which means we can walk the stack exactly as before. The problem is that we need to unwind the stack in kernelspace, and the kernel has no DWARF unwinder. Given that the kernel implemented its own format (ORC) instead of using DWARF, it's unlikely that we'll see a DWARF unwinder in the kernel any time soon. The perf tool allows you to use the DWARF data with --call-graph=dwarf, but this means it copies the full stack on every event and unwinds in user space, which has very high overhead. For more details on why DWARF unwinding is slow, please see https://hal.inria.fr/hal-02297690/document, which contains detailed information on the problems with DWARF unwinding.
  • ORC (undwarf) - The problems with unwinding in the kernel led to the creation of another format with the same purpose as DWARF, just much simpler. ORC can only be used to unwind kernel stack traces; it doesn't help us with userspace stacks. More information on ORC can be found here.
  • LBR - Newer Intel CPUs have a feature that records the source and target addresses of the last 16 (or 32, in newer CPUs) branches with no overhead. It can be configured to record only function calls and to be used as a stack, which means it can be used to get the stack trace. Sadly, you only get the last X calls and not the full stack trace, so the data can be very incomplete. On top of that, many Fedora users might still be using CPUs without LBR support, which means we wouldn't be able to assume working profilers on a Fedora system by default.
  • CTF Frame - An in-progress RFC will add support to binutils for attaching a new ctf_frame section to ELF binaries containing unwinding information. This new unwinding format claims to be more compact than eh_frame, faster to unwind, and simpler to implement an unwinder with. Should this format be accepted into binutils and should the kernel merge a CTF unwinder in the future, we could start building applications with CTF Frame unwind information, which could then be used in the kernel for unwinding userspace stacks instead of frame pointers. Unfortunately, CTF Frame is still a work in progress and won't be available for some time (if at all).
  • Shadow Stacks - Shadow stacks are a hardware feature found on new Intel and AMD CPUs that improves security by copying return address information to a separate read-only shadow stack, so that it's possible to verify that the return address on the original stack wasn't modified by, for example, a buffer overflow. This information could potentially be used to unwind the stack. However, it's very early days for shadow stacks: they're only supported on very new CPU models, there's no kernel support yet, and it's not completely certain that we'll be able to use this information for unwinding. As such, it's not a viable option for unwinding at this time, but it might become one at some point in the future.

To summarize, if we want complete stacks with reasonably low overhead (which we do, there's no other way to get accurate profiling data from running services), frame pointers are currently the best option.

Benefit to Fedora

Implementing this change will provide profiling tools with easy access to stacktraces of installed libraries and executables which will lead to more accurate profiling data in general. This in turn can be used to implement optimizations to core libraries and executables which will improve the overall performance of Fedora itself and the wider Linux ecosystem.

Various debugging tools can also make use of the frame pointer to access the current stack trace, although tools like gdb can already do this to some degree via embedded DWARF debugging information.

Scope

  • Proposal owners: Put up PRs to change the rpm macros so that packages are built with -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer by default: PR#230 on redhat-rpm-config adds conditional support for always including frame pointers (merged), PR#231 on redhat-rpm-config enables frame pointers by default (merged), and PR#237 on rust2rpm adds conditional support for always including frame pointers in Rust packages (merged).
  • Other developers: Review and merge the PR implementing the Change.
  • Policies and guidelines: N/A (not needed for this Change)
  • Trademark approval: N/A (not needed for this Change)
  • Alignment with Objectives: N/A

Upgrade/compatibility impact

This should not impact upgrades in any way.

How To Test

  1. Build the package with the updated rpm macros
  2. Profile the binary with perf record -g <binary>
  3. Inspect the perf data with perf report -g 'graph,0.5,caller'
  4. When expanding hot functions in the perf report, perf should show the full call graph of the hot function (at least for all functions that are part of the binary compiled with -fno-omit-frame-pointer)

User Experience

Fedora users will be more likely to have a streamlined experience when trying to debug or profile system executables and libraries. Tools such as perf will work out of the box instead of requiring users to provide extra options (e.g. --call-graph=dwarf/LBR) or to recompile all relevant packages with frame pointers.

Dependencies

The rpm macros for Fedora need to be adjusted to include -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer in the default C/C++ compilation flags, and exclusions need to be added for performance-sensitive packages that don't benefit from being compiled with frame pointers.

The current list of packages that need to be excluded from this proposal is:

  • Any kernel packages

Contingency Plan

  • Contingency mechanism: The new version can be released without every package being rebuilt with -fno-omit-frame-pointer and -mno-omit-leaf-frame-pointer. Profiling will only work perfectly once all packages have been rebuilt, but there will be no regression in behavior if not all packages have been rebuilt by the time of the release. If the Change is found to introduce unacceptable regressions, the PR implementing it can be reverted and the affected packages rebuilt. Individual packages can also opt out of the change with %undefine _include_frame_pointers.
  • Contingency deadline: Final freeze
  • Blocks release? No

Documentation

Release Notes

Packages are now compiled with frame pointers included by default. This will enable a variety of profiling and debugging tools to show more information out of the box.