
For the purposes of design and redesign of configuration services that affect various aspects of network-related configuration, I (User:Pavlix) decided to write down a reference architecture that we could later adhere to. I hope other people will step in and improve the description as well as the architecture itself.

A reference architecture for configuration services affecting the kernel and long-running daemons

Let's start with the goals. That's the most important part of the whole thing, and also the first thing to be disputed and, if necessary, corrected.

Goals of a configuration service

  • Push configuration to kernel and/or long-running services
  • Optionally pull existing configuration from kernel and/or long-running services (e.g. after restart)
  • Optionally start/stop/reconfigure helper services
  • Report the configuration and notify about configuration changes
  • Retrieve configuration from disk and (optionally) store configuration to disk
  • Optionally support switching between (global or selective) configuration profiles

This was just a basic checklist. The goals may have to be tweaked for a particular purpose.
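
To make the checklist a bit more concrete, here is a minimal sketch of an interface covering these goals. It is written in Python and every name in it is invented for illustration; a real service would refine and split these operations:

  from abc import ABC, abstractmethod

  class ConfigurationService(ABC):
      """Hypothetical interface mirroring the goals above."""

      @abstractmethod
      def push(self):
          """Push configuration to the kernel and/or long-running services."""

      @abstractmethod
      def pull(self):
          """Pull existing configuration back, e.g. after a restart (optional)."""

      @abstractmethod
      def manage_helper(self, name, action):
          """Start/stop/reconfigure a helper service (optional)."""

      @abstractmethod
      def subscribe(self, callback):
          """Report the configuration and notify about changes."""

      @abstractmethod
      def load(self):
          """Retrieve configuration from disk."""

      @abstractmethod
      def save(self):
          """Store configuration to disk (optional)."""

      @abstractmethod
      def switch_profile(self, profile):
          """Switch between (global or selective) configuration profiles (optional)."""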

Configuration library and object model

There are many approaches to configuration services, but I'm going to describe a library-centric one, which differs from many of the services we are using now. The basic idea behind the architecture I'm proposing is to have a library that defines and exposes the configuration object tree. This library then communicates with backends that handle the interaction with the kernel and long-running daemons, as well as with the filesystem, etc.

Therefore I feel obliged to provide the reasons why a library should be the center of the whole thing:

  • A single library API can be used by applications that want to access the configuration directly as well as by various bindings and tools.
  • If the configuration daemon is present, it uses the library API to access the operating system backends and the library API can be used to access the daemon.
  • In many situations a configuration daemon is not needed, and then the tools and applications use the same library API to talk to the operating system as they would to talk to the daemon.
  • The library could (optionally) transition between sitting on top of the operating system and on top of the daemon.
  • The library's configuration tree is private to the tool that's using it until it requests committing the changes. That way the library is transaction-safe as long as the backends are.

A diagram of the most basic usage would look like this:

  • OPERATING SYSTEM
  • SYSCALL / RPC (process boundary)
  • OS CONNECTOR (library backend, operating system frontend)
  • LIBRARY
  • TOOL

The tool could be a command-line utility, a short-running GUI tool, or it could even be a long-running service (for example a virtualization manager that needs to keep track of the configuration).
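
To illustrate the stack in the diagram, here is a runnable toy sketch of a tool using the library on top of an OS connector. The classes are invented stand-ins; a real OS connector would use netlink, ioctls or files instead of a dict:

  class OsConnector:
      """Stub OS CONNECTOR: stands in for a real backend that would talk
      to the kernel; the dict fakes kernel state for this sketch."""
      def __init__(self):
          self._os_state = {'ipv4.forwarding': False}
      def pull(self, tree):
          tree.update(self._os_state)
      def push(self, tree):
          self._os_state.update(tree)

  class ConfigLibrary:
      """Minimal LIBRARY sketch: a private configuration tree plus a set
      of managed backends."""
      def __init__(self, backends):
          self.backends = backends
          self.tree = {}                    # private until push() commits
      def pull(self):
          for backend in self.backends:
              backend.pull(self.tree)
          return self.tree
      def push(self):
          for backend in self.backends:
              backend.push(self.tree)

  # TOOL -> LIBRARY -> OS CONNECTOR, as in the diagram above.
  library = ConfigLibrary(backends=[OsConnector()])
  tree = library.pull()
  tree['ipv4.forwarding'] = True            # a local change only...
  library.push()                            # ...committed here as a transaction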

Configuration daemon

Some cases require a configuration daemon, for various reasons, for example:

  • Share the notion of the configuration between client instances.
  • Make centralized decisions regarding the configuration.
  • Provide a well-defined API to other system components.
  • Provide an API for user applications as well (if applicable).

The daemon as well as the daemon's frontend would be based on the library. While the daemon would be a consumer of the library, the frontend to the daemon would serve as a backend to the library, talking to the daemon via RPC.

A diagram from the core OS to the configuration tool would look like this:

  • OPERATING SYSTEM
  • SYSCALL / RPC (process boundary)
  • OS CONNECTOR (library backend, operating system frontend)
  • LIBRARY
  • DAEMON
  • RPC (process boundary)
  • DAEMON CONNECTOR (library backend, daemon frontend)
  • LIBRARY
  • TOOL

Note the two instances of the library in the diagram. One sits on top of the operating system and is part of the daemon process; the other communicates with the daemon and is part of the tool process.

The library could even have some automagical logic to keep track of the daemon and switch to talking directly to the operating system when the daemon is dead. Ordinary tools could then leave the backend choice and management to the library, while the daemon and specific tools would explicitly choose the set of backends.
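
A minimal sketch of such fallback logic, reusing the hypothetical ConfigLibrary and OsConnector from the earlier sketch and assuming a D-Bus-based daemon with an invented bus name:

  import dbus

  class DaemonConnector:
      """Stub DAEMON CONNECTOR: would translate library operations into
      RPC calls to the daemon (details omitted in this sketch)."""
      def __init__(self, proxy):
          self.proxy = proxy
      def pull(self, tree): ...
      def push(self, tree): ...

  def connect_library():
      """Prefer the daemon when it is running, fall back to talking to
      the OS directly otherwise. Bus name and object path are invented."""
      try:
          bus = dbus.SystemBus()
          proxy = bus.get_object('org.example.ConfigService',
                                 '/org/example/ConfigService')
          return ConfigLibrary(backends=[DaemonConnector(proxy)])
      except dbus.DBusException:
          return ConfigLibrary(backends=[OsConnector()])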

RPC methods

In the Linux world, I often hear about the greatness of D-Bus and its capability to connect various components together. I personally believe this is a little bit exaggerated. If you look at the diagram in the previous section, there are two places where RPC is mentioned.

The first RPC layer is between the library backend and the operating system, by which we mean the kernel or the long-running daemons to be managed. In that case the RPC method is determined by the service, and we usually just use a library that comes with the service to handle the communication. Therefore, unless we're also contributing to the kernel and/or the specialized daemons, we don't care about the RPC method being used.

The second RPC layer is between the daemon connector and the daemon. Here the RPC method can be chosen arbitrarily, because the daemon connector would be distributed with the daemon. If we expect clients to talk to the daemon by other means than through the library, we could choose the RPC method accordingly. We can of course offer several RPC methods at once.

The following list of RPC methods is by no means complete; it is here just to provide information about the notable ones.

File-based RPC

Based on FUSE or similar tools, in the spirit of the kernel's /sys and /proc/sys. It's great for people using basic shell filesystem tools, but it doesn't seem to be ideal for transactional use.
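
For a taste of the file-based approach, this is how the kernel's own /proc/sys is used; note that every write is an independent operation, which is exactly what makes transactions hard here:

  from pathlib import Path

  # Equivalent to `sysctl net.ipv4.ip_forward` and
  # `sysctl -w net.ipv4.ip_forward=1`.
  param = Path('/proc/sys/net/ipv4/ip_forward')
  print(param.read_text().strip())    # read the current value
  param.write_text('1\n')             # set a new value (requires root)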

D-Bus (with a daemon)

Used by many other desktop and non-desktop tools. It provides some basic means to expose (and limit) the API to non-root users. Together with systemd/policykit/consolekit it allows for more fine-tuning.
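
A small, runnable sketch of a daemon exposing a configuration API over D-Bus using dbus-python; the bus name, object path and interface are invented, and the session bus is used so the sketch runs without system-bus policy files:

  import dbus
  import dbus.service
  from dbus.mainloop.glib import DBusGMainLoop
  from gi.repository import GLib

  class ConfigService(dbus.service.Object):
      """Hypothetical configuration daemon; a real one would sit on the
      system bus and ship a D-Bus policy file."""

      def __init__(self, bus):
          name = dbus.service.BusName('org.example.ConfigService', bus)
          super().__init__(name, '/org/example/ConfigService')
          self._tree = {'default_profile': 'public'}

      @dbus.service.method('org.example.ConfigService',
                           in_signature='s', out_signature='s')
      def Get(self, key):
          return self._tree[key]

      @dbus.service.method('org.example.ConfigService',
                           in_signature='ss', out_signature='')
      def Set(self, key, value):
          self._tree[key] = value
          self.Changed(key)              # notify clients about the change

      @dbus.service.signal('org.example.ConfigService', signature='s')
      def Changed(self, key):
          pass                           # the decorator emits the signal

  DBusGMainLoop(set_as_default=True)
  service = ConfigService(dbus.SessionBus())
  GLib.MainLoop().run()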

Socket-based or private D-Bus

Except for people used to using D-Bus APIs directly, there's no real difference between using the D-Bus format, another standardized format, or a custom format to exchange data between processes. Any non-root usage would have to be handled internally.
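
For comparison, a socket-based variant with a custom format can be as simple as newline-delimited JSON over a Unix socket; the socket path and the protocol here are invented for illustration:

  import json
  import socket

  sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  sock.connect('/run/example-configd.sock')       # invented path
  sock.sendall(json.dumps({'method': 'get',
                           'key': 'default_profile'}).encode() + b'\n')
  reply = json.loads(sock.makefile().readline())  # one JSON object per line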

Configuration backends

The backends would use the library for access to the data structures, while the library would manage the backends and tell them what to do. The following actions would form the API of the backend:

  • Initialize: Set up any backend resources and bind the backend to the library's configuration tree instance.
  • Finalize: Clean up and return all resources.
  • Pull: Retrieve the configuration via this backend from the operating system, from the filesystem, etc. This action would mark modified objects as dirty.
  • Push: Commit the configuration via this backend to the operating system, to a filesystem, etc. This action would optionally use the dirty flag to identify modifications and reset it when the modifications are done.
  • Register: Register for asynchronous change notifications, e.g. from the kernel.
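
In Python, this backend API could be sketched as an abstract class like the one below; the method names simply mirror the actions above, everything else is invented:

  from abc import ABC, abstractmethod

  class Backend(ABC):
      """Hypothetical backend interface mirroring the actions above."""

      @abstractmethod
      def initialize(self, tree):
          """Set up backend resources and bind to the library's tree."""

      @abstractmethod
      def finalize(self):
          """Clean up and return all resources."""

      @abstractmethod
      def pull(self):
          """Retrieve configuration from the OS, filesystem, etc.,
          marking modified objects as dirty."""

      @abstractmethod
      def push(self):
          """Commit configuration to the OS, filesystem, etc., optionally
          using and then resetting the dirty flag."""

      def register(self, callback):
          """Register for asynchronous change notifications, e.g. from
          the kernel. Optional, hence not abstract."""
          raise NotImplementedError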


Not all backends would implement all of those methods, and not all backends would affect the whole configuration tree. Therefore, very often multiple backends would have to be used with the library. And sometimes multiple differently configured instances of the same backend might be useful.

While we should be modest at the beginning, the architecture could possibly be extended so that the backend could perform actions that would later result in changes in the configuration tree.

Relation to existing tools

FirewallD

  • OPERATING SYSTEM: Linux kernel.
  • SYSCALL / RPC: Netlink.
  • LIBRARY and OS CONNECTOR: Internal to the daemon.
  • DAEMON: FirewallD.
  • RPC: D-Bus.
  • LIBRARY and DAEMON CONNECTOR: ?
  • TOOL: Various tools including firewall-cmd and firewall-config.

The two library instances are different, one of them being internal to the daemon.

The kernel backend

FirewallD (according to Thomas) only manages the rules that it added itself. Therefore the push action would have to respect that and the pull and register actions would not be used at all.

AFAIK, when FirewallD is started or stopped, it resets the whole firewall, so the backend would need a special reset action. Alternatively, the reset could be performed optionally (according to the supplied configuration) when the backend instance is initialized and finalized.
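
Building on the hypothetical Backend interface from the previous section, such a kernel backend could look roughly like this; the reset options are an assumption based on the behaviour described above, and the rule handling is omitted:

  class FirewalldKernelBackend(Backend):
      """Hypothetical backend that only manages its own rules."""

      def __init__(self, reset_on_init=False, reset_on_finalize=False):
          self.reset_on_init = reset_on_init
          self.reset_on_finalize = reset_on_finalize
          self.own_rules = set()          # only rules this backend added

      def initialize(self, tree):
          self.tree = tree
          if self.reset_on_init:
              self._reset_firewall()

      def finalize(self):
          if self.reset_on_finalize:
              self._reset_firewall()

      def pull(self):
          pass                            # unused: foreign rules are ignored

      def push(self):
          # Would add/remove only the rules recorded in self.own_rules,
          # leaving foreign rules alone; details omitted.
          pass

      def _reset_firewall(self):
          """Flush the whole firewall (invented helper, details omitted)."""
          pass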

The configuration tree

  • configuration
    • default profile
    • profiles (list)
      • profile
        • storage (where the profile is saved)
        • ...
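
The same tree written down as a plain Python structure, with an illustrative storage path:

  configuration = {
      'default_profile': 'public',
      'profiles': [
          {
              'name': 'public',
              'storage': '/etc/firewalld/zones/public.xml',  # where it is saved
              # ... further per-profile settings
          },
      ],
  }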

LNST

libvirt

NetworkManager

  • OPERATING SYSTEM: This is mostly the kernel.
  • SYSCALL / RPC: Netlink and various ioctls to talk to the kernel.
  • LIBRARY and OS CONNECTOR: NetworkManager's nm-platform internal module with its Linux implementation.
  • DAEMON: NetworkManager daemon.
  • RPC: D-Bus (private socket and daemon-based).
  • LIBRARY and DAEMON CONNECTOR: NetworkManager-glib library.
  • TOOL: Various tools including nmcli and GUI frontends.

The main difference here is that each library instance is different, and one of them is monopolized by the NetworkManager process. Also, the DAEMON acts as a profile manager that applies connection profiles and policy decisions to devices, which makes it span different levels of abstraction.
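
As a concrete taste of the TOOL -> LIBRARY -> DAEMON path, here is a small client using libnm (the successor of NetworkManager-glib) through GObject introspection; the client object is the tool-side library instance and talks to the daemon over D-Bus behind the scenes:

  import gi
  gi.require_version('NM', '1.0')
  from gi.repository import NM

  client = NM.Client.new(None)            # connects to the daemon via D-Bus
  for device in client.get_devices():
      print(device.get_iface(), device.get_state())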