From Fedora Project Wiki

Revision as of 13:05, 25 September 2013 by Pavlix

For the purposes of designing and redesigning the configuration services that affect various aspects of network-related configuration, I (User:Pavlix) decided to write down a reference architecture that we could later adhere to. I hope other people will step in and improve the description as well as the architecture itself.

A reference architecture for configuration services affecting kernel and long-running daemons

Goals of a configuration service

  • Provide an API to get/set configuration and optionally notify about configuration changes
  • Push configuration to kernel and/or long-running services
  • Optionally pull existing configuration from kernel and/or long-running services (e.g. after restart)
  • Optionally start/stop/reconfigure helper services
  • Optionally support persistent configuration
  • Optionally support switching between (global or selective) configuration profiles
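
The goals above can be sketched as a tiny API. This is a hypothetical illustration only; the class and method names are my own, not from any existing library.

```python
# Minimal sketch of a configuration service implied by the goals above:
# get/set access plus optional change notification. All names are
# illustrative assumptions, not an existing API.

class ConfigService:
    def __init__(self):
        self._config = {}      # runtime configuration store
        self._listeners = []   # change-notification callbacks

    def get(self, key):
        return self._config.get(key)

    def set(self, key, value):
        self._config[key] = value
        for callback in self._listeners:
            callback(key, value)   # notify about configuration changes

    def subscribe(self, callback):
        self._listeners.append(callback)
```

A tool would call set() to push configuration, while another long-running service could subscribe() to track changes.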

Configuration API

The most flexible way to provide a configuration API is to provide a library. Any other service can then link to the library and use the provided functionality. When the configuration service is a separate process, the library can be used to access it via IPC, or the IPC mechanism can be used directly from other services.

Configuration library

The main reason for a configuration library is that any system service can use it to access the runtime configuration. While the functionality of the library can be backed by a separate daemon process, it's entirely optional.


  • Single library API can be used whether using a direct backend or talking to a configuration daemon
  • Possibility to switch between a direct backend and a daemon-based backend at runtime
  • Possibility to reuse the library as a backend for the daemon
  • Configuration daemon is optional (but may provide additional features)
  • Transaction safety if the backend supports it (configuration data is private until committed)
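
The transaction-safety point can be illustrated with a private working copy that becomes visible only on commit. This is a generic sketch of the pattern, not the library's actual mechanism.

```python
# Sketch of transaction safety: edits stay private to the transaction
# object until commit(), when they are published to the shared tree.
# The shared tree is faked as a plain dict.
import copy

class Transaction:
    def __init__(self, shared):
        self._shared = shared
        self._private = copy.deepcopy(shared)  # private working copy

    def set(self, key, value):
        self._private[key] = value             # invisible to other readers

    def commit(self):
        self._shared.clear()
        self._shared.update(self._private)     # publish the whole edit at once
```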

Usage scheme (direct access to operating system):

  • SYSCALL / RPC (process boundary)
  • OS CONNECTOR (library backend, operating system frontend)
  • TOOL

The tool could be a command-line utility, a short-running GUI tool, or it could even be a long-running service (for example a virtualization manager that needs to keep track of the configuration).

Configuration daemon

There are a couple of tasks that the library doesn't provide by itself unless the individual instances communicate via some central point:

  • Keep a shared notion of the configuration
  • Perform centralized decisions
  • Granular authorization policy
  • Expose functionality via an IPC method (usually a means, not a goal)

A typical way to support the set of features specified above is to run a separate network configuration daemon that can be accessed through the library.

Optionally, the library can be used not only as a frontend to the daemon but also as its backend, simplifying things a lot.

Scheme of a daemon with library as a frontend and backend:

  • SYSCALL / RPC (process boundary)
  • OS CONNECTOR (library backend, operating system frontend)
  • RPC (process boundary)
  • DAEMON CONNECTOR (library backend, daemon frontend)
  • TOOL

Note the two instances of the library in the diagram. One sits on top of the operating system and is part of the daemon process and the other communicates to the daemon and is part of the tool process.
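
The "single library API over interchangeable backends" idea can be sketched as follows. Both connector classes here are stand-ins: the real OS connector would issue syscalls and the real daemon connector would speak RPC, but the frontend code stays identical.

```python
# Sketch of one library API working over either a direct OS connector or
# a daemon connector, selectable at runtime. Both connectors are stubbed
# with a dict; the names are illustrative assumptions.

class DirectConnector:
    """Stand-in for the OS connector (would talk to the kernel)."""
    def __init__(self):
        self._state = {}
    def read(self, key):
        return self._state.get(key)
    def write(self, key, value):
        self._state[key] = value

class DaemonConnector:
    """Stand-in for the daemon connector (would forward calls over RPC)."""
    def __init__(self):
        self._state = {}
    def read(self, key):
        return self._state.get(key)
    def write(self, key, value):
        self._state[key] = value

class ConfigLibrary:
    def __init__(self, backend):
        self._backend = backend          # direct or daemon-based, same API
    def get(self, key):
        return self._backend.read(key)
    def set(self, key, value):
        self._backend.write(key, value)
```

A tool constructs ConfigLibrary(DirectConnector()) or ConfigLibrary(DaemonConnector()) and the rest of its code does not change.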

RPC methods

In the Linux world, I often hear about the greatness of D-Bus for its capability to connect various components together. I personally believe this is a little bit exaggerated. If you look at the diagram in the previous section, there are two places where RPC is mentioned.

The first RPC layer is between the library backend and the operating system by which we mean the kernel or long-running daemons to be managed. In that case the RPC method is determined by the service and we usually just use a library coming with the service to handle the communication. Therefore, unless we're also contributing to the kernel and/or specialized daemons, we don't care about the RPC method being used.

The second RPC layer is between the daemon connector and the daemon. Here the RPC method can be chosen arbitrarily, because the daemon connector would be distributed with the daemon. If we expect ways to talk to the daemon other than through the library, we could choose the RPC method accordingly. We can of course offer several RPC methods at once.

The following list of RPC methods is by no means complete; it is here just to provide information about notable RPC methods.

File-based RPC

Based on FUSE or similar tools, this resembles the kernel's /sys and /proc/sys. It's great for people using basic shell filesystem tools, but it doesn't seem to be ideal for transactional use.

D-Bus (with a daemon)

Used by many other desktop or non-desktop tools. It provides some basic means to expose (and limit) the API to the non-root users. Together with systemd/policykit/consolekit it provides more fine-tuning.

Socket-based or private D-Bus

Except for people used to using D-Bus APIs directly, there's no real difference between using the D-Bus format, another standardized format, or a custom format to exchange data between processes. Any non-root usage would have to be handled internally.

Configuration backends

The backends would use the library for access to the data structures, while the library would manage the backends and tell them what to do. The following actions would form the API of the backend:

  • Initialize: Set up any backend resources and bind the backend to the library's configuration tree instance.
  • Finalize: Clean up and return all resources.
  • Pull: Retrieve the configuration via this backend from the operating system, from the filesystem, etc. This action would mark modified objects as dirty.
  • Push: Commit the configuration via this backend to the operating system, to a filesystem, etc. This action would optionally use the dirty flag to identify modifications and reset it when the modifications are done.
  • Register: Register for asynchronous change notifications, i.e. from the kernel.
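
The backend API above can be sketched as a small class with the dirty-flag behavior described for pull and push. The data structures are assumptions; a real library would use its own configuration tree rather than a dict.

```python
# Sketch of the backend API: initialize, finalize, pull, push and register,
# with a dirty set used by push. All names mirror the list above; the dict
# representation of the tree is an assumption of this sketch.

class Backend:
    def initialize(self, tree):
        self.tree = tree                # bind to the library's config tree
        self.dirty = set()

    def finalize(self):
        self.tree = None
        self.dirty = set()              # return all resources

    def pull(self, source):
        # Retrieve configuration from the OS/filesystem (faked as a dict)
        # and mark modified objects as dirty.
        for key, value in source.items():
            if self.tree.get(key) != value:
                self.tree[key] = value
                self.dirty.add(key)

    def push(self, target):
        # Commit only the dirty objects, then reset the flag.
        for key in self.dirty:
            target[key] = self.tree[key]
        self.dirty = set()

    def register(self, callback):
        self.on_change = callback       # e.g. kernel change notifications
```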

Not all backends would implement all of those methods, and not all backends would affect the whole configuration tree. Therefore, very often multiple backends would have to be used with the library. And sometimes, multiple differently configured instances of the same backend might be useful.

While we should be modest at the beginning, the architecture could possibly be extended so that the backend could perform actions that would later result in changes in the configuration tree.

Implementation details

Programming language:

Relation to existing tools

The architecture mapped to FirewallD:
  • OPERATING SYSTEM: Linux kernel.
  • SYSCALL / RPC: Netlink.
  • LIBRARY and OS CONNECTOR: Internal to the daemon.
  • DAEMON: FirewallD.
  • RPC: D-Bus.
  • TOOL: Various tools including firewall-cmd and firewall-config.

The two library instances are different, one of them being internal to the daemon.

Benefits of the library approach (if used with FirewallD):

  • Frontend for virtualization tools supporting both FirewallD and non-FirewallD systems
  • Frontend for local and remote configuration management
  • Backend for legacy OS configuration files or even scripts

The kernel backend

FirewallD (according to Thomas) only manages the rules that it added itself. Therefore the push action would have to respect that, and the pull and register actions would not be used at all.

As far as I know, when FirewallD is started or stopped it resets the whole firewall, though, so the backend would need a special reset action. Alternatively, the reset could be performed optionally (according to the supplied configuration) when the backend instance is initialized and finalized.
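
The constraint that the backend only touches its own rules, plus the explicit reset action, can be modeled like this. The "kernel" is faked as a plain list of rule strings; this is a behavioral sketch, not FirewallD's implementation.

```python
# Hypothetical kernel backend that removes or replaces only the rules it
# added itself, leaving foreign rules untouched, plus an explicit reset.

class KernelBackend:
    def __init__(self, kernel_rules):
        self.kernel = kernel_rules   # shared rule table (list of strings)
        self.own = set()             # rules this backend added itself

    def push(self, rules):
        # Remove only our own rules that are no longer wanted ...
        for rule in list(self.own):
            if rule not in rules:
                self.kernel.remove(rule)
                self.own.discard(rule)
        # ... and add new ones, never touching foreign rules.
        for rule in rules:
            if rule not in self.own:
                self.kernel.append(rule)
                self.own.add(rule)

    def reset(self):
        # Special reset action: drops everything, including foreign rules.
        self.kernel.clear()
        self.own.clear()
```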

The configuration tree

  • configuration
    • default profile
    • profiles (list)
      • profile
        • storage (where the profile is saved)
        • ...
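
One possible in-memory shape for the tree outlined above is a nested structure keyed as in the outline. The profile contents and the storage path below are placeholder assumptions.

```python
# The configuration tree from the outline above as a nested Python
# structure. Keys mirror the outline; the values are illustrative.

configuration = {
    "default_profile": "home",
    "profiles": [
        {
            "name": "home",
            "storage": "/etc/example/profiles/home.conf",  # where the profile is saved
            # ... further profile settings would go here
        },
    ],
}

def find_profile(config, name):
    """Look up a profile by name in the tree."""
    for profile in config["profiles"]:
        if profile["name"] == name:
            return profile
    return None
```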

The architecture mapped to NetworkManager:

  • OPERATING SYSTEM: This is mostly the kernel.
  • SYSCALL / RPC: Netlink and various ioctls to talk to the kernel.
  • LIBRARY and OS CONNECTOR: NetworkManager's nm-platform internal module with its linux implementation.
  • DAEMON: NetworkManager daemon.
  • RPC: D-Bus (private socket and daemon-based)
  • LIBRARY and DAEMON CONNECTOR: NetworkManager-glib library.
  • TOOL: Various tools including nmcli and GUI frontends.

The main difference here is that each library is different, with one of them monopolized by the NetworkManager process. Also, the DAEMON acts as a profile manager that applies connection profiles and policy decisions to devices, which makes it span different levels of abstraction.