For the purposes of designing and redesigning configuration services that affect various aspects of network-related configuration, I (User:Pavlix) decided to write down a reference architecture that we could later adhere to. I hope other people will step in and improve the description as well as the architecture itself.

A reference architecture for configuration services affecting the kernel and long-running daemons

Let's start with the goals. That's the most important part of the whole thing, and also the first thing to be disputed and, if necessary, corrected.

Goals of a configuration service

  • Push configuration to kernel and/or long-running services
  • Pull existing configuration from kernel and/or long-running services (e.g. after restart)
  • Optionally start/stop/reconfigure helper services
  • Report the configuration and notify about configuration changes
  • Retrieve configuration from disk and (optionally) store configuration to disk
  • Optionally support switching between (global or selective) configuration profiles

This was just a basic checklist. The goals may have to be tweaked for a particular purpose.

Configuration library and object model

There are many approaches to configuration services, but I'm going to describe a library-centric one, which is different from many of the services we are using now. The basic idea behind the architecture I'm proposing is to have a library that defines and exposes the configuration object tree. This library then communicates with backends that handle communication with the kernel and long-running daemons, as well as with the filesystem, etc.
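
To make the idea more concrete, here is a minimal C sketch of what the configuration object tree exposed by the library might look like. All names here (cfg_node, cfg_tree, cfg_child) are hypothetical, not an existing API:

    #include <stdbool.h>
    #include <string.h>

    /* One node in the configuration tree, e.g. an interface or an address. */
    struct cfg_node {
        char *name;              /* node name, e.g. "eth0" */
        char *value;             /* leaf value, NULL for inner nodes */
        bool dirty;              /* set when modified since the last push */
        struct cfg_node *child;  /* first child */
        struct cfg_node *next;   /* next sibling */
    };

    /* The whole tree, owned by a single library instance. */
    struct cfg_tree {
        struct cfg_node *root;
    };

    /* Find a direct child of 'parent' by name, or NULL if absent. */
    struct cfg_node *cfg_child(struct cfg_node *parent, const char *name)
    {
        struct cfg_node *n = parent ? parent->child : NULL;

        for (; n; n = n->next)
            if (strcmp(n->name, name) == 0)
                return n;
        return NULL;
    }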

Therefore I feel obliged to provide the reasons why a library should be the center of the whole thing:

  • A single library API can be used by applications that want to access the configuration directly as well as by various bindings and tools.
  • If a configuration daemon is present, it uses the library API to access the operating system backends, and the same library API can in turn be used to access the daemon.
  • In many situations a configuration daemon is not needed; tools and applications then use the same library API to talk to the operating system as they would use to talk to the daemon.
  • The library could (optionally) transition between sitting on top of the operating system and sitting on top of the daemon.
  • The library's configuration tree is private to the tool that's using it until the tool requests committing the changes. That way the library is transaction-safe as long as the backends are (see the sketch after this list).
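
To illustrate the last point, here is a small sketch of how a transaction-safe commit could iterate over the backends. The types and the failure semantics are assumptions, and the backend is simplified down to a single push operation:

    struct cfg_tree;                 /* the tool's private tree */

    struct cfg_backend {
        /* push returns 0 on success, a negative value on failure */
        int (*push)(struct cfg_backend *self, struct cfg_tree *tree);
        struct cfg_backend *next;
    };

    /* Commit the private tree through every backend; stop on the first
     * error so the caller can decide whether to retry or discard its
     * local changes. Nothing is visible to other processes before this
     * call. */
    int cfg_commit(struct cfg_tree *tree, struct cfg_backend *backends)
    {
        struct cfg_backend *b;

        for (b = backends; b; b = b->next) {
            int err = b->push(b, tree);
            if (err < 0)
                return err;
        }
        return 0;
    }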

Configuration daemon

In cases that require a configuration daemon for various reasons, both the daemon and the daemon's frontend would be based on the library. While the daemon would be a consumer of the library, the frontend to the daemon would serve as a backend to the library, talking to the daemon via RPC.

A diagram of the stack from the core OS to the configuration tool would look like this:

  • OPERATING SYSTEM / LONG-RUNNING DAEMON
  • SYSCALL / RPC (process boundary)
  • OS CONNECTOR (library backend)
  • LIBRARY
  • DAEMON
  • RPC (process boundary)
  • DAEMON CONNECTOR (library backend, daemon frontend)
  • LIBRARY
  • TOOL

Note the two instances of the library in the diagram. One sits on top of the operating system and is part of the daemon process; the other communicates with the daemon and is part of the tool process.
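
As a rough, compilable illustration of that wiring, here is a stub program with both instances side by side; in reality they would live in two different processes, and every name here is made up (error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>

    struct cfg_backend_inst {
        const char *kind;
        struct cfg_backend_inst *next;
    };

    struct cfg_library {
        struct cfg_backend_inst *backends;
    };

    static struct cfg_library *cfg_library_new(void)
    {
        return calloc(1, sizeof(struct cfg_library));
    }

    static void cfg_library_add_backend(struct cfg_library *lib,
                                        struct cfg_backend_inst *b)
    {
        b->next = lib->backends;
        lib->backends = b;
    }

    static struct cfg_backend_inst *backend_stub(const char *kind)
    {
        struct cfg_backend_inst *b = calloc(1, sizeof(*b));

        b->kind = kind;
        return b;
    }

    int main(void)
    {
        /* Instance inside the daemon process: sits on OS connectors. */
        struct cfg_library *daemon_lib = cfg_library_new();
        cfg_library_add_backend(daemon_lib, backend_stub("os-connector"));

        /* Instance inside a tool process: its only backend is the daemon
         * connector, which forwards all operations over RPC. */
        struct cfg_library *tool_lib = cfg_library_new();
        cfg_library_add_backend(tool_lib, backend_stub("daemon-connector"));

        printf("daemon library backend: %s\n", daemon_lib->backends->kind);
        printf("tool library backend:   %s\n", tool_lib->backends->kind);
        return 0;
    }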

The library could even have some automagical logic to keep track of the daemon and switch to talking directly to the operating system when the daemon is dead. Ordinary tools could then leave the backend choice and management to the library, while the daemon and specific tools would explicitly choose the set of backends.
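
One way such logic could detect a dead daemon is a plain connect() probe on the daemon's UNIX socket; the socket path below is made up for the example:

    #include <stdbool.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Returns true if something is listening on the given UNIX socket,
     * e.g. "/run/cfgd.sock"; the library would pick the daemon connector
     * in that case and the direct OS connectors otherwise. */
    static bool daemon_alive(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        bool ok;
        int fd;

        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return false;
        ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

A real implementation would probably also want to watch the daemon's lifecycle (e.g. via the bus) rather than probe a socket on every call.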

Configuration backends

The backends would use the library for access to the data structures, while the library would manage the backends and tell them what to do. The following actions would form the API of a backend; a C sketch follows the list:

  • Initialize: Set up any backend resources and bind the backend to the library's configuration tree instance.
  • Pull: Retrieve the configuration via this backend from the operating system, from the filesystem, etc. This action would mark modified objects as dirty.
  • Push: Commit the configuration via this backend to the operating system, to a filesystem, etc. This action would optionally use the dirty flag to identify modifications and reset it when the modifications are done.
  • Register: Register for asynchronous change notifications, e.g. from the kernel.
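
Expressed as a C structure of function pointers, the backend API could look roughly like this. The names are hypothetical, and backends would leave unsupported operations NULL:

    struct cfg_tree;      /* the library's configuration tree */
    struct cfg_backend;

    /* Invoked by the library when a backend reports an asynchronous
     * change, e.g. a netlink event from the kernel. */
    typedef void (*cfg_notify_fn)(struct cfg_tree *tree, void *user_data);

    struct cfg_backend_ops {
        /* Initialize: bind the backend to a tree, set up resources. */
        int (*init)(struct cfg_backend *self, struct cfg_tree *tree);
        /* Pull: read state into the tree, marking changes dirty. */
        int (*pull)(struct cfg_backend *self, struct cfg_tree *tree);
        /* Push: write dirty objects out, then clear their dirty flags. */
        int (*push)(struct cfg_backend *self, struct cfg_tree *tree);
        /* Register: subscribe to asynchronous change notifications. */
        int (*register_notify)(struct cfg_backend *self,
                               cfg_notify_fn fn, void *user_data);
    };

    struct cfg_backend {
        const struct cfg_backend_ops *ops;
        void *priv;   /* backend-specific state, e.g. a netlink socket */
    };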

Not all backends would implement all of those methods, and not all backends would affect the whole configuration tree. Therefore, very often multiple backends would have to be used with the library. And sometimes, multiple differently configured instances of the same backend might be useful.
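
For example, assuming the cfg_backend sketch above and a hypothetical file backend (both paths are made up), two differently configured instances could coexist, each bound to its own directory:

    /* Assumes struct cfg_backend / cfg_backend_ops from the sketch
     * above; file_backend_ops would be defined by the file backend. */
    extern const struct cfg_backend_ops file_backend_ops;

    struct file_backend_priv {
        const char *path;   /* directory this instance reads and writes */
    };

    static struct file_backend_priv system_priv  = { "/etc/netconf" };
    static struct file_backend_priv profile_priv = { "/etc/netconf/profiles/home" };

    static struct cfg_backend file_system  = { &file_backend_ops, &system_priv };
    static struct cfg_backend file_profile = { &file_backend_ops, &profile_priv };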

While we should be modest at the beginning, the architecture could possibly be extended so that the backend could perform actions that would later result in changes in the configuration tree.