- 1 Draft Use Cases
- 2 Questions / Discussion Points
- 2.1 Items under "Provide the best platform for secure application deployment"
- 2.2 Difference between server and devops?
- 2.3 Difference between Server and Cloud products vs compute nodes?
- 2.4 Balance between OS packages and more rapidly moving frameworks in collections...
- 2.5 Visibility of packages
- 2.6 Overlap between other working groups
- 2.7 Headless only or support GUI?
- 2.8 Applications vs. Platform
- 2.9 Versioning
- 2.10 Configuration Tools
- 2.11 CIM / OpenLMI
- 2.12 Fedora Compatibility With Popular Config Mgmt Tools
- 2.13 Minimize BS Maintenance
Draft Use Cases
- The user must be able to easily deploy and configure any supported Fedora Server role. (Examples may include: FreeIPA Domain Controller, BIND DNS, DHCP, Database server, iSCSI target, File/Storage server.)
- The user must be able to query, monitor, configure and manage a Fedora Server remotely using stable and consistent public interfaces.
- The user must be able to simply enroll the Fedora Server into a FreeIPA or Active Directory domain.
- Users must be able to control and contain the resources consumed by services running on the system.
- Users must be able to rapidly re-deploy services in accordance with their DevOps practices using Fedora Server.
- ASK SOFTWARE COLLECTIONS WG The user must be able to easily deploy and configure applications on supported high-value frameworks. (Example frameworks: JBoss, Ruby on Rails, Django, TurboGears, Node.js, PHP.)
- ASK CLOUD WG Provide a platform for acting as a node in an OpenStack rack.
- ASK CLOUD WG Users must be able to create, manipulate and terminate large numbers of containers using a stable and consistent interface.
- Users must be able to use Fedora Server in fully headless operation. We commit to supporting only those GUI applications that can work with forwarded X (or the equivalent on other windowing systems).
The set of GUI software that should be installable and usable via trusted X-forwarding on Fedora Server will be defined by the Server working group. 
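Two of the use cases above can be sketched as shell commands with current tooling: realmd handles domain enrollment and systemd's cgroup integration handles resource containment. This is a minimal sketch, not a committed interface; the domain, user, service name, and limits are placeholders, and the `MemoryMax` property assumes a reasonably recent systemd.

```shell
#!/bin/sh
# Enroll the server in a FreeIPA or Active Directory domain.
# realmd configures SSSD, Kerberos, and PAM in a single step.
realm join --user=admin ipa.example.com

# Contain the resources a service may consume (systemd cgroups).
systemctl set-property httpd.service MemoryMax=1G CPUQuota=50%

# Verify the applied limits.
systemctl show httpd.service -p MemoryMax -p CPUQuota
```

`set-property` persists the limits across reboots; add `--runtime` to make them temporary.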
Questions / Discussion Points
Items under "Provide the best platform for secure application deployment"
Are cgroups / containers meant here?
Difference between server and devops?
Difference between Server and Cloud products vs compute nodes?
- "Traditional servers [like pets] have names, personalities, and are lovingly cared for. When they get sick, you diagnose the problem and carefully nurse them back to health, sometimes at great expense. Like pets. On the other hand, cattle are numbered, and thought of as basically identical, and if they get sick, you put them down and get another one." --mattdm
- "This was my thinking, I thought the cloud WG is about building lean images that do not have much in the way of wizards, installers, and hand-holding software, but are focused toward rapid deployment, either as lean hosts or as computing/elastic guests, all controlled via some configuration engine like puppet etc... basically single task machines. I see the server WG more about building heavier, long term, multipurpose servers (be it on bare metal or as guest)." --simo
- "I think it would make sense for the Base Design and Environments/Stacks WGs to define how to install, build and package the runtimes and the applications that use them (including the conflicting/multiple versions issue), without focusing on a specific use (e.g. treating web applications, GUI applications and CLI applications equally), and for the Server WG to handle deploying the web applications within the web server and managing deployed web applications." --mitr
- "I would say single-purpose servers/VMs/containers all fall under the server WG as well, but I think we should be looking at this from an application standpoint, as in which of those services/daemons fall under the server WG; that means we could be delivering up to about 500-550 applications or "products" that can be deployed on bare metal, in VMs, or in containers." --johannbg
- "I think it would make most sense for Cloud and Server to "share applications", i.e. the same application package can be deployed either within a single-purpose Cloud image (automatically managed for horizontal scaling), or as a single instance within a Server (one of many applications running on this particular Server). Given that, I think the Server WG should indeed choose a very limited set of "applications" / "services" to include within the Server product and to make management of this limited set of services really good." --mitr
Balance between OS packages and more rapidly moving frameworks in collections...
Visibility of packages
- "Would it be radical to suggest that "packages" should be invisible to an admin that doesn't want to see them? "Enable the DNS server", configure what it is serving, *product's magic here*, the DNS server runs." --mitr
Overlap between other working groups
- "Basically, where I stand, any application that runs as a daemon/service belongs within the server WG: that is, an application (or set of applications) that runs in the background waiting to be used, or carrying out essential tasks, on a physical machine, in a VM, or in a container. In other words, it's a systemd/upstart/sysv unit/service or a container that can be started and enabled with the systemctl and service commands, is not part of the desktop/graphical target (such as gdm.service, which makes it part of the workstation group), and is not part of the base/coreOS (like device mapper etc.)." --johannbg
- "I tentatively agree, although I guess there may be desktop-oriented daemons we may not care about; say, a desktop-oriented backup daemon that is single-user or otherwise ill-suited for a multi-user server. Also, should we care much about graphical UIs? Or should we freeze early and maintain whatever version was considered stable in the Desktop WG at the start of the cycle? And who is going to maintain it if we do so and happen to have a longer-term cycle than the desktop?" --simo
- "I agree that there will be situations where an administrator will want to install a GUI on a server (even if it's just because they have one machine in a rack that they use to fix things up when things go sideways)." --sgallagh
Headless only or support GUI?
- "We expect headless operation to be the norm, and if graphical interaction is needed, it will usually be done remotely via another system, possibly Fedora Workstation. However, we should also keep in mind the high likelihood that people are going to want to administer these systems from Windows, Macintosh or possibly tablet devices. With this in mind, I'd like to suggest that we focus our efforts around web-based applications and scriptable shell commands. (So things like Katello/Foreman, OpenLMI, FreeIPA WebUI and similar). These should all be consumable from any graphical client." --sgallagh
- "Not only that, there are some applications that, although they are 'server apps', require a GUI even if just to install them. A stupid example, but IIRC things like game-server install programs would run only with a graphical UI. I also remember proprietary packages whose server component would need X to install. Yes, stupid things, but you can't discount them. I am also thinking that installing OpenOffice in order to do server-side rendering (doc conversions and so on) is going to drag in at least the X libraries." --simo
- "We see this also as a dependency requirement for various 3rd-party software. For example, Oracle, some print-server software, ArcGIS, and several FlexLM-integrated apps all want at least some portion of a GUI to run. Not all of them handle a forwarded display properly. Some link directly to Firefox to display their documentation." --Evolution
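Where a GUI tool is unavoidable, the forwarded-X model discussed above looks roughly like this in practice. The hostname and the virt-manager example are illustrative only:

```shell
# On the server: confirm sshd allows X11 forwarding, then reload it.
grep -i '^X11Forwarding' /etc/ssh/sshd_config
systemctl reload sshd.service

# From the admin workstation: run a single GUI tool on the otherwise
# headless server, displayed locally. -Y requests trusted forwarding,
# matching the "trusted X-forwarding" wording in the use cases.
ssh -Y admin@server.example.com virt-manager
```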
Applications vs. Platform
- "Our transitioning process needs to be able to cover 500+ applications (or in other words be as generic as possible), so it obviously cannot depend on the existence of a web frontend; otherwise we would be excluding 99% of those server applications." --johannbg
- "... but the Server shouldn't ship 500 "services" as an integrated part of the product (are there even that many services to provide?). Regarding the "competing products", I'd go as far as to say that the Server should give the users a "good LDAP server" without exposing which upstream project is internally providing the functionality - even possibly switching the upstream projects on an upgrade if one of them started to fall behind." --mitr
- "I'd like for us to be focusing on a *platform* and a set of standard, visible APIs and working with the Base Design and Environments/Stacks groups to have service packages treated similarly to "apps" in other operating systems. We ourselves don't necessarily need to do all of the porting to accommodate this (though we will probably want to select a group of high-value servers that we use as examples, such as Apache HTTPD and BIND)." --sgallagh
- "I generally agree - though I'd focus more on getting the "high-value servers" working well than on calling ourselves a "platform" - it's far too easy to make a platform that doesn't "work" without noticing when there are no major users of the platform." --mitr
- "Also, I'd like for us to try to manage this separation so that we can allow our consumers to pick and choose which server they actually want, rather than necessarily the freshest upstream bits. Those fresh copies *must* be available (and probably the default if not otherwise specified), but it would be REALLY nice to be able to hang onto MyServer 2.4 after 3.0 comes out if your other applications aren't ready for it." --sgallagh
- "For the "high-value" servers (which provide external functionality, not an application API), I strongly disagree. We should be managing the transitions (no functionality dropped, all configuration migrated) so that the user will never want to use the older version. For APIs / runtimes, yes, we have no choice but to provide the older versions when the upstreams make ABI-incompatible changes." --mitr
- "The latest generation, redhat-config-*, were, IIRC, written over a comparatively short period of time (ISTR they all happened within a year!), and covered basically all of the major server functions at the time (networking, httpd, bind, mail, ...). This has been done in the past, and this could be done again, if we really tried." --mitr
- "Yes; a good UI is a part of the deployment/reliability story. The "desktop application" for managing a server is really out of scope for the Workstation WG as proposed. The management interface and the underlying API would be much better implemented by the same group." --mitr
- "I wasn't either and I equally don't know, but I can tell you why they weren't useful to me at the time:
- no handling of multiple machines (or even remote connections to a single machine)
- little cohesive ux design (this did get iteratively better)
- some of them didn't work very well (*cough* samba)
- generally, you had to commit to using them and never touching the config files by hand
- I was operating in an environment with Red Hat Linux, Debian, Other-Linux-Flavor-Of-The-Day, BSDI, NetBSD, Solaris, SunOS, IRIX, Tru64, and, um, VMS. All the various single-vendor GUIs just brought more pain.
- I would certainly add "no API" to the list of complaints _now_, but I'm not ashamed to admit that that wasn't on the list of sysadmin concerns a decade ago." --mattdm
- "They were abandoned, more or less:
- because the hard core admins didn't use them anyway; they would either edit the configs by hand (old days) or just push out their configs with puppet/chef/ansible/cfengine/salt (these days)
- because the less hard-core admins had enough other issues that this wasn't going to win them over
- because we were unable to create a larger upstream community that allowed us to drive development forward
- because chasing all the options that could be configured for something like this is actually somewhat significant work
- because there wasn't any encapsulation for common automation - it was just separate 'click here; do this' sort of tools" --notting
- "I think we should, over time, move towards a "G"UI (whether local or web is an implementation detail in this) for one-time use, and an actual API used by an actual, current-era, programming language, for automated use. The CLI will obviously stay, both because many users are comfortable with it, and because we can't replace it during this decade." --mitr
- "I agree with this completely, and it's one of the principal drivers of the OpenLMI project (full disclosure: I'm heavily involved in this effort). " --sgallagh
CIM / OpenLMI
Fedora Compatibility With Popular Config Mgmt Tools
- "Perhaps making leading [config management] tools work well with Fedora (at this point mostly systemd & journald) would be a better goal. I for one know that systemd and puppet aren't the best of friends." --jdorff
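As an illustration of the friction jdorff mentions: a config-management service provider ultimately has to translate generic service actions into systemctl invocations. A hypothetical sketch of that mapping (the function name and the unit-suffix handling are my own, not any tool's actual code):

```shell
# Translate a generic "service <name> <action>" request into the
# systemctl command a config-management provider would need to run.
to_systemctl() {
    name=$1
    action=$2
    case $action in
        start|stop|restart|reload)
            echo "systemctl $action ${name}.service" ;;
        status)
            # systemd reports running state via is-active, not "status"
            echo "systemctl is-active ${name}.service" ;;
        enable|disable)
            echo "systemctl $action ${name}.service" ;;
        *)
            echo "unsupported action: $action" >&2
            return 1 ;;
    esac
}

to_systemctl httpd restart   # prints: systemctl restart httpd.service
```

Real providers also have to cope with units that are not plain `.service` files (sockets, timers, templated units), which is where much of the puppet/systemd friction comes from.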
Minimize BS Maintenance
- "I think these two are really important. I'd like the Server to be as close to zero "bullshit maintenance" as possible, automating everything that is automatable. E.g.:
- The administrator doesn't have to deal with N different configuration file formats, and N different semantics of how /usr/*, /etc/*, /usr/*.d interact.
- The administrator isn't required to type directives into a file and to deal with the fallout of a typo in the directive name.
- Upgrades within a release always work without human intervention. (=> after we get some experience, they could even be automatic.)
- Upgrades between releases always work without human intervention unless the feature is completely removed (and in that case the user will be told before the upgrade starts). E.g. configuration and file formats are transparently and automatically updated.
- The administrator is never required to set up the same option in two places. (E.g. joining an IPA domain should automatically configure all services to use IPA. Perhaps even have "the services" (see below) preinstalled and allow the user to enable them, instead of install+enable as currently.)
- No alerts by a system that works correctly." --mitr
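The "upgrades within a release always work without human intervention" item maps naturally onto what dnf-automatic does on current Fedora. A minimal sketch of its configuration, assuming the dnf-automatic package is installed (options shown are a subset):

```ini
# /etc/dnf/automatic.conf (excerpt)
[commands]
upgrade_type = default    ; all available updates, not just security
apply_updates = yes       ; download and install, not merely notify

[emitters]
emit_via = motd           ; report results at login rather than alerting
```

The schedule is driven by a systemd timer, e.g. `systemctl enable --now dnf-automatic-install.timer`.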