statistics++: Making Fedora Project statistics accessible and automated
Ian Weller, Fedora Engineering, Red Hat, Inc.
- 1 Project overview
- 2 Target audience
- 3 Goals
- 4 Non-goals
- 5 Details / design overview
- 6 Requirements for release
- 7 Use cases
- 8 Relationship to other services
- 9 Reviewers
- 10 Schedule and milestones
- 11 Open issues
- 12 Resources for information
- 13 Responsible parties
Project overview
Fedora Infrastructure has had a limited foray into the field of statistics. The Statistics page on the Fedora Project Wiki contains some limited information about the number of HTTP requests made to various infrastructure applications and the number of wiki edits made per month.
The statistics app in the first version of Fedora Community attempted to improve on the Statistics page, but ultimately failed because of the complexity of adding new and relevant automated queries to the platform and the limited amount of information Fedora's application servers could access.
With the messaging infrastructure planned for Fedora's applications, a statistics application can be programmed to listen on the message bus and record activity to a database for later retrieval. This program will be called statistics++.
statistics++ consists of three components:
- datanommer, a server daemon that listens on the infrastructure message bus and records activity to a database
- datagrepper, an HTTP application that provides a RESTful web API for downloading data stored in the database based on a simple query syntax
- dataviewer, an HTTP application that produces automated data displays such as tables or charts
Target audience
datanommer is targeted toward infrastructure application developers who wish to make their data available for use in datagrepper and dataviewer.
datagrepper is targeted toward software developers who wish to generate their own queries for personal use or for inclusion in dataviewer.
dataviewer is targeted toward any user interested in statistics about the Fedora Project, such as Fedora users and developers, Red Hat executives, and journalists.
Goals
This project aims to solve the following problems:
- Data on the Statistics wiki page can only be generated and validated by those who have access to Fedora log servers.
- Data on the Statistics wiki page requires a human to generate the data each week.
- Data on the Statistics wiki page does not encompass all infrastructure applications.
- Data on the Statistics wiki page can be modified by anybody who can edit the wiki.
- To generate data for other infrastructure applications (such as FAS, Koji, and Bodhi), separate code has to be written for each application in order to download its data.
To solve these problems, statistics++ will have the following functionality:
- Open, read-only access to any anonymized data collected by infrastructure applications
- A standard RESTful API for downloading data
- Flexible schemas for storing and retrieving data from infrastructure applications
- Live updates of statistical data from infrastructure applications
- An interface for creating automated queries and representing data in tables or charts
Non-goals
This project should not attempt to solve the following problems:
- Live pushing of data to other applications (that is the purpose of the message bus)
Details / design overview
I decided to break statistics++ into three components to keep the design modular. This has several benefits:
- Each component can be versioned and updated separately, assuming there is no API breakage (there shouldn't be).
- Other projects can decide to use the project as a whole or its separate components (for example, using datanommer alone to avoid depending on the TG2 stack).
- I get to reuse the name datanommer (the name of a statistics project started about two years ago that did effectively the same thing, but was put on hold due to limited resources).
datanommer will be a system service written in Python. At a basic level, its purpose is to connect to a message bus, find messages that it is interested in, and store data from those messages into a database.
An init script or systemd service file (depending on the release) will be written for datanommer.
A configuration file defines the data stored for each application. These data definitions are called schemas. A schema represents a single application, but an application can have multiple schemas. Each schema consists of this configuration:
- The namespace to check messages against (with named groups)
- The fields that are stored in the database and their types (SQLAlchemy field types, most likely)
- (optional) A regular expression for reading data in from log files using the datanommer-logread utility
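The configuration format is still open; as a rough sketch (the topic pattern, field names, and keys here are all hypothetical), a schema for wiki edits might look like this:

```python
# Hypothetical datanommer schema for wiki edits; the real configuration
# format has not been decided yet.
wiki_edit_schema = {
    # Namespace to check messages against, with named groups
    "namespace": r"org\.fedoraproject\.wiki\.(?P<action>edit|upload)",
    # Fields stored in the database and their types (as SQLAlchemy type names)
    "fields": {
        "title": "Unicode",
        "user": "Unicode",
        "timestamp": "DateTime",
    },
    # Optional: regular expression for importing old data via datanommer-logread
    "logread": r"^(?P<timestamp>\S+) (?P<user>\S+) edited (?P<title>.+)$",
}
```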
When enabled, datanommer will check each message on the bus against its list of namespaces. If it matches any that datanommer knows, it will extract the data and store it in the database.
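A minimal sketch of that matching step, assuming messages arrive as a topic string plus a body dictionary (the schema registry and topic names are invented for the example):

```python
import re

# Hypothetical registry: compiled namespace pattern -> body fields to store
SCHEMAS = {
    re.compile(r"org\.fedoraproject\.wiki\.(?P<action>edit)"): ["title", "user"],
}

def handle_message(topic, body):
    """Check a bus message against known namespaces and extract its data."""
    for pattern, fields in SCHEMAS.items():
        match = pattern.match(topic)
        if match:
            # Named groups from the topic plus the configured body fields
            data = dict(match.groupdict())
            data.update({f: body[f] for f in fields if f in body})
            return data  # in datanommer this would be stored in the database
    return None  # not a message we care about

record = handle_message("org.fedoraproject.wiki.edit",
                        {"title": "Statistics", "user": "ianweller"})
```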
datagrepper is a web frontend written in the TurboGears 2 framework, to be run through Apache httpd via WSGI. Its purpose is to accept queries to the statistics database and return the requested information.
Depending on implementation, datagrepper may or may not need access to datanommer's configuration file. If the database is SQL-backed (e.g. PostgreSQL), datagrepper can determine each application's schema from the table layouts. If a NoSQL database is used, datanommer could store information about the schemas in the database itself. As an alternative to both of these approaches, datagrepper can simply be given access to datanommer's configuration file.
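The table-layout approach is straightforward with any database that supports introspection. A self-contained sketch using the standard library's sqlite3 as a stand-in for the real backend (in production this would more likely be SQLAlchemy reflection against PostgreSQL):

```python
import sqlite3

# sqlite3 stands in for datanommer's real database so this sketch runs anywhere
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wiki_edits (title TEXT, user TEXT, timestamp TEXT)")

# Determine a schema's fields from the table layout alone, without
# needing access to datanommer's configuration file
fields = [row[1] for row in conn.execute("PRAGMA table_info(wiki_edits)")]
```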
The index page of datagrepper shows the available schemas that data can be downloaded from and what fields can be fetched or searched. By default it presents this output in HTML, but the output can also be requested as JSON.
The /query URI accepts a query string as either a GET or POST request. Query string variable names match those of the database fields. Django-like field lookup arguments will be accepted (for example, sending the query string date__lte=2011-12-31 will return rows in the table where the "date" field is less than or equal to December 31, 2011). /query will also accept a __format argument, which can be either json to return data in JSON or csv to return data in CSV.
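Translating those field lookups into SQL is mostly a matter of mapping suffixes to comparison operators. A sketch (the helper name is hypothetical):

```python
# Map Django-style lookup suffixes to SQL comparison operators
LOOKUPS = {"exact": "=", "lt": "<", "lte": "<=", "gt": ">", "gte": ">="}

def parse_lookup(key, value):
    """Turn a query string pair like date__lte=2011-12-31 into a
    parameterized SQL clause: ('date <= ?', '2011-12-31')."""
    field, _, lookup = key.partition("__")
    op = LOOKUPS[lookup or "exact"]  # a bare field name means equality
    return "%s %s ?" % (field, op), value

clause, param = parse_lookup("date__lte", "2011-12-31")
```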
datagrepper client API
A Python client API will be available for datagrepper which will automate some of the intricacies of downloading data via HTTP, using gzip compression, continuing queries and converting the JSON output to a Python object.
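The client API does not exist yet; as an illustration of what it would automate, building a /query request with JSON output and gzip negotiation might look like this (the base URL and function name are invented for the example):

```python
import urllib.parse

def build_query(base_url, **lookups):
    """Build the URL and headers for a /query request returning gzipped JSON."""
    params = dict(lookups, __format="json")
    url = base_url + "?" + urllib.parse.urlencode(params)
    # Ask the server for a gzip-compressed response; the client library
    # would decompress it and convert the JSON into Python objects
    headers = {"Accept-Encoding": "gzip"}
    return url, headers

# Hypothetical deployment URL
url, headers = build_query("https://apps.fedoraproject.org/datagrepper/query",
                           date__lte="2011-12-31")
```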
dataviewer is a web frontend written in the TurboGears 2 framework, to be run through Apache httpd via WSGI. Its purpose is to make queries to datagrepper using the client API and display data in various formats (such as tables or charts).
The specific plan for defining what displays are available and how they get data is currently being discussed in the #fedora-apps IRC channel.
Requirements for release
- The following applications must send activity or log messages over the message bus:
- Git (pkgs.fedoraproject.org and git.fedorahosted.org)
- The datanommer service must run, connect to a message bus, listen for activity, parse activity messages and store data into a database for all of the above services.
- Data from before datanommer began running must be gathered from log files or application databases and placed in the database.
- The datagrepper service must run and respond to basic queries. The data schema for each infrastructure application and the query syntax must be documented, and examples in that documentation must function. The service must be capable of providing responses in JSON and compressing a response when requested.
- Queries on Statistics using the above application data must be automated and displayed in dataviewer.
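The JSON-and-compression requirement for datagrepper above is small enough to sketch with the standard library (the function and response shape are illustrative, not a fixed API):

```python
import gzip
import json

def render_response(rows, accept_gzip=False):
    """Serialize query results as JSON, gzip-compressing when the client asks."""
    body = json.dumps({"count": len(rows), "rows": rows}).encode("utf-8")
    if accept_gzip:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}

body, headers = render_response([{"user": "ianweller"}], accept_gzip=True)
```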
Use cases
Within six months, statistics++ should be able to handle the following use cases:
- Adam wants information on wiki edits made in 2011. He doesn't have experience with any programming languages, but if he could import data into a spreadsheet program he can use the data that way.
- Brenda needs information on how often different architectures were requested from MirrorManager in order to provide information to FESCo on the debate of demoting an architecture to secondary.
- Cathy is a journalist and wants to determine the year-by-year growth rate of the Fedora user base.
- David is interested in seeing how many packages were available at each release's end-of-life and whether the rate of change is increasing or decreasing.
- Ethan of the Websites team wants to know whether a certain page is accessed regularly enough to justify continuing to maintain it.
- Fred wants to determine how many packages required to remain in testing for a certain period of time actually receive positive or negative karma in Bodhi.
Relationship to other services
statistics++ is indirectly related to every other infrastructure application, as we wish to include every infrastructure application in statistics++ eventually.
Reviewers
Subject to change; names are basically placeholders.
- Infrastructure reviewer: Kevin Fenzi
- Code reviewer: Toshio Kuratomi
- Message bus reviewer: Ralph Bean
- Frontend usability reviewer: Máirín Duffy
Schedule and milestones
Milestones aren't likely to change, but dates are subject to wild change depending on the status of messaging support in infrastructure.
- 2012-04-13: Message bus in place
- 2012-04-20: datanommer done
- 2012-04-27: datanommer packaged for EPEL and in production infrastructure (or staging if during a change freeze)
- 2012-05-21: datagrepper done
- 2012-05-28: datagrepper Python client library done
For statistics++ to run on Fedora Infrastructure, a messaging bus must be in place, and all components of statistics++ must be packaged for EPEL.
For the inclusion of each infrastructure application in statistics++, that application must send messages over the messaging bus, and data generated from that application prior to inclusion must be imported into the database.
Open issues
- How does the datanommer configuration file define data types? (Currently thinking SQLAlchemy types will work best)
- How should messages sent while datanommer is not listening be handled?
- Should datanommer check for duplicate messages (for example, reading in log files during a time period when messages were received)? If so, should this be configured per-schema?
- How should datagrepper handle excessively large queries? Some large queries may take longer than a normal HTTP timeout to generate. Some ideas:
- Have a response that means "your query is generating, here's a code you can check to see if you can download it." Advantages: server can process query when it has idle time; downloads have less HTTP request overhead. Disadvantages: user has to wait for data; server has to retain data for some time period so it can be downloaded.
- MediaWiki style "query-continue" messages that give query string variables to be changed to access the next set of results
- Should we use RRD as a secondary database for faster queries and rendering?
- Should the dataviewer component be a separate web application or should it be part of the Fedora Community web framework?
Resources for information
- Current Statistics wiki page: http://fedoraproject.org/wiki/Statistics
- Why Fedora thinks metrics are important and some discussion on how to count users: http://fedoraproject.org/wiki/Infrastructure/Metrics
- Updates system metrics: https://admin.fedoraproject.org/updates/metrics/?release=F16
- Fedora Messaging SIG: http://fedoraproject.org/wiki/Messaging_SIG