Description
To cope with the projected data rates at the European Spallation Source (ESS), the data acquisition system needs to be scalable. ESS will deploy a cluster of message brokers running Apache Kafka as the common data backbone for all instruments. Neutron data, as well as any ancillary metadata (chopper information, sample environment parameters and so on), will be time stamped with the appropriate accuracy, transcribed into messages using the Google FlatBuffers serialisation library, and sent to Kafka. From there the information is available to subscribers for file writing, visualisation or online processing.
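As a rough sketch of the producing side (not ESS's actual code), a data source might serialise a pulse of time-stamped events and publish it to a topic. The broker addresses, topic name and byte layout below are hypothetical, and a placeholder function stands in for the FlatBuffers-generated serialisation code; it is written against the confluent-kafka Python client.

    import struct
    import time
    from confluent_kafka import Producer

    def serialise_events(pulse_time_ns, events):
        # Stand-in for FlatBuffers serialisation: the real system would
        # use code generated by flatc from a schema describing neutron
        # event data rather than this hand-packed byte layout.
        payload = struct.pack("<Q", pulse_time_ns)
        for time_of_flight, detector_id in events:
            payload += struct.pack("<II", time_of_flight, detector_id)
        return payload

    producer = Producer({"bootstrap.servers": "broker1:9092,broker2:9092"})

    pulse_time_ns = time.time_ns()
    buf = serialise_events(pulse_time_ns, [(12500, 7), (13020, 42)])

    # Carrying the pulse time as the Kafka message timestamp (in ms) lets
    # downstream subscribers align data from different sources.
    producer.produce("detector_events", value=buf,
                     timestamp=pulse_time_ns // 1_000_000)
    producer.flush()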
The design offers some distinctive features. For example, since Kafka can be configured to keep a redundant copy of the data safe for a configurable retention period (limited by the available storage at the broker nodes), file writing can be requested to begin at a point in time in the past. Data processed live can also be sent into Kafka, which helps decouple the scalable data processing facility from the consumer, i.e. the user at the instrument.
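Starting a file-writing job in the past rests on standard Kafka consumer APIs: the broker can map a wall-clock timestamp to an offset in each partition, as long as the data is still within the retention window. A minimal sketch of this lookup, again with hypothetical topic and group names and using the confluent-kafka Python client:

    from datetime import datetime, timedelta, timezone
    from confluent_kafka import Consumer, TopicPartition

    consumer = Consumer({
        "bootstrap.servers": "broker1:9092",
        "group.id": "file-writer",
        "enable.auto.commit": False,
    })

    topic = "detector_events"
    start = datetime.now(timezone.utc) - timedelta(minutes=10)
    start_ms = int(start.timestamp() * 1000)

    # Ask the brokers which offset corresponds to the requested time in
    # each partition; offsets_for_times expects the timestamp in the
    # TopicPartition offset field. This succeeds only while the data is
    # still inside the configured retention period (e.g. retention.ms).
    metadata = consumer.list_topics(topic).topics[topic]
    partitions = [TopicPartition(topic, p, start_ms)
                  for p in metadata.partitions]
    offsets = consumer.offsets_for_times(partitions, timeout=10.0)
    consumer.assign(offsets)

    # From here, poll() delivers messages starting ten minutes in the
    # past, so the file can include data recorded before the request.
    msg = consumer.poll(timeout=1.0)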
The presentation will cover the overall system architecture, the data sources and their time stamping, and results from tests at scale. The ESS software suite around Kafka will also be discussed. It includes some custom-built applications, but largely consists of third-party open source code, including the experiment control programme NICOS and tools for diagnostics, commissioning and visualisation.