The Experimental Physics and Industrial Control System, EPICS, is a set of Open-Source software tools, libraries and applications developed collaboratively and used worldwide to create distributed soft real-time control systems for scientific instruments such as particle accelerators, telescopes, experiment beam lines and other large scientific experiments. The EPICS Collaboration Meetings provide an opportunity for developers and managers from the various EPICS sites to come together and discuss their work and inform future plans. Attendees can see what is being done at other laboratories, and review plans for new tools or enhancements to existing tools to maximize their effectiveness to the whole community and avoid duplication of effort.
The Fall 2024 EPICS Collaboration Meeting and related workshops were held Monday, September 16 through Friday, September 20, 2024 at Oak Ridge National Laboratory. The Collaboration Meeting itself occupied three days, Tuesday through Thursday, September 17-19, 2024. Workgroup meetings and training sessions were held on Monday and Friday, September 16 and 20, 2024.
Thank you for joining us this fall in beautiful Oak Ridge, Tennessee!
Please refer to the "Timetable" for links to the presented material. Some video recordings can be found at https://controlssoftware.sns.ornl.gov/training/2024_EPICS/
Find your way onto the ORNL campus, locate SNS, get your badge
Demo of creating simple IOCs in EPICS 7 with both Channel Access and PV Access. You can follow along using a training-VM.
Using the same training-VM, this session demonstrates the workflow for connecting an IOC to field devices using OPC UA
Moderated discussion of using containers (docker, kubernetes, ...) for EPICS IOCs and related services. See instructions for preparing docker and related tools on your laptop.
Moderated discussion of using containers (docker, kubernetes, ...) for EPICS IOCs and related services. See instructions for preparing docker and related tools on your laptop.
Find your way onto the ORNL campus, locate SNS, get your badge
EPICS was first used at ORNL to build the control system for the SNS accelerator. The SNS machine was built in collaboration with six partner labs who also delivered EPICS controls for their segments. EPICS has been used to operate the machine for nearly 20 years, and the original custom instrument control system has been replaced with EPICS. The recently completed PPU project extended the EPICS machine control system, as will the STS project. HFIR instrument beam lines are also being upgraded to EPICS. A new fusion project, MPEX, will continue the EPICS tradition at ORNL. While EPICS is growing at ORNL with each new project, we have embarked on modernization efforts for sustainability and are preparing for new cybersecurity challenges.
An overview of the upgrades the APS Accelerator Controls Group has introduced to its EPICS-based control system environment.
The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory, a pioneering third-generation light source, has been operational since 1992. The ongoing ALS-U upgrade project aims to significantly enhance the brightness and coherent flux of X-rays with the addition of a new Accumulator Ring and a new Storage Ring (SR). While the SR RF system remains unchanged for ALS-U, its control system, consisting primarily of Horner PLCs and EPICS, is undergoing a comprehensive upgrade. The upgrade plan addresses the replacement of the legacy relay-based cavity water system and the end-of-life Horner PLC hardware, and it is being implemented in multiple phases. This presentation will detail the upgrade plan and provide an update on its current status.
The Relativistic Heavy Ion Collider (RHIC) and its associated injectors—Linac, AGS, and Booster—have used an Accelerator Device Object (ADO)-based control system since their inception. For more than two decades, the ADO infrastructure has provided a reliable and efficient controls solution for RHIC, the Linac, the AGS, and the Booster. The Electron-Ion Collider (EIC) plans to use EPICS for its control software infrastructure. With the source and injectors using the ADO infrastructure and the EIC using EPICS, a bridge between them is needed. We are currently exploring various solutions for this bridging and will present the progress, challenges, and current status of the work.
This year, NSLS-II will celebrate ten years since first light, and today operates 29 beamlines, with two more under construction and many more planned. An overview of the current usage and organization of EPICS and related software will be presented. This overview includes brief descriptions of projects underway or planned, including IOC server virtualization, Ansible-based IOC deployment, improved testing environments, automated CS-Studio UI creation with Ansible and phoebusgen, and efforts to standardize fly-scanning methods.
The Linac Coherent Light Source (LCLS) facility at the SLAC National Accelerator Laboratory recently completed the installation of an X-ray free-electron laser driven by a new superconducting (SC) accelerator producing electrons with an energy of 4 GeV. Now in commissioning, the control system behind the new LCLS-II accelerator is a combination of commercially available off-the-shelf parts, like PLCs, and custom chassis that are tuned to the 1 MHz accelerator duty cycle. The software stack is built upon the EPICS toolkit. With the addition of the LCLS-II SC accelerator, the control system has expanded to a total of 10 million PVs. SLAC has recently begun construction of the first upgrade to the SC LCLS-II, the LCLS-II-HE project, which will see the electron energy increase from 4 GeV to 8 GeV and the maximum beam power increase from 120 kW to 240 kW. The EPICS control system must also scale as new network, archiving, and 1 MHz beam-rate devices are added. The status and plans of the LCLS facility accelerator controls upgrades required for LCLS-II-HE will be discussed.
We recently completed the accelerator control system for the APS-U project. In this presentation, we will discuss the latest status and highlight some achievements.
We report on the progress in transitioning the ISIS accelerators from our current control system to a PVAccess-based implementation of EPICS. Phoebus and PVAccess PVs are now our preferred solutions for new user interfaces and device interfaces, respectively. A full stack of Phoebus tools and other support tools is in use. We describe our pipeline for deploying conventional IOCs and our development of Python-based IOCs. The names of PVs mimicking the old control system do not comply with our PV naming convention; this has prompted the development of tools for automated renaming of PVs across screens, the EPICS Archiver Appliance, and other services. Difficulties and challenges that have arisen during the transition are discussed.
Sigray is a private company that develops and manufactures standalone, room-sized laboratory X-ray systems with performance comparable to synchrotron beamlines. Sigray produces instruments for micro X-ray fluorescence, computed tomography, and X-ray absorption spectroscopy. These systems were developed using EPICS, possibly uniquely for a commercial company. Sigray has leveraged EPICS for low-level integration and rapid development. Each system has just two computers, with EPICS and synApps deployed on a Linux platform for motion control, detector support, scanning, and general monitoring. Engineering screens use MEDM; the customer-facing screens use a bespoke Windows interface with Windows-based high-level applications.
Using EPICS in an industrial environment comes with aspects that differ from running in a laboratory. Commercially, managing an installed software base on computers located all over the world, built from varying releases of EPICS, synApps, and areaDetector, many without internet access, is challenging. In addition, the need to add APIs, remote execution, and asynchronous control interfaces for higher-level machine functions like interlocks, alignment, service, batch processing, and semi-standard integration required development to manage instrument state, faults, and process tracking in parallel with EPICS controls. While some of these concerns may be unique to industrial users, many of them overlap well with current developments within the EPICS community. We look forward to feeding Sigray's experience with these issues back into the community and seeing where the community goes in the future.
An update from the EPICS Core Developers Group describing recent additions to the core C++ software stack.
retools is an extension library for EPICS IOCs, developed at FRIB, that adds regular expression capabilities to the IOC shell. Regular expressions can be used to automate or simplify operation of the IOC shell, the writing of st.cmd scripts, and archiving/autosave management. Recent additions to the library include regex-capable analogs of the dbpf and dbgf EPICS Base functions. This presentation will combine a tutorial on the use of retools capabilities with the latest features and examples of use.
ESS uses a fork of the PSI-developed require module to load EPICS modules at runtime and to manage its IOCs. This has worked very well for managing the control system, but development and maintenance are a challenge due to the complexity of the additions to the build system. Recently we created a proof-of-concept version of require that strips away most of the added complexity and uses only the underlying EPICS build system. In this presentation we will provide a brief history of require and how it currently works, along with a description of our proof of concept, in the hope of finding greater interest within the broader EPICS community.
SNS has recently deployed IPMI monitoring for all MicroTCA-based IOCs using the EPICS device support module epicsipmi. While IPMI is an established and documented standard, vendors' implementations of IPMI vary and are not always easy to support. epicsipmi had to be significantly modified to support this task, with some of the changes being specific to a particular MicroTCA vendor. This talk will discuss challenges observed at SNS when implementing IPMI monitoring of health-related sensors (e.g., fans, voltages, temperatures) and controlling the power state of FRUs.
Historically, the SNS has had a patchwork of different ways to ease rote processes and perform tasks that EPICS IOCs are not naturally capable of doing. Many programming languages have been employed to bridge the gap, from Perl to Java to VBScript, but maintenance and reliability were challenging due to the diversity of approaches. In an effort to build capability and automation at a reasonable scale while standardizing the code used, Python has proven a practical tool. Basic scripting is the natural entry into the world of Python, but it can quickly evolve into more and more complex applications. The focus will be the number of different ways Python has been employed, and how others can emulate our ongoing success.
This talk will provide a comprehensive introduction to PCASpy, a powerful Python framework that simplifies the development of EPICS drivers. We will explore the fundamental concepts of PCASpy, the process of using it to create EPICS drivers in Python, including the creation and management of Process Variables (PVs), and demonstrate its practical application through real-world examples at the Berkeley Center for Structural Biology (BCSB). Additionally, we will delve into the integration of EPICS IOCs within the BCSB Beamline Control System, which manages eight beamlines at the Advanced Light Source (ALS). By integrating BOS (Beamline Operating System), much like Bluesky, with EPICS, we enhance services such as locking, synchronization, automation, process control, and security. By the end of this session, attendees will gain the knowledge and confidence to start developing and integrating their own EPICS drivers using PCASpy, and effectively leverage Python's flexibility to enhance beamline control systems.
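For readers new to the framework, the basic PCASpy pattern looks roughly like the sketch below; the prefix, PV names, and database entries are illustrative, not actual BCSB PVs.

```python
# Minimal PCASpy soft IOC sketch; prefix and PVs are hypothetical.
from pcaspy import SimpleServer, Driver

prefix = 'DEMO:'
pvdb = {
    'TEMP':     {'prec': 2, 'unit': 'C'},
    'SETPOINT': {'prec': 2, 'unit': 'C', 'value': 25.0},
}

class DemoDriver(Driver):
    def write(self, reason, value):
        # Accept client writes and update the parameter cache
        self.setParam(reason, value)
        return True

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(prefix, pvdb)
    driver = DemoDriver()
    while True:
        server.process(0.1)  # serve Channel Access requests
```

Once running, clients can caget DEMO:TEMP or caput DEMO:SETPOINT 30 as with any other EPICS IOC.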
With the increasing use of p4p IOCs at the ISIS Neutron and Muon Source, there is a need for RecCaster integration in Python to allow, for example, the use of ChannelFinder. The current implementation of the EPICS RecCaster tool is written in C++/C and depends on the EPICS Base/module libraries. Recsync-rs, a RecCaster library in Rust, and PyRecCaster, its Python bindings with cross-platform support, are presented.
The Electron-Ion Collider (EIC) is a significant upgrade to the accelerator complex at Brookhaven National Lab. To serve the control system requirements of the project, a new hardware system named the Common Hardware Platform (CHP) is being developed. The CHP will be a 2U ‘pizza box’ style rackmount chassis containing a motherboard with two slots for pluggable multifunction daughterboards. Different types of daughterboards are being developed for applications such as beam instrumentation, low-level RF, machine protection, power supply monitoring, timing distribution, and others. The motherboard will be the primary interface between the daughterboards and the EPICS control system network. The interface design between CHP systems and the EPICS control system is presented.
Brookhaven National Lab’s Electron-Ion Collider (EIC) project plans to transition from the Accelerator Device Object (ADO) framework, developed for Relativistic Heavy Ion Collider (RHIC) applications, to the EPICS framework for new equipment control and monitoring. As part of performance testing for the EIC, the LLRF group received an arc detector chassis from Jefferson Lab that was equipped with EPICS functionality. After successfully installing the chassis and initializing an IOC, it became apparent that certain critical values, such as snapshots of the raw waveforms from the arc detector heads, were only accessible via a UDP protocol not supported by the EPICS implementation. To address this, software was developed in Python to periodically request the waveform data of each detector head and integrate it into an EPICS IOC using the caproto library. This IOC was then converted for use with the existing RHIC controls protocol using the EPICS-ADO bridge developed by Andrei Sukonov at BNL, allowing for seamless monitoring and evaluation.
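The pattern described, polling a device over a protocol EPICS does not natively speak and republishing the result as a PV, might look roughly like the following caproto sketch; the PV name, waveform length, and UDP stub are hypothetical.

```python
# Sketch of a caproto IOC bridging a UDP-only device to EPICS;
# the UDP exchange is replaced by a hypothetical stub.
from caproto.server import PVGroup, pvproperty, ioc_arg_parser, run

async def fetch_waveform_udp():
    # Hypothetical stand-in for the real UDP request/response
    # with an arc detector head.
    return [0.0] * 1024

class ArcDetectorIOC(PVGroup):
    waveform = pvproperty(value=[0.0] * 1024, max_length=1024)

    @waveform.scan(period=1.0)
    async def waveform(self, instance, async_lib):
        data = await fetch_waveform_udp()
        await instance.write(data)  # republish as an EPICS waveform

if __name__ == '__main__':
    ioc_options, run_options = ioc_arg_parser(
        default_prefix='ARC:', desc='UDP-to-EPICS waveform bridge')
    run(ArcDetectorIOC(**ioc_options).pvdb, **run_options)
```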
Joint coordination meeting of EPICS core developers and council
Experience of using real-time Linux features with EPICS.
From the perspective of replacing VME and VxWorks with µTCA and (real time) Linux.
As uTCA chassis started to replace our obsolete VME/VxWorks control systems at ORNL, we found that each chassis was turning into a custom Linux installation. When a new chassis was brought online, a commercial Linux OS was installed and customized. These customizations were not well documented and would take many hours to re-create in the event of a system failure.
We decided a new approach was needed that would minimize the boot image size and optimize recovery time. A smaller, standard image could be loaded over the network and would restart each custom IOC automatically.
A Buildroot-created embedded Linux image optimally suits these requirements. This small embedded image of around 34 MB can easily be loaded across the network, and most systems restart to a running IOC in about a minute. System customizations like IP addresses and DMA size declarations are passed in on the kernel command line. Once booted, procServ and the startup files finish the system setup, recovering to a running chassis. Buildroot provides all the necessary tools, libraries, and many of the applications needed to complete the distribution.
This presentation will cover the details of setting up Buildroot to run a uTCA EPICS IOC control system.
While Cosylab typically adapts to the specific EPICS environment of the facility we provide services to, we are sometimes charged with setting up a new facility where there is no preexisting environment. In fact, there may be little in the way of IT infrastructure, e.g., no control system network and no place to put git repositories. Moreover, the staff may have little familiarity with EPICS and no preferences with regard to deployment tools. While all these things do get set up in due time, development of the control system needs to start before any of it is in place. To enable development and deployment of EPICS IOCs in such a situation, we have come up with a set of tools and practices based on containers that allow us to be productive early. We stay close to the vanilla EPICS build system, which allows us to quickly onboard developers, train the facility staff, and later transition to a different deployment system if the complexity of the facility requires it.
A number of ISIS systems use communication protocols for which there are no existing IOCs. Instead, Python-based PVAccess servers have been chosen for rapid development to host associated PVs and control. However, the flexibility of p4p and EPICS means that code implementation of these servers may vary significantly between groups. This presentation will discuss how ISIS accelerator controls is standardising this design using a wrapper around the p4p library. In addition to discussing the methodology used to implement the Normative Type specification, we will also discuss roadblocks and lessons learned so far, as well as outlining future development plans.
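As a rough illustration of the kind of p4p server such a wrapper standardizes (this is not the ISIS wrapper itself), a minimal Normative Type scalar server can be written as follows; the PV name is illustrative.

```python
# Minimal p4p PVAccess server publishing an NTScalar PV.
from p4p.nt import NTScalar
from p4p.server import Server
from p4p.server.thread import SharedPV

pv = SharedPV(nt=NTScalar('d'), initial=0.0)

@pv.put
def on_put(pv, op):
    pv.post(op.value())  # publish the new value to subscribers
    op.done()            # acknowledge the client's put

Server.forever(providers=[{'DEMO:VALUE': pv}])  # serve until interrupted
```

Standardising a wrapper around this boilerplate keeps put handling, timestamps, and alarm behaviour consistent across groups.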
ITER's operation requires complex automation sequences that are beyond the scope of the finite state machine concept that the EPICS SNL Compiler/Sequencer implements.
The Operations Applications group is developing OACTree (Operation, Automation and Control using Behavior Trees), a new sequencing tool based on behavior trees, which has been successfully used in its first production applications.
This short talk will introduce my experience familiarizing myself with and evaluating the ITER OACTree behavior-tree-based sequencer. I will give my impression of the pros and cons of the approach, as well as compare it to the traditional EPICS State Notation Language approach. Time allowing, a short demo can be presented.
The Controls (CTL) group at the Advanced Photon Source (APS) is responsible for many services, applications, and over 1,000 production IOCs. This complex infrastructure also requires the CTL group to manage and monitor the status of countless servers and storage drives. The Infrastructure Monitoring System (IMS) was created to maximize system reliability. At the core of IMS are open-source technologies such as Uptime Kuma and Grafana. This talk will outline how these technologies are used, as well as the tools that were developed to integrate IOC status and hardware statistics into IMS.
PV-Monitor, developed by CLS, is a system to monitor the health of an EPICS environment. It has been presented at previous EPICS meetings. This presentation covers recent improvements made to the system, as well as next steps.
The two accelerators KARA and FLUTE at the Karlsruhe Institute of Technology have used Control System Studio as their main GUI for over ten years. We used the migration to Phoebus as an opportunity to set up a robust technical infrastructure with containers and a cluster setup, aiming to provide a high-availability environment. We are now in the process of transitioning the panels, aiming for the official switch in operations later this year. This presentation will report on the current setup, the panel migration, and future plans.
The Phoebus technology stack is a suite of tools and middle layer services designed to monitor, control, and interact with control systems like EPICS. This talk will provide an overview of recent developments contributed by members of the Phoebus collaboration. Key topics include the latest Phoebus releases, enhancements to services such as Olog, Save Restore, and ChannelFinder, and plans for future developments, including new features like the Python Reccaster and CA Nameserver.
With the increasing scale and complexity of large-scale experimental facilities and their subsystems, it quickly becomes difficult to organize operator displays in a manner conducive to efficient human operation. In this paper, we present a drop-in extension for Phoebus to track usage and navigation data for those displays, and FRIB's analyses of this data once it is collected.
The Save and Restore (SAR) service allows users to capture snapshots of a control system's state at a specific point in time, using configurations of a set of PVs. These snapshots can be restored later to revert the system to its state at the moment the snapshot was taken. Phoebus provides tools for creating and managing these configurations, as well as for retrieving, comparing, and restoring snapshots. This talk will cover the latest release of the SAR service and its clients, highlighting new features such as authentication and authorization, an expanded set of REST endpoints for seamless integration into UIs and scripts, and enhanced connectivity with external data sources like the archiver.
In this talk, we introduce TDM, a cross-platform display manager for EPICS. It is developed using a combination of several web technologies, including Node, TypeScript, Electron.js, React.js, and WebSocket, and can be used on Linux, macOS, and Windows, as well as via a web browser. It adopts the server-client model of Electron.js: the server handles Channel Access data acquisition and window management, while the client post-processes and displays the data. Each window runs in an individual thread, fully utilizing CPU resources.
The Fermi National Accelerator Laboratory (FNAL) PIP-2 (Proton Improvement Plan 2) project will provide a high-power linear proton accelerator to produce an intense high-energy neutrino beam for the Deep Underground Neutrino Experiment. PIP-2 has chosen EPICS, Phoebus, and related tools to control this new accelerator, representing the first use of EPICS for accelerator operations at FNAL. The existing legacy control system, ACsys (Accelerator Control Systems), was developed over the years as a custom solution for the rest of the FNAL accelerator complex. We will present our efforts to integrate the existing and new systems and describe our status and plans for the future.
This year, Abilene Christian University’s Nuclear Energy eXperimental Testing Lab (NEXT) will deploy a new 50-gallon Molten Salt Test System (MSTS) utilizing an EPICS control system. MSTS is a critical technological stepping stone toward the NEXT Lab’s ambitious goal of deploying the Natura MSR-1 as the Molten Salt Research Reactor by 2026. This presentation will elaborate on the process of deploying EPICS IOCs on single-board computers using a custom Linux build created with Buildroot and Docker.
At the ISIS Accelerators the majority of our production EPICS systems are deployed in containers on three Docker Swarm servers. Our development is done on Windows using Docker (though WSL is also used), and our operators interact with the control system using Phoebus on Windows. This poses challenges when dealing with PVAccess's UDP-broadcast-based search across many network boundaries. To tackle this we use a mixture of PVA gateways and the EPICS_PVA_NAME_SERVERS environment variable. However, these solutions have proved inflexible within the Docker Swarm. We detail the problems and our solutions, and present a new UDP broadcast relay called SnowSignal that can be used to bridge searches between network segments.
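For context, that environment variable lets a client bypass UDP search entirely and resolve names over TCP; a minimal sketch with p4p, using a hypothetical gateway host and PV name, follows.

```python
# Sketch: TCP-only PV name resolution via a name server, avoiding
# UDP broadcast search; the host and PV name are hypothetical.
import os
os.environ['EPICS_PVA_NAME_SERVERS'] = 'pva-gateway.example.org:5075'
os.environ['EPICS_PVA_AUTO_ADDR_LIST'] = 'NO'  # disable UDP discovery

from p4p.client.thread import Context

ctxt = Context('pva')        # picks up the environment set above
print(ctxt.get('DEMO:VALUE'))
```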
SNRC and CEA are collaborating on the upgrade of the SARAF accelerator for deuteron and proton beams.
CEA is in charge of the control system design and implementation for the Injector, MEBT, and Superconducting Linac made up of 4 cryomodules hosting cavities and solenoids. The control system baseline [1] (MTCA and Siemens S7-1500 PLCs) was presented at the EPICS meeting at Cadarache in June 2019.
This new presentation will focus on the cryomodule test stand software architecture, mainly the LLRF, cavity conditioning and control, and solenoid control. The continuous integration of the EPICS developments and the main control system (GUI navigation, archiving, alarms…) associated with this test stand will also be covered.
[1] F. Gougnaud et al., "Status of the SARAF-Phase2 Control System", ICALEPCS 2021, Shanghai, China
An overview of the various tests and testing capabilities of EPNix, a project for packaging EPICS-related software using the Nix package manager. The talk will walk through these tests.
Despite progress in the adoption of machine learning to enhance control, automation and decision making in complex systems such as accelerators, there is not yet a standardized methodology for integrating these models with EPICS. In this presentation, we introduce tools and a template for integrating trained models into EPICS environments. It provides a structured approach to deployment and scalability through containerisation. The presentation will include examples of deployments from the accelerators at the ISIS Neutron and Muon Source.
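As a rough sketch of the general idea (not the ISIS template itself), a trained model's prediction can be republished as a PV; every name below is illustrative, and the "model" is a trivial stand-in.

```python
# Sketch: serve a model's output as a PVAccess PV; the input PV,
# output PV, and the model itself are hypothetical stand-ins.
import time
from p4p.client.thread import Context
from p4p.nt import NTScalar
from p4p.server import Server
from p4p.server.thread import SharedPV

def predict(features):
    # Stand-in for a trained model's inference call
    return sum(features) / len(features)

pred = SharedPV(nt=NTScalar('d'), initial=0.0)

with Server(providers=[{'ML:PREDICTION': pred}]):
    ctxt = Context('pva')
    while True:
        x = [float(ctxt.get('SRC:CURRENT'))]  # hypothetical input PV
        pred.post(predict(x))                 # publish the prediction
        time.sleep(1.0)
```

Packaging a loop like this in a container image is one way to make such deployments repeatable and scalable.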
Osprey DCS is developing the Machine Learning Data Platform (MLDP) supporting machine learning and data science applications specific to large particle accelerator facilities and other large experimental physics facilities. It represents a “data-science ready” host platform providing integrated support for advanced data science applications used for diagnosis, modeling, control, and optimization of these facilities. The platform has three primary functions: 1) high-speed data acquisition, 2) archiving and management of time-correlated, heterogeneous data, and 3) comprehensive access to and interaction with archived data. The objective is to provide full-stack support for machine learning and data science, from low-level hardware acquisition to broad data accessibility, within a portable, standardized platform offering a data-centric interface for accelerator physicists and data scientists. We present an overview of the MLDP, including use cases, architecture, implementation, and deployment, along with the current development status. The MLDP is deployable at any facility; however, the low-level acquisition component requires EPICS.
The ELK stack (Elasticsearch, Logstash, and Kibana) is a powerful open-source suite of tools widely used for real-time log aggregation, analysis, and visualization. At the Advanced Photon Source (APS), the Controls group employs the ELK stack to enhance debugging, diagnostics, and usage tracking of EPICS IOCs. ELK enables us to efficiently collect, store, and analyze EPICS IOC error logs and caput logs from multiple sources, providing a unified, web-based platform for real-time data interpretation and analysis. This talk will cover the core concepts of the ELK stack, its key advantages, and how the Controls group integrates ELK into our daily operations to streamline debugging, diagnostics, and usage tracking.
The 5MW proton linear accelerator of the European Spallation Source ERIC is designed to accelerate the beam at a repetition rate of 14 Hz, which will dictate the refresh rate of most of the relevant data produced by acquisition systems. Around a thousand EPICS IOCs, mainly from RF stations and beam instrumentation, will produce large waveform PVs that cannot be feasibly transferred over the network and stored long-term. The Synchronous Data Service (SDS) is a proposal to facilitate the acquisition and correlation of high-resolution data from different systems during the most relevant period of time. Currently, SDS consists of an EPICS extension to be included in data acquisition IOCs and a client service that archives the SDS PVs. The EPICS IOC extension is implemented using the new C++ libraries provided by PVXS and is designed to be attached to any regular IOC that produces waveform records. The IOC module can store data from several pulses in a circular buffer and generates Normative Types PVs with a custom structure that includes data (from the original waveforms) and metadata (mainly the cycle identification number). The second main component of SDS is the collector service, responsible for monitoring a list of SDS PVs, correlating data from different systems based on the cycle ID and archiving them in permanent storage as NeXus files. In addition to the files, a database stores the metadata to enable a fast and powerful data retrieval service. The collector is implemented in Python using p4p for EPICS connections and Elasticsearch as the database and search engine. One of the main applications of SDS is to provide a distributed system for the collection of high-resolution data arrays of consecutive cycles for post-mortem and on-demand analysis, without overloading the control system network. In this presentation, we will describe the technical details of the implementation, the testing results and plans for deployment of the service in production.
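A greatly simplified sketch of the collector side, monitoring several PVs with p4p and grouping updates by cycle ID, might look like the following; the PV names and the metadata field are hypothetical, and the real service writes NeXus files and indexes metadata rather than printing.

```python
# Sketch: correlate monitor updates from several SDS-style PVs by
# cycle ID; PV names and the 'cycleId' field are hypothetical.
import time
from collections import defaultdict
from p4p.client.thread import Context

PVS = ['RF:SDS:WF', 'BI:SDS:WF']
buffers = defaultdict(dict)  # cycle id -> {pv name: value}

def archive(cycle, correlated):
    # Stand-in for writing a NeXus file and indexing its metadata
    print('cycle', cycle, 'complete:', sorted(correlated))

ctxt = Context('pva')

def make_cb(name):
    def on_update(value):
        # Exact field access depends on the SDS structure definition
        cycle = value['cycleId']
        buffers[cycle][name] = value
        if len(buffers[cycle]) == len(PVS):
            archive(cycle, buffers.pop(cycle))
    return on_update

subs = [ctxt.monitor(name, make_cb(name)) for name in PVS]
time.sleep(60)  # keep the subscriptions alive for the demo
```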
A high-speed data acquisition (DAQ) system that collects large amounts of fast data from various technical systems around the storage ring has been developed for the Advanced Photon Source Upgrade (APS-U) project to provide time-correlated data for statistics, diagnostics, monitoring, and fault recording. The DAQ system uses EPICS controllers to collect data from FPGAs, uses the areaDetector framework to create different plugins that process the data, and publishes the data using the pvAccess network protocol to high-level applications and services. This talk will give an overview of the DAQ system, including features and high-level architecture, as well as provide examples of how the DAQ system was used during APS commissioning.
In a collaborational effort (ITER/HZB-BESSY/ESS/PSI), a Device Support for the OPC UA industrial SCADA protocol is under development. Goals, status and roadmap will be presented.
This year, Abilene Christian University’s Nuclear Energy eXperimental Testing Lab (NEXT) will deploy a new 50-gallon Molten Salt Test System (MSTS) utilizing an EPICS control system. MSTS is a critical technological stepping stone toward the NEXT Lab’s ambitious goal of deploying the Natura MSR-1 as the Molten Salt Research Reactor by 2026. This presentation will elaborate on the tooling developed to simulate the underlying controls and instrumentation hardware for deployed EPICS-operated systems.
Sigray creates and ships many customized X-ray machines that use EPICS and synApps as a key part of their control systems. We use Docker containerization to maintain a repository of files and scripts that manage creating and running our many and varied EPICS installations, allowing us to freeze all software dependencies in one container image and to have the same environment every time.
We call this CIDER: Configuration, Installation, and Deployment of EPICS Repositories. CIDER stores all existing Docker images in one place; it tracks each machine’s settings and PVs, and allows replication, reuse, and minor modification of our IOCs.
This talk describes CIDER in more detail.
The Fast Event System, a global time base and event-based trigger distribution system, has been developed and commissioned for the Advanced Photon Source Upgrade (APS-U) project. The hardware components, developed by Micro-Research Finland (MRF), are installed in 24 VME input/output controllers (IOCs) deployed with EPICS software. Based on the community-supported MRF device support repository, new driver functions and EPICS database templates were added to support new features of the MRF VME boards used in this project. In this presentation, the overall structure and function of the APS-U Fast Event System are introduced. The EPICS device support modules developed for new features, such as delay compensation for EVM modules and onboard timestamp generation, will be presented in detail. Additionally, user interface schemes, access security, and integration with legacy systems will be discussed. Finally, prospects for future development of the APS Fast Event System and the MRF device support will be addressed.
APS Accelerator Controls recently upgraded the beam diagnostics cameras to digital cameras. The CS-Studio Phoebus image feature was utilized for this upgrade, and we also incorporated a new database for calibrating the beam size using a calibration mask.
EPICS 7 introduces many new features and bug fixes, and Spallation Neutron Source (SNS) controls are expected to benefit from adopting this new version of EPICS Base. In this work, we report our experience upgrading the EPICS version of VME and soft IOCs from 3.14.x to 7.0.4 at SNS. During this upgrade, the VME CPU boards and the VxWorks version were upgraded to 5500 and 6.9, respectively.
The Advanced Photon Source just underwent a major upgrade of its facilities, part of which involved the complete reconstruction of multiple beamlines. Using the XPCS beamline as a case study, this talk will go over the design decisions, cross-disciplinary group effort, controls development, and analysis tooling that went into the overall end-to-end experience offered at that beamline.
In this work, we integrate and extend an HKL computation package into EPICS through a PyDevice IOC. This provides EPICS users a generalized approach to mapping real motor rotation space to HKL reflections for a wide range of diffractometers (4-circle, 6-circle, and kappa geometries). Utilizing PyDevice for EPICS IOC development allows us to integrate Python bindings for core calculations written in C, simultaneously taking advantage of the efficiency of C and the readability of Python. The EPICS IOC provides an interface between beamline hardware and users through an intuitive Phoebus CSS GUI. Extensions to the original HKL package are being developed to handle inelastic scattering in addition to the original elastic scattering case for neutron and X-ray diffraction.
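To make the mapping concrete, here is a worked micro-example of the kind of calculation such a package performs, Bragg's law for a cubic lattice; it is illustrative only and not the HKL package's API, though PyDevice can bind records to plain Python functions like this one.

```python
# Illustrative only: two-theta for reflection (h, k, l) of a cubic
# lattice, from d = a / sqrt(h^2 + k^2 + l^2) and lambda = 2 d sin(theta).
import math

def two_theta(h, k, l, wavelength, a):
    """Scattering angle 2-theta in degrees; wavelength and a in angstroms."""
    d = a / math.sqrt(h*h + k*k + l*l)
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

print(two_theta(1, 1, 1, wavelength=1.54, a=5.431))  # Si (111): ~28.4 deg
```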
In August 2024, a team of beamline controls engineers at the synchrotron light source BESSY II ran the EPICS Summer School 2024, a two-week event introducing young engineers, computer scientists, and physicists to the world of control systems at big science machines.
Presentations, training, guided tours of facilities, and a hands-on group project helped the trainees understand particle accelerator control systems and develop essential skills in collaboration, problem-solving, and teamwork.
The organizers plan to make this a regular event and will try to establish a collaboration between institutes to share organization and hosting.
To support ITER’s remote participation plans while honoring cybersecurity requirements, we are developing the “EPICS Diode”, which mirrors EPICS PVs through hardware devices that allow strictly one-directional network traffic.
We present the concept, implementation, and status, showing the first results of scalability and performance measurements, as well as possible enhancements and the next planned steps.
Overview of the initial PV Access prototype that uses TLS instead of plain TCP
This session will be online. Learn what's new with the motor record and related ecosystem, exchange your experience.
General timing system introduction followed by workshop
A tour of the Spallation Neutron Source which will cover the control room, the target building and a section of the linac. Duration is about 90 minutes, and the tour will cover about 1 mile of walking.