Challenges and Common Solutions in Implementing Full Streaming Readout for Sub-Detector Technologies

Implementing full streaming readout with proposed sub-detector technologies and DAQ system concept poses challenges including proper data alignment, risks of data loss, and background noise affecting data rates. The transition point for electronic components from detector-specific to common solutions is envisioned at the DAM/EBDC level. Common software and firmware for control and configuration tasks differ based on detector needs, with a packet abstraction approach used for data transport.


Uploaded on Sep 29, 2024



Presentation Transcript


  1. E-1 Identify the main uncertainties/risks/challenges in implementing full streaming readout with the proposed sub-detector technologies and DAQ system concept.
  - Proper data alignment: we need to demonstrate that we are reading detector data from the same beam crossing, and can align those data portions near-line/offline.
  - Front-ends need some ability to autonomously identify a signal.
  - Risks are a loss of data alignment (not actually specific to SRO).
  - Background/noise in some detectors may lead to high data rates.
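The beam-crossing alignment requirement above can be illustrated with a small sketch. The function, stream names, and data layout below are hypothetical, not the actual ECCE/sPHENIX data format:

```python
# Hypothetical sketch: aligning streamed hits from independent front-ends
# by their beam-crossing counter (BCO) tag.
from collections import defaultdict

def align_by_bco(streams):
    """Group hits from independent front-end streams by BCO value."""
    crossings = defaultdict(list)
    for stream_id, hits in streams.items():
        for bco, payload in hits:
            crossings[bco].append((stream_id, payload))
    return dict(crossings)

# Two toy front-end streams; crossing 100 appears in both.
fee_a = [(100, "hitA1"), (101, "hitA2")]
fee_b = [(100, "hitB1"), (102, "hitB2")]
aligned = align_by_bco({"A": fee_a, "B": fee_b})
# Crossing 100 now holds data from both front-ends; 101 and 102 are partial,
# which is the kind of mismatch a near-line alignment check would flag.
```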

  2. E-2 At what point in the readout chain from detector to recorded data do electronic components change from being detector-specific to using common solutions shared by all detector systems?
  [Diagram: parallel readout chains of FEEs, DCMs, DAMs, and EBDCs feeding Buffer Boxes and Online Data Filters, connected through a network switch to permanent storage and near-line processing.]
  This point is envisioned to be the DAM/EBDC. It is highly desirable to have one common readout interface card (the current FELIX card is the model/proxy for this). Cost or development considerations might compel us to use different cards (for example, cheaper cards for light detectors with lower bandwidth requirements). We consider this a remote possibility because one common card design helps us economize on development efforts and reduces management overhead.

  3. E-3 a At what point in the data acquisition does software (and firmware) become common for all readout chains? Which hardware and software for control, configuration, and timing are common for all detectors?
  One needs to distinguish between the readout and the control/configuration tasks.
  - For years, the readout design envisioned to carry over to ECCE has used a packet abstraction for the raw data of different detectors that disentangles the contents of the data from the data transport.
  - This packaging of data takes place in the EBDC for most systems, and already in the DAM for others.
  - The difference between DAM and EBDC stems from the need for the EBDC to still obtain information (metadata) from the data payload for the subsequent decision to keep or discard the data. Once the data are formatted into a packet (usually dense-packed and/or compressed), it is harder to derive such information.
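The packet abstraction described above can be sketched as follows. The header layout, packet id, and decoder registry here are illustrative assumptions, not the actual packet format:

```python
# Sketch of a packet abstraction: the transport layer only sees
# (packet id, payload length, raw bytes); detector-specific decoding is
# looked up by packet id, so contents and transport stay decoupled.
import struct

HEADER = struct.Struct("<II")  # assumed layout: packet id, payload length

def pack(packet_id: int, payload: bytes) -> bytes:
    """Wrap a detector payload in a generic transport header."""
    return HEADER.pack(packet_id, len(payload)) + payload

def unpack(buf: bytes):
    """Recover the packet id and raw payload; no detector knowledge needed."""
    packet_id, length = HEADER.unpack_from(buf)
    return packet_id, buf[HEADER.size:HEADER.size + length]

# Decoders registered per packet id; transport code never imports them.
DECODERS = {1001: lambda raw: raw.decode()}  # 1001: a hypothetical id

pid, raw = unpack(pack(1001, b"adc samples"))
decoded = DECODERS[pid](raw)
```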

  4. E-3 b At what point in the data acquisition does software (and firmware) become common for all readout chains? Which hardware and software for control, configuration, and timing are common for all detectors?
  Control/configuration tasks.
  - The configuration space and needs of different detectors are usually quite diverse. Some detectors need calibration constants, threshold settings, pedestals, bias voltages, and the like.
  - The standard approach is to implement a high-level API (initialize) that is then specialized on a per-detector basis. This also supports assorted tasks such as threshold scans or bias voltage scans that are often needed to find the optimal settings.
  - Since FEE-based settings (e.g., thresholds) are usually communicated through the DAM, the EBDC is the natural place for this specialization to take place.
  - Power supplies for bias voltage are usually centrally controlled with their own APIs.
  - We envision a central configuration server that reaches out to the components to perform the common initializations (after a cold start, for example) in the right order.
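The per-detector specialization of a common initialize API could look like the following sketch; class names, method bodies, and return values are hypothetical:

```python
# A common high-level "initialize" API, specialized per detector, with a
# central server driving the cold-start sequence.
class DetectorConfig:
    def initialize(self):
        raise NotImplementedError

class TPCConfig(DetectorConfig):
    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.applied = []
    def initialize(self):
        # In reality this would write FEE registers through the DAM.
        self.applied = list(self.thresholds)
        return "TPC ready"

class CalorimeterConfig(DetectorConfig):
    def initialize(self):
        # Would load pedestals/gains and push serial-like configs.
        return "Calorimeter ready"

def cold_start(configs):
    """A central configuration server calls initialize() in the right order."""
    return [c.initialize() for c in configs]

status = cold_start([TPCConfig([40, 42]), CalorimeterConfig()])
```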

  5. E-3 b Configuration specialization example. A Control GUI issues a common Initialize to each subsystem, which is specialized per detector:
  - TPC (DAM/SAMPA): memory-mapped register settings through higher-level libraries, e.g.
        #!/usr/bin/env python3
        from dam import *
        d = dam()
        d.reg.gtm_bco = 0
        # ...
  - Power supplies: a mix of snmp (snmpset) and TCP/IP.
  - Calorimeters: memory-mapped register settings and serial-like configs, e.g.
        sendControlSequence(0, 2, sp_cntrl_timing, sp_cntrl_init);
        sendControlSequence(0, 2, sp_cntrl_timing, sp_cntrl_reset);

  6. E-3 c At what point in the data acquisition does software (and firmware) become common for all readout chains? Which hardware and software for control, configuration, and timing are common for all detectors?
  Timing.
  - The timing is controlled by the timing system. Individual timing modules (all firmware) act as the interface between the global accelerator timing/clock and the needs (format, delay settings, etc.) of an individual detector.
  - Setup tasks such as a timing scan would be conducted here.
  - Established standard settings would be applied by the central server described before.
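A timing scan of the kind mentioned above can be sketched generically: step a delay setting, record a figure of merit, and pick the best value. The function names and the toy efficiency model are assumptions for illustration only:

```python
# Hypothetical timing-scan sketch: sweep a delay register and keep the
# setting with the best measured efficiency.
def timing_scan(set_delay, measure_efficiency, delays):
    """Return (best delay, full scan results)."""
    results = {}
    for d in delays:
        set_delay(d)              # would write the timing-module register
        results[d] = measure_efficiency(d)
    best = max(results, key=results.get)
    return best, results

# Toy stand-in for hardware: efficiency peaks at a delay of 12 ticks.
best, scan = timing_scan(lambda d: None,
                         lambda d: 1.0 - abs(d - 12) * 0.1,
                         range(8, 17))
```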

  7. E-4 To what extent are offline algorithms foreseen to be used in the online system, and in which part(s) of the readout/DAQ system?
  - For years, there has been no distinction between online and offline algorithms on the CPU; the same libraries are used in both settings. Examples are decoding algorithms, data access APIs, fitting, clustering, and the like.
  - To first approximation, the offline algorithms will be used. Occasionally there might be the need to use a faster, less elaborate algorithm online; those would be maintained in the offline universe for testing, benchmarking, etc.
  - GPU- and FPGA-based algorithms might have no direct code equivalent in the offline world. They would be derived and developed from offline algorithms (e.g., zero-suppression).

  8. E-5 Within your proposed DAQ/computing model, at what point will online calibration be required? Describe a high-level strategy for online calibration, and significant technical considerations in achieving this.
  - The main places will be the EBDCs and the Online Event Filters. The event selection algorithm will require clusters, energies, pT values, and the like, which in turn require a good enough calibration so that the filters can make the selection based on valid physics properties. A good example is tower-by-tower calibration in the calorimeters.
  - Most detector elements will undergo some form of initial calibration, for example in test beams, with cosmics, or by some other means. Such calibrations can be refined after actual data have been taken.
  - Many detectors will require special calibration runs periodically to update or verify the current calibration constants.
  - As with question E-4, we envision that the application of such calibrations will be the same in an on- and offline setting.
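The tower-by-tower calorimeter calibration mentioned above reduces, at application time, to a simple per-tower transform. The constants, pedestals, and the linear gain model below are made-up illustrations, not actual calibration values:

```python
# Illustrative per-tower calibration: E = gain * (ADC - pedestal).
# The same function could run online (in the filter) and offline.
def calibrate_towers(adc_values, constants, pedestals):
    """Convert raw ADC counts to energies, tower by tower."""
    return {tower: constants[tower] * (adc - pedestals[tower])
            for tower, adc in adc_values.items()}

# Toy numbers: tower 7 reads 120 ADC over a pedestal of 20, gain 0.02.
energies = calibrate_towers({7: 120, 8: 95},
                            constants={7: 0.02, 8: 0.021},
                            pedestals={7: 20, 8: 15})
```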

  9. E-6 Describe how the development of the readout electronics for different sub-detectors will be centrally coordinated.
