
KM3NeT Research Infrastructure and Detector Overview
An overview of the KM3NeT project: the research infrastructure with its deep-sea neutrino telescopes, detection units and data processing systems, its distribution across sites in France, Italy and Greece, and the computing model used for data analysis.
Presentation Transcript
Computing for KM3NeT
Pasquale Migliozzi, INFN Napoli
The KM3NeT infrastructure (I)
The KM3NeT research infrastructure will comprise a deep-sea neutrino telescope distributed over several sites (ARCA), the neutrino-mass-hierarchy detector ORCA, and nodes hosting instrumentation for measurements by the earth and sea science (ESS) communities. ARCA/ORCA, the cable(s) to shore and the shore infrastructure will be constructed and operated by the KM3NeT collaboration.
The KM3NeT infrastructure (II)
Both the detection units of ARCA/ORCA and the ESS nodes are connected to shore via a deep-sea cable network. Note that KM3NeT will be constructed as a distributed infrastructure at multiple sites: France (FR), Italy (IT) and Greece (GR). The set-up shown in the figure will be installed at each of the sites.
The detector
ARCA/ORCA will consist of building blocks (BB), each containing 115 detection units (DUs), i.e. vertical structures supporting 18 optical modules each. Each optical module holds 31 3-inch photomultipliers together with readout electronics and instrumentation within a glass sphere, so one building block contains approximately 65,000 photomultipliers. Each installation site will contain an integer number of building blocks. The data transmitted from the detector to the shore station include the PMT signals (time over threshold and timing), calibration and monitoring data.

Detector phases:
Phase 1: 31 DUs; start of construction end 2014 (1 DU); full detector mid 2016
Phase 2 (ARCA/ORCA): 2 BB ARCA / 1 BB ORCA, 345 DUs; start of construction 2016
Final Phase: 6 building blocks, 690 DUs
Reference (one building block): 115 DUs
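As a quick cross-check of the photomultiplier count, the numbers quoted above multiply out as follows; this is a minimal Python sketch for illustration, not KM3NeT software.

```python
# Cross-check of the PMT count per building block, using the figures quoted above.
PMTS_PER_DOM = 31     # 3-inch PMTs per optical module
DOMS_PER_DU = 18      # optical modules per detection unit
DUS_PER_BLOCK = 115   # detection units per building block

pmts_per_block = PMTS_PER_DOM * DOMS_PER_DU * DUS_PER_BLOCK
print(f"PMTs per building block: {pmts_per_block:,}")  # 64,170, i.e. roughly 65,000
```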
The KM3NeT Computing Model
The KM3NeT computing model (data distribution and data processing system) is based on the LHC computing model. The general concept is a hierarchical data processing system, commonly referred to as a tier structure.
Data processing steps at the different tiers

Tier   | Computing facility       | Processing steps                                             | Access
Tier-0 | at detector site         | triggering, online calibration, quasi-online reconstruction | direct access, direct processing
Tier-1 | computing centres        | calibration and reconstruction, simulation                   | direct access, batch processing and/or grid access
Tier-2 | local computing clusters | simulation and analysis                                      | varying
Computing centres and pools provide resources for the KM3NeT computing facility

Tier   | Computing facility       | Main task                                           | Access
Tier-0 | at detector site         | online processing                                   | direct access, direct processing
Tier-1 | CC-IN2P3                 | general offline processing and central data storage | direct access, batch processing and grid access
Tier-1 | CNAF                     | general offline processing and central data storage | grid access
Tier-1 | ReCaS                    | general offline processing, interim data storage    | grid access
Tier-1 | HellasGrid               | reconstruction of data                              | grid access
Tier-1 | HOU computing cluster    | simulation processing                               | direct access, batch processing
Tier-2 | local computing clusters | simulation and analysis                             | varying
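For illustration only, the tier layout of the two tables above can be written down as a plain data structure. The sketch below is ordinary Python with hypothetical names (TIERS is not part of any KM3NeT package).

```python
# Illustrative summary of the tier layout described above (hypothetical structure,
# not KM3NeT software): each tier with its facilities, main tasks and access mode.
TIERS = {
    "Tier-0": {
        "facilities": ["shore station at the detector site"],
        "tasks": ["triggering", "online calibration", "quasi-online reconstruction"],
        "access": "direct access, direct processing",
    },
    "Tier-1": {
        "facilities": ["CC-IN2P3", "CNAF", "ReCaS", "HellasGrid", "HOU computing cluster"],
        "tasks": ["calibration", "reconstruction", "simulation", "central data storage"],
        "access": "direct access, batch processing and/or grid access",
    },
    "Tier-2": {
        "facilities": ["local computing clusters"],
        "tasks": ["simulation", "analysis"],
        "access": "varying",
    },
}

for tier, info in TIERS.items():
    print(f"{tier}: {', '.join(info['facilities'])} -> {', '.join(info['tasks'])}")
```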
Data distribution
One of the main tasks is the efficient distribution of the data between the different computing centres. CC-IN2P3 and CNAF will act as central storage, i.e. the resulting data of each processing step are transferred to those centres, and the data storage at the two centres is mirrored. For calibration and reconstruction, processing is performed in batches: the full amount of raw data necessary for the processing is transferred to the relevant computing centre before the processing starts; provided enough storage capacity is available (as is the case e.g. at ReCaS), a rolling subset, e.g. the last year of data taking, is kept at the computing centre. For simulation, negligible input data is necessary; the output data are stored locally and transferred to the main storage. The most fluctuating access will be on the reconstructed data (from Tier-2) for data analyses.
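As an illustration of the rolling-storage idea, the sketch below keeps only the runs recorded within the last year of data taking; it is plain Python with a hypothetical run list and helper function, not KM3NeT software.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the rolling retention policy described above: an
# interim-storage centre (e.g. ReCaS) keeps only the last year of raw data,
# while the central storage sites (CC-IN2P3, CNAF) archive everything.
# The run list and helper are hypothetical, not part of KM3NeT software.

def runs_to_keep(runs, now=None, window_days=365):
    """Return the runs whose end time lies within the rolling window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    return [run for run in runs if run["end_time"] >= cutoff]

runs = [
    {"run_id": 998,  "end_time": datetime(2014, 6, 1)},   # older than one year: dropped
    {"run_id": 1302, "end_time": datetime(2015, 11, 2)},  # inside the window: kept
]
print([r["run_id"] for r in runs_to_keep(runs, now=datetime(2015, 12, 31))])  # [1302]
```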
Overview of computing requirements per year

Phase                        | size (TB) | computing time (HS06.h) | computing resources (HS06)
One building block           | 1000      | 350 M                   | 40 k
Phase 1                      | 300       | 60 M                    | 7 k
 - first year of operation   | 100       | 25 M                    | 3 k
 - second year of operation  | 150       | 40 M                    | 5 k
Phase 2                      | 2500      | 1 G                     | 125 k
Final Phase                  | 4000      | 2 G                     | 250 k
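The "computing resources" column follows from the yearly computing time divided by the number of hours in a year. The check below is a minimal Python sketch with the numbers copied from the table above.

```python
# Sustained computing resources (HS06) implied by the yearly computing time (HS06.h).
HOURS_PER_YEAR = 365 * 24  # 8760

computing_time_per_year = {   # HS06.h per year, from the table above
    "One building block": 350e6,
    "Phase 1": 60e6,
    "Phase 2": 1e9,
    "Final Phase": 2e9,
}

for phase, hs06_hours in computing_time_per_year.items():
    print(f"{phase}: {hs06_hours / HOURS_PER_YEAR / 1e3:.0f} kHS06")
# roughly 40, 7, 114 and 228 kHS06, the same order as the 40 k / 7 k / 125 k / 250 k quoted
```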
Detailed expectations of necessary storage and computing time for one building block (per processing and per year)

processing stage                                 | size per proc. (TB) | time per proc. (HS06.h) | size per year (TB) | time per year (HS06.h) | periodicity (per year)
Raw data                                         | 300 | -     | 300  | -     | 1
Raw filtered, monitoring and minimum bias data   | 150 | -     | 150  | -     | 1
Experimental data processing:
Calibration (incl. raw data)                     | 750 | 24 M  | 1500 | 48 M  | 2
Reconstructed data                               | 150 | 119 M | 300  | 238 M | 2
DST                                              | 75  | 30 M  | 150  | 60 M  | 2
Simulation data processing:
Air showers                                      | 100 | 14 M  | 50   | 7 M   | 0.5
Atmospheric muons                                | 50  | 1 M   | 25   | 638 k | 0.5
Neutrinos                                        | 2   | 22 k  | 20   | 220 k | 10
Total                                            | 827 | 188 M | 995  | 353 M |
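The per-year columns are the per-processing columns scaled by the periodicity. The sketch below (plain Python, values copied from the table above) reproduces this scaling for a few rows.

```python
# Per-year figures = per-processing figures x periodicity (rows from the table above;
# sizes in TB, computing times in HS06.h).
stages = [
    # (stage, size_per_proc_TB, time_per_proc_HS06h, periodicity_per_year)
    ("Calibration (incl. raw data)", 750, 24e6,  2),
    ("Reconstructed data",           150, 119e6, 2),
    ("DST",                           75, 30e6,  2),
    ("Air showers",                  100, 14e6,  0.5),
    ("Neutrinos",                      2, 22e3,  10),
]

for stage, size, cpu_time, periodicity in stages:
    print(f"{stage}: {size * periodicity:.0f} TB/year, "
          f"{cpu_time * periodicity / 1e6:.2f} MHS06.h/year")
```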
Detailed expectations of necessary storage and computing time for Phase 1 (per processing and per year)

processing stage                                 | size per proc. (TB) | time per proc. (HS06.h) | size per year (TB) | time per year (HS06.h) | periodicity (per year)
Raw data                                         | 85  | -    | 85  | -     | 1
Raw filtered, monitoring and minimum bias data   | 43  | -    | 43  | -     | 1
Experimental data processing:
Calibration (incl. raw data)                     | 213 | 3 M  | 425 | 6 M   | 2
Reconstructed data                               | 43  | 15 M | 85  | 31 M  | 2
DST                                              | 21  | 4 M  | 43  | 8 M   | 2
Simulation data processing:
Air showers                                      | 10  | 14 M | 10  | 14 M  | 1
Atmospheric muons                                | 5   | 1 M  | 5   | 1 M   | 1
Neutrinos                                        | 2   | 22 k | 20  | 220 k | 10
Total                                            | 208 | 37 M | 290 | 60 M  |
Networking

Rough estimate of the required bandwidth:

phase           | connection       | average data transfer (MB/s) | peak data transfer (MB/s)
Phase 1         | Tier-0 to Tier-1 | 5   | 25
                | Tier-1 to Tier-1 | 15  | 500
                | Tier-1 to Tier-2 | 5   | 25
Building block  | Tier-0 to Tier-1 | 25  | 125
                | Tier-1 to Tier-1 | 50  | 500
                | Tier-1 to Tier-2 | 10  | 50
Final Phase     | Tier-0 to Tier-1 | 200 | 1000
                | Tier-1 to Tier-1 | 500 | 5000
                | Tier-1 to Tier-2 | 100 | 500

Note that the connection from Tier-1 to Tier-2 has the largest fluctuation, since it is driven by the data analyses (i.e. by the users).
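As a rough consistency check, an average transfer rate follows from an annual data volume divided by the number of seconds in a year. The sketch below is plain Python for illustration; the 1000 TB and 300 TB figures are the yearly volumes from the storage tables above.

```python
# Average transfer rate (MB/s) implied by a yearly data volume (TB/year).
SECONDS_PER_YEAR = 365 * 24 * 3600

def average_rate_mb_per_s(tb_per_year):
    return tb_per_year * 1e6 / SECONDS_PER_YEAR  # 1 TB = 1e6 MB

print(f"one building block: {average_rate_mb_per_s(1000):.0f} MB/s")  # ~32 MB/s
print(f"Phase 1: {average_rate_mb_per_s(300):.0f} MB/s")              # ~10 MB/s
```

These sustained rates are of the same order as the Tier-0 to Tier-1 averages quoted in the table above.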
KM3NeT on the GRID
VO central services. KM3NeT is just starting on the GRID; the first use case is CORSIKA simulation.
VO Software Manager (SGM)
First trial: CORSIKA production.
Summary and Conclusion
The data distribution model of KM3NeT is based on the LHC computing model. The estimates of the required bandwidths and computing power are well within current standards. A high-bandwidth Ethernet link to the shore station is necessary for data archival and remote operation of the infrastructure. KM3NeT-INFN has already addressed its requests to CS2. KM3NeT is already operative on the GRID. Negotiations are in progress with important computing centres (e.g. Warsaw). We are also active on future Big Data challenges, e.g. ASTERICS.