Event Classification in SAND with Deep Learning: DUNE-Italia Collaboration


Alessandro Ruggeri presents, on behalf of the Nu@FNAL Bologna group at the DUNE-Italia collaboration meeting, work on event classification in SAND using deep learning. The project applies machine learning to digitized STT data for event classification, focusing on CNNs and on a processing workflow that extracts primary-interaction labels and builds pixelated views for classification. The architecture is based on the GoogLeNet model, inspired by NOvA, and implemented with the Tensorflow/Keras libraries.


Uploaded on Sep 26, 2024



Presentation Transcript


  1. EVENT CLASSIFICATION IN SAND WITH DEEP LEARNING. DUNE-Italia Collaboration Meeting, Lecce, 7/11/2023. Alessandro Ruggeri, on behalf of the Nu@FNAL Bologna group.

  2. STT-EVENT CLASSIFICATION WITH CNNS. Applying ML to the digitized STT data for event classification. The strategy is inspired by a NOvA article (arXiv:1604.01444): a CNN which combines XZ and YZ views, as in the STTs. So far only STT hits are used; the final model could include timing and calorimeter clusters. CNNs would allow classification based on topology. Dataset of νμ-CC interactions with vertices in the STT, separated into: Deep Inelastic Scattering (DIS) events (44%), Resonant Scattering (RES) events (38.2%), and Quasi-Elastic Scattering (QES) events (17.2%). [Figure: STT hits for event 17, XZ and YZ views.]

  3. PROCESSING WORKFLOW. The edep-sim MC files were processed with the sand-reco Digitize module to obtain the digitized hits, saved to a Pandas DataFrame for each MC file. The digitized hit coordinates and energy deposits are converted to 128x128 pixel image-like views. The genie primary-interaction label is extracted from the edep-sim file. Final pre-processing steps are applied and the result is saved to NumPy mem-mapped files (the model input). [Flowchart: per file, Uproot and RDataFrame read the digits and edep-sim files into a Pandas DataFrame; per event, the hit coordinate lists become pixelated XZ & YZ views with primary-interaction labels; additional processing writes the NumPy mmaps.]
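The hits-to-views step of the workflow above can be sketched as a 2D weighted histogram. This is a minimal illustration, not the sand-reco code: the function name and the coordinate ranges are placeholders, not the real SAND geometry.

```python
import numpy as np
import pandas as pd

def hits_to_view(xs, zs, energies, n_pixels=128,
                 x_range=(-2.0, 2.0), z_range=(22.0, 27.0)):
    """Histogram digitized hit coordinates into a pixelated XZ (or YZ) view.

    Coordinate ranges are illustrative placeholders, not the real detector geometry.
    """
    view, _, _ = np.histogram2d(
        xs, zs,
        bins=n_pixels,
        range=[x_range, z_range],
        weights=energies,  # pixel intensity = summed energy deposit
    )
    return view

# Toy event: three hits with energy deposits in MeV
df = pd.DataFrame({
    "x": [0.1, 0.2, -0.5],
    "z": [24.0, 24.5, 25.0],
    "dE": [0.01, 0.03, 0.02],
})
xz_view = hits_to_view(df["x"].to_numpy(), df["z"].to_numpy(), df["dE"].to_numpy())
print(xz_view.shape)  # (128, 128)
```

One such view per projection (XZ and YZ) is produced per event before the label is attached.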

  4. PIXELATED VIEWS. Views are saved to a Pandas DataFrame as 128x128 pixel tensors. The uint8 format is used for more efficient storage: 256 energy-deposit values in the [0, 0.07] MeV range. Current pre-processing steps: resizing to 80x80 pixels; selection cut on active pixels; scaling of the pixel values; normalization to the [0,1] range required by the model.
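The uint8 storage and the final [0,1] normalization described above can be sketched as follows; `E_MAX` comes from the slide, while the function names and the linear quantization scheme are assumptions about the actual pre-processing.

```python
import numpy as np

E_MAX = 0.07  # MeV, upper edge of the stored energy range (from the slide)

def quantize_view(view_mev):
    """Map per-pixel energy deposits in [0, E_MAX] MeV to uint8 codes 0..255."""
    clipped = np.clip(view_mev, 0.0, E_MAX)
    return np.round(clipped / E_MAX * 255).astype(np.uint8)

def normalize_view(view_u8):
    """Recover a float view in [0, 1], as required by the model input."""
    return view_u8.astype(np.float32) / 255.0

view = np.array([[0.0, 0.035],
                 [0.07, 0.1]])   # the last pixel saturates at E_MAX
codes = quantize_view(view)
print(codes)  # [[  0 128]
              #  [255 255]]
```

Storing the quantized views as uint8 cuts the memory-mapped files to a quarter of the float32 size at the cost of ~0.27 keV granularity.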

  5. GOOGLENET ARCHITECTURE. Architecture based on the NOvA model. The XZ and YZ views are passed to parallel branches based on the GoogLeNet architecture. Inception modules extract features at different scales in a parallel fashion. The resulting features are concatenated and then passed to a final inception module to extract combined features. Final classification after down-sampling, with dropout (40%) and a dense softmax layer (3 units). Implemented with the Tensorflow/Keras Python libraries. [Diagram: each view branch is a stem of 7x7+2(S) convolution, max pooling, batch normalization and 1x1/3x3/5x5 convolutions, followed by stacked inception modules; the branch outputs are filter-concatenated, passed through a final inception module, max pooling, dropout (40%) and a 3-unit dense softmax.]
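A minimal Keras sketch of this two-branch inception layout, under stated assumptions: the layer counts and filter sizes are illustrative and much smaller than the full GoogLeNet stacks in the diagram, and the global-average-pooling head is one plausible reading of the "down-sampling" before classification.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, pool_proj):
    """GoogLeNet-style inception block: parallel 1x1 / 3x3 / 5x5 / pool paths."""
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3_reduce, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(f5_reduce, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(pool_proj, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])  # filter concatenation

def view_branch(inp):
    """Stem applied to each view (layer sizes are illustrative)."""
    x = layers.Conv2D(64, 7, strides=2, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return inception_module(x, 64, 96, 128, 16, 32, 32)

xz_in = layers.Input(shape=(80, 80, 1), name="xz_view")
yz_in = layers.Input(shape=(80, 80, 1), name="yz_view")
merged = layers.Concatenate()([view_branch(xz_in), view_branch(yz_in)])
merged = inception_module(merged, 128, 128, 192, 32, 96, 64)  # combined features
x = layers.GlobalAveragePooling2D()(merged)
x = layers.Dropout(0.4)(x)                       # dropout (40%) as on the slide
out = layers.Dense(3, activation="softmax")(x)   # DIS / RES / QES
model = Model([xz_in, yz_in], out)
```

Each view keeps its own weights through the branch; only the concatenated features see both projections.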

  6. GOOGLENET PERFORMANCE. Current results are not satisfactory: overfitting occurs from the initial epochs, even with high regularization. Multiple strategies for regularizing the network were tried, with no improvement. Alternative pre-processing procedures did not improve the performance either. [Plots: training and validation categorical accuracy and categorical cross-entropy loss vs. epoch, for several configurations.]
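The slide does not say which regularization strategies were tried; a generic Keras sketch of the standard handles (weight decay, dropout, early stopping) against early-epoch overfitting looks like this, with all hyperparameter values illustrative and the tiny model a stand-in for the real one.

```python
import tensorflow as tf
from tensorflow.keras import callbacks, layers, models, regularizers

# Stand-in model; the real network is the two-branch GoogLeNet.
model = models.Sequential([
    layers.Input(shape=(80, 80, 1)),
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),  # weight decay
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.4),
    layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",   # the loss plotted on the slide
    metrics=["categorical_accuracy"],  # the accuracy plotted on the slide
)
# Stop once validation loss diverges from training loss (overfitting onset)
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
# model.fit(..., callbacks=[early_stop]) would then cap the training run.
```

When overfitting starts at the very first epochs, as reported, the usual suspects are too little or too uniform training data rather than the regularization itself.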

  7. RESNET18 ARCHITECTURE. Alternative model based on the ResNet18 architecture. Parallel branches with four residual blocks each. Concatenation and convolution before the final classification layer. Current results are not satisfactory: the network is underfitting.
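A residual block of the kind stacked four times per branch can be sketched in Keras as below; this is the textbook ResNet18 "basic block", assumed here to match what the slide means, with illustrative filter counts.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, downsample=False):
    """Basic ResNet18-style block: two 3x3 convolutions plus a skip connection."""
    strides = 2 if downsample else 1
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=strides, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    if downsample or x.shape[-1] != filters:
        # 1x1 projection so the skip path matches the main path's shape
        shortcut = layers.Conv2D(filters, 1, strides=strides)(x)
    return layers.ReLU()(layers.Add()([y, shortcut]))

inp = layers.Input(shape=(80, 80, 1))
x = residual_block(inp, 64)
x = residual_block(x, 128, downsample=True)
model = tf.keras.Model(inp, x)
print(model.output_shape)  # (None, 40, 40, 128)
```

The skip connections make each block learn a residual on top of the identity, which normally eases optimization; underfitting here suggests the inputs carry too little discriminative information rather than an optimization problem.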

  8. VISUAL CHECKS. [Image-only slide: example event views.]

  9. INCREASED CONTRAST. Tried increasing the contrast by applying a γ = 0.5 gamma correction to the normalized views. Tested the performance of the GoogLeNet model on this dataset: the results are still not satisfactory.
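The γ = 0.5 correction on views already normalized to [0,1] is a single power law; the function name below is illustrative.

```python
import numpy as np

def gamma_correct(view, gamma=0.5):
    """Apply a gamma correction to a view already normalized to [0, 1].

    gamma < 1 brightens faint pixels, increasing contrast at the low end.
    """
    return np.power(view, gamma)

view = np.array([0.0, 0.04, 0.25, 1.0])
print(gamma_correct(view))  # [0.  0.2 0.5 1. ]
```

With γ = 0.5 a pixel at 4% of the maximum energy is lifted to 20%, so faint track segments become far more visible to the convolutions.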

  10. CONCLUSIONS. By visual inspection, the event topologies are not well separated. Distributions of some potential features do not show separation either, e.g. the weighted std. of the active pixels in the x and y directions. Alternative strategies could be explored: different architectures or features.
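The weighted-std feature mentioned above can be computed directly from a view; this sketch assumes the weights are the per-pixel energies, which the slide does not state explicitly.

```python
import numpy as np

def weighted_std_xy(view):
    """Energy-weighted standard deviation of active-pixel positions
    along each axis of a 2D view (a candidate hand-crafted feature)."""
    rows, cols = np.nonzero(view)
    w = view[rows, cols]
    def wstd(coord):
        mean = np.average(coord, weights=w)
        return np.sqrt(np.average((coord - mean) ** 2, weights=w))
    return wstd(cols), wstd(rows)  # spread along x, spread along y

# Toy view: two equal-energy pixels four columns apart on one row
view = np.zeros((8, 8))
view[3, 2] = view[3, 6] = 1.0
sx, sy = weighted_std_xy(view)
print(sx, sy)  # 2.0 0.0
```

If even such simple spread features fail to separate DIS, RES and QES, the pixelated views may simply not retain enough of the event topology.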

  11. THANK YOU FOR YOUR ATTENTION
