Advancing Auditory Enhancement: Integrating Spleeter with Advanced Remixing Techniques in The Cadenza Challenge 2023

Group E016
Authors: Debangshu Sarkar, Arundhuti Mukherjee
Affiliation: University of Leeds, UK
Cadenza | Introduction
The task presented a unique opportunity to explore advancements in audio enhancement technologies. Our submission focused on Task 1, targeting improved audio for headphone users with hearing loss.

In contrast to the baseline methods, Hybrid Demucs and Open-Unmix, our project incorporated Spleeter's deep learning capabilities. While Spleeter provided the foundation, our project went further by integrating NAL-R prescriptions, Butterworth bandpass filters, and Dynamic Range Compression.

Cadenza | Specifications
Working with large, complex audio datasets and computationally heavy algorithms drove the decision to use advanced computational resources.

High Performance Computers (HPCs): the ARC4 HPC environment provided by the University of Leeds gave us access to 149 nodes, each with 40 cores and 192 GB of memory.

Graphics Processing Units (GPUs): Google Colab's V100 GPU increased computational speed by roughly 5-10x compared to CPU processing.

These resources helped us achieve:
- High computational speeds
- Parallel processing
- Higher memory capacity

Spleeter | An Introduction
Getting to know Spleeter

Origin of Spleeter: Spleeter was created by Deezer Research and was designed with ease of use, separation performance, and speed in mind.

Pre-trained models: Spleeter ships with pre-trained models based on the U-Net architecture, trained on Deezer's internal datasets.

TensorFlow: TensorFlow serves as the core machine learning framework on which Spleeter is built, allowing Spleeter to use TensorFlow's high-level APIs, such as layers, activations, and other components of the TensorFlow library.

Source separation: Spleeter's default model separates audio into 4 stems: vocals, bass, drums, and other. Additional 2-stem and 5-stem configurations generate vocals and accompaniment, or vocals, bass, drums, piano, and other.

The dynamic and adaptable nature of Spleeter encouraged us to explore it further.

Spleeter | Why choose this model?

Spleeter's integration with TensorFlow provides our project with optimised model training, efficient parameter tuning, enhanced computational efficiency, scalability, and flexibility in architecture design.

Spleeter achieves competitive demixing results with relatively lower computational demands compared to Hybrid Demucs and Open-Unmix.

Spleeter is pre-trained on a diverse range of audio sources, including music genres relevant to The Cadenza Challenge.

Spleeter supports parallelised computation and can harness GPUs: it is able to separate the entire MUSDB18 test dataset into 4 stems in less than 2 minutes on a single GeForce RTX 2080 GPU.

Efficiency, speed, and familiarity with a wide variety of musical genres drove us to use Spleeter for this task.

Demixing | Stages
Let us look further into the key steps.

Input: stereo audio tracks loaded at 44100 Hz using AudioAdapter, plus a training dataset of pre-separated VDBO (vocals, drums, bass, other) files.

1. Model Configuration: initialising the 'spleeter:4stems' model configuration.
2. Training: the model is trained to accurately identify and separate similar components in unseen tracks.
3. Audio Loading and Resampling
4. Neural Network Processing

Output: 8 stem files (vocals, drums, bass, and other for each ear).

Demixing - Stages | Training
Deep Dive: Training

Diverse training dataset: provided by The Cadenza Challenge, the training dataset contains songs from various genres, each meticulously separated into vocals, drums, bass, and other elements stored as individual .wav files.

TensorFlow-driven training: TensorFlow coordinates the training operations, exposing the model to batches of pre-separated stems.

Iterative optimisation process: internal parameters are iteratively adjusted to minimise differences between predicted and actual stems; multiple epochs refine the model's parameters, improving its accuracy in identifying and isolating audio components.

Trained model: the model learns intricate patterns and nuances specific to vocals, drums, bass, and other elements, and can then accurately identify and isolate these components in unseen tracks.

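To make the iterative optimisation step concrete, here is a minimal, illustrative TensorFlow training step. It is a sketch only: `model`, the loss, and the learning rate are placeholders, not Spleeter's actual training pipeline or hyperparameters.

```python
# Illustrative TensorFlow training step for stem separation (not Spleeter's
# actual pipeline; `model`, the loss and the learning rate are placeholders).
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

def stem_loss(predicted_stems, reference_stems):
    # Mean absolute difference between predicted and reference (VDBO) stems.
    return tf.reduce_mean(tf.abs(predicted_stems - reference_stems))

@tf.function
def train_step(model, mixture_batch, reference_stems):
    with tf.GradientTape() as tape:
        predicted_stems = model(mixture_batch, training=True)
        loss = stem_loss(predicted_stems, reference_stems)
    # Iteratively adjust internal parameters to minimise the difference
    # between predicted and actual stems; repeated over many batches/epochs.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```
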
Demixing - Stages | Loading & Resampling
Deep Dive: Audio Loading and Resampling

Loading and configuration: audio tracks are loaded into the Spleeter framework through the AudioAdapter module; an instance of AudioAdapter is created by default.

Resampling: the loaded audio track undergoes a resampling operation to ensure a consistent sample rate of 44100 Hz. Resampling maintains uniformity in the input data, aligning it with the format the model expects.

Waveform extraction: the 'load' method is invoked to load the audio from the specified path. The result of the loading operation is a waveform representing the audio data, along with additional information.

Separation process: the 'separate' method of the separator dissects the audio (passed as an 'audio_descriptor') into its constituent stems based on the trained model, producing the separated stems as output.

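The loading, resampling and separation steps above map directly onto Spleeter's public Python API. A minimal sketch is shown below; the file path is a placeholder.

```python
# Minimal sketch of loading, resampling and separating a track with Spleeter.
from spleeter.separator import Separator
from spleeter.audio.adapter import AudioAdapter

separator = Separator('spleeter:4stems')      # vocals, drums, bass, other
audio_adapter = AudioAdapter.default()        # default AudioAdapter instance

# Load the stereo track; Spleeter resamples it to a consistent 44100 Hz.
waveform, _ = audio_adapter.load('mixture.wav', sample_rate=44100)

# Dissect the waveform into its constituent stems using the trained model.
stems = separator.separate(waveform)          # dict: stem name -> waveform
vocals, drums, bass, other = (stems[k] for k in ('vocals', 'drums', 'bass', 'other'))
```
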
Demixing - Stages | Neural Network
Deep Dive: Neural Network Processing

Input preparation: stereo tracks, having undergone loading and resampling to 44100 Hz, serve as the input to Spleeter's trained neural network. Each audio waveform is numerically represented, forming the initial input layer of the network.

Feature extraction: the network's initial layers perform feature extraction, identifying relevant patterns. Raw waveforms are transformed into abstract features, enabling the network to discern distinct audio components effectively.

Layered architecture: the system utilises convolutional layers, activation functions, and pooling layers, capturing hierarchical and spatial dependencies in the audio data.

Demixing operation: the network applies its learned representations to predict each audio component's contribution, separating complex audio signals effectively.

At the end of these processes, we have 8 demixed audio stems: 4 for each ear.

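As an illustration of the "layered architecture" stage, the block below stacks convolution, activation and pooling layers, the layer types described above. It is a toy sketch of the idea only, not Spleeter's actual network (the pre-trained models are U-Nets operating on spectrograms).

```python
# Toy feature-extraction stack: convolution + activation + pooling layers.
# Purely illustrative; not Spleeter's actual architecture.
import tensorflow as tf
from tensorflow.keras import layers

feature_extractor = tf.keras.Sequential([
    layers.Conv2D(16, kernel_size=5, padding='same', activation='relu'),
    layers.MaxPooling2D(pool_size=2),   # downsample, keeping dominant activations
    layers.Conv2D(32, kernel_size=5, padding='same', activation='relu'),
    layers.MaxPooling2D(pool_size=2),   # deeper layers capture broader context
])
```
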
Remixing | Stages
Let us look further into the key steps.

Input: demixed stem files for each audio track.

1. NAL-R Application: utilised audiograms for both left and right ears to tailor the audio to individual listener profiles.
2. Remix Signal Creation: the NAL-R-processed stems are combined into a singular remixed signal, with padding operations to handle differing stem durations.
3. Dynamic Range Compression (DRC): a custom compressor fine-tunes the dynamics of the separated stems.
4. Butterworth Filter: a bandpass filter is applied to the remixed audio.

Output: a singular remixed signal, remixed and filtered for subjective evaluation. A sketch of how these stages might chain together follows below.

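Below is a compact, hypothetical sketch of how these four stages could chain together for one ear. The helper names (`apply_nalr_like`, `compress_rms`, `apply_butter_bandpass_filter`) refer to the illustrative functions sketched under the deep-dive slides that follow; whether compression is applied per stem or on the remix is a design choice, shown here on the remix for brevity.

```python
# Hypothetical remixing chain for one ear; helper functions are the
# illustrative sketches from the deep-dive sections below.
import numpy as np

def remix_for_listener(stems, audiogram_freqs, gains_db, fs=44100):
    # 1. NAL-R-style listener-specific amplification of each demixed stem.
    nalr_stems = [apply_nalr_like(s, audiogram_freqs, gains_db, fs) for s in stems]

    # 2. Remix signal creation: zero-pad stems to a common length and sum them.
    length = max(len(s) for s in nalr_stems)
    remix = np.sum([np.pad(s, (0, length - len(s))) for s in nalr_stems], axis=0)

    # 3. Dynamic range compression of the combined signal.
    remix = compress_rms(remix, fs=fs)

    # 4. Butterworth bandpass filtering (250 Hz to 18,500 Hz).
    return apply_butter_bandpass_filter(remix, fs=fs)
```
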
Remixing - Stages | NAL-R
Deep Dive: NAL-R Application

Listener-specific audiograms: performed audiogram resampling and used individual hearing thresholds to create left- and right-ear Audiogram objects.

NAL-R filters: utilised the NALR class to design adaptive FIR (Finite Impulse Response) filters based on individual audiometric data.

Interpolated gains and frequency response: interpolated gains and designed linear-phase FIR filters to ensure a smooth and customised frequency response.

Adapted gain based on listener characteristics: adjusted gain considering listener-specific bias, critical loss, and individual hearing conditions.

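To illustrate the "interpolated gains, linear-phase FIR" step, the sketch below interpolates per-frequency gains onto a dense grid and designs a linear-phase FIR filter with SciPy. The gain values are placeholders, not a real NAL-R prescription, and this is not the NALR class itself.

```python
# Illustrative linear-phase FIR design from interpolated per-frequency gains
# (placeholder values, not an actual NAL-R prescription).
import numpy as np
from scipy.signal import firwin2, lfilter

def apply_nalr_like(stem, audiogram_freqs, gains_db, fs=44100, ntaps=221):
    # Interpolate the prescribed gains onto a dense grid from 0 Hz to Nyquist.
    grid = np.linspace(0.0, fs / 2.0, 512)
    interp_db = np.interp(grid, audiogram_freqs, gains_db)
    interp_lin = 10.0 ** (interp_db / 20.0)

    # Linear-phase FIR whose magnitude response follows the interpolated gains.
    fir = firwin2(ntaps, grid, interp_lin, fs=fs)
    return lfilter(fir, 1.0, stem)

# Example placeholder prescription for one ear:
freqs = np.array([250, 500, 1000, 2000, 4000, 8000])   # audiogram frequencies (Hz)
gains = np.array([5.0, 8.0, 12.0, 18.0, 22.0, 20.0])   # illustrative gains (dB)
```
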
Remixing - Stages | DRC
Deep Dive: Dynamic Range Compression

Compressor initialisation: the compressor is initialised with attack, release, threshold, attenuation, RMS (Root Mean Square) buffer, and makeup gain parameters.

Dynamics adjustment: user-defined attack and release times control how quickly the compressor reacts to changes in signal dynamics.

Threshold compression: compression is triggered when the signal exceeds a specified threshold, with the amount of gain reduction controlled by the attenuation setting.

Dynamic processing: the signal undergoes dynamic compression based on RMS values, with makeup gain applied for level adjustment.

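A simplified sketch of an RMS-based compressor using the parameters named above (attack, release, threshold, makeup gain). The default values are placeholders and the exact envelope and attenuation logic of our custom compressor may differ; the slide's "attenuation" control is represented here by a conventional compression ratio.

```python
# Simplified RMS-based dynamic range compressor (placeholder parameter values).
import numpy as np

def compress_rms(signal, fs=44100, threshold_db=-20.0, ratio=4.0,
                 attack_ms=5.0, release_ms=50.0, makeup_db=3.0):
    signal = np.asarray(signal, dtype=float)
    # Smoothing coefficients derived from the attack and release times.
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))

    out = np.empty_like(signal)
    rms_sq = 0.0
    for n, x in enumerate(signal):
        # Running RMS estimate: fast attack on rising level, slow release.
        coeff = attack if x * x > rms_sq else release
        rms_sq = coeff * rms_sq + (1.0 - coeff) * x * x
        level_db = 10.0 * np.log10(rms_sq + 1e-12)

        # Attenuate only the portion of the level above the threshold,
        # then apply makeup gain for level adjustment.
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
        out[n] = x * 10.0 ** (gain_db / 20.0)
    return out
```
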
Remixing - Stages | Butterworth Filter
Deep Dive: Butterworth Bandpass Filter

Filter design: we used a bandpass Butterworth filter for audio processing, focusing on the 250 Hz to 18,500 Hz range for clear audio. Filter settings such as lowcut, highcut, order, and sample rate are defined as parameters.

Filter implementation: the filter is applied as second-order sections (SOS) for numerically stable filtering.

Function application: the 'apply_butter_bandpass_filter' function filters the remixed signal, taking the remixed signal as input and returning the filtered result.

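A minimal SciPy sketch of this design and application; the function name mirrors the slide's 'apply_butter_bandpass_filter', but the filter order shown is a placeholder.

```python
# Butterworth bandpass (250 Hz to 18,500 Hz) applied as second-order sections.
from scipy.signal import butter, sosfiltfilt

def apply_butter_bandpass_filter(remixed_signal, lowcut=250.0, highcut=18500.0,
                                 fs=44100, order=4):
    # Design the filter as SOS for numerical stability, then filter the remix.
    sos = butter(order, [lowcut, highcut], btype='bandpass', output='sos', fs=fs)
    return sosfiltfilt(sos, remixed_signal, axis=0)
```
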
At the end of these processes, we have 8 enhanced audio stems and the finely tuned remixed signal
Objective Evaluation | Metrics and Scores
HAAQI

We employed the Hearing Aid Audio Quality Index (HAAQI) as an objective measure. This metric, introduced by J. M. Kates and K. H. Arehart, evaluates audio quality in the context of hearing aids. HAAQI considers factors crucial to music sound quality in hearing-aid use, including clarity, naturalness, and richness/fullness.

SNR and SDR

Signal-to-Noise Ratio (SNR) quantifies the ratio of signal power to background-noise power; it assesses the clarity and fidelity of the primary audio signal. Signal-to-Distortion Ratio (SDR) measures the ratio of signal power to the distortion introduced; it evaluates the presence of unwanted artifacts or alterations. For both metrics, higher positive values imply better audio quality.

Objective evaluations were carried out by assessing the processed audio against these metrics.

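For reference, here is a minimal sketch of how SNR and SDR can be computed in decibels from a reference signal and a processed estimate. It uses the generic textbook form, treating the error term as noise (for SNR) or total distortion (for SDR); it is not necessarily the exact evaluation script used in the challenge.

```python
# Generic SNR/SDR in dB, given a clean reference and a processed estimate.
import numpy as np

def snr_db(reference, processed):
    noise = processed - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(noise ** 2) + 1e-12))

def sdr_db(reference, estimate):
    distortion = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(distortion ** 2) + 1e-12))
```
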
Objective Evaluation - HAAQI | Results
Comparing Average HAAQI Scores

A granular analysis of HAAQI scores for individual listeners revealed that our model's scores varied significantly from the baseline. For example:
- Listener L6036 showed a decrease from the baseline score of 0.509 to 0.136 with our model.
- Listener L6049's score decreased less dramatically, from 0.888 to 0.451, suggesting performance closer to the baseline.

What can we explore in the future? The impact of diverse audiogram profiles, variation in music tracks, and unfamiliar genres.

Objective Evaluation - SNR & SDR | Results
A snapshot of results for SNR and SDR:
- Unexpected negativity: we encountered negative values, deviating from the conventional expectation of positive scores.
- Amplification of noise: background noise or artifacts may have been amplified during demixing, challenging the standard signal-to-noise/distortion paradigms.

The lower scores in the objective evaluations prodded us to understand our results better through subjective scores.

What can we explore in the future? Metrics better suited to music audio, and more robust methods of removing external artifacts.

Subjective Evaluation | Results
Comparing Listening Test Scores

Despite the lower objective HAAQI scores, listener feedback on basic audio quality was revealing. On average, listeners preferred our model's audio quality over the baseline 44% of the time. This suggests a subjective preference for the audio processed by our model, despite what the HAAQI scores might imply.

- Al James' "Schoolboy Fascination" saw an increase from a baseline score of 39.1 to 47.6 with our model.
- For "Girls Under Glass - We Feel Alright", the score improved from 26.2 to 32.2, indicating an enhancement in listener perception.

This motivates us to learn about the gaps in alignment between subjective and objective audio analysis.

Cadenza | Conclusion and Future Discussion
Conclusion and Summary

Spleeter approach validated: we successfully applied the Spleeter model together with customised remixing techniques such as Butterworth bandpass filtering and Dynamic Range Compression.

User preferences over metrics: despite lower HAAQI scores, user feedback demonstrated a 44% preference for our model, emphasising the nuanced nature of audio quality assessment.

Complexity studies: our detailed exploration revealed the complexity of audio processing for hearing impairments, as evidenced by varied HAAQI scores, negative SNR/SDR values, and song-specific outcomes.

We conclude with a commitment to refining methodologies, embracing new metrics, and contributing to the ongoing evolution of auditory accessibility in machine learning and audio processing.

Future Scope

The insights gained from this challenge pave the way for further research into the intersection of machine learning and auditory perception. Future work will focus on refining our model to better address the gaps identified through the HAAQI scores while continuing to prioritise the subjective listening experience.

Thank You! Q&A
