RT-Bench: Extensible Real-Time Benchmark Framework

 
RT-Bench
An Extensible Benchmark Framework for the Analysis and Management of Real-Time Applications

Mattia Nicolella, Shahin Roozkhosh, Denis Hoornaert, Andrea Bastoni, Renato Mancuso
Outline
 
2
Analyzing Real-Time Systems

An RT-System's behavior can be analyzed in two ways:
- Static Analysis
- Benchmark-driven Analysis, using synthetic, pragmatic, or full-scale benchmarks

Metrics of interest: execution time, deadline status, WCET, WSS, system utilization, task schedulability.
3
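For reference, the last two metrics have standard textbook definitions (not stated on the slides): for n periodic tasks with worst-case execution times C_i and periods T_i, the system utilization is U = \sum_{i=1}^{n} C_i / T_i. Under rate-monotonic scheduling, U \le n(2^{1/n} - 1) is a sufficient schedulability test (Liu and Layland), while under single-core EDF with implicit deadlines the task set is schedulable if and only if U \le 1.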
Popular Benchmark Suites

Existing suites fall into three categories: synthetic, pragmatic, and full-scale. Surveyed suites: RT-Tests [1], RTEval [2], SPLASH [3,11], PARSEC [4], Rodinia [5], EEMBC [6], TACLeBench [7], Mälardalen [8], MiBench [9], PapaBench [10], IsolBench [12], SD-VBS [13].
4
Benchmark Suites: RT Features

The same suites are compared by the real-time features they natively provide: execution-time measurement, periodic execution, profiled WCET, profiled WSS, and utilization. (The per-suite feature matrix appears as a table on the original slide.)
5
RT-Bench: Motivations and Principles

Missing features: many suites lack features needed for RT-systems.

Adapting each existing suite separately has drawbacks: time could be better spent, bugs can be introduced, the work may be redone multiple times, and the same features end up with different implementations.

RT-Bench instead implements RT features in a generic way: multiple benchmarks use the same implementation, and all benchmarks have the same features.

Design principles:
- Real-Time Abstraction: payloads are executed periodically; statistics are collected for each period.
- Common Interface: all benchmarks report the same statistics; the original output can be preserved.
- Extensibility: adding benchmarks is easy; documentation is available.
- Compatibility: Linux & POSIX.4 compliant; existing benchmarks are improved without altering their logic; only the benchmark payload is considered for stats.
6
RT-Bench: Structure and Portability

- Application layer:
  - RT-Bench generator: adds the RT features.
  - Utils (optional): overhead test, profiled WSS, WCET, …, plot generation.
- Operating system: POSIX.4 compliant; Linux scheduler, syscalls, glibc.
- Hardware layer: x86, ARM64.
7
RT-Bench generator

Each benchmark from a collection is split into initialization, execution, and tear-down, and is then compiled together with the RT-Bench generator into its own standalone binary.
8
Under the hood: RT-Bench generator

Main flow: after initialization, the generator waits on the period timer for each new period, captures a timestamp, runs the benchmark's execution phase, and measures and logs the statistics for that period. Once the requested number of iterations (n) is reached, the tear-down phase runs and the generator exits; errors in any phase also lead to termination.

Profiling/monitoring thread: if enabled, it samples and waits in a loop for as long as the benchmark is running.
9
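To make the flow above concrete, here is a minimal sketch of a periodic execution loop of this kind, using only standard POSIX timer calls. This is not RT-Bench's actual source: the payload, the logging format, and the implicit-deadline assumption are illustrative.

/* Minimal sketch of a periodic loop like the one in the diagram above.
 * NOT RT-Bench's actual code: the payload, logging and implicit-deadline
 * assumption are illustrative. Build: cc loop.c (older glibc may need -lrt). */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L

static void dummy_payload(void)                /* stand-in for the benchmark */
{
    volatile double x = 0;
    for (long i = 0; i < 1000000; i++)
        x += i * 0.5;
}

static void add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= NSEC_PER_SEC) {
        t->tv_nsec -= NSEC_PER_SEC;
        t->tv_sec++;
    }
}

static double diff_s(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    const long period_ns = 100 * 1000 * 1000;  /* 100 ms period */
    const int iterations = 5;                  /* "n" in the diagram */
    struct timespec release, start, end;

    /* Initialization phase would run here. */
    clock_gettime(CLOCK_MONOTONIC, &release);

    for (int i = 0; i < iterations; i++) {
        /* "New period?": sleep until the next absolute release time. */
        add_ns(&release, period_ns);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &release, NULL);

        /* "Captures timestamp" -> "Execution" -> "Measures & log". */
        clock_gettime(CLOCK_MONOTONIC, &start);
        dummy_payload();
        clock_gettime(CLOCK_MONOTONIC, &end);

        double exec = diff_s(start, end);
        printf("period %d: exec=%.6fs deadline %s\n", i, exec,
               exec <= period_ns / 1e9 ? "met" : "missed");
    }

    /* Tear-down phase would run here. */
    return 0;
}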
Extending RT-Bench

To make a benchmark from an existing collection RT-Bench compatible, split its main() logic into three functions: initialization(), execution(), and tear-down. Porting one SD-VBS benchmark took roughly 300 SLOC.
10
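As an illustration of that split, here is a hedged sketch in C. The slide only names the three phases; the exact hook names and signatures RT-Bench expects may differ, so treat the ones below as assumptions.

/* Sketch of splitting a monolithic benchmark main() into the three phases.
 * Hook names/signatures are assumptions, not RT-Bench's exact interface;
 * the generator (not shown) would supply main() and the periodic loop. */
#include <stdlib.h>

static int   *data;
static size_t n = 1u << 20;
static long   checksum;

int benchmark_initialization(int argc, char **argv)   /* one-time setup */
{
    (void)argc; (void)argv;
    data = malloc(n * sizeof *data);
    if (data == NULL)
        return -1;
    for (size_t i = 0; i < n; i++)
        data[i] = (int)i;
    return 0;
}

void benchmark_execution(void)    /* payload: the only part that is timed */
{
    checksum = 0;
    for (size_t i = 0; i < n; i++)
        checksum += data[i];
}

void benchmark_teardown(void)     /* cleanup */
{
    free(data);
    data = NULL;
}

This mirrors the "only benchmark payload considered for stats" principle: the generator times the execution phase alone, while initialization and tear-down stay outside the measured window.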
Bare-Bones Use Case

RT-Bench produces an executable that runs periodically on the system to analyze and reports deadline status, response time, and utilization, either on the terminal or in a file. Per-run controls include core pinning, scheduling policy, priority, and a constraint on dynamic memory allocation.
11
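The per-run controls listed above map onto standard Linux/POSIX primitives. The sketch below shows those calls directly; it is not RT-Bench's code or command-line interface, just the underlying system calls such options typically wrap.

/* Sketch of the Linux/POSIX calls behind "core pinning", "set sched policy",
 * "set priority", and a memory constraint. Not RT-Bench's actual code/CLI. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int apply_rt_setup(int core, int prio)
{
    /* Core pinning: restrict this process to a single CPU. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }

    /* Scheduling policy + priority: SCHED_FIFO at the given priority. */
    struct sched_param sp = { .sched_priority = prio };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");      /* usually needs root / CAP_SYS_NICE */
        return -1;
    }

    /* Memory: lock current and future pages to avoid page faults at runtime;
     * dynamic allocation after this point is what a constraint would limit. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}

int main(void)
{
    return apply_rt_setup(1, 80) == 0 ? 0 : 1;   /* pin to core 1, priority 80 */
}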
 
Bare-Bones Use Case: Demo
12
Advanced Use Case

On the system to analyze, RT-Bench can additionally monitor cache events and report the minimum WSS, the WCET, and the schedulability ratio.
13
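On Linux, one common way to monitor cache events is the perf_event_open interface; the sketch below counts cache misses around a code region. It is illustrative only and not necessarily how RT-Bench collects these events.

/* Sketch: counting hardware cache misses around a code region with Linux
 * perf_event_open. Illustrative only -- not necessarily RT-Bench's method.
 * May require lowering /proc/sys/kernel/perf_event_paranoid. */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
    return (int)syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CACHE_MISSES;
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... benchmark payload would run here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t misses = 0;
    if (read(fd, &misses, sizeof misses) != (ssize_t)sizeof misses)
        perror("read");
    printf("cache misses: %llu\n", (unsigned long long)misses);
    close(fd);
    return 0;
}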
Don't be a stranger!

RT-Bench is available at:
https://gitlab.com/bastoni/rt-bench

Try it out, make it better, give us feedback, add features, extend it!
14
 
Thanks!
 
My contacts:
mnico@bu.edu
https://cs-people.bu.edu/mnico
 
References

1. RT-Tests. https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/rt-tests
2. RTEval. https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/rteval
3. Splash2x benchmark suite. https://parsec.cs.princeton.edu/parsec3-doc.htm#splash2x
4. Bienia et al. 2008. The PARSEC benchmark suite: Characterization and architectural implications. In Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques. 72–81.
5. Che et al. 2009. Rodinia: A benchmark suite for heterogeneous computing. In 2009 IEEE International Symposium on Workload Characterization (IISWC). IEEE, 44–54.
6. Embedded Microprocessor Benchmark Consortium. EEMBC Benchmarks. https://www.eembc.org/products
7. Falk et al. 2016. TACLeBench: A Benchmark Collection to Support Worst-Case Execution Time Research. In 16th International Workshop on Worst-Case Execution Time Analysis (WCET 2016) (OpenAccess Series in Informatics (OASIcs), Vol. 55), Martin Schoeberl (Ed.). Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Dagstuhl, Germany, 2:1–2:10.
8. Gustafsson et al. 2010. The Mälardalen WCET benchmarks: Past, present and future. In 10th International Workshop on Worst-Case Execution Time Analysis (WCET 2010). Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
9. Guthaus et al. 2001. MiBench: A free, commercially representative embedded benchmark suite. In Proceedings of the Fourth Annual IEEE International Workshop on Workload Characterization. WWC-4 (Cat. No. 01EX538). IEEE, 3–14.
10. Nemer et al. 2006. PapaBench: A free real-time benchmark. In 6th International Workshop on Worst-Case Execution Time Analysis (WCET'06). Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
11. Sakalis et al. 2016. Splash-3: A properly synchronized benchmark suite for contemporary research. In 2016 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE, 101–111.
12. Valsan et al. 2016. Taming Non-Blocking Caches to Improve Isolation in Multicore Real-Time Systems. In 2016 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS). 1–12. https://doi.org/10.1109/RTAS.2016.7461361
13. Venkata et al. 2009. SD-VBS: The San Diego Vision Benchmark Suite. In 2009 IEEE International Symposium on Workload Characterization (IISWC). 55–64. https://doi.org/10.1109/IISWC.2009.5306794
16