Storage Benchmark Proposal for NFVI Performance Measurement


The STORPERF project proposal, led by Edgar St. Pierre of EMC, aims to provide tools to measure block and object storage performance in an NFVI environment. The project defines test cases, metrics, and test processes, and identifies open-source tools and integration points. The proposal covers use cases, project deliverables, block and object performance test cases, and the parameters needed to characterize and troubleshoot storage performance in NFVIs, including storage capacity, performance-test parameters, and workload analysis.


Uploaded on Sep 13, 2024



Presentation Transcript


  1. Storage Benchmark Proposal Edgar StPierre, EMC

  2. Proposal
     Project Name: STORPERF
     Repo Name: STORPERF
     Category: Requirements
     Project Lead: Edgar StPierre, EMC
     Project Goal: Provide tools to measure block and object storage performance in an NFVI. Ideally, this is complemented by an effort to characterize typical VNF storage performance requirements.

  3. Use Cases (Scope)
     1. Characterize expected storage performance behavior of an NFVI for any type of block or object storage deployment in an OPNFV lab
     2. Troubleshoot actual storage performance in a production NFVI (obviously, to be used with caution)
     3. Workload analysis during NFVI staging before deployment (integrate with the Bottlenecks project)

  4. Project Deliverables
     Brahmaputra: definition of performance test cases
     1. Block: multiple block sizes with fixed queue depths and target data sizes; measure both read and write performance
     2. Object: both small and very large target data sizes, to include video-streaming emulation
     Definition of basic metrics to measure performance: max IOPS under various loads, average I/O latency, more?
     Definition of test process:
     - Including relative applicability of test processes to different VNF workloads
     - Including robustness testing for impaired storage environments
     On track to deliver benchmark test tools in the C release:
     - Identify open-source tool(s) such as FIO, Iometer, VDBench, The Grinder, Locust, and/or JMeter
     - Identify integration points with Qtip, Yardstick, and/or the Jenkins tool chain
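The two baseline metrics named above (max IOPS under load, average I/O latency) reduce to simple arithmetic over per-I/O completion records. A minimal sketch, assuming raw latency samples are available from whichever tool is chosen; the function name and record shape are illustrative, not part of the proposal:

```python
# Illustrative sketch (not from the slides): deriving the two baseline
# metrics -- IOPS and average I/O latency -- from raw completion records.

def compute_metrics(latencies_s, duration_s):
    """latencies_s: per-I/O completion latencies in seconds;
    duration_s: wall-clock length of the measurement interval."""
    iops = len(latencies_s) / duration_s
    avg_latency_ms = 1000.0 * sum(latencies_s) / len(latencies_s)
    return iops, avg_latency_ms

# Example: 50,000 I/Os completed in a 10-second interval,
# each taking 2 ms -> 5,000 IOPS at 2.0 ms average latency.
iops, avg_ms = compute_metrics([0.002] * 50_000, 10.0)
```

A real tool would report percentile latencies as well, which is one way to answer the "more?" question on the slide.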

  5. Block Performance Test Cases
     Storage capacity: minimum specified, maximum TBD
     Preconditioning of storage: performance degrades until it reaches steady state [1]; period TBD, but estimated 2-6 hours
     Test queue depths of 1, 16, and 128
     Test block sizes of 4KB, 8KB, 64KB, and 1MB
     Test 5 workloads: the "4 corners" (random/sequential x read/write) plus 1 mixed workload
     Metrics:
     - IOPS, reported up to a maximum latency (or when it hits the wall?) for each workload
     - Average latency, reported for each workload
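The parameter sweep above can be enumerated programmatically. A hypothetical sketch, using fio's parameter names (`iodepth`, `bs`, `rw`) since fio is one of the candidate tools the proposal identifies; the helper itself is not from the slides:

```python
from itertools import product

# Hypothetical enumeration of the block test matrix described above.
# Workload names follow fio's convention: the "4 corners" are random
# read/write and sequential read/write, plus one mixed (randrw) case.
QUEUE_DEPTHS = [1, 16, 128]
BLOCK_SIZES = ["4k", "8k", "64k", "1m"]
WORKLOADS = ["randread", "randwrite", "read", "write", "randrw"]

def block_test_matrix():
    return [
        {"iodepth": qd, "bs": bs, "rw": rw}
        for qd, bs, rw in product(QUEUE_DEPTHS, BLOCK_SIZES, WORKLOADS)
    ]

# 3 queue depths x 4 block sizes x 5 workloads = 60 test cases
```

Enumerating the matrix up front makes it easy to estimate total run time once the preconditioning period and per-case duration are settled.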

  6. Object Performance Test Cases
     Swift API (?)
     Measure max concurrency with smaller data sizes (GET/PUT)
     Measure max TPS using variable object-size payloads at max concurrency (1KB, 10KB, 100KB, 1MB, 10MB, 100MB, 200MB)
     5 different GET/PUT workloads for each: 100/0, 90/10, 50/50, 10/90, 0/100
     Separate metadata concurrency test using List and HEAD
     Metrics:
     - Transactions per second
     - Error rate
     - Per-test average latency
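The GET/PUT mixes and the two headline object metrics can be sketched the same way. The request-budget split and the counts below are illustrative assumptions, not values from the slides:

```python
# Illustrative sketch: split a fixed request budget across one of the
# five GET/PUT mixes above, then derive transactions/second and error
# rate from counts observed during a run.

GET_PUT_MIXES = [(100, 0), (90, 10), (50, 50), (10, 90), (0, 100)]

def split_requests(total, get_pct):
    """Split `total` requests into (gets, puts) for a given GET percentage."""
    gets = total * get_pct // 100
    return gets, total - gets

def tps_and_error_rate(ok, errors, duration_s):
    """Transactions/second and error rate over a measurement interval."""
    total = ok + errors
    return total / duration_s, errors / total

get_pct, _put_pct = GET_PUT_MIXES[1]           # the 90/10 mix
gets, puts = split_requests(1000, get_pct)     # -> (900, 100)
tps, err = tps_and_error_rate(990, 10, 20.0)   # -> (50.0, 0.01)
```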

  7. Future Project Deliverables
     Future test extensions:
     - Expand captured performance metrics (e.g., I/O latency variation for object streaming)
     - Integration with Qtip for automated reporting
     - Integration with Yardstick for automated execution
     Separate deliverable capturing corresponding typical VNF storage performance requirements, using the same metrics, for VNFs that require block or object storage I/O:
     - Captured through collaborative polling of VNF producers, preferably using empirical data
     - Used to drive pass/fail criteria for measurements

  8. Application of Test Tool
     Executes as VM(s) in the test environment
     Manual deployment, or automated in the tool chain
     Possible target SUTs:
     - Direct-attached block storage (local LUNs)
     - External or distributed block storage (iSCSI)
     - External or distributed object storage (HTTP)
     [Diagram: a Storage Benchmark VM on each of Hosts 1-3, targeting DAS via a local LUN, external block storage via iSCSI, and an external object store via HTTP]

  9. Reporting
     Benchmark tool will produce reports of SUT performance for the defined test cases:
     - Accessed directly
     - Accessed via Qtip
     - Accessed via Yardstick
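For reports to be consumable both directly and via Qtip or Yardstick, a machine-readable format is the natural choice. The slides do not define a report schema; the shape below is purely a hypothetical illustration:

```python
import json

# Hypothetical report shape (not defined by the proposal): one JSON
# document per test case, pairing the test parameters with its metrics.
def render_report(test_case, iops, avg_latency_ms):
    return json.dumps({
        "test_case": test_case,
        "metrics": {"iops": iops, "avg_latency_ms": avg_latency_ms},
    }, indent=2)

report = render_report({"iodepth": 16, "bs": "8k", "rw": "randread"}, 42000, 1.7)
```

A structured format like this lets downstream tools diff runs or apply pass/fail criteria without re-parsing free-form text.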

  10. Automation / Testability
      Full automation, via integration with the Yardstick and Qtip tool chain, for testing NFVI environments

  11. Contributors
      Edgar StPierre, EMC
      Chanchal Chatterjee, EMC
      Iben Rodriguez, Spirent
      Jose Lausuch, Ericsson
      Ferenc Farkas, Ericsson
      Vishal Murgai, Cavium Networks
      (add to etherpad)

  12. References [1] For example, see http://snia.org/sites/default/files/SSS_PTS_Enterprise_v1.1.pdf and http://www.storagereview.com/fio_flexible_i_o_tester_synthetic_benchmark
