Understanding Power Consumption in Memory-Intensive Databases


This presentation examines the power challenges faced by main-memory databases (MMDBs) and explores strategies for reducing DRAM power draw. Topics include the impact of hardware features on power consumption, an experimental setup for analyzing the power breakdown under MMDB workloads, and the effectiveness of DRAM frequency scaling and power-down modes. The findings emphasize that managing DRAM power is central to mitigating energy costs while preserving MMDB performance.





Presentation Transcript


  1. Scaling the Memory Power Wall with DRAM-Aware Data Management. Raja Appuswamy, Matthaios Olma, Anastasia Ailamaki. École Polytechnique Fédérale de Lausanne (EPFL)

  2. Power-Hungry MMDBs?
     Year              End-use elec. bills ($B)   Power plants (500 MW)   CO2 (US, million MT)   Energy (B kWh)
     2013              9.0                        37                      97                      91
     2020              13.7                       51                      147                     139
     2013-2020 incr.   4.7                        17                      50                      47
     Energy efficiency - the new holy grail. What is the energy behavior of MMDBs?

  3. Energy in MMDB. [Chart: server power (kW, 0-1) vs. total DRAM capacity, with series for DRAM idle (W), DRAM loaded (W), CPU loaded (W), and 4P loaded (W); annotations: "2.5x loaded", "1.25x idle", "16x capacity / 3x power", "4x capacity / 10x power".] *HP Power Advisor, six-core Intel Xeon E7-4809 v2, 4 sockets, 24 cores, 96 DIMM slots. DRAM will emerge as a major contributor.

  4. Server Memory Trends. Processor: Intel Xeon E5, 2 sockets, 4 channels/socket.
     Memory Type   Max Capacity (GB)   Latency (ns)   Bandwidth (GB/s)   Loaded W/GB   Idle W/GB   Tradeoff
     UDIMM         128                 153.7          72                 0.2           0.02        Capacity limitation
     LRDIMM        768                 235            40.4               0.15          0.07        Capacity-performance
     HCDIMM        768                 161.9          63.9               0.74          0.37        Capacity-power
     Big data => big memory => big power bills. MMDBs should focus on reducing DRAM power.

  5. Reducing DRAM Power Draw. Hardware features: frequency scaling; power-down modes. Limitations: enabled/disabled statically at boot time; power-down state transitions are controlled by the memory controller (MC). Questions: What is the effectiveness under MMDB workloads? What is the impact on MMDB performance?

  6. Experimental Setup. Hardware: 2 GHz Intel Xeon E5-2640 v2 CPU (2 sockets, 8 cores/socket), 256 GB of memory (16x 16 GB RDIMMs). Micro-benchmarks: concurrent scans over 128 MB of int64s; parallel aggregation (a = sum of b[i] + c[i]) over 8 GB double arrays; 1-8 threads affinitized to a single socket to avoid NUMA effects. Macro-benchmarks (in the paper): TPC-C, TPC-H.

  7. Power Breakdown (Scans). [Chart: relative power consumption (%) of CPU vs. DRAM for 1-8 threads.] DRAM's contribution is high even at 100% CPU utilization: even with 256 GB of RDIMMs, DRAM contributes 30% of total power.

  8. Impact of Frequency Scaling (8 threads). [Chart: performance/Watt of scans and aggregation at memory frequencies of 800, 1067, 1333, and 1600 MT/s, normalized w.r.t. 1600 MT/s.] The most energy-efficient frequency is not always the fastest.

  9. Memory Bandwidth Utilization (aggregation). [Chart: memory bandwidth (MB/s, 0-45,000) for 1-8 threads at 800, 1067, 1333, and 1600 MT/s.] Bandwidth-intensive workloads suffer.

  10. Impact of Power-Down Modes (aggregation). [Chart: power and throughput at 1 and 8 threads, normalized w.r.t. disabled power-down modes; annotations: "13% reduction in power" and "no impact on performance or power".]

  11. Exploiting DRAM Idleness. [Chart: power-down residency (%) for power-down and active modes, scan and aggregation, at 1 and 8 threads.] The MC is good at predicting idleness, yet despite >40% power-down residency the power savings are negligible: the memory controller is conservative.

  12. Can We Do Better? DRAM-aware memory layout: separate hot and cold data to enable longer idle times. DB-driven gear shifting: pair data with power modes. Multi-tier memory and storage tiering: rethink anti-caching and the 5-minute rule for two-level DRAM. Tiering-aware query optimization: make the optimizer aware of access latencies.

  13. Conclusion. The perfect storm: MMDBs need more DRAM to meet application demands, while DRAM technology poses power-performance tradeoffs. The promise: frequency scaling and low-power modes can limit power draw. The hurdles: DRAM DVFS and state transitions need to be software-driven, and MMDBs need to be redesigned to exploit hardware features. "It's the memory, stupid!" - Richard L. Sites
