Communication Steps for Parallel Query Processing: Insights from MPC Model


This presentation studies parallel query processing on big data, surveying computation models such as MapReduce, MUD, and MRC before developing the MPC model in detail, including the tradeoff between space exponent and number of computation rounds. It establishes lower bounds on the space exponent for one-round Conjunctive Query algorithms, and space exponent/round tradeoffs for tree-like queries under a weaker, tuple-based communication model.





Presentation Transcript


  1. COMMUNICATION STEPS FOR PARALLEL QUERY PROCESSING. Paraschos Koutris, Paul Beame, Dan Suciu (University of Washington), PODS 2013.

  2. MOTIVATION. Understand the complexity of parallel query processing on big data. Focus on shared-nothing architectures; MapReduce is one such example. Dominating parameters of computation: communication cost, and number of communication rounds.

  3. COMPUTATION MODELS. The MapReduce model [Afrati et al., 2012]: tradeoff between reducer size (the input size of a reducer) and replication rate (to how many reducers each tuple is sent). The MUD (Massive, Unordered, Distributed) model [Feldman et al., 2010]. The MRC model [Karloff et al., 2010]: MapReduce computation plus load balancing.

  4. THE MPC MODEL. N: total input size (in bits); p: number of servers. Servers have unlimited computational power. Computation proceeds in synchronous rounds: local computation, then global communication. [Figure: the N-bit input is spread across servers 1 through p, which then proceed through rounds 1, 2, ….]

  5. MPC PARAMETERS. Each server receives in total a bounded number of bits: O(N/p^(1−ε)), where 0 ≤ ε < 1. Complexity parameters: the number of computation rounds r, and the space exponent ε (which governs data replication). What are the space exponent/round tradeoffs for query processing in the MPC model?

  6. OUR RESULTS. ONE ROUND: lower bounds on the space exponent for any (randomized) algorithm that computes a Conjunctive Query; the lower bound holds for a class of inputs (matching databases), for which we show tight upper bounds. MULTIPLE ROUNDS: almost tight space exponent/round tradeoffs for tree-like Conjunctive Queries under a weaker communication model.

  7. OUTLINE. 1. Warm-up: The Triangle Query. 2. One Communication Round. 3. Multiple Communication Rounds.

  8. CONJUNCTIVE QUERIES. We mainly study full Conjunctive Queries without self-joins: Q(x, y, z, w, v) = R(x,y,z), S(x,w,v), T(v,z). The hypergraph of the query Q: variables as vertices, atoms as hyperedges. [Figure: the hypergraph of Q, with vertices x, y, z, w, v and hyperedges R, S, T.]

  9. THE TRIANGLE QUERY (1). Find all triangles: Q(x,y,z) = R(x,y), S(y,z), T(z,x). 2-round algorithm. ROUND 1 [R hash-join S]: send R(a,b) to server h(b) and S(b,c) to server h(b); join locally: U(a,b,c) = {R(a,b), S(b,c)}. ROUND 2 [U hash-join T]: send U(a,b,c) to server h(c) and T(c,a) to server h(c); join locally: Q(a,b,c) = {U(a,b,c), T(c,a)}. Space exponent ε = 0 (no replication).
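The two-round plan above can be simulated in a few lines. This is a hypothetical single-process sketch, not code from the talk: the function name `triangles_two_rounds` is mine, Python's built-in `hash` stands in for the hash function h, and dictionaries keyed by server id play the role of the p servers.

```python
from collections import defaultdict

def triangles_two_rounds(R, S, T, p=4):
    """Simulate the 2-round hash-join triangle algorithm on p servers."""
    h = lambda v: hash(v) % p

    # Round 1: route R(a,b) and S(b,c) to server h(b); join locally into U.
    srv_R, srv_S = defaultdict(list), defaultdict(list)
    for a, b in R: srv_R[h(b)].append((a, b))
    for b, c in S: srv_S[h(b)].append((b, c))
    U = [(a, b, c) for s in range(p)
         for a, b in srv_R[s] for b2, c in srv_S[s] if b == b2]

    # Round 2: route U(a,b,c) and T(c,a) to server h(c); join locally.
    srv_U, srv_T = defaultdict(list), defaultdict(list)
    for a, b, c in U: srv_U[h(c)].append((a, b, c))
    for c, a in T: srv_T[h(c)].append((c, a))
    return sorted({(a, b, c) for s in range(p)
                   for a, b, c in srv_U[s] for c2, a2 in srv_T[s]
                   if c == c2 and a == a2})

R, S, T = [(1, 2), (2, 3)], [(2, 3), (3, 1)], [(3, 1), (1, 2)]
print(triangles_two_rounds(R, S, T))  # [(1, 2, 3), (2, 3, 1)]
```

No tuple is ever replicated: each R-, S-, U-, and T-tuple is sent to exactly one server per round, which is what space exponent ε = 0 means.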

  10. THE TRIANGLE QUERY (2). 1-round algorithm [Ganguly 92, Afrati 10, Suri 11]: the p servers form a cube [p^(1/3)] × [p^(1/3)] × [p^(1/3)]. Send each tuple to the servers: R(a,b) → (h1(a), h2(b), −), S(b,c) → (−, h2(b), h3(c)), T(c,a) → (h1(a), −, h3(c)); each tuple is replicated p^(1/3) times, and any triangle (a,b,c) meets at server (h1(a), h2(b), h3(c)). Space exponent ε = 1/3.
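The cube routing above can be sketched directly: a tuple fixes two coordinates and is broadcast along the missing one. A hypothetical sketch (function name and the use of `hash` are mine); `side` is p^(1/3):

```python
import itertools

def hypercube_destinations(rel, tup, side):
    """Destinations of one tuple on a side x side x side cube of p = side**3
    servers. Dimensions: 0 <- h1(x), 1 <- h2(y), 2 <- h3(z)."""
    h = lambda v: hash(v) % side
    if rel == 'R':   a, b = tup; coords = (h(a), h(b), None)   # R(x,y)
    elif rel == 'S': b, c = tup; coords = (None, h(b), h(c))   # S(y,z)
    else:            c, a = tup; coords = (h(a), None, h(c))   # T(z,x)
    dims = [range(side) if x is None else [x] for x in coords]
    return list(itertools.product(*dims))

# Each tuple reaches exactly `side` = p**(1/3) servers (replication p^(1/3)):
print(len(hypercube_destinations('R', (1, 2), side=2)))  # 2
```

As a sanity check, for any triangle (a,b,c) the destination sets of R(a,b), S(b,c), and T(c,a) intersect in the single server (h1(a), h2(b), h3(c)), so the join can be computed locally in one round.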

  11. LOWER BOUND FOR TRIANGLES (1). Theorem: no (randomized) algorithm can compute triangles in one round with space exponent ε < 1/3. Say that R, S, T are random permutations over [n]^2; then the expected number of triangles is 1. Lemma: for any deterministic algorithm with ε = 0, the p servers report in expectation O(1/p^(1/2)) tuples. Each relation contains N = Θ(n log n) bits of information, and any server knows a 1/p fraction of the input: N/p bits.

  12. LOWER BOUND FOR TRIANGLES (2). Let a_xy = Pr[server knows tuple R(x,y)]; then a_xy ≤ 1/n and Σ_{x,y} a_xy = O(n/p). Similarly for S(y,z), T(z,x): b_yz, c_zx. Friedgut's inequality: Σ_{x,y,z} a_xy b_yz c_zx ≤ (Σ_{x,y} a_xy²)^(1/2) (Σ_{y,z} b_yz²)^(1/2) (Σ_{z,x} c_zx²)^(1/2). Hence each server knows O(1/p^(3/2)) triangles in expectation; summing over all p servers gives O(1/p^(1/2)) known output tuples.
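Friedgut's inequality, as used above, can be sanity-checked numerically on random nonnegative matrices. A stdlib-only sketch (the function name is mine; this is an empirical check, not a proof):

```python
import random

def check_friedgut(n, trials=50, seed=0):
    """Check sum_{x,y,z} a[x][y]*b[y][z]*c[z][x]
       <= (sum a^2)^(1/2) * (sum b^2)^(1/2) * (sum c^2)^(1/2)
       on random nonnegative n x n matrices."""
    rng = random.Random(seed)
    norm = lambda m: sum(v * v for row in m for v in row) ** 0.5
    for _ in range(trials):
        a, b, c = ([[rng.random() for _ in range(n)] for _ in range(n)]
                   for _ in range(3))
        lhs = sum(a[x][y] * b[y][z] * c[z][x]
                  for x in range(n) for y in range(n) for z in range(n))
        assert lhs <= norm(a) * norm(b) * norm(c) + 1e-9
    return True

print(check_friedgut(5))  # True
```

In the proof, each norm is bounded by (1/n · n/p)^(1/2) = p^(-1/2), which multiplied out gives the O(1/p^(3/2)) per-server bound.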

  13. OUTLINE. 1. Warm-up: The Triangle Query. 2. One Communication Round. 3. Multiple Communication Rounds.

  14. MATCHING DATABASES. Every relation R(A1, …, Ak) contains exactly n tuples; every attribute A_i contains each value in {1, …, n} only once. A matching database has no skew. [Example: relation R(X, Y, Z) whose columns X, Y, Z are each a permutation of {1, …, n}.]
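A matching database is easy to generate: draw each column as an independent random permutation, so every value appears exactly once per attribute. A minimal sketch (the function name is mine):

```python
import random

def matching_relation(arity, n, seed=None):
    """Build a matching relation: n tuples, each column a random
    permutation of {1, ..., n} -- so there is no skew by construction."""
    rng = random.Random(seed)
    cols = [rng.sample(range(1, n + 1), n) for _ in range(arity)]
    return [tuple(col[i] for col in cols) for i in range(n)]

R = matching_relation(3, 5, seed=0)
for j in range(3):  # each column is a permutation of {1, ..., 5}
    assert sorted(t[j] for t in R) == [1, 2, 3, 4, 5]
```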

  15. FRACTIONAL VERTEX COVER. Vertex cover number τ: the minimum number of variables that cover every hyperedge. Fractional vertex cover number τ*: the minimum total weight on variables such that each hyperedge has weight at least 1. Example: Q(x, y, z, w, v) = R(x,y,z), S(x,w,v), T(v,z). Vertex cover τ = 2; fractional vertex cover τ* = 3/2 (weights 1/2 on x, z, v and 0 on y, w).
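The two cover numbers for this example can be checked mechanically with the standard library. This sketch brute-forces the integral cover and verifies feasibility of the fractional weights quoted above (it does not solve the covering LP in general):

```python
from fractions import Fraction
from itertools import combinations

# Hypergraph of Q(x,y,z,w,v) = R(x,y,z), S(x,w,v), T(v,z)
edges = [{'x', 'y', 'z'}, {'x', 'w', 'v'}, {'v', 'z'}]
variables = {'x', 'y', 'z', 'w', 'v'}

# Integral vertex cover: smallest set of variables hitting every hyperedge.
tau = min(k for k in range(1, len(variables) + 1)
          for C in combinations(variables, k)
          if all(e & set(C) for e in edges))
assert tau == 2                    # e.g. {x, z} covers R, S, and T

# The weights 1/2 on x, z, v form a feasible fractional cover of weight 3/2.
weights = {'x': Fraction(1, 2), 'z': Fraction(1, 2), 'v': Fraction(1, 2),
           'y': 0, 'w': 0}
assert all(sum(weights[u] for u in e) >= 1 for e in edges)
assert sum(weights.values()) == Fraction(3, 2)
```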

  16. LOWER BOUNDS. Theorem: any randomized algorithm in the MPC model will fail to compute a Conjunctive Query Q with 1 round and space exponent ε < 1 − 1/τ*(Q), on input a matching database.

  17. UPPER BOUNDS. Theorem: the HYPERCUBE (randomized) algorithm can compute any Conjunctive Query Q with 1 round and space exponent ε = 1 − 1/τ*(Q), on input a matching database (no skew), with exponentially small probability of failure (in the input size N).

  18. HYPERCUBE ALGORITHM. For Q(x1, …, xk) = S1(…), …, Sl(…): compute τ* and a minimum fractional vertex cover v1, v2, …, vk; assign to each variable xi the share exponent e(i) = v_i / τ*; assign each of the p servers to a point of the k-dimensional hypercube [p] = [p^e(1)] × … × [p^e(k)]; hash each tuple to the appropriate subcube. Example: Q(x,y,z,w,v) = R(x,y,z), S(x,w,v), T(v,z) with τ* = 3/2 and v_x = v_z = v_v = 1/2, v_y = v_w = 0, so e(x) = e(z) = e(v) = 1/3 and e(y) = e(w) = 0; [p] = [p^(1/3)] × [p^0] × [p^(1/3)] × [p^0] × [p^(1/3)], and e.g. S(a,b,c) → (h_x(a), 1, −, 1, h_v(c)).
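The HyperCube routing step can be sketched generically: hash the coordinates of the variables the tuple binds, and broadcast along the axes of the variables it misses. A hypothetical sketch (names are mine; it assumes p^e(i) is an integer for every variable):

```python
import itertools

def hypercube_route(tup_vars, tup_vals, all_vars, shares, p):
    """Destinations of one tuple on the grid [p^e(1)] x ... x [p^e(k)].
    shares[i] is the share exponent e(i) of the i-th variable."""
    dims = [round(p ** e) for e in shares]
    h = lambda v, d: hash(v) % d
    coords = []
    for var, d in zip(all_vars, dims):
        if var in tup_vars:                     # hash the bound value
            coords.append([h(tup_vals[tup_vars.index(var)], d)])
        else:                                   # broadcast along this axis
            coords.append(range(d))
    return list(itertools.product(*coords))

# Example query with p = 8: e(x) = e(z) = e(v) = 1/3, e(y) = e(w) = 0.
all_vars = ['x', 'y', 'z', 'w', 'v']
shares = [1/3, 0, 1/3, 0, 1/3]
dest = hypercube_route(['x', 'w', 'v'], (7, 8, 9), all_vars, shares, p=8)
print(len(dest))  # 2: S(a,b,c) is replicated p^(1/3) times, along the z axis
```

Matching tuples of R, S, and T again meet at a single common server, which is what makes the one-round local join correct.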

  19. EXAMPLES. Cycle query Ck(x1, …, xk) = S1(x1, x2), …, Sk(xk, x1): τ* = k/2, ε = 1 − 2/k. Star query Tk(z, x1, …, xk) = S1(z, x1), …, Sk(z, xk): τ* = 1, ε = 0. Line query Lk(x0, x1, …, xk) = S1(x0, x1), …, Sk(xk−1, xk): τ* = ⌈k/2⌉, ε = 1 − 1/⌈k/2⌉.
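Plugging the τ* values above into ε = 1 − 1/τ* gives the optimal one-round space exponents exactly (a small sketch using exact rational arithmetic; the function name is mine):

```python
from fractions import Fraction
from math import ceil

def eps(tau_star):
    """Optimal 1-round space exponent: eps = 1 - 1/tau*."""
    return 1 - 1 / Fraction(tau_star)

for k in (4, 6, 8):
    cycle = eps(Fraction(k, 2))   # C_k: tau* = k/2      -> 1 - 2/k
    star  = eps(1)                # T_k: tau* = 1        -> 0
    line  = eps(ceil(k / 2))      # L_k: tau* = ceil(k/2)
    print(k, cycle, star, line)
```

For the triangle, C_3 gives eps(3/2) = 1/3, matching the bound from the warm-up slides.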

  20. OUTLINE. 1. Warm-up: The Triangle Query. 2. One Communication Round. 3. Multiple Communication Rounds.

  21. MULTIPLE ROUNDS. Our results apply to a weaker model (tuple-based MPC): in rounds > 1, only join tuples can be sent, e.g. {R(a,b), S(b,c)}, and the routing of each tuple t depends only on t. Theorem: for every tree-like query Q, any tuple-based MPC algorithm requires at least log(diam(Q)) / log(2/(1−ε)) rounds. This bound agrees with the upper bound to within 1 round. Here diam(Q) is the largest distance between two vertices in the hypergraph, and tree-like queries satisfy #variables + #atoms − Σ(arities) = 1.

  22. EXAMPLE. Line query Lk(x0, x1, …, xk) = S1(x0, x1), …, Sk(xk−1, xk) is tree-like: #variables = k+1, #atoms = k, Σ(arities) = 2k. Its diameter is diam(Lk) = k. For space exponent ε = 0, we need at least log(k)/log(2/(1−0)) = log₂(k) rounds. [Figure: the line query as a path x0, x1, …, xk, with diameter k.]
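The round lower bound is easy to evaluate for concrete queries. A sketch (the function name is mine; taking the ceiling gives the smallest integer round count consistent with the bound):

```python
from math import ceil, log

def min_rounds(diam, eps):
    """Lower bound on rounds for a tree-like query in tuple-based MPC:
    at least log(diam) / log(2 / (1 - eps)) rounds."""
    return ceil(log(diam) / log(2 / (1 - eps)))

# Line query L_k has diameter k; with eps = 0 we need about log2(k) rounds.
print(min_rounds(20, 0.0))  # 5

# A larger space exponent buys fewer rounds: with eps = 0.5 the base
# of the logarithm grows from 2 to 4.
print(min_rounds(8, 0.5))   # 2
```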

  23. CONNECTED COMPONENTS. As a corollary of our results for multiple rounds, we obtain lower bounds beyond conjunctive queries. Theorem: any tuple-based MPC algorithm that computes the Connected Components of an undirected graph with any space exponent ε < 1 requires Ω(log p) communication rounds.

  24. CONCLUSIONS. Tight lower and upper bounds for one communication round in the MPC model; the first lower bounds for multiple communication rounds; connected components cannot be computed in a constant number of rounds. Open problems: lower and upper bounds for skewed data; lower bounds for > 1 rounds in the general model.

  25. Thank you!
