Provenance Analysis of Algorithms - Understanding Data Dependencies
Exploring provenance analysis of algorithms: understanding how output items depend on input items. This analysis goes beyond traditional activity logs, focuses on structured collections of items, and supports both causal and quantitative applications. The critical test for any provenance description is whether it can be specialized to answer such causal and quantitative questions.
Presentation Transcript
Towards some provenance analysis of algorithms. V. Tannen, University of Pennsylvania. WebDam, 10/1/13.
Data provenance analysis
- We wish to extend data provenance analysis to (as) general (as possible) information processing.
- Much of this analysis was (and still is) done in the style of activity logs. Many people are happy with this style of provenance as long as what is recorded in each log entry is rich enough.
- A new perspective arose in databases with the work of Buneman, Khanna, and Tan. This and subsequent developments allow us to be more ambitious about applications of provenance analysis.
Provenance
Input: a (structured) collection of items → Query / Processing → Output: a (structured) collection of items.
How does each output item depend on the input items? Too vague! Look at applications to clarify.
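To make the question concrete, here is a minimal sketch (not from the slides; the relations, tuples, and tokens are invented for illustration): annotate each input tuple of a small join query with a token and record, for each output tuple, its witnesses, i.e., the sets of input tokens jointly used in each alternative derivation.

```python
# Minimal sketch: track which input items each output item depends on.
# Input relations R and S, each tuple annotated with a provenance token
# (relation names, tuples, and tokens are invented for illustration).
R = [(("a", "b"), "r1"), (("a", "c"), "r2")]   # R(x, y) with tokens r1, r2
S = [(("b", "d"), "s1"), (("c", "d"), "s2")]   # S(y, z) with tokens s1, s2

# Join R and S on the shared attribute, projecting to (x, z).  For each output
# tuple, collect its witnesses: each witness is the set of input tokens jointly
# used in one way of deriving that output tuple.
witnesses = {}
for (x, y1), r_tok in R:
    for (y2, z), s_tok in S:
        if y1 == y2:
            witnesses.setdefault((x, z), set()).add(frozenset({r_tok, s_tok}))

for out, ws in witnesses.items():
    print(out, "<-", [set(w) for w in ws])
# ('a', 'd') <- [{'r1', 's1'}, {'r2', 's2'}]   (two alternative derivations; order may vary)
```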
Provenance applications: some causal analysis
Consider an output item o.
- Deletion propagation: if some input items disappear, does o disappear from the output?
- Binary trust: is o obtained only from trusted input items?
- Access control: associate with each input item a clearance procedure. Would a given clearance procedure succeed for o?
- Probabilistic analysis: how does the event "o is in the output" depend on the events "item i is in the input"?
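Continuing the toy example above, a minimal sketch of the first two questions, assuming the output item o has the boolean provenance (r1 ∧ s1) ∨ (r2 ∧ s2): both reduce to evaluating that expression under a truth assignment to the input tokens.

```python
# Sketch: causal questions reduce to evaluating o's boolean provenance under a
# truth assignment.  prov_o below is hypothetical: o is derivable either from
# the input items tagged r1 and s1, or from those tagged r2 and s2.
def prov_o(present):
    return (present["r1"] and present["s1"]) or (present["r2"] and present["s2"])

# Deletion propagation: delete the input item tagged r1 -- does o survive?
print(prov_o({"r1": False, "s1": True, "r2": True, "s2": True}))   # True: o survives

# Binary trust: trust only the items tagged r1 and s1 -- is o trusted?
print(prov_o({"r1": True, "s1": True, "r2": False, "s2": False}))  # True: o is trusted
```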
Provenance applications: some quantitative analysis
Consider an output item o.
- Cost: assign a cost value to each input item. Compute the corresponding cost of o.
- Degrees of trust: assign a degree of trust to each input item. Compute the degree of trust that corresponds to o.
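A minimal sketch of how these two look, reusing the hypothetical provenance (r1 ∧ s1) ∨ (r2 ∧ s2) of o: the same expression is evaluated in a different semiring, here the tropical semiring (min, +) for cost and the Viterbi semiring (max, ×) for degrees of trust; the per-item values are made up.

```python
import math

# Sketch: quantitative analysis evaluates the same provenance in another
# semiring.  Provenance of o: alternatives (outer list) of jointly used tokens
# (inner lists), i.e. (r1 and s1) or (r2 and s2).
alternatives = [["r1", "s1"], ["r2", "s2"]]

# Cost: joint use adds costs, alternatives take the cheapest (tropical semiring).
cost = {"r1": 3, "s1": 4, "r2": 5, "s2": 1}              # assumed per-item costs
print(min(sum(cost[t] for t in alt) for alt in alternatives))         # 6 (via r2, s2)

# Degrees of trust: joint use multiplies, alternatives take the best (Viterbi semiring).
trust = {"r1": 0.5, "s1": 0.8, "r2": 0.9, "s2": 0.25}    # assumed per-item trust
print(max(math.prod(trust[t] for t in alt) for alt in alternatives))  # 0.4 (via r1, s1)
```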
Provenance analysis
- Provenance: an abstract way of describing the relationship between output items and input items.
- Critical test: it can be specialized to solve causal or quantitative applications like those we saw.
- In all those applications we must keep track of all the alternative ways that an output item is computed and, for each of these, of the joint way in which the input items are used.
Some things we have learned, so far
For queries. (But see also longtime work in PL on security based on information flow.)
- The why-provenance of Buneman, Khanna, and Tan offered a perspective radically different from that of activity logs: different ways (different witnesses) of obtaining the same output item should be recorded separately.
- For causal analysis (deletion propagation, binary trust, access control, probabilistic analysis) minimal witnesses suffice. This is the same as recording provenance with positive boolean expressions.
- For quantitative analysis (cost, degrees of trust) witness (why-) provenance does not suffice, because it does not count how many times an input item is used in a witness. But we have Sorp: a commutative semiring is absorptive if a + ab = a for all a, b, and Sorp(X) is the free absorptive semiring generated by X. It specializes correctly to cost and degrees of trust. It was designed in [DMST 2013, submitted].
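One way to picture Sorp(X): an element is a set of monomials over the tokens (exponents keep track of how many times an item is used), kept in normal form by dropping any monomial absorbed by another, per a + ab = a. A rough sketch, with my own encoding of monomials as exponent dictionaries:

```python
# Sketch: Sorp(X) normal forms, with monomials encoded as {token: exponent}.
# Absorption a + a*b = a lets us drop any monomial that is a multiple of
# another one (encoding and function names are my own, for illustration).
def absorbs(m, n):
    """Monomial m absorbs n when n = m * (something), i.e. m's exponents <= n's."""
    return all(n.get(x, 0) >= e for x, e in m.items())

def normalize(monomials):
    """Drop every monomial that is absorbed by a different monomial in the list."""
    return [m for m in monomials
            if not any(absorbs(other, m) and other != m for other in monomials)]

# x^2*y + x^2*y*z + x*y^3  -->  x^2*y + x*y^3   (the middle monomial is absorbed)
print(normalize([{"x": 2, "y": 1}, {"x": 2, "y": 1, "z": 1}, {"x": 1, "y": 3}]))
# [{'x': 2, 'y': 1}, {'x': 1, 'y': 3}]
```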
Loops and fixpoints
- We understand one particular case: datalog. We would like more, at least positive while queries for instance, and eventually graph algorithms such as Dijkstra, Prim, Kruskal, and more.
- Problem: provenance can be infinite if we count all alternatives.
- Finite version: PosBool(X) for finite X, e.g., xy ∨ yz ∨ xzu. Good for causal analysis but not for quantitative analysis.
- Sorp(X) for finite X, e.g., x²y + xy³ + xyu, is another finite version that is also good for quantitative analysis.
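A minimal sketch of why the finite versions help with loops: computing boolean reachability provenance by fixpoint iteration. I keep a PosBool(X) element in DNF as a set of witness sets and simplify by absorption (drop supersets), so the iteration terminates even on cyclic graphs; the encoding is my own.

```python
# Sketch: boolean provenance of "there exists a path from the source to X",
# computed as a fixpoint.  A PosBool(X) element is a set of witness sets
# (frozensets of edge tokens); absorption keeps it an antichain, so the
# iteration reaches a fixpoint even when the graph has cycles.
def absorb(dnf):
    return {w for w in dnf if not any(v < w for v in dnf)}   # drop absorbed supersets

def reachability_provenance(edges, source):
    """edges: dict (u, v) -> token.  Returns node -> DNF provenance of reachability."""
    ep = {source: {frozenset()}}            # the source is reachable unconditionally
    changed = True
    while changed:
        changed = False
        for (u, v), tok in edges.items():
            for w in list(ep.get(u, ())):
                new = absorb(ep.get(v, set()) | {w | {tok}})
                if new != ep.get(v, set()):
                    ep[v], changed = new, True
    return ep

# A small cyclic graph (made up): the cycle B -> C -> B does not prevent termination.
edges = {("A", "B"): "p", ("B", "C"): "q", ("C", "B"): "r"}
print(reachability_provenance(edges, "A")["C"])   # {frozenset({'p', 'q'})}
```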
[Figure: a graph on vertices A, B, C, D, E, F with weighted edges A-C (10), C-E (40), C-D (30), D-E (30), E-F (10), A-B (20), B-D (20), D-F (30).]
The shortest path from A to F is A-C-E-F and has length 60. What if CE is lost? Then it is A-B-D-F or A-C-D-F, both of length 70. If CE, BD, and DF are lost, then it is A-C-D-E-F, of length 80. Parts of the input lost? Sounds like provenance analysis!
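A quick brute-force check of these numbers (a sketch: edge weights are read off the picture, and the edges are treated as directed away from A):

```python
# Sketch: deletion propagation on the slide's graph by brute-force path search.
weights = {("A","C"): 10, ("C","E"): 40, ("C","D"): 30, ("D","E"): 30,
           ("E","F"): 10, ("A","B"): 20, ("B","D"): 20, ("D","F"): 30}

def shortest(lost=()):
    """Length of the shortest A-to-F path avoiding the lost edges (None if no path)."""
    best = None
    def dfs(node, length, seen):
        nonlocal best
        if node == "F":
            best = length if best is None else min(best, length)
            return
        for (u, v), w in weights.items():
            if u == node and (u, v) not in lost and v not in seen:
                dfs(v, length + w, seen | {v})
    dfs("A", 0, {"A"})
    return best

print(shortest())                                        # 60  (A-C-E-F)
print(shortest(lost={("C","E")}))                        # 70  (A-B-D-F or A-C-D-F)
print(shortest(lost={("C","E"), ("B","D"), ("D","F")}))  # 80  (A-C-D-E-F)
```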
[Figure: the same graph, with edges labeled by provenance tokens: A-C = y, C-E = s, C-D = u, D-E = v, E-F = t, A-B = x, B-D = z, D-F = w.]
Label the edges with provenance tokens x, y, z, u, v, w, s, t. Positive boolean provenance suffices for this application. Similar to provenance for aggregation [ADT PODS11]: compute an expression for the length of the shortest path, involving numbers annotated with provenance, e.g., (xy ∨ yu) ⊗ 50.
[Figure: a node Y with two incoming edges, one from X1 labeled with token x1 and weight a1, one from X2 labeled with token x2 and weight a2.]
EP(X): provenance of "there exists a path from A to X".
LSP(X): length of the shortest path from A to X, as a provenance-annotated expression.
EP(Y) = (EP(X1) ∧ x1) ∨ (EP(X2) ∧ x2)
LSP(Y) = [(x1 ⊗ LSP(X1)) + ((EP(X1) ∧ x1) ⊗ a1)] min [(x2 ⊗ LSP(X2)) + ((EP(X2) ∧ x2) ⊗ a2)]
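One rough way to make the recurrence concrete, using my own simplified encoding rather than the construction of [ADT PODS11]: an annotated length is a dict mapping a conjunction of edge tokens to a number, read as "under these tokens, this length is achievable"; min merges the alternatives and + adds the edge weight into each one. In this form EP(X) is simply the set of conditions appearing in LSP(X).

```python
# Sketch (simplified encoding, for illustration only): LSP as a dict
# {frozenset_of_tokens: achievable_length}, built by the slide's recurrence.
def lsp(edges, source, target):
    """edges: dict (u, v) -> (token, weight); the graph is assumed acyclic."""
    def annotated(node):
        if node == source:
            return {frozenset(): 0}                 # empty condition, length 0
        out = {}
        for (u, v), (tok, a) in edges.items():
            if v == node:                           # incoming edge u --tok, a--> node
                for cond, length in annotated(u).items():
                    new_cond, new_len = cond | {tok}, length + a
                    if new_cond not in out or new_len < out[new_cond]:
                        out[new_cond] = new_len     # "min" merges the alternatives
        return out
    return annotated(target)

# The slide's graph, with (token, weight) on each edge.
edges = {("A","C"): ("y", 10), ("C","E"): ("s", 40), ("C","D"): ("u", 30),
         ("D","E"): ("v", 30), ("E","F"): ("t", 10), ("A","B"): ("x", 20),
         ("B","D"): ("z", 20), ("D","F"): ("w", 30)}
print(lsp(edges, "A", "D"))   # {yu: 40, xz: 40}, i.e. (xz ∨ yu) ⊗ 40
print(lsp(edges, "A", "F"))   # {yst: 60, yuvt: 80, xzvt: 80, yuw: 70, xzw: 70}
```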
[Figure: the same token-labeled graph.]
EP(D) = xz ∨ yu
LSP(D) = [(u ⊗ (y ⊗ 10)) + (yu ⊗ 30)] min [(z ⊗ (x ⊗ 20)) + (xz ⊗ 20)]
       = (yu ⊗ 40) min (xz ⊗ 40)
       = (xz ∨ yu) ⊗ 40
[Figure: the same token-labeled graph.]
EP(F) = xzw ∨ yuw ∨ yst ∨ xzvt ∨ yuvt
LSP(F) = [(yuw ∨ xzw) ⊗ 70] min [[(yst ⊗ 50) min ((yuvt ∨ xzvt) ⊗ 70)] + ((yst ∨ yuvt) ⊗ 10)]
Too many input items to track!
- Handling large aggregations: algebraic laws help in shrinking the size.
- Abstraction: group the input items into categories and track only the categories.
- Provisioning [DIMT CIDR13]: seemingly causal analysis only. Scenarios correspond to input items!
[Figure: the same token-labeled graph.]
Set y = u = v = t = true. Then
LSP(F) = (w ⊗ 70) min [[(s ⊗ 50) min (true ⊗ 70)] + (true ⊗ 10)]
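Continuing the earlier lsp sketch (the one after the recurrence slide): specializing amounts to fixing a truth assignment for the tokens and taking the min over the alternatives whose condition holds. With all tokens true we recover 60, and losing the edges from the earlier "what if" slide gives 70 and 80.

```python
# Sketch: specialize the annotated length (from the earlier lsp sketch) by a
# truth assignment to the tokens.
def specialize(annotated_len, present):
    lengths = [length for cond, length in annotated_len.items()
               if all(present[tok] for tok in cond)]
    return min(lengths) if lengths else None

lsp_F = lsp(edges, "A", "F")                    # computed with the earlier sketch
all_true = dict.fromkeys("xyzuvwst", True)
print(specialize(lsp_F, all_true))                                           # 60
print(specialize(lsp_F, {**all_true, "s": False}))                           # 70 (CE lost)
print(specialize(lsp_F, {**all_true, "s": False, "z": False, "w": False}))   # 80 (CE, BD, DF lost)
```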
Thank you!