A Unified Continuous Greedy Algorithm for Submodular Maximization

Moran Feldman
Roy Schwartz
Joseph (Seffi) Naor
Technion – Israel Institute of Technology
Set Functions

Definition
Given a ground set E, a set function f : 2^E → ℝ assigns a number to every subset of the ground set.

Properties
Normalization: f(∅) = 0.
Monotonicity: for every two sets A ⊆ B ⊆ E, f(A) ≤ f(B).
Submodularity: for all sets A, B ⊆ E, f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B).
The Importance of Submodularity

Alternative (more intuitive) Definition
A function f is submodular if for sets A ⊆ B ⊆ E and e ∉ B:
f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B).
The “economy of scale” feeling of this definition has made submodular functions common in economics and game theory.

Submodular Functions in Combinatorics
Submodular functions appear frequently in combinatorial settings. Here are two simple examples:
Ground set: the nodes of a graph. Submodular function: the number of edges leaving a set of nodes.
Ground set: a collection of sets. Submodular function: the number of elements in the union of a sub-collection.
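The second example above (coverage) can be checked against the diminishing-returns definition directly. A minimal sketch, with a hand-picked toy collection (the collection itself is made up for illustration):

```python
import itertools

# Toy instance of the coverage function from the examples above:
# the ground set indexes a collection of sets, and f(S) is the number
# of elements in the union of the chosen sub-collection.
collection = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}

def f(S):
    """Number of elements in the union of the chosen sub-collection."""
    return len(set().union(*(collection[i] for i in S))) if S else 0

# Check the diminishing-returns definition: for all A ⊆ B ⊆ E and e ∉ B,
# f(A ∪ {e}) - f(A) >= f(B ∪ {e}) - f(B).
E = set(collection)
ok = True
for r in range(len(E) + 1):
    for B in map(set, itertools.combinations(E, r)):
        for rA in range(len(B) + 1):
            for A in map(set, itertools.combinations(sorted(B), rA)):
                for e in E - B:
                    ok &= f(A | {e}) - f(A) >= f(B | {e}) - f(B)
print(ok)  # True: coverage is submodular
```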
Polytope Constraints

We abuse notation and identify a set S with its characteristic vector in [0, 1]^E.
Using this notation, we can define IP-like problems: maximize f(x) subject to linear constraints over x ∈ {0, 1}^E.
More generally, maximizing a submodular function subject to a polytope P constraint is the problem:
max f(x) s.t. x ∈ P ∩ {0, 1}^E.

Difficulty:
Generalizes integer programming.
Unlikely to have a reasonable approximation.
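For intuition, the IP-like problem above can be solved by brute force on a tiny ground set. The packing constraint and the coverage objective below are made up for illustration:

```python
import itertools

# Brute force over characteristic vectors x in {0,1}^E satisfying a
# hypothetical packing constraint a·x <= b (coefficients are made up).
ground = [0, 1, 2]
a, b = [3, 1, 2], 3

def f(x):
    # A toy submodular objective: coverage of made-up sets.
    sets = [{1, 2}, {2, 3}, {3, 4}]
    chosen = [s for s, xi in zip(sets, x) if xi]
    return len(set().union(*chosen)) if chosen else 0

best = max((x for x in itertools.product([0, 1], repeat=len(ground))
            if sum(ai * xi for ai, xi in zip(a, x)) <= b),
           key=f)
print(best, f(best))  # → (0, 1, 1) 3
```

Brute force is exponential in |E|, which is exactly why the rest of the talk works with a continuous relaxation instead.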
Relaxation

Replace the constraint x ∈ {0, 1}^E with x ∈ [0, 1]^E.
Use the multilinear extension F (a.k.a. extension by expectation) [Calinescu et al. 07] as objective.
Given a vector x, let R(x) denote a random set containing every element e ∈ E with probability x_e, independently.
F(x) = E[f(R(x))].
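The definition F(x) = E[f(R(x))] suggests a direct sampling estimator, which is also how marginals are approximated later. A minimal sketch (function names are ours):

```python
import random

def multilinear_estimate(f, x, samples=20000, rng=random.Random(0)):
    """Estimate F(x) = E[f(R(x))] by averaging f over random sets R(x)
    that include each element e independently with probability x[e]."""
    total = 0.0
    for _ in range(samples):
        R = {e for e, p in x.items() if rng.random() < p}
        total += f(R)
    return total / samples

# Sanity check on a modular f(S) = |S|: here F(x) is the sum of the
# coordinates, so the estimate should be close to 0.5 + 0.25 = 0.75.
x = {'a': 0.5, 'b': 0.25}
est = multilinear_estimate(len, x)
print(est)  # ≈ 0.75
```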
 
The Problem
Approximating the relaxed program.

Motivation
For many polytopes, a fractional solution can be rounded without losing too much in the objective:
Matroid polytopes – no loss [Calinescu et al. 07].
Constant number of knapsacks – (1 − ε) loss [Kulik et al. 09].
Unsplittable flow in trees – O(1) loss [Chekuri et al. 11].
The Continuous Greedy Algorithm

The Algorithm [Vondrak 08]
Let δ > 0 be a small number.
1. Initialize: y(0) ← 1_∅ and t ← 0.
2. While t < 1 do:
3.    For every e ∈ E, let w_e = F(y(t) ∨ 1_e) − F(y(t)).
4.    Find a solution x in P ⊆ [0, 1]^E maximizing w · x.
5.    y(t + δ) ← y(t) + δ · x.
6.    Set t ← t + δ.
7. Return y(t).

Remark
If w_e cannot be evaluated directly, it can be approximated arbitrarily well via sampling.
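The seven steps above can be sketched in code. This is a toy discretization under assumptions not in the slides: F is estimated by sampling (as the remark suggests), and P is a hypothetical cardinality polytope {x : Σ_e x_e ≤ k}, over which the linear maximization in step 4 simply picks the k largest weights:

```python
import random

rng = random.Random(0)

def F(f, x, samples=2000):
    """Sampling estimate of the multilinear extension F(x) = E[f(R(x))]."""
    E = list(x)
    return sum(f({e for e in E if rng.random() < x[e]})
               for _ in range(samples)) / samples

def continuous_greedy(f, E, k, delta=0.1):
    y = {e: 0.0 for e in E}
    t = 0.0
    while t < 1 - 1e-9:
        base = F(f, y)
        # Step 3: w_e = F(y(t) with coordinate e set to 1) - F(y(t)).
        w = {e: F(f, {**y, e: 1.0}) - base for e in E}
        # Step 4: x in the cardinality polytope maximizing w·x is the
        # indicator vector of the k largest weights.
        top = sorted(E, key=lambda e: w[e], reverse=True)[:k]
        # Step 5: y(t+δ) ← y(t) + δ·x.
        for e in top:
            y[e] += delta
        t += delta
    return y

# Toy coverage instance: spread one unit of "time" over picking 2 of 3 sets.
sets = {'A': {1, 2}, 'B': {2, 3}, 'C': {4, 5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
y = continuous_greedy(f, list(sets), k=2)
print({e: round(v, 2) for e, v in y.items()})
```

Note that after 1/δ steps the coordinates of y always sum to exactly k: the output is a convex combination of points of P, which is what the later remark about the measured variant contrasts against.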
The Continuous Greedy Algorithm - Demonstration

Observations
The algorithm is somewhat like gradient ascent.
The algorithm moves only in positive directions because the extension F is guaranteed to be concave in such directions.

[Figure: the solution advancing step by step from y(0) through y(0.01), y(0.02), y(0.03) to y(0.04), along the directions x found in each iteration.]
The Continuous Greedy Algorithm - Analysis

Theorem
Assuming
f is a normalized monotone submodular function,
P is a solvable polytope,
the continuous greedy algorithm gives a 1 − 1/e − o(n⁻¹) approximation.

There are two important lemmata in the proof of the theorem.

Lemma 1
There is a good direction, i.e., w · x ≥ f(OPT) − F(y(t)).
Proof Idea
OPT itself is a feasible direction, and its value is at least f(OPT) − F(y(t)).
The Continuous Greedy Algorithm – Analysis (cont.)

Lemma 2
The improvement is related to w · x, i.e., F(y(t + δ)) ≥ F(y(t)) + δ · w · x.
Proof
Since δ is small, F(y(t + δ)) − F(y(t)) ≈ δ · Σ_e ∂_e F(y(t)) · x_e (up to O(δ²) terms).
We need to relate ∂_e F(y(t)) and w_e:
F is multilinear, hence w_e = F(y(t) ∨ 1_e) − F(y(t)) = (1 − y_e(t)) · [F(y(t) ∨ 1_e) − F(y(t) ∧ 1_{E∖{e}})] = (1 − y_e(t)) · ∂_e F(y(t)).
F is monotone, hence ∂_e F(y(t)) is non-negative, and w_e ≤ ∂_e F(y(t)).
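The multilinearity step in the proof can be written out explicitly. Using 1_e for the all-zeros vector with coordinate e set to 1, and y ∧ 1_{E∖{e}} for y with coordinate e zeroed (our notation), the identity follows from F being linear in each coordinate:

```latex
% F is linear in the coordinate y_e, so, writing y = y(t):
%   F(y) = y_e F(y \vee \mathbf{1}_e) + (1-y_e) F(y \wedge \mathbf{1}_{E \setminus \{e\}}).
\begin{align*}
w_e &= F(y \vee \mathbf{1}_e) - F(y)\\
    &= F(y \vee \mathbf{1}_e)
       - y_e F(y \vee \mathbf{1}_e)
       - (1-y_e) F(y \wedge \mathbf{1}_{E \setminus \{e\}})\\
    &= (1-y_e)\left[F(y \vee \mathbf{1}_e) - F(y \wedge \mathbf{1}_{E \setminus \{e\}})\right]
     = (1-y_e)\,\partial_e F(y).
\end{align*}
```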
Insight

Recall the equality w_e = (1 − y_e(t)) · ∂_e F(y(t)).
We used the monotonicity of f to get: w_e ≤ ∂_e F(y(t)).
This is the most significant obstacle to extending the last algorithm to non-monotone functions.
Idea:
Currently the improvement in each step is proportional to:
Σ_e ∂_e F(y(t)) · x_e.
We want it to be proportional to:
Σ_e w_e · x_e = Σ_e (1 − y_e(t)) · ∂_e F(y(t)) · x_e.
This can be done by increasing y_e(t) only by δ · x_e · (1 − y_e(t)), instead of by δ · x_e.
The Measured Continuous Greedy Algorithm

The Algorithm
Let δ > 0 be a small number.
1. Initialize: y(0) ← 1_∅ and t ← 0.
2. While t < T do:
3.    For every e ∈ E, let w_e = F(y(t) ∨ 1_e) − F(y(t)).
4.    Find a solution x in P ⊆ [0, 1]^E maximizing w · x.
5.    For every e ∈ E, y_e(t + δ) ← y_e(t) + δ · x_e · (1 − y_e(t)).
6.    Set t ← t + δ.
7. Return y(t).

Remark
The algorithm never leaves the box [0, 1]^E, so it can be used with arbitrary values of T.
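The only change from the plain continuous greedy is the damped update in step 5. A toy sketch under the same assumptions as before (sampling-based F, hypothetical cardinality polytope {x : Σ_e x_e ≤ k} for the linear step):

```python
import random

rng = random.Random(1)

def F(f, x, samples=2000):
    """Sampling estimate of the multilinear extension F(x) = E[f(R(x))]."""
    E = list(x)
    return sum(f({e for e in E if rng.random() < x[e]})
               for _ in range(samples)) / samples

def measured_continuous_greedy(f, E, k, T=1.0, delta=0.1):
    y = {e: 0.0 for e in E}
    t = 0.0
    while t < T - 1e-9:
        base = F(f, y)
        w = {e: F(f, {**y, e: 1.0}) - base for e in E}
        top = set(sorted(E, key=lambda e: w[e], reverse=True)[:k])
        for e in E:
            x_e = 1.0 if e in top else 0.0
            # Step 5: y_e(t+δ) ← y_e(t) + δ·x_e·(1 − y_e(t)).
            y[e] += delta * x_e * (1 - y[e])
        t += delta
    return y

# The damped update keeps every coordinate strictly below 1, so y stays
# inside the box [0,1]^E no matter how large the horizon T is.
sets = {'A': {1, 2}, 'B': {2, 3}, 'C': {4, 5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
y = measured_continuous_greedy(f, list(sets), k=2, T=1.0)
print(all(0 <= v < 1 for v in y.values()))  # True
```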
The Measured Continuous Greedy Algorithm - Analysis

Theorem
Assuming
f is a non-negative submodular function,
P is a solvable down-monotone polytope,
the approximation ratio of the measured continuous greedy algorithm with T = 1 is 1/e − o(n⁻¹).

Remarks
The solution is no longer a convex combination of points of P.
For T ≤ 1, the output is in P since P is down-monotone.
The Measured Continuous Greedy Algorithm – Analysis (cont.)

Lemma 2
The improvement is related to w · x, i.e., F(y(t + δ)) ≥ F(y(t)) + δ · w · x.
Proof
The insight removes the only place in the previous proof of this lemma that used the monotonicity of f.

Lemma 1
There is a good direction, i.e., w · x ≥ e⁻ᵗ · f(OPT) − F(y(t)).
Proof Idea
Again, we show that OPT itself is a good direction with at least that value.
Result for Monotone Functions

For non-monotone functions, the approximation ratio is maximized for T = 1.
For monotone f, we get an approximation ratio of 1 − e⁻ᵀ.
For T = 1, this is the same ratio as the previous algorithm.
The approximation ratio improves as T increases.
In general, T > 1 might cause the algorithm to produce solutions outside the polytope.
However, for some polytopes, somewhat larger values of T can be used.
The Submodular Welfare Problem

Instance
A set P of n players, and a set Q of m items.
A normalized monotone submodular utility function w_j : 2^Q → ℝ₊ for each player.

Objective
Let Q_j ⊆ Q denote the set of items the j-th player gets.
The utility of the j-th player is w_j(Q_j).
Distribute the items among the players, maximizing the sum of utilities.

Approximation
Can be represented as a problem of the previous form.
The algorithm can be executed until time T = −n · ln(1 − 1/n).
The expected value of the solution is then at least a 1 − (1 − 1/n)^n fraction of the optimum.
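Numerically, the stopping time and guarantee above behave as follows; this quick check (reading the bound as a fraction of the optimum) shows the ratio staying above 1 − 1/e and approaching it as the number of players n grows:

```python
import math

# T = -n·ln(1 - 1/n) and the guarantee 1 - (1 - 1/n)^n, for growing n.
for n in [2, 10, 100, 10**6]:
    T = -n * math.log(1 - 1 / n)
    ratio = 1 - (1 - 1 / n) ** n
    print(n, round(T, 4), round(ratio, 4))
print(round(1 - 1 / math.e, 4))  # limit: 0.6321
```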
Open Problem

The measured continuous greedy algorithm provides a tight approximation for monotone functions [Vondrak 06].
Is this also the case for non-monotone functions?
The current approximation ratio of e⁻¹ is a natural number.

Questions?
