Understanding the PSD Relaxation for the Max-Cut Problem
Exploring the positive semidefinite (PSD) relaxation approach for the Max-Cut problem: the problem is formulated as a program over PSD matrices whose optimum upper-bounds the best cut. By working with the high-dimensional vectors and angles behind the PSD solution, the relaxation yields a partition that cuts provably close to the maximum number of edges.
Presentation Transcript
Approximating Max-Cut with SDP Based on the landmark paper of Goemans and Williamson (GW) on Max-Cut.
The Max Cut Problem Given an undirected graph G(V,E), partition V into S and V−S so as to maximize the number of edges with one endpoint in S and the other in V−S. This problem is NP-complete. The minimization version (Min-Cut) is polynomial-time solvable. The approximation below is due to [GW], a landmark result.
PSD matrices By the ellipsoid algorithm, to solve such a program it is enough to check whether a given fractional solution is valid and, if not, to exhibit a violated constraint. But can we pose A ⪰ 0 (A is PSD) as linear constraints? A ⪰ 0 is equivalent to v^T A v ≥ 0 for every vector v, and for each fixed v this is a linear constraint in the entries of A. So fix the fractional entries of A and check whether some v gives a violated constraint v^T A v < 0.
PSD matrices A negative eigenvalue supplies exactly such a violation: if Av = λv with λ < 0, then v^T A v = λ·v^T v = λ < 0 (assuming v has norm 1), which is a violated constraint. Thus optimization subject to A ⪰ 0 can be done in polynomial time.
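The separation step above can be sketched as a small numpy routine: compute the eigendecomposition, and if the smallest eigenvalue is negative, its unit eigenvector witnesses v^T A v < 0. The matrix `A` below is a hypothetical example, not from the source.

```python
import numpy as np

def separation_oracle(A, tol=1e-9):
    """Return None if A is (numerically) PSD; otherwise a vector v with v^T A v < 0."""
    # For a symmetric A, eigh returns real eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(A)
    if eigvals[0] >= -tol:
        return None             # every constraint v^T A v >= 0 is satisfied
    return eigvecs[:, 0]        # unit eigenvector of the most negative eigenvalue

# Hypothetical example: eigenvalues are 3 and -1, so A is not PSD.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
v = separation_oracle(A)
assert v is not None and v @ A @ v < 0   # v^T A v = lambda < 0: a violated constraint
```

Note that this is a sketch of the oracle only; plugging it into a full ellipsoid loop is omitted.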
The approximation for MAX-CUT Let Y = (y_ij) be a symmetric matrix with one row and column per vertex. The PSD program is: maximize Σ_{(i,j)∈E} (1 − y_ij)/2 subject to y_ii = 1 for every i and Y ⪰ 0. We know that the constraint Y ⪰ 0 can be handled in polynomial time.
The PSD First, why is it a relaxation of Max-Cut? Since Y ⪰ 0, there is a matrix B so that B^T B = Y, and so (B_j)^T B_i = y_ij; the constraint y_ii = 1 makes each column B_i a unit vector.
Let the optimal cut be S, V−S. Let (B_j)^T = (1,0,0,…,0) if j ∈ S and (B_j)^T = (−1,0,0,…,0) if j ∈ V−S. For an edge (i,j), if i and j are on different sides then y_ij = −1 and the term (1 − y_ij)/2 contributes 1, and it contributes 0 otherwise. So the optimum of the PSD program is at least the size of the maximum cut.
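This feasibility argument can be checked concretely: building Y = b·b^T from the ±1 side-labels of a cut gives a rank-1 PSD matrix with unit diagonal whose SDP objective equals the cut size. The 4-cycle graph and the cut S = {0, 2} below are hypothetical examples, not from the source.

```python
import numpy as np

# A cut S, V-S gives the one-dimensional solution B_j = +/- e_1 described above,
# so Y = b b^T has y_ij = +1 when i and j are on the same side and -1 otherwise.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # hypothetical example: a 4-cycle
S = {0, 2}                                        # a cut; here it happens to cut all edges
b = np.array([1.0 if j in S else -1.0 for j in range(4)])
Y = np.outer(b, b)                                # rank 1, PSD, diag(Y) = (1,1,1,1)

sdp_value = sum((1 - Y[i, j]) / 2 for i, j in edges)
cut_size = sum(1 for i, j in edges if (i in S) != (j in S))
assert sdp_value == cut_size                      # the cut is feasible for the SDP
```

So every cut corresponds to a feasible Y, which is what makes the program a relaxation.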
What do we do with high-dimensional vectors? Consider the matrix B so that (B_j)^T B_i = y_ij. This gives a vector B_i for every vertex i. Moreover, for every edge (i,j) the PSD program pushes (B_j)^T B_i as close to −1 as it can.
For two vectors B_j, B_i there is a two-dimensional plane that contains both, hence there is an angle between them. The PSD program in effect says: for the vectors of an edge (i,j), make the angle between them as close as possible to 180°. Now say that we take a random hyperplane through the origin. Because the angle between B_i and B_j is large when (i,j) is an edge, there is a high probability that the hyperplane will separate i and j.
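The rounding step can be sketched in a few lines of numpy: draw a random normal vector r (its orthogonal hyperplane is uniformly random) and put each vertex on the side given by the sign of r^T B_i. The function name and the two-vertex example are hypothetical, not from the source.

```python
import numpy as np

def random_hyperplane_cut(B, rng):
    """Round SDP vectors to a cut: B is d x n, one unit column B_i per vertex."""
    r = rng.standard_normal(B.shape[0])   # normal vector of a uniformly random hyperplane
    return r @ B >= 0                     # True/False: which side each vertex lands on

# Hypothetical example: two antipodal vectors (angle 180 degrees) are always separated.
B = np.array([[1.0, -1.0],
              [0.0,  0.0]])
rng = np.random.default_rng(1)
side = random_hyperplane_cut(B, rng)
assert side[0] != side[1]
```

Vertices on the `True` side form S; the rest form V−S.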
Analysis Choose a random hyperplane. Exercise: restricted to the two-dimensional plane it is a random line, and so the probability that i and j are separated is 2θ/(2π) = θ/π, with θ the angle between B_i and B_j. We need to pose this in terms of the PSD program.
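The θ/π separation probability can be checked empirically with a Monte Carlo experiment: fix two unit vectors at a chosen angle and count how often a random hyperplane splits them. The angle θ = 2 and the sample count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                    # hypothetical angle between B_i and B_j
u = np.array([1.0, 0.0])                       # two unit vectors at angle theta
w = np.array([np.cos(theta), np.sin(theta)])

# A random hyperplane with normal r separates u and w exactly when
# r^T u and r^T w have opposite signs.
r = rng.standard_normal((200000, 2))
separated = ((r @ u >= 0) != (r @ w >= 0)).mean()
assert abs(separated - theta / np.pi) < 0.01   # empirical rate is close to theta/pi
```

With 200,000 samples the empirical rate lands well within 0.01 of θ/π ≈ 0.6366.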
On the plane Since we chose the vectors of norm 1, the inner product is just cos(θ), with θ the angle between B_j and B_i. The probability that i, j are separated is therefore θ/π = arccos((B_j)^T B_i)/π = arccos(y_ij)/π. We use the inequality: arccos(y_ij)/π ≥ 0.878 · (1 − y_ij)/2, which (by linearity of expectation over the edges) implies a 0.878 approximation ratio.
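The key inequality can be verified numerically on a dense grid of y ∈ [−1, 1]; the true minimum of the ratio (arccos(y)/π) / ((1 − y)/2) is the GW constant ≈ 0.87856, which is why 0.878 works with a little room to spare.

```python
import numpy as np

# Check arccos(y)/pi >= 0.878 * (1 - y)/2 on a fine grid of y in [-1, 1].
y = np.linspace(-1.0, 1.0, 100001)
lhs = np.arccos(y) / np.pi        # probability the edge is cut by the rounding
rhs = 0.878 * (1 - y) / 2         # 0.878 times the edge's SDP contribution
assert np.all(lhs >= rhs - 1e-12)  # holds everywhere; equality only at y = 1
```

Summing over the edges, the expected number of cut edges is at least 0.878 times the SDP optimum, hence at least 0.878 times the maximum cut.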