For 2a, it should say "expressed as a sum of products of up to k of f's Fourier coefficients".
Thanks to Jonah for pointing this out.
Wednesday, April 30, 2008
Tuesday, April 29, 2008
Clarification on HW6
Ok, yeah, part (b) was super-vague, sorry about that. Here is a more precise question, I think.
Consider the matrix Y, with rows and columns indexed by sets of at most (say) two vertices, where the entry Y_{S,T} depends only on S ∪ T; if this is psd, then we know, e.g., that Y_{{i},{j}} can be written as the dot product of vectors v_{i} and v_{j}, and also as the dot product of vectors v_∅ and v_{ij}, etc. Similarly Y_{{i},{i}} can be written as v_∅ dot v_{i}, and also as v_{i} dot v_{i}. Hence we now have these vectors v_S's, each subscripted by at most two vertices, with some constraints relating them.
The goal of the problem is to show that the SDP is at least as strong as a natural vertex cover SDP, which assigns a vector x_u to each node u to minimize Σ_u x_u · x_u subject to
- (x_∅ - x_u) · (x_∅ - x_v) = 0 for all edges (u,v).
- x_∅ · x_∅ = 1 (say).

Does your SDP also imply the triangle inequalities:

- (x_u - x_w) · (x_v - x_w) ≥ 0 for all vertices u, v, w?
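As a reminder of why psd-ness buys you such vectors: any psd matrix is a Gram matrix, and a Cholesky-type factorization makes the vectors explicit. Here is a minimal Python sketch; the matrix A below is just a made-up positive definite example, not one from the homework.

```python
from math import sqrt

def cholesky(A):
    """Return lower-triangular L with A = L * L^T, for symmetric psd A.
    The rows of L are then vectors v_i satisfying v_i . v_j = A[i][j]."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            # diagonal entries take a square root; off-diagonal ones divide
            L[i][j] = sqrt(max(s, 0.0)) if i == j else s / L[j][j]
    return L

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

A = [[1.0, 0.5, 0.25],
     [0.5, 1.0, 0.5],
     [0.25, 0.5, 1.0]]   # a positive definite example matrix
V = cholesky(A)          # V[i] is the vector realizing row/column i
assert all(abs(dot(V[i], V[j]) - A[i][j]) < 1e-9
           for i in range(3) for j in range(3))
```

This plain Cholesky assumes the matrix is positive definite; for a singular psd matrix one would use pivoting or an eigendecomposition instead.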
Monday, April 28, 2008
Course evaluation time...
Hi all: when you get a chance, could you go to https://my.cmu.edu/site/main/page.academics and give your feedback on the course. Thanks!
Sunday, April 27, 2008
Integrality Gaps for Sparsest Cut
A quick comment about the sparsest cut integrality gaps we saw last time, and the connection to the dual. (BTW, the notes for Lec 27 are up, thanks Varun!)
In Lecture 27, the arguments for the gaps went thus: at least D of the unit demands need to travel through at least ℓ edges in the network, hence sending an ε fraction of each unit demand uses at least ε·D·ℓ volume. And since we dealt with unit capacities, there was only m volume in the network (one unit per edge), hence the maximum concurrent flow is at most m/(D·ℓ). For the 5-node example one plugs in its small values of D, ℓ, and m; for the expander, D = Θ(n²), ℓ = Θ(log n), and m = O(n), so the maximum concurrent flow is O(1/(n log n)).
But suppose we are more sophisticated:
- Suppose we assign a nonnegative length ℓ_e to each edge e, so that the shortest path according to these edge lengths from s_i to t_i is now d_i. Think of each of the edges e as pipes with capacity c_e and length ℓ_e.
- Now if the pair (s_i, t_i) sends ε flow, each unit of flow must travel distance at least d_i. Hence the total volume used by this pair is at least ε·d_i. And the total volume used is at least ε·Σ_i d_i.
- But the total volume in the network is only Σ_e c_e·ℓ_e. Hence the maximum concurrent flow ε is at most (Σ_e c_e·ℓ_e) / (Σ_i d_i).
What we have just proved is weak LP-duality for the case of sparsest cut. The fact that the maximum concurrent flow exactly equals the minimum over edge lengths ℓ of (Σ_e c_e·ℓ_e) / (Σ_i d_i) is given by strong duality.
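To make the volume bound concrete, here is a small Python sketch that computes the upper bound (Σ_e c_e·ℓ_e)/(Σ_i d_i) on the maximum concurrent flow, with unit capacities and the simplest choice of lengths ℓ_e = 1, so that d_i is just hop distance. The K_{2,3} instance below is merely an illustrative example I picked, not necessarily the 5-node example from lecture.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Single-source shortest paths with unit edge lengths."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def concurrent_flow_upper_bound(n, edges, demands):
    """Weak-duality bound: eps <= (sum_e c_e*l_e) / (sum_i d_i),
    here with unit capacities c_e = 1 and unit lengths l_e = 1."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    volume = len(edges)  # sum_e c_e * l_e with c_e = l_e = 1
    total_dist = sum(bfs_dist(adj, s)[t] for s, t in demands)  # sum_i d_i
    return volume / total_dist

# Example: complete bipartite K_{2,3} ({0,1} vs {2,3,4}), unit demands
# between all 10 pairs: 6 pairs at distance 1, 4 pairs at distance 2.
edges = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]
demands = list(combinations(range(5), 2))
print(concurrent_flow_upper_bound(5, edges, demands))  # 6/14, about 0.4286
```

With cleverer (non-unit) lengths ℓ_e, the same computation gives the full LP dual bound.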
Monday, April 21, 2008
Hardness of Independent Set
I wanted to say a few more things about the hardness of Max-Independent-Set (equivalently, Min-Vertex-Cover). Recall that in Lecture 25, we sketched the proof of factor n^{1-ε} NP-hardness for this problem; specifically, Hastad's result shows that distinguishing max independent set size at least n^{1-ε} vs. at most n^{ε} is NP-hard for all ε > 0.
This is a great hardness factor, but (as they used to say) "the gap is in the wrong place" -- or at least, it's not in the strongest possible place. It is of great interest to try to obtain a hardness result of the form αn vs. βn, for constants α > β > 0. This will give a worse "factor", but the result is essentially stronger. Note that we can't hope for Ω(n) vs. n^{o(1)} NP-hardness; Boppana and Halldorsson showed, roughly, that n/k vs. n^{1/(k-1)} is in P (using Ramsey theory) and this was improved by Alon and Kahale to, roughly, n/k vs. n^{3/(k+1)}, using the Karger-Motwani-Sudan SDP coloring algorithm.
One reason hardness results like αn vs. "small" are better is that they give better "factor" hardness for Min-Vertex-Cover. Recall that, trivially, an αn vs. βn hardness result for Max-Independent-Set is a (1-α)n vs. (1-β)n hardness result for Min-Vertex-Cover, and thus a (1-β)/(1-α)-factor hardness for Min-Vertex-Cover. For example, Hastad's hardness gives only factor 1 + o(1) hardness for Vertex-Cover, but a result like (1/2)n vs. εn hardness for Independent-Set would give factor 2 - O(ε) hardness for Vertex-Cover.
For a long time, the best NP-hardness result like this we knew for Independent-Set came simply from taking Hastad's (1-ε) vs. (1/2+ε) hardness result for Max-E3Lin and applying the FGLSS reduction to it. You should (easily) check for yourself that this gives (1/4 - ε) vs. (1/8 + ε) NP-hardness for Independent-Set. Hence it gives (3/4 + ε) vs. (7/8 - ε) NP-hardness for Vertex-Cover; i.e., factor 7/6 - O(ε).
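If you want to sanity-check the FGLSS connection yourself, here is a toy Python sketch: the FGLSS graph of an E3Lin-2 instance has a vertex for each (equation, satisfying assignment) pair and an edge between any two inconsistent pairs, so its maximum independent set size equals the maximum number of simultaneously satisfiable equations. The tiny instance at the bottom is made up for illustration.

```python
from itertools import combinations, product

def fglss(equations):
    """FGLSS graph of an E3Lin-2 instance.
    equations: list of (i, j, k, b) meaning x_i + x_j + x_k = b (mod 2),
    with i, j, k distinct. Vertices are (equation index, satisfying
    assignment to its 3 variables); two vertices get an edge iff they
    disagree on a shared variable (two distinct satisfying assignments
    of the same equation always disagree somewhere)."""
    verts = []
    for idx, (i, j, k, b) in enumerate(equations):
        for a in product([0, 1], repeat=3):
            if sum(a) % 2 == b:  # each equation has 4 satisfying assignments
                verts.append((idx, {i: a[0], j: a[1], k: a[2]}))
    edges = {
        (u, v)
        for u, v in combinations(range(len(verts)), 2)
        if any(verts[u][1][x] != verts[v][1][x]
               for x in verts[u][1].keys() & verts[v][1].keys())
    }
    return verts, edges

def max_ind_set(nv, edges):
    """Brute-force maximum independent set (fine for tiny graphs)."""
    best = 0
    for mask in range(1 << nv):
        chosen = [i for i in range(nv) if mask >> i & 1]
        if all((u, v) not in edges for u, v in combinations(chosen, 2)):
            best = max(best, len(chosen))
    return best

def max_sat(n_vars, equations):
    """Brute-force max number of simultaneously satisfiable equations."""
    return max(
        sum((x[i] + x[j] + x[k]) % 2 == b for i, j, k, b in equations)
        for x in product([0, 1], repeat=n_vars)
    )

# x0+x1+x2=0, x1+x2+x3=1, x0+x2+x3=0 (mod 2); all three hold at x=(1,0,1,0).
eqs = [(0, 1, 2, 0), (1, 2, 3, 1), (0, 2, 3, 0)]
verts, edges = fglss(eqs)
assert max_ind_set(len(verts), edges) == max_sat(4, eqs) == 3
```

In general the FGLSS graph has 4m vertices for m equations, which is where the 1/4 and 1/8 fractions above come from.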
In 2002, in the same conference as Khot's Unique Games Conjecture paper, Dinur and Safra published a stronger NP-hardness result for Independent-Set. This was a pretty amazing paper, because the outline of the proof was:
a) Prove a super-weak variant of the Unique Games Conjecture;
b) Do a very difficult Dictator Test based on the combinatorics of 2-intersecting families.
Part (a), as mentioned, was done contemporaneously with the UGC paper, while part (b) was done before the work on Ek-Independent-Set described in our Lectures 8 and 9.
I like to remember their result as 1/3 vs. 1/9 NP-hardness for Independent-Set (omitting the ε's), and hence factor (1 - 1/9)/(1 - 1/3) = 4/3 hardness for Vertex-Cover. In fact, they pushed it a little further. Specifically, for p = (3 - √5)/2, they showed p vs. 4p^3 - 3p^4 NP-hardness for Independent-Set. This is, like, .382 vs. .159. This yields factor 1.36 or so NP-hardness for Vertex-Cover.
That stands as the best NP-hardness result known. Relatively soon after the UGC paper, Khot and Regev showed that assuming UGC, (1/2 - ε) vs. ε hardness holds for Max-Independent-Set, and thus factor 2 - ε hardness for Vertex-Cover. Indeed, they showed UGC-hardness of (1 - 1/k - ε) vs. ε for Max-Ek-Independent-Set, and hence factor k - ε hardness for Ek-Vertex-Cover.
Homework 5 graded
Homework 5 is graded and will be handed back tomorrow. The average was 79%.
A few comments:
1. Most people got this right (one way or another).
2. For part b), the two equally simplest counterexamples are {, , }, or {, , }. Aaron, Kanat, and Mike all gave the same, far-from-simplest counterexample. What a coincidence...
3. Many people failed to prove a characterization (if and only if statement) in c), and just gave an implication. Many people also had complicated solutions for the support-3 problem. Ali and Amitabh gave roughly the same solution, which is probably the slickest. Suppose f = a·χ_S + b·χ_T + c·χ_U, where the coefficients a, b, c are nonzero and S, T, U are distinct. By Parseval,

a² + b² + c² = 1 (*).

Substituting in the all-ones string, we get a + b + c = ±1. Squaring this and subtracting (*) yields

ab + ac + bc = 0 (**).

Since S, T, U are distinct, there must be an index i which is in exactly 1 or 2 of them. Substituting in the string which is 1 everywhere except that it's -1 on the ith coordinate, we conclude ±a ± b ± c = ±1, where exactly 1 or 2 of the left-hand signs are -. Squaring this and subtracting (*) yields

±ab ± ac ± bc = 0 (***),

where exactly two of the signs are -. Now add (**) and (***) and conclude that one of ab, ac, or bc must equal 0, a contradiction.
4. Most people got (a) and (b). For (c), several people estimated the degree-1 Fourier coefficients of Majority and concluded that it's, say, O(n^{-1/2})-quasirandom. Only Eric truly ventured to high degree, pointing out that every Fourier coefficient of Maj can be seen to be O(n^{-1/2}) in magnitude, and therefore by 5a, Maj is O(n^{-1/2})-quasirandom. In fact, one can show that the largest Fourier coefficients for Maj are the ones at level 1, of magnitude about √(2/(πn)) each, and hence Maj is Θ(n^{-1/2})-quasirandom.
5. Most people more or less got this.
6. Most people mostly got this; the key in b) is picking the right diameter.
7. Most people got this too, although many failed to do both parts of f).
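Facts like the ones in items 3 and 4 are easy to sanity-check by brute force; here is a little Python sketch that computes all Fourier coefficients of Maj on 5 bits and verifies Parseval, the vanishing of even levels, and the degree-1 coefficient value (this is just an illustration, not part of the solutions).

```python
from itertools import combinations, product
from math import prod

def fourier(f, n):
    """hat f(S) = E_x[f(x) * chi_S(x)], x uniform over {-1,1}^n."""
    xs = list(product([-1, 1], repeat=n))
    return {
        S: sum(f(x) * prod(x[i] for i in S) for x in xs) / len(xs)
        for r in range(n + 1)
        for S in combinations(range(n), r)
    }

def maj(x):
    return 1 if sum(x) > 0 else -1

c = fourier(maj, 5)
# Parseval: Maj is +/-1-valued, so the squared coefficients sum to 1.
assert abs(sum(v * v for v in c.values()) - 1) < 1e-9
# Maj is an odd function, so every even-level coefficient vanishes...
assert all(abs(v) < 1e-9 for S, v in c.items() if len(S) % 2 == 0)
# ...and each degree-1 coefficient equals C(4,2)/2^4 = 3/8.
assert all(abs(c[(i,)] - 3 / 8) < 1e-9 for i in range(5))
```

The same brute force (exponential in n, so only for small n) will also confirm the support-3 impossibility from item 3: no ±1-valued function has exactly three nonzero coefficients.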
Tuesday, April 15, 2008
Monday, April 14, 2008
Lecture 24: UGC-Hardness of Max-Cut
The completion of the proof of the reduction from Unique Label-Cover to Max-Cut can be found here.
Thursday, April 3, 2008