15-854B: Advanced Approximation Algorithms<a href="http://www.cs.cmu.edu/~anupamg/adv-approx/">http://www.cs.cmu.edu/~anupamg/adv-approx/</a> <br>
<small>This material is based upon work supported by the National Science Foundation under Grant No. CCF-0747250. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).</small><br /><br /><span style="font-weight: bold;">Another clarification on HW6</span> (Ryan O'Donnell, 2008-04-30)<br />For 2a, it should say "expressed as a sum of products of <span style="font-weight: bold;">up to</span> k of f's Fourier coefficients".<br /><br />Thanks to Jonah for pointing this out.<br /><br /><span style="font-weight: bold;">Clarification on HW6</span> (Anupam, 2008-04-29)<br />Ok, yeah, part (b) was super-vague, sorry about that. Here is a more precise question, I think.<br /><br />Consider the matrix $M_2(y)$; if this is psd, then we know, e.g., that $y_{abcd}$ can be written as the dot product of vectors $z_{ab}$ and $z_{cd}$, and also as the dot product of vectors $z_{bc}$ and $z_{ad}$, etc. Similarly $y_{ab}$ can be written as $z_{a}$ dot $z_b$, and also as $z_{ab}$ dot $z_\emptyset$. 
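As a quick numerical illustration (a toy example of my own, not part of the problem): when the moments $y$ come from an honest distribution over $\{0,1\}$ assignments, $M_2(y)$ is a Gram matrix, and vectors with exactly these dot products can be written down directly, one coordinate per point of the support. A minimal stdlib-only sketch, with vertices $a,b,c,d$ as $0,1,2,3$:

```python
from math import isclose, sqrt

# Toy moments y_S = E[prod_{i in S} x_i] for an (assumed) distribution over
# {0,1}^4 -- here, uniform over two assignments to vertices a,b,c,d = 0,1,2,3.
atoms = [(1, 0, 1, 0), (1, 1, 1, 1)]
prob = 1.0 / len(atoms)

def y(S):
    # the moment y_S; all(...) over an empty S is True, so y(()) = 1
    return sum(prob for x in atoms if all(x[i] for i in S))

def z(S):
    # Gram vector for S: coordinate sqrt(Pr[x]) * prod_{i in S} x_i per atom x.
    # Then z(S) . z(T) = y(S union T), which is what psd-ness of M_2(y)
    # guarantees in general.
    return [sqrt(prob) if all(x[i] for i in S) else 0.0 for x in atoms]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# y_{abcd} = z_{ab}.z_{cd} = z_{bc}.z_{ad};  y_{ab} = z_a.z_b = z_{ab}.z_emptyset
assert isclose(dot(z((0, 1)), z((2, 3))), y((0, 1, 2, 3)))
assert isclose(dot(z((1, 2)), z((0, 3))), y((0, 1, 2, 3)))
assert isclose(dot(z((0,)), z((1,))), y((0, 1)))
assert isclose(dot(z(()), z((0, 1))), y((0, 1)))
```

(In the homework problem the vectors come from a Cholesky/eigendecomposition of a psd $M_2(y)$ rather than from an explicit distribution, but the dot-product identities checked here are the same.)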
Hence we now have these vectors $z$'s, each subscripted by at most two vertices, with some constraints relating them.<br /><br />The goal of the problem is to show that the SDP $Q_1(K_{VC})$ is at least as strong as a natural vertex cover SDP, which assigns a vector $v_i$ to each node $i$ to minimize $\sum_i v_i \cdot v_i$ subject to<br /><ul><li>$(v_\emptyset - v_a) \cdot (v_\emptyset - v_b) = 0$ for all edges $ab$.</li><li>$v_\emptyset \cdot v_\emptyset = 1$ (say).</li></ul>Can you show that the SDP you've created implies these constraints?<br /><br />Does your SDP also imply the triangle inequalities:<br /><ul><li>$||v_a - v_b||^2 + ||v_b - v_c||^2 \geq ||v_a - v_c||^2$?</li></ul>If things are still vague (or if you see some mistakes above), please email/post yr questions.<br /><br /><span style="font-weight: bold;">Course evaluation time...</span> (Anupam, 2008-04-28)<br />Hi all: when you get a chance, could you go to <a href="https://my.cmu.edu/site/main/page.academics">https://my.cmu.edu/site/main/page.academics</a> and give yr feedback on the course. thanks!<br /><br /><span style="font-weight: bold;">Integrality Gaps for Sparsest Cut</span> (Anupam, 2008-04-27)<br />A quick comment about the sparsest cut integrality gaps we saw last time, and the connection to the dual. (BTW, the notes for Lec 27 are up, thanks Varun!)<br /><br />In Lecture 27, the arguments for the gaps went thus: at least $k$ of the demands need to travel through at least $p$ edges each, hence sending a $\tau$ fraction of each unit demand uses $\tau \cdot k \cdot p$ volume. And since we dealt with unit capacities, there was only $m$ volume in the network, hence the maximum concurrent flow is at most $m/(pk)$. 
For the 5-node example, $p=2$, $k=4$ and $m=6$; for the expander, $k = \Omega(n^2)$, $p = \Omega(\log n)$, and $m = O(n)$.<br /><br />But suppose we are more sophisticated:<br /><ol><li>Suppose we assign lengths $d_e$ to each edge $e$, so that the shortest path according to these edge lengths from $i$ to $j$ is now $d_{ij}$. Think of each of the edges $(i,j)$ as pipes with capacity $C_{ij}$ and length $d_{ij}$.<br /><br /></li><li>Now if the pair $ij$ sends $\tau D_{ij}$ flow, each unit of flow must travel $d_{ij}$ distance. Hence the total volume used by this pair is $\tau D_{ij} \cdot d_{ij}$. And the <span style="font-style: italic;">total volume </span>used is $\sum_{ij} \tau \cdot D_{ij} \cdot d_{ij}$.<br /><br /></li><li>But the total volume in the network is now $\sum_{(i,j) \in E} C_{ij} d_{ij}$.</li></ol>Hence, the maximum concurrent flow $\tau^*$ is at most $\sum_{(i,j) \in E} C_{ij} d_{ij}$ divided by $\sum_{ij} D_{ij} \cdot d_{ij}$. Moreover, <span style="font-style: italic;">this is true for any setting of the edge lengths</span> $d_e$. Hence, $\tau^*$ is at most the minimum of $(C \cdot d)/(D \cdot d)$ over all metrics $d$, or in other words, $\tau^* \leq \lambda^*$!<br /><br />What we have just proved is weak LP-duality for the case of sparsest cut. The fact that $\tau^*$ <span style="font-style: italic;">equals </span>$\lambda^*$ is given by strong duality.<br /><br /><span style="font-weight: bold;">Hardness of Independent Set</span> (Ryan O'Donnell, 2008-04-21)<br />I wanted to say a few more things about the hardness of Max-Independent-Set (equivalently, Min-Vertex-Cover). Recall that in Lecture 25, we sketched the proof of factor $1/n^{1-\epsilon}$ NP-hardness for this problem; specifically, Hastad's result shows that distinguishing $1/n^{\epsilon}$ vs. 
$1/n^{1-\epsilon}$ is NP-hard for all $\epsilon > 0$.<br /><br />This is a great hardness factor, but (as they used to say) "the gap is in the wrong place" -- or at least, it's not in the strongest possible place. It is of great interest to try to obtain a hardness result of the form $\Omega(1)$ vs. $o(1)$. This will give a worse "factor", but the result is essentially stronger. Note that we can't hope for $\Omega(1)$ vs. $1/n^{1-\epsilon}$ NP-hardness; Boppana and Halldorsson showed, roughly, that $1/k$ vs. $1/n^{(k-2)/(k-1)}$ is in P (using Ramsey theory) and this was improved by Alon and Kahale to, roughly, $1/k$ vs. $1/n^{(k-2)/(k+1)}$, using the Karger-Motwani-Sudan SDP coloring algorithm.<br /><br />One reason hardness results like $\Omega(1)$ vs. "small" are better is that they give better "factor" hardness for Min-Vertex-Cover. Recall that, trivially, a $c$ vs. $s$ hardness result for Max-Independent-Set is a $1-c$ vs. $1 - s$ hardness result for Min-Vertex-Cover, and thus a $(1-s)/(1-c)$-factor hardness for Min-Vertex-Cover. For example, Hastad's hardness gives only $1+o(1)$ factor hardness for Vertex-Cover, but a result like $1/3$ vs. $o(1)$ hardness for Independent-Set would give factor $3/2 - o(1)$ hardness for Vertex-Cover.<br /><br />For a long time, the best NP-hardness result like this we knew for Independent-Set came simply from taking Hastad's $1 - \epsilon$ vs. $1/2 + \epsilon$ hardness result for Max-E3Lin and applying the FGLSS reduction to it. You should (easily) check for yourself that this gives $1/4 - \epsilon$ vs. $1/8 + \epsilon$ NP-hardness for Independent-Set. Hence it gives $3/4 + \epsilon$ vs. $7/8 - \epsilon$ NP-hardness for Vertex-Cover; i.e., factor $7/6 + \epsilon$.<br /><br />In 2002, in the same conference as Khot's Unique Games Conjecture paper, Dinur and Safra published a <a href="http://www.cs.huji.ac.il/%7Edinuri/mypapers/vc.pdf">stronger NP-hardness result for Independent-Set</a>. 
This was a pretty amazing paper, because the outline of the proof was:<br /><br />a) Prove a super-weak variant of the Unique Games Conjecture;<br />b) Do a very difficult Dictator Test based on the combinatorics of $2$-intersecting families.<br /><br />Part (a), as mentioned, was done contemporaneously with the UGC paper, while part (b) was done <span style="font-style: italic;">before</span> the work on Ek-Independent-Set described in our Lectures 8 and 9. <br /><br />I like to remember their result as $1/3$ vs. $1/9$ NP-hardness for Independent-Set (omitting the $\epsilon$'s), and hence factor $(8/9)/(2/3) = 4/3$ hardness for Vertex-Cover. In fact, they pushed it a little further. Specifically, for $p = (3 - \sqrt{5})/2$, they showed $p$ vs. $4p^3 - 3p^4$ NP-hardness for Independent-Set. This is, like, .382 vs. .159. This yields factor 1.36 or so NP-hardness for Vertex-Cover.<br /><br /><br />That stands as the best NP-hardness result known. Relatively soon after the UGC paper, <a href="http://www.cs.tau.ac.il/%7Eodedr/goto.php?name=3color_link&link=http://eccc.uni-trier.de/eccc-reports/2005/TR05-039/index.html">Khot and Regev showed</a> that assuming UGC, $1/2 - \epsilon$ vs. $\epsilon$ hardness holds for Max-Independent-Set, and thus factor $2-\epsilon$ hardness for Vertex-Cover. Indeed, they showed UGC-hardness of $1/k - \epsilon$ vs. 
$\epsilon$ for Min-Ek-Independent-Set.<br /><br /><span style="font-weight: bold;">End of Lecture 25</span> (Ryan O'Donnell, 2008-04-21)<br /><a href="http://www.cs.cmu.edu/%7Eanupamg/adv-approx/hastad-wigderson.pdf">Here </a>is the end of the proof I didn't get a chance to finish from Lecture 25.<br /><br /><span style="font-weight: bold;">Homework 5 graded</span> (Ryan O'Donnell, 2008-04-21)<br />Homework 5 is graded and will be handed back tomorrow. The average was 79%.<br /><br />A few comments:<br /><br />1. Most people got this right (one way or another).<br /><br />2. For part b), the two equally simplest counterexamples are $\{x_1, x_1, 1\}$ or $\{x_1, x_1, x_1\}$. Aaron, Kanat, and Mike all gave the same, far-from-simplest counterexample. What a coincidence...<br /><br />3. Many people failed to prove a <span style="font-style: italic;">characterization</span> (if and only if statement) in c), and just gave an implication. Many people also had complicated solutions for the support-3 problem. Ali and Amitabh gave roughly the same solution, which is probably the slickest. Suppose $f = a \chi_S + b \chi_T + c \chi_U$, where the coefficients $a, b, c$ are nonzero and $S$, $T$, $U$ are distinct. By Parseval,<br /><br />$a^2 + b^2 + c^2 = 1$ (*).<br /><br />Substituting in $(1, ..., 1)$, we get $a + b + c \in \{-1,1\}$. Squaring this and subtracting (*) yields<br /><br />$ab + bc + ac = 0$ (**).<br /><br />Since $S$, $T$, $U$ are distinct, there must be an index $i$ which is in exactly 1 or 2 of them. 
Substituting in the string which is 1 everywhere except that it's $-1$ on the $i$th coordinate, we conclude $\sigma_1 a + \sigma_2 b + \sigma_3 c \in \{-1,1\}$, where exactly 1 or 2 of the signs $\sigma_i$ are $-1$. Squaring this and subtracting (*) yields<br /><br />$\tau_1 ab + \tau_2 bc + \tau_3 ac = 0$ (***),<br /><br />where exactly two of the signs $\tau_i$ are $-1$. Now add (**) and (***) and conclude that one of $ab$, $bc$, or $ac$ must equal 0, a contradiction.<br /><br />4. Most people got (a) and (b). For (c), several people estimated the degree-1 Fourier coefficients of Majority and concluded that it's, say, $(1/n, 1/2)$-quasirandom. Only Eric truly ventured to high degree, pointing out that $Inf_i(Maj)$ can be seen to be $\Theta(1/\sqrt{n})$, and therefore by 5a, Maj is $(\Theta(1/\sqrt{n}),0)$-quasirandom. In fact, one can show that the largest Fourier coefficients for Maj are the ones at level $1$, and hence Maj is $(1/n, 0)$-quasirandom.<br /><br />5. Most people more or less got this.<br /><br />6. Most people mostly got this; the key in b) is picking the right diameter.<br /><br />7. 
Most people got this too, although many failed to do both parts of f).<br /><br /><span style="font-weight: bold;">Homework 6 out</span> (Anupam, 2008-04-15)<br /><a href="http://www.cs.cmu.edu/%7Eanupamg/adv-approx/homework6.pdf">Homework 6</a> is out: it's due May 1st.<br /><br /><span style="font-weight: bold;">Lecture 24: UGC-Hardness of Max-Cut</span> (Ryan O'Donnell, 2008-04-14)<br />The completion of the proof of the reduction from Unique Label-Cover to Max-Cut can be found <a href="http://www.cs.washington.edu/education/courses/533/05au/lecture19.pdf">here</a>.<br /><br /><span style="font-weight: bold;">Homework 4</span> (Anupam, 2008-04-03)<br />A high-scoring one: the average was 88. Notes and comments soon.<br /><br /><span style="font-weight: bold;">Hwk 5 clarifications</span> (Anupam, 2008-03-31)<br />HW5 has been extended to Thursday Apr 3rd. Here are some typos to watch out for ---<br /><ul><li>Q6, the demands are all integers.</li><li>Q7e, it should be $\epsilon/\log n$, and not $\epsilon \log n$.<br /></li></ul><br /><span style="font-weight: bold;">Tomorrow's class</span> (Ryan O'Donnell, 2008-03-31)<br />Tomorrow we're going to dive more deeply into hardness of approximation results for CSPs. 
Please take a look again at the ones we've already proved: for Max-Coverage (Lecture 3) and for Min-Ek-Hypergraph-Independent-Set (Lectures 8 and 9).<br /><br /><span style="font-weight: bold;">Lecture notes 16 and 17</span> (Ryan O'Donnell, 2008-03-29)<br />Lecture notes 16 and 17 are posted. Sorry for the delay.<br /><br /><span style="font-weight: bold;">Prep for lecture tomorrow: recap Lecture 18</span> (Anupam, 2008-03-24)<br />For tomorrow's lecture (#19), please recall the definition of metrics embedding into trees (problem #6 from <a href="http://www.cs.cmu.edu/%7Eanupamg/adv-approx/homework4.pdf">HW4</a>). Also, have a look at the <a href="http://approximability.blogspot.com/2008/03/lecture-18-last-bits-of-proof.html">correctness proof</a> for low-diameter decompositions posted on Thursday, and try to prove the following extension of that result:<br /><br /><span style="font-weight: bold;">Extension: </span>For a vertex $v$ and radius $r$, let $B(v,r) = \{ w | d_{vw} \leq r\}$ be the "ball" of all vertices within distance $r$ to $v$. Extend the proof from Lecture 18 to show that the probability of an edge $(u,v)$ being cut is at most<br /><br /><div style="text-align: center;">$O(1) \times (d_{uv} / r) \times \log (|B(u,2r)| / |B(u,r/4)|)$.<br /><div style="text-align: left;"><br />(Hint: The <a href="http://approximability.blogspot.com/2008/03/lecture-18-last-bits-of-proof.html">previous proof</a> sums $1/j$ for all $j$ from $1$ to $n$ to get $H_n$. Can you show it suffices to sum over a smaller range? 
You may also want to consider cases depending on whether or not $d_{uv}$ is at least $r/4$.)<br /></div></div><br /><span style="font-weight: bold;">Homework 5</span> (Anupam, 2008-03-21)<br />Sorry for the delay, <a href="http://www.cs.cmu.edu/%7Eanupamg/adv-approx/homework5.pdf">HW5</a> is up on the web page.<br /><br /><span style="font-weight: bold;">Lecture 18: Last bits of the proof</span> (Anupam, 2008-03-20)<br />Recall the low-diameter decomposition procedure: we picked a random radius $X$ and a random permutation $\pi$. We then constructed the random partition by considering all the vertices one by one, in the order given by $\pi$: when considering $w$, we assigned all the yet-unassigned vertices $v$ with $d(w,v) \leq X$ to $w$'s partition.<br /><br />Just for the analysis: suppose we renumber the vertices $w_1, w_2, \dots, w_n$ in order of their distance from the edge $e = (u,v)$.<br /><br />The crucial definitions (that avoid pain later) are the following:<br /><ol><li>At some time instant in this procedure, one (or both) of $u$ or $v$ gets assigned to some $w_i$. Say that $w_i$ settles the edge $(u,v)$.<br /><br /></li><li>At the moment the edge is settled, if only one endpoint of this edge gets assigned, then we say that $w_i$ cuts the edge $(u,v)$.<br /></li></ol>Note that each edge is settled at exactly one time instant in the procedure, and it may or may not be cut at that point in time. Of course, once the edge is settled (with or without being cut), it is never cut in the future.<br /><br />Consider $w_j$ and let its distance $d(w_j,u) = a_j$ and $d(w_j, v) = b_j$. Assume $a_j < b_j$; the other case is identical. 
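As an aside, the procedure itself is only a few lines of code. Here is a minimal sketch (assuming $X$ is drawn uniformly from $[0, r/2]$, which bounds each cluster's diameter by $r$; the exact distribution of $X$ only affects the constants in the analysis):

```python
import random

def low_diameter_decomposition(d, vertices, r):
    """One sample from the random partition: pick a radius X and a random
    permutation pi; each center w, in order, grabs all still-unassigned
    vertices within distance X of it.  d[(u, v)] is the metric.
    Assumes X ~ Uniform[0, r/2], so every cluster has diameter at most r."""
    X = random.uniform(0, r / 2)
    pi = list(vertices)
    random.shuffle(pi)
    cluster = {}                       # vertex -> the center that captured it
    for w in pi:
        for v in vertices:
            if v not in cluster and d[(w, v)] <= X:
                cluster[v] = w
    return cluster

# Tiny sanity check on a 4-cycle with shortest-path distances.
V = [0, 1, 2, 3]
d = {(u, v): min((u - v) % 4, (v - u) % 4) for u in V for v in V}
part = low_diameter_decomposition(d, V, r=4.0)
assert set(part) == set(V)             # every vertex is assigned exactly once
# each vertex lies within X <= r/2 of its center, so cluster diameter <= r
assert all(d[(part[v], v)] <= 2.0 for v in V)
```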
If $w_j$ cuts $(u,v)$ when the random values are $X$ and $\pi$, the following two properties must hold:<br /><ol><li>The random variable $X$ must lie in the interval $[a_j, b_j]$ (else either neither or both of the endpoints $u,v$ would get assigned).<br /><br /></li><li>The node $w_j$ must come before $w_1, ..., w_{j-1}$ in the permutation $\pi$.<br /><br />Suppose not, and one of them came before $w_j$ in the permutation. Since all these vertices are closer to the edge than $w_j$ is, then for the current value of $X$, they would have settled the edge (captured one or both of the endpoints) at some previous time point, and hence $w_j$ would not settle---and hence not cut---the edge $(u,v)$.<br /></li></ol>Now the rest of the argument is as in class: Pr[ edge $e$ is cut ] = $\sum_j$ Pr[ $w_j$ cuts the edge $e$]. Moreover, Pr[$w_j$ cuts $e$] $\leq$ Pr[$X \in [a_j, b_j]$ and $w_j$ comes before $w_1, ... w_{j-1}$ in the permutation $\pi$] $\leq$ $(d_{uv}/(r/2)) \times (1/j)$. And summing this over all vertices gives us $2 (d_{uv}/r) H_n$.<br /><br /><span style="font-weight: bold;">Homework 4, problem 2b comment</span> (Ryan O'Donnell, 2008-03-16)<br />Just show you can get a gap <span style="font-style: italic;">at least</span> this bad. (In other words, although you will need to construct a good SDP solution for $C_5$, you need not prove it is optimal. Unless you want to. Say, by using duality.)<br /><br /><span style="font-weight: bold;">Homework 3 graded</span> (Ryan O'Donnell, 2008-03-16)<br />I'll return them on Tuesday. The average was 79%. Some comments:<br /><br />1a) Several people erroneously claimed a $\log^* |D|$ approximation here. (In fact, no factor approximation is possible.) The reason the AKC algorithm does not work is as follows: Assume we have correctly guessed $R$ = Opt. 
At first, we indeed know that there are $k$ vertices which $R$-cover all of $D$. So the first Set-Cover approximation returns some set of vertices $A$ $R$-covering $D$. But now, we <span style="font-style: italic;">do not know</span> that there are $k$ vertices which $R$-cover $A$. So we are stuck.<br /><br />2. This was surprisingly not done so well. Most people seemed to get the key point that ($c_0$, $s_0$) hardness implies hardness along the two line segments (in your diagram) joining this point to (1,1) and to (0,0). But people seemed to have a very hard time deducing the easy region (due to the Vertex-Cover algorithm), which is the triangle with vertices (1,1), (1, 0), and (1-1/k, 0). <br /><br />3. Most everybody got this one, except for failing to point out in part b the simple but key fact that the reduction preserves Opt = 1.<br /><br />4. As mentioned earlier, this question was kind of screwed up. In honor of that, everyone received 3 bonus points. Mainly, we just wanted you to revisit the Max-Coverage proof, pay attention to the parameters, and see how the parameters from #3 fit in.<br /><br />5. Almost no one tried this, although it is not too hard.<br /><br />6. Most people who tried this more or less got it, although some failed to argue why shifting preserves the weight of the set. For a solution, see <a href="http://www.cs.washington.edu/homes/venkat/pubs/papers/hvc-k-1.ps">the original paper</a>.<br /><br /><span style="font-weight: bold;">Mathematical odds and sods for SDP</span> (Ryan O'Donnell, 2008-03-16)<br />Hello all, hope you had a good spring break. Scribe notes for the last two lectures are up. 
Sorry for the delay; my fault, not the scribes'.<br /><br />I thought I would record here a few small mathematical facts that came up in our lectures on SDP algorithms for Max-Cut and Coloring.<br /><br /><span style="font-weight: bold;">The GW constant</span>: Recall the GW constant,<br /><br />$\alpha := \min_{-1 \leq \rho \leq 1} [\arccos(\rho)/\pi]/[1/2 - \rho/2]$.<br /><br />Using calculus and some trigonometric substitutions, you can determine that $\alpha = 2/(\pi \sin \theta)$, where $\theta$ is the positive solution of $\theta = \tan(\theta/2)$. As far as I know, it is not known if $\alpha$ is irrational, although presumably it is transcendental.<br /><br /><span style="font-weight: bold;">Gaussians: </span>Remember that a real random variable <span style="font-weight: bold;">X </span>has the "standard Gaussian (normal) distribution" if its pdf is $\phi(x) := (1/\sqrt{2\pi}) \exp(-x^2/2)$. The annoying thing about this pdf is that it does not have a closed-form integral, although several closely related expressions do. For example, $x\phi(x)$ has $-\phi(x)$ as an antiderivative. It's easy to check that <span style="font-weight: bold;">E</span>[$X^k$] = 0 whenever $k$ is an odd positive integer, and with enough integration-by-parts effort you can check that <span style="font-weight: bold;">E</span>[$X^k] = (k-1)(k-3) \cdots 3 \cdot 1$ whenever $k$ is an even positive integer. In particular, <span style="font-weight: bold;">X</span> has mean 0 and variance 1.<br /><br />In class we were particularly interested in the "tail", $\mathcal{N}(t) := $ <span style="font-weight: bold;">Pr</span>[<span style="font-weight: bold;">X</span> $\geq t$] $= \int_{t}^\infty \phi(x) dx$, where $t$ is positive. Here is how to accurately estimate this quantity:<br /><br />For an upper bound, just insert a factor of $x/t$ into the integrand, which is always at least $1$ on the range. 
Then you can immediately integrate and get the upper bound<br /><br />$\mathcal{N}(t) \leq (1/t) \phi(t)$.<br /><br />For a lower bound, you need a cleverer trick. Insert a factor of $1 - 3/x^4$ into the integrand (!), which is always less than $1$. You can check that the resulting quantity, $(1-3/x^4)\phi(x)$, has an explicit antiderivative: $(1/x^3 - 1/x)\phi(x)$. Thus you can conclude<br /><br />$\mathcal{N}(t) \geq (1/t - 1/t^3) \phi(t)$.<br /><br />(I think this trick is due to Feller.) Thus we have excellent bounds for $\mathcal{N}(t)$ for large $t$: in particular,<br /><br />$\mathcal{N}(t) \sim (1/t) \phi(t)$<br /><br />as mentioned in class.<br /><br /><br /><span style="font-weight: bold;">How to draw a Gaussian random variable: </span>In practice, just use "randn" in Matlab :) In theory, suppose you have access to an oracle which gives you a uniform real from the interval [0,1]. Then the easiest thing is to draw <span style="font-style: italic;">two</span> random numbers $s$ and $\theta$ from [0,1] and then set <span style="font-weight: bold;">g</span> = $\sqrt{2\ln(1/s)} \cos(2\pi \theta)$, <span style="font-weight: bold;">h </span>= $\sqrt{2\ln(1/s)} \sin(2\pi \theta)$. Then <span style="font-weight: bold;">g</span> and <span style="font-weight: bold;">h</span> are two independent Gaussians! (This is the "Box-Muller Transformation".) It is a nice exercise to see why this is true.<br /><br />Of course, in practice in theory (if you will), the usual model of randomness is to assume you have random <span style="font-style: italic;">bits</span>. Also, of course, you can't compute ln, cos, sin, $\pi$, etc. to arbitrary precision. So in reality, you need to make some approximations. <br /><br />Just as with the issue of solving SDPs exactly, these approximations introduce additive errors of $\epsilon$, with running time overhead of polylog($1/\epsilon$). 
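Both the tail bounds and the Box-Muller transformation above are easy to sanity-check numerically; a small stdlib-only sketch (the sample sizes and tolerances here are my own, purely illustrative choices):

```python
import math
import random

def phi(x):
    # standard Gaussian pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def N(t):
    # Pr[X >= t] via the complementary error function
    return 0.5 * math.erfc(t / math.sqrt(2))

# Feller's bounds: (1/t - 1/t^3) phi(t) <= N(t) <= (1/t) phi(t)
for t in [1.5, 2.0, 3.0, 5.0]:
    assert (1 / t - 1 / t ** 3) * phi(t) <= N(t) <= (1 / t) * phi(t)

# Box-Muller: g = sqrt(2 ln(1/s)) cos(2 pi theta) is a standard Gaussian.
random.seed(0)
samples = []
for _ in range(100_000):
    s = 1.0 - random.random()          # uniform in (0, 1], so log(1/s) is finite
    theta = random.random()
    samples.append(math.sqrt(2 * math.log(1 / s)) * math.cos(2 * math.pi * theta))
mean = sum(samples) / len(samples)
var = sum(g * g for g in samples) / len(samples)
assert abs(mean) < 0.02 and abs(var - 1) < 0.05   # ~N(0,1) first two moments
```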
And just as with the SDP-solving issue, we will try to never speak of such annoying details again :)<br /><br /><span style="font-weight: bold;">Compendium of approximability</span> (Ryan O'Donnell, 2008-03-16)<br />[somehow I deleted the original of this posting...]<br /><br /><a href="http://www.nada.kth.se/%7Eviggo/wwwcompendium/node276.html">This site</a>, by Pierluigi Crescenzi and Viggo Kann describes the best known approximation guarantees and inapproximability results for a very large number of problems.<br /><br />It's quite out of date, but still a useful resource.<br /><br /><span style="font-weight: bold;">k-Set-Cover</span> (Ryan O'Donnell, 2008-03-04)<br />Jeremiah asked me today about the status of the problem "k-Set-Cover" for small k. This is the Min-Set-Cover problem where all the sets have cardinality at most k. <br /><br />I wasn't exactly sure on the status, actually. On the homework we showed that the greedy algorithm gives an $H_k$ ($k$th harmonic number) factor approximation. I guessed that this was known to be more or less sharp.<br /><br />Turns out that is true to some extent. With some augmentations to Feige's result, <a href="http://www.cs.berkeley.edu/%7Eluca/pubs/stoc01.ps">Trevisan</a> showed that the problem is hard to approximate within a factor of $\ln k - \Omega(\ln \ln k)$. <br /><br />While this looks pretty sharp, it's actually useless for the relatively interesting case of small constant $k$. For $k = 2$ the problem is AKA "Edge Cover", and it's solvable in P. 
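(The $k=2$ case is in P via maximum matching: by Gallai's identity, in a graph with no isolated vertices a minimum edge cover has size $n - \mu(G)$, where $\mu(G)$ is the maximum matching size; take a maximum matching and add one edge per uncovered vertex. A brute-force sanity check of the identity on a small graph of my own choosing, purely illustrative:)

```python
from itertools import combinations

# A small graph with no isolated vertices: a 5-cycle plus one chord.
V = range(5)
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

def is_matching(es):
    # no vertex appears in two chosen edges
    used = [v for e in es for v in e]
    return len(used) == len(set(used))

def covers(es):
    # every vertex touched by some chosen edge
    return {v for e in es for v in e} == set(V)

max_matching = max(r for r in range(len(E) + 1)
                   for es in combinations(E, r) if is_matching(es))
min_edge_cover = min(r for r in range(1, len(E) + 1)
                     for es in combinations(E, r) if covers(es))

# Gallai's identity: min edge cover + max matching = n
assert max_matching + min_edge_cover == len(V)
```

Here the maximum matching has size 2 and the minimum edge cover size 3, matching $n = 5$.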
<br /><br />For $k \geq 3$ the problem is known to be NP-hard, and furthermore known to be "APX-hard", meaning that it is hard to approximate within a factor of $1 + \epsilon$ for some universal constant $\epsilon > 0$. (Result due to Papadimitriou and Yannakakis?) Improving this to an explicit decent value of $\epsilon$, depending on $k$, would be a nice problem.<br /><br />On the positive side, the case of most interest to Jeremiah was $k = 3$. Here the best result is due to <a href="http://citeseer.ist.psu.edu/rd/5552853%2C517859%2C1%2C0.25%2CDownload/http://citeseer.ist.psu.edu/compress/0/papers/cs/11873/http:zSzzSzwww.cse.psu.eduzSz%7EfurerzSzPaperszSz.zSzDuh.ps.gz/duh97approximation.ps">Duh and Fürer</a>: a factor 4/3 algorithm. Actually, they get a factor $H_k - 1/2$ algorithm for any $k$.<br /><br />There have been subsequent marginal improvements. A result slightly better than 19/12 is known for $k = 4$, and the world record is $H_k - .59$ for all $k \geq 6$; that appears in this recent paper of <a href="http://www.springerlink.com/content/n820gv7g30mv1356/">Athanassopoulos, Caragiannis, and Kaklamanis</a>.<br /><br /><span style="font-weight: bold;">Homework 2: Notes and Comments</span> (Anupam, 2008-02-29)<br />Here are solution sketches and comments on the rest of problems in HW2.<br /><ol><li>For the naive algorithm on unweighted graphs, several answers said: "if an edge is uncovered, the algorithm will pick two vertices when the optimal algorithm picks one vertex". While this is true (and is indeed the basis for the proof), you need to show that we are not over-counting. 
One way is to clearly state a lower bound on OPT (the size of any maximal matching), and state why the algorithm is at most twice the lower bound.<br /><br />For the randomized VC algorithm, see the solutions to problem 2b from <a href="http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f04/www/solutions/sol5.txt">this</a> previous semester's course.<br /><br />For the primal-dual algorithm, we write down the dual which maximizes $\sum_e y_e$ such that for each vertex $\sum_{e: v \in e} y_e \leq c_v$, and the $y_e$ values are non-negative. We start with the zero primal and dual solutions, and raise all the $y_e$'s uniformly until some constraint becomes tight (i.e., for some vertex $v$, $\sum_{e: v \in e} y_e = c_v$). Pick the vertex in the primal solution, and "freeze" all the variables involved in this constraint. Now continue raising all the unfrozen edge variables, picking vertices and freezing edges when other constraints become tight, etc., until all the edge variables are frozen: we now have a feasible primal vertex cover $C$. (Why?) Now the cost of this solution is $\sum_{v \in C} c_v = \sum_{v \in C} \sum_{e: v \in e} y_e$ $\leq \sum_{e} \sum_{v \in C \cap e} y_e \leq 2 \sum_e y_e$, which is the dual value. (The last inequality is because each edge contains at most 2 vertices.) Hence the vertex cover costs at most twice the feasible dual solution.<br /><br />For the greedy algorithm, if you have a star with center cost $1+\epsilon$, and $n$ leaves with costs $1, 1/2, 1/3, \ldots, 1/n$, then you will pick all the leaves instead of the center, and hence only get an $H_n$ approx. 
An $O(\log m) = O(\log n)$ upper bound follows from the standard set cover analysis.<br /><br /></li><li>One point often missed: this problem defined $C^*$ and $F^*$ to be the facility and connection costs for the optimal <span style="font-style: italic;">integer </span>solution, and not for the optimal LP solution.<br /><br />Also, if you claim that after filtering you can use the standard Set Cover randomized rounding to obtain a solution, you need to show how the LP solution for facility location can be mapped to an LP solution for Set Cover.<br /><br /></li><li>No surprises here.<br /><br /></li><li>Build a bipartite graph on clauses and variables, and use Hall's theorem to infer the existence of a perfect matching.<br /></li></ol><br /><span style="font-weight: bold;">Homework 2 graded</span> (Ryan O'Donnell, 2008-02-28)<br />Sorry for the delay. We'll hand it back today. Overall average was 84%.<br /><br /><span style="font-weight: bold;">Solutions to Homework 2, #6</span> (Ryan O'Donnell, 2008-02-28)<br />7a. Take any $s$ sets in $\mathcal{F}_{s,t,\ell}$. By definition, each misses at most $\ell$ elements from $[t + s\ell]$. Thus their intersection misses at most $s \ell$ elements from $[t + s\ell]$ and thus has size at least $t$.<br /><br />7b. Extend the definition of $S_{ij}$ to sets in the natural way. It's obvious that if $A$ is in $\mathcal{F}_{s,t,\ell}$ then $S_{ij}(A)$ is too. Hence shifting has no effect on $\mathcal{F}_{s,t,\ell}$.<br /><br />7c. This one definitely takes much more explanation, but I don't think it's actually tricky -- you just have to follow your nose all the way through. 
However, you <span style="FONT-STYLE: italic">do</span> have to avoid the following two extremely common mistakes:<br /><br />"$S_{ij}\mathcal{F}$ is given by taking $S_{ij}(A)$ for all $A \in \mathcal{F}$." -- Not true.<br /><br />"For any $A' \in S_{ij}\mathcal{F}$, there is an $A \in \mathcal{F}$ such that $S_{ij}(A) = A'$." -- False false false! Consider $i = 1$, $j = 2$, $n = 2$, $\mathcal{F} = \{ \{1\}, \{2\} \}$. Then $S_{12} \mathcal{F} = \mathcal{F}$, and yet for $\{2\} \in S_{ij} \mathcal{F}$, there is no set $A \in \mathcal{F}$ such that $S_{12}(A) = \{2\}$!<br /><br />Here is my two-part proof:<br /><br /><span style="FONT-WEIGHT: bold">Part 1. </span><br />Suppose by way of contradiction that there are $s$ sets $B_1, ..., B_s$ in $S_{ij} \mathcal{F}$ whose intersection is of size less than $t$. Let $A_1, ..., A_s \in \mathcal{F}$ be the sets they "came from". So for each $k$, either $B_k = S_{ij}(A_k)$ or $B_k = A_k$, the latter occurring when $S_{ij}(A_k)$ is also in $\mathcal{F}$. Clearly, the $A_k$'s are all distinct, and so the intersection of the $A_k$'s has size at least $t$.<br /><br />How could the $A_k$'s have intersection at least $t$ and yet the $B_k$'s have intersection less than $t$? Certainly nothing changes on coordinates other than $i$ or $j$. It's also clear $S_{ij}$ can't cause coordinate $i$ to <span style="FONT-STYLE: italic">leave</span> the intersection. This means that it <span style="FONT-STYLE: italic">must</span> be the case that coordinate $j$ left the intersection. In particular, $j$ is <span style="FONT-STYLE: italic">in</span> the intersection of the $A_k$'s and <span style="FONT-STYLE: italic">not in</span> the intersection of the $B_k$'s.<br /><br />Further, since we know that $j$ is in the intersection of the $A_k$'s, it follows that $i$ is <span style="FONT-STYLE: italic">not</span> in the intersection of the $A_k$'s. 
Otherwise, we have both $i$ and $j$ in each $A_k$, which surely means that $i$ and $j$ are both in each $B_k$. But we've already established that $j$ is not in the intersection of the $B_k$'s.<br /><br />Finally, since $i$ is not in the intersection of the $A_k$'s, and since shifting caused the intersection size to go down, $i$ had better not be in the intersection of the $B_k$'s.<br /><br />To summarize, $j$ but not $i$ is in the intersection of the $A_k$'s, and neither $i$ nor $j$ is in the intersection of the $B_k$'s. Further, it must be the case that the $A_k$'s have intersection size <span style="FONT-STYLE: italic">exactly</span> $t$.<br /><br /><span style="FONT-WEIGHT: bold">Part 2.</span><br />Focus on the coordinates $i$ and $j$ in the $A_k$'s now. For each $A_k$ with a $0$ in the $i$ coordinate and a $1$ in the $j$ coordinate (and there is at least one such $A_k$), how come all those $1$'s didn't shift over into the $i$th coordinate, causing all the $B_k$'s to have $1$'s in their $i$th coordinate?<br /><br />It must be because there is at least one $A_k$ -- say $A_1$ WLOG -- such that $S_{ij}(A_1)$ was already in $\mathcal{F}$. So let's look at the intersection of the following $s$ distinct sets in $\mathcal{F}$: $S_{ij}(A_1), A_2, ..., A_s$. These $s$ sets must also have intersection size at least $t$. Now the intersection of these is very similar to the intersection of $A_1, ..., A_s$; however, $S_{ij}(A_1)$ has a $0$ in the $j$th coordinate. This means $j$ is not in the intersection of $S_{ij}(A_1), A_2, ..., A_s$.<br /><br />But this is a risky situation, because we concluded in Part 1 that $A_1, ..., A_s$ had intersection size <span style="FONT-STYLE: italic">only</span> $t$, and this included the fact that $j$ was in the intersection.
The only way $S_{ij}(A_1), A_2, ..., A_s$ could still have intersection size $t$ is if, miraculously, putting in $S_{ij}(A_1)$ causes $i$ to be in the intersection, where it wasn't before.<br /><br />But the only way for <span style="FONT-STYLE: italic">this</span> to happen is if $A_1$ was the <span style="FONT-STYLE: italic">only</span> set among $A_1, ..., A_s$ with the pattern $0, 1$ in coordinates $i, j$. In this case, every <span style="FONT-STYLE: italic">other </span>$A_k$ has the pattern $1, 1$, which certainly implies $B_k = A_k$. Further, we have $B_1 = A_1$ by assumption (since $S_{ij}(A_1)$ was already in $\mathcal{F}$). So in fact all $B_k$'s equal their $A_k$'s. This is a contradiction because in Part 1 we concluded that $j$ was in the intersection of the $A_k$'s but not in the intersection of the $B_k$'s.Ryan O'Donnellhttp://www.blogger.com/profile/01760886084136827344noreply@blogger.com0tag:blogger.com,1999:blog-6942493516775094341.post-11021445074766374042008-02-26T17:17:00.007-05:002008-02-26T17:37:40.438-05:00Lecture 13: The Hardness EndgameHere are the final steps to the hardness of Priority Steiner tree. The details are in the paper <a href="http://www.cs.cmu.edu/%7Eanupamg/adv-approx/priority.pdf">On the Approximability of Network Design Problems</a>, J. Chuzhoy, A. Gupta, J. Naor and A. Sinha, SODA 2006.<br /><br /><span style="font-weight: bold;">What We Saw Today in Class. </span>Recall that we assumed that for some $h$, we had set cover instances with universe of size $u$, $s$ sets, each set of size $z$, where<br /><ul><li>the Yes instances had a solution using at most $X$ sets, but<br /></li><li>the No instances had the property that even picking $h\cdot X$ sets would leave a $2^{-c \cdot h}$-fraction of the universe uncovered.
(Let us call this the <span style="font-style: italic; color: rgb(153, 0, 0);">$h$-factor property</span>.)<br /></li></ul>The Yes instances mapped to instances of PST where the solution cost was at most $X$.<br /><br />In the No instances, we proved a claim that if the total cost of edges bought in levels $1, 2, \ldots, j-1$ was at most $hX/2$, then the cost incurred at level $j$ itself would be at least $(u/4z)\cdot 2^{-ch}$. In particular, if $k$ were (at least) $(4zhX/u)\cdot 2^{ch}$, this claim implies that the total cost in the No instance is at least $hX/2$.<br /><br />And hence we have a gap of $h/2$ between the Yes and the No instances.<br /><br /><span style="font-weight: bold;">Parameters (The Questions).</span> So how do we set the parameters so that<br /><ol><li>the set cover instance satisfies the $h$-factor property.<br /><br /></li><li>The number of levels $k$ is at least $(4zhX/u) \cdot 2^{ch}$.<br /><br /></li><li>the size of the construction $N \approx (2z)^k \cdot s$ is reasonable.<br /></li></ol><span style="font-weight: bold;">Our Friend the Set Cover Instance.</span> The construction of the set cover instance is the usual thing as discussed in Lecture 11 (or in Homework 3): note that we can start from a SAT instance of size $n$, and get a label cover instance with $n'$ nodes in each partition, each node participating in $d'$ constraints, label set $L'$, key set $K'$, where $n' = n^{- \log \eta}$, and $d', |L'|, |K'|$ are all $poly(1/\eta)$.<br /><br />In turn, this gives us a set cover instance with universe size $u = n'd'2^{|K'|}$, number of sets $s = n'|L'|$, size of each set being $z = d' 2^{|K'| - 1}$, the optimal solution size is $X = 2n'$, and it satisfies the $h$-factor property above if $1/h^2 > O(\eta)$.<br /><br /><span style="font-weight: bold;">Parameters (The Answers).</span> To answer the questions above:<br /><ol><li>the set cover instance satisfies the $h$-factor property as long as $h^2 < O(1/\eta)$. 
This is satisfied if we set $h = \Theta(\log \log n)$ and $\eta \approx 1/h^2$.<br /><br /> </li><li>Note that the parameters derived above satisfy $zX/u = 4$, and hence $k = 16h 2^{ch} \approx \Theta(\log n)$ is sufficient.<br /><br /></li><li>the size of the construction is $N \approx s (2z)^k = n^{- \log \eta} \cdot poly(1/\eta)^{(2^h)} \approx n^{\log \log \log n}$.<br /></li></ol>Finally, the gap between the Yes and No instances is $h = \Omega(\log \log n) = \Omega(\log \log N)$ as well. This completes the proof.<br /><br />Questions and comments about today's lecture go here, as usual.Anupamnoreply@blogger.com0
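(A small aside: the shifting counterexample from the Homework 2 solutions above is also easy to check by machine. Here is a minimal Python sketch of the standard shift $S_{ij}$ on sets and families; the function names are mine, and the family shift follows the usual definition: $A$ moves to $S_{ij}(A)$ unless $S_{ij}(A)$ is already in $\mathcal{F}$.)

```python
def shift_set(A, i, j):
    """Pointwise shift S_ij: replace j by i if A contains j but not i."""
    if j in A and i not in A:
        return (A - {j}) | {i}
    return A

def shift_family(F, i, j):
    """Family shift: A becomes S_ij(A) unless S_ij(A) is already in F."""
    return frozenset(shift_set(A, i, j) if shift_set(A, i, j) not in F else A
                     for A in F)

# The counterexample: F = {{1}, {2}} with i = 1, j = 2.
F = frozenset({frozenset({1}), frozenset({2})})
assert shift_family(F, 1, 2) == F  # S_12 F = F ...
# ... and yet no single A in F maps to {2} under the pointwise shift:
assert all(shift_set(A, 1, 2) != frozenset({2}) for A in F)
```

Here $\{2\}$ survives in $S_{12}\mathcal{F}$ only because $S_{12}(\{2\}) = \{1\}$ was already present, which is exactly the case the two "common mistakes" overlook.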