## Monday, March 31, 2008

### Hwk 5 clarifications

HW5 has been extended to Thursday Apr 3rd. Here are some typos to watch out for ---

- Q6, the demands are all integers.
- Q7e, it should be $\epsilon/\log n$, and not $\epsilon \log n$.

### Tomorrow's class

## Saturday, March 29, 2008

## Monday, March 24, 2008

### Prep for lecture tomorrow: recap Lecture 18

For tomorrow's lecture (#19), please recall the definition of embedding metrics into trees (problem #6 from HW4). Also, have a look at the correctness proof for low-diameter decompositions posted on Thursday, and try to prove the following extension of that result:

Extension: For a vertex $v$ and radius $r$, let $B(v,r) = \{ w | d_{vw} \leq r\}$ be the "ball" of all vertices within distance $r$ of $v$. Extend the proof from Lecture 16 to show that the probability of an edge $(u,v)$ being cut is at most

$O(1) \times (d_{uv} / r) \times \log (|B(u,2r)| / |B(u,r/4)|)$.

(Hint: The previous proof sums $1/j$ for all $j$ from $1$ to $n$ to get $H_n$. Can you show it suffices to sum over a smaller range? You may also want to consider cases depending on whether or not $d_{uv}$ is at least $r/4$.)

## Friday, March 21, 2008

## Thursday, March 20, 2008

### Lecture 18: Last bits of the proof

Recall the low-diameter decomposition procedure: we picked a random radius $X$ and a random permutation $\pi$. We then constructed the random partition by considering all the vertices one by one, in the order given by $\pi$: when considering $w$, we assigned all the yet-unassigned vertices $v$ with $d(w,v) \leq X$ to $w$'s partition.

Just for the analysis: suppose we renumber the vertices $w_1, w_2, \dots, w_n$ in order of their distance from the edge $e = (u,v)$.

The crucial definitions (that avoid pain later) are the following:

- At some time instant in this procedure, one (or both) of $u$ or $v$ gets assigned to some $w_i$. Say that $w_i$ settles the edge $(u,v)$.
- At the moment the edge is settled, if only one endpoint of this edge gets assigned, then we say that $w_i$ cuts the edge $(u,v)$.

Consider $w_j$ and let its distances be $d(w_j,u) = a_j$ and $d(w_j, v) = b_j$. Assume $a_j < b_j$; the other case is identical. If $w_j$ cuts $(u,v)$ when the random values are $X$ and $\pi$, the following two properties must hold:

- The random variable $X$ must lie in the interval $[a_j, b_j]$ (else either none or both of $(u,v)$ would get marked).
- The node $w_j$ must come before $w_1, ..., w_{j-1}$ in the permutation $\pi$.

Suppose not, and one of $w_1, \dots, w_{j-1}$ came before $w_j$ in the permutation. Since all these vertices are closer to the edge than $w_j$ is, for the current value of $X$ they would have settled the edge (capturing one or both of its endpoints) at some earlier time, and hence $w_j$ would not settle---and hence not cut---the edge $(u,v)$.
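To make the procedure concrete, here is a small Python sketch of the decomposition. The distribution of the radius (uniform in $[\Delta/4, \Delta/2]$) is an illustrative assumption; the recap above only says "a random radius $X$".

```python
import random

def low_diameter_decomposition(vertices, dist, delta, rng=None):
    """Sketch of the procedure: pick a random radius X and a random
    permutation pi; each center w (in pi-order) captures every
    still-unassigned vertex within distance X of it.

    X ~ Uniform[delta/4, delta/2] is an assumption for illustration,
    not necessarily the lecture's exact distribution.
    """
    rng = rng or random.Random()
    X = rng.uniform(delta / 4, delta / 2)
    pi = list(vertices)
    rng.shuffle(pi)
    cluster = {}  # vertex -> the center w_i that captured it
    for w in pi:
        for v in vertices:
            if v not in cluster and dist[w][v] <= X:
                cluster[v] = w
    return cluster, X
```

Every part consists of vertices within distance $X$ of a common center, so by the triangle inequality each part has diameter at most $2X$.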

## Sunday, March 16, 2008

### Homework 4, problem 2b comment

Just show you can get a gap at least this bad. (In other words, although you will need to construct a good SDP solution for $C_5$, you need not prove it is optimal. Unless you want to. Say, by using duality.)

### Homework 3 graded

I'll return them on Tuesday. The average was 79%. Some comments:

1a) Several people erroneously claimed a log*|D| approximation here. (In fact, no factor approximation is possible.) The reason the AKC algorithm does not work is as follows: Assume we have correctly guessed R = Opt. At first, we indeed know that there are k vertices which R-cover all of D. So the first Set-Cover approximation returns some set of vertices A R-covering D. But now, we do not know that there are k vertices which R-cover A. So we are stuck.

2. This was surprisingly not done so well. Most people seemed to get the key point that ($c_0$, $s_0$) hardness implies hardness along the two line segments (in your diagram) joining this point to (1,1) and to (0,0). But people seemed to have a very hard time deducing the easy region (due to the Vertex-Cover algorithm), which is the triangle with vertices (1,1), (1, 0), and (1-1/k, 0).

3. Most everybody got this one, except for failing to point out in part b the simple but key fact that the reduction preserves Opt = 1.

4. As mentioned earlier, this question was kind of screwed up. In honor of that, everyone received 3 bonus points. Mainly, we just wanted you to revisit the Max-Coverage proof, pay attention to the parameters, and see how the parameters from #3 fit in.

5. Almost no one tried this, although it is not too hard.

6. Most people who tried this more or less got it, although some failed to argue why shifting preserves the weight of the set. For a solution, see the original paper.

### Mathematical odds and sods for SDP

Hello all, hope you had a good spring break. Scribe notes for the last two lectures are up. Sorry for the delay; my fault, not the scribes'.

I thought I would record here a few small mathematical facts that came up in our lectures on SDP algorithms for Max-Cut and Coloring.

The GW constant: Recall the GW constant,

$\alpha := \min_{-1 \leq \rho \leq 1} \frac{\arccos(\rho)/\pi}{1/2 - \rho/2}$.

Using calculus and some trigonometric substitutions, you can determine that $\alpha = 2/(\pi \sin \theta)$, where $\theta$ is the positive solution of $\theta = \tan(\theta/2)$. As far as I know, it is not known whether $\alpha$ is irrational, although presumably it is transcendental.
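As a sanity check, you can compute $\alpha$ numerically both ways: solve $\theta = \tan(\theta/2)$ by bisection and compare against minimizing the defining ratio on a grid. A quick Python sketch (the bracket $[2,3]$ for the root is an assumption, easy to confirm by hand since $\tan(\theta/2) - \theta$ changes sign there):

```python
import math

# Solve theta = tan(theta/2) by bisection on [2, 3], where
# f(theta) = tan(theta/2) - theta is negative at 2 and positive at 3.
lo, hi = 2.0, 3.0
for _ in range(80):
    mid = (lo + hi) / 2
    if math.tan(mid / 2) - mid < 0:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2
alpha = 2 / (math.pi * math.sin(theta))  # about 0.87856

# Cross-check against the defining minimization over rho in (-1, 1).
ratios = [(math.acos(rho) / math.pi) / (0.5 - rho / 2)
          for rho in (-1 + 2 * i / 20000 for i in range(1, 20000))]
assert abs(min(ratios) - alpha) < 1e-4
```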

Gaussians: Remember that a real random variable $X$ has the "standard Gaussian (normal) distribution" if its pdf is $\phi(x) := (1/\sqrt{2\pi}) \exp(-x^2/2)$. The annoying thing about this pdf is that it does not have a closed-form antiderivative, although several closely related expressions do. For example, $x\phi(x)$ has $-\phi(x)$ as an antiderivative. It's easy to check that $\mathrm{E}[X^k] = 0$ whenever $k$ is an odd positive integer, and with enough integration-by-parts effort you can check that $\mathrm{E}[X^k] = (k-1)(k-3) \cdots 3 \cdot 1$ whenever $k$ is an even positive integer. In particular, $X$ has mean 0 and variance 1.
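The moment formula is easy to sanity-check numerically; here is a small Python sketch that integrates $x^k \phi(x)$ with a midpoint rule (the truncation range $[-12, 12]$ and grid size are arbitrary choices, made large enough that the error is negligible):

```python
import math

def gaussian_moment(k, lo=-12.0, hi=12.0, n=100001):
    # Midpoint-rule approximation of E[X^k] = integral of x^k * phi(x).
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += x**k * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return total * h

assert abs(gaussian_moment(3)) < 1e-6        # odd moments vanish
assert abs(gaussian_moment(2) - 1) < 1e-4    # variance 1
assert abs(gaussian_moment(4) - 3) < 1e-4    # 3 * 1
assert abs(gaussian_moment(6) - 15) < 1e-4   # 5 * 3 * 1
```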

In class we were particularly interested in the "tail", $\mathcal{N}(t) := \Pr[X \geq t] = \int_t^\infty \phi(x)\,dx$, where $t$ is positive. Here is how to accurately estimate this quantity:

For an upper bound, just insert a factor of $x/t$ into the integrand, which is always at least $1$ on the range. Then you can immediately integrate and get the upper bound

$\mathcal{N}(t) \leq (1/t) \phi(t)$.

For a lower bound, you need a cleverer trick. Insert a factor of $1 - 3/x^4$ into the integrand (!), which is always less than $1$. You can check that the resulting quantity, $(1-3/x^4)\phi(x)$, has an explicit antiderivative: $(1/x^3 - 1/x)\phi(x)$. Thus you can conclude

$\mathcal{N}(t) \geq (1/t - 1/t^3) \phi(t)$.

(I think this trick is due to Feller.) Thus we have excellent bounds for $\mathcal{N}(t)$ for large $t$: in particular,

$\mathcal{N}(t) \sim (1/t) \phi(t)$

as mentioned in class.
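These bounds are easy to check numerically, since the tail can be computed from the complementary error function via $\mathcal{N}(t) = \frac{1}{2}\,\mathrm{erfc}(t/\sqrt{2})$. A quick Python sketch:

```python
import math

def phi(x):
    # standard Gaussian pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail(t):
    # N(t) = Pr[X >= t] = (1/2) erfc(t / sqrt(2))
    return 0.5 * math.erfc(t / math.sqrt(2))

# Check (1/t - 1/t^3) phi(t) <= N(t) <= (1/t) phi(t) at a few points.
for t in [1.0, 2.0, 4.0, 8.0]:
    assert (1 / t - 1 / t**3) * phi(t) <= tail(t) <= (1 / t) * phi(t)

# The ratio N(t) / ((1/t) phi(t)) tends to 1 as t grows.
assert tail(8.0) / ((1 / 8.0) * phi(8.0)) > 0.98
```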

How to draw a Gaussian random variable: In practice, just use "randn" in Matlab :) In theory, suppose you have access to an oracle which gives you a uniform real from the interval [0,1]. Then the easiest thing is to draw two random numbers S and $\theta$ from [0,1] and then set g = $\sqrt{\ln(1/s)} \cos(2\pi \theta)$, h = $\sqrt{\ln(1/s)} \sin(2\pi \theta)$. Then g and h are two independent Gaussians! (This is the "Box-Muller Transformation".) It is a nice exercise to see why this is true.
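A minimal Python version of the transform (using the standard Box-Muller formula, with the factor of 2 under the square root; the shift to $(0,1]$ is just to avoid taking the log of 0):

```python
import math
import random

def box_muller(rng):
    """Turn two uniform randoms into two independent standard
    Gaussians via the Box-Muller transformation."""
    s = 1.0 - rng.random()       # uniform in (0, 1]; avoids log(0)
    theta = rng.random()
    r = math.sqrt(2 * math.log(1 / s))
    return (r * math.cos(2 * math.pi * theta),
            r * math.sin(2 * math.pi * theta))

# Quick empirical check of mean and variance.
rng = random.Random(0)
samples = [x for _ in range(100000) for x in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
assert abs(mean) < 0.02 and abs(var - 1) < 0.02
```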

Of course, in practice in theory (if you will), the usual model of randomness is to assume you have random bits. Also, of course, you can't compute ln, cos, sin, $\pi$, etc. to arbitrary precision. So in reality, you need to make some approximations.

Just as with the issue of solving SDPs exactly, these approximations introduce additive errors of $\epsilon$, with running time overhead of polylog($1/\epsilon$). And just as with the SDP-solving issue, we will try to never speak of such annoying details again :)

### Compendium of approximability

[somehow I deleted the original of this posting...]

This site, by Pierluigi Crescenzi and Viggo Kann, describes the best known approximation guarantees and inapproximability results for a very large number of problems.

It's quite out of date, but still a useful resource.

## Tuesday, March 4, 2008

### k-Set-Cover

Jeremiah asked me today about the status of the problem "k-Set-Cover" for small k. This is the Min-Set-Cover problem where all the sets have cardinality at most k.

I wasn't exactly sure on the status, actually. On the homework we showed that the greedy algorithm gives an $H_k$ ($k$th harmonic number) factor approximation. I guessed that this was known to be more or less sharp.

Turns out that is true to some extent. With some augmentations to Feige's result, Trevisan showed that the problem is hard to approximate within a factor of $\ln k - \Omega(\ln \ln k)$.

While this looks pretty sharp, it's actually useless for the relatively interesting case of small constant $k$. For $k = 2$ the problem is AKA "Edge Cover", and it's solvable in P.

For $k \geq 3$ the problem is known to be NP-hard, and furthermore known to be "APX-hard", meaning that it is hard to approximate within a factor of $1 + \epsilon$ for some universal positive $\epsilon > 0$. (Result due to Papadimitriou and Yannakakis?) Improving this to an explicit decent value of $\epsilon$, depending on $k$, would be a nice problem.

On the positive side, the case of most interest to Jeremiah was $k = 3$. Here the best result is due to Duh and Fürer: a factor 4/3 algorithm. Actually, they get a factor $H_k - 1/2$ algorithm for any $k$.

There have been subsequent marginal improvements. A result slightly better than 19/12 is known for $k = 4$, and the world record is $H_k - .59$ for all $k \geq 6$; that appears in this recent paper of Athanassopoulos, Caragiannis, and Kaklamanis.
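For reference, the greedy algorithm from the homework is only a few lines. A Python sketch (the instance format, a universe plus a list of sets, is an illustrative choice):

```python
def greedy_set_cover(universe, sets):
    """Greedy Min-Set-Cover: repeatedly take the set covering the most
    yet-uncovered elements. If every set has size at most k, this is an
    H_k-approximation, where H_k = 1 + 1/2 + ... + 1/k."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the sets do not cover the universe")
        chosen.append(set(best))
        uncovered -= best
    return chosen

# A 3-Set-Cover instance; greedy happens to find an optimal cover here.
cover = greedy_set_cover({1, 2, 3, 4, 5, 6},
                         [{1, 2, 3}, {4, 5, 6}, {1, 4}, {2, 5}, {3, 6}])
assert len(cover) == 2
```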
