Friday, February 29, 2008

Homework 2: Notes and Comments

Here are solution sketches and comments on the rest of the problems in HW2.
  1. For the naive algorithm on unweighted graphs, several answers said: "if an edge is uncovered, the algorithm will pick two vertices when the optimal algorithm picks one vertex". While this is true (and is indeed the basis for the proof), you need to show that we are not over-counting. One way is to clearly state a lower bound on OPT (the size of any maximal matching), and state why the algorithm is at most twice the lower bound.

    For the randomized VC algorithm, see the solutions to problem 2b from this previous semester's course.

    For the primal-dual algorithm, we write down the dual, which maximizes $\sum_e y_e$ subject to $\sum_{e: v \in e} y_e \leq c_v$ for each vertex $v$, with all the $y_e$ values non-negative. We start with the zero primal and dual solutions, and raise all the $y_e$'s uniformly until some constraint becomes tight (i.e., for some vertex $v$, $\sum_{e: v \in e} y_e = c_v$). Pick that vertex into the primal solution, and "freeze" all the edge variables involved in this constraint. Now continue raising all the unfrozen edge variables, picking vertices and freezing edges as other constraints become tight, etc., until all the edge variables are frozen: we now have a feasible primal vertex cover $C$. (Why?) Now the cost of this solution is $\sum_{v \in C} c_v = \sum_{v \in C} \sum_{e: v \in e} y_e = \sum_{e} \sum_{v \in C \cap e} y_e \leq 2 \sum_e y_e$, which is twice the dual value. (The last inequality is because each edge contains at most 2 vertices.) Hence the vertex cover costs at most twice this feasible dual solution, and so at most twice OPT. (A runnable sketch of this procedure appears after this list.)

    For the greedy algorithm, if you have a star with center cost $1+\epsilon$, and $n$ leaves with costs $1, 1/2, 1/3, \ldots, 1/n$, then greedy picks all the leaves (total cost $H_n$) instead of the single center, and hence its approximation ratio is no better than $H_n$. An $O(\log m) = O(\log n)$ upper bound follows from the standard set cover analysis. (A sketch after this list runs greedy on exactly this instance.)

  2. One point often missed: this problem defined $C^*$ and $F^*$ to be the facility and connection costs for the optimal integer solution, and not for the optimal LP solution.

    Also, if you claim that after filtering you can use the standard Set Cover randomized rounding to obtain a solution, you need to show how the LP solution for facility location can be mapped to an LP solution for Set Cover.

  3. No surprises here.

  4. Build a bipartite graph on clauses and variables, and use Hall's theorem to infer the existence of a perfect matching. (A sketch of this matching argument appears below.)
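
To make the primal-dual procedure from item 1 concrete, here is a minimal Python sketch. It is the event-driven version: instead of raising the $y_e$'s continuously, it jumps straight to the next vertex constraint that becomes tight. The instance at the bottom is made up.

    def primal_dual_vertex_cover(edges, cost):
        """Raise the edge duals y_e uniformly, jumping to the next vertex
        whose constraint sum_{e: v in e} y_e <= c_v becomes tight."""
        y = {e: 0.0 for e in edges}          # one dual variable per edge
        slack = dict(cost)                   # c_v minus current sum of incident y_e
        frozen, cover = set(), set()
        incident = {v: [e for e in edges if v in e] for v in cost}

        def deg(v):                          # number of unfrozen edges at v
            return sum(1 for e in incident[v] if e not in frozen)

        while len(frozen) < len(edges):
            live = [v for v in cost if v not in cover and deg(v) > 0]
            v_star = min(live, key=lambda v: slack[v] / deg(v))
            delta = slack[v_star] / deg(v_star)
            for e in edges:                  # raise every unfrozen y_e by delta
                if e not in frozen:
                    y[e] += delta
                    for w in e:
                        slack[w] -= delta
            cover.add(v_star)                # v_star's constraint is now tight
            frozen.update(incident[v_star])  # freeze all its edge variables
        return cover, sum(y.values())

    edges = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d")]]
    cost = {"a": 1.0, "b": 1.5, "c": 1.0, "d": 2.0}
    cover, dual = primal_dual_vertex_cover(edges, cost)
    print(cover, sum(cost[v] for v in cover), "<=", 2 * dual)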
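
And here is a quick check of the greedy lower bound from item 1, run on exactly the star instance described there (the vertex names and the value of $\epsilon$ are made-up choices; exact arithmetic via fractions avoids rounding artifacts):

    from fractions import Fraction

    n = 8
    eps = Fraction(1, 100)
    cost = {0: 1 + eps}                                        # the center
    cost.update({i: Fraction(1, i) for i in range(1, n + 1)})  # the leaves
    edges = [frozenset({0, i}) for i in range(1, n + 1)]

    picked, uncovered = [], set(edges)
    while uncovered:
        def price(v):                    # cost per newly-covered edge
            return cost[v] / sum(1 for e in uncovered if v in e)
        live = [v for v in cost if any(v in e for e in uncovered)]
        v = min(live, key=price)
        picked.append(v)
        uncovered = {e for e in uncovered if v not in e}

    print(picked)                        # all the leaves; never the center
    print(sum(cost[v] for v in picked))  # = H_n, while OPT = 1 + eps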
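
Finally, a toy sketch of the matching argument for problem 4 (the clauses here are made up, and I am assuming networkx is available): once Hall's condition gives a matching saturating the clauses, each clause sets its own matched variable so as to satisfy itself.

    import networkx as nx
    from networkx.algorithms import bipartite

    clauses = {"C1": [("x1", True), ("x2", False)],
               "C2": [("x2", True), ("x3", True)],
               "C3": [("x1", False), ("x3", False)]}

    G = nx.Graph([(c, var) for c, lits in clauses.items() for var, _ in lits])
    M = bipartite.maximum_matching(G, top_nodes=clauses)  # saturates the clauses

    assignment = {}
    for c, lits in clauses.items():
        var = M[c]                          # the private variable matched to c
        assignment[var] = dict(lits)[var]   # set it with the sign it has in c
    print(assignment)                       # each clause satisfied by its own variable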

Thursday, February 28, 2008

Homework 2 graded

Sorry for the delay. We'll hand it back today. Overall average was 84%.

Solutions to Homework 2, #6

6a. Take any $s$ sets in $\mathcal{F}_{s,t,\ell}$. By definition, each misses at most $\ell$ elements from $[t + s\ell]$. Thus their intersection misses at most $s \ell$ elements from $[t + s\ell]$, and thus has size at least $t$.
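
If you want to sanity-check 6a by computer, here is a brute-force sketch for tiny parameters (taking, per the definition quoted above, $\mathcal{F}_{s,t,\ell}$ to consist of all subsets of the ground set missing at most $\ell$ elements of $[t+s\ell]$; the ground-set size is a made-up small value):

    from itertools import combinations

    s, t, l = 2, 1, 1
    m = t + s * l                          # the base set is [t + s*l] = [m]
    n = m + 1                              # made-up ground-set size, n >= m
    base = set(range(1, m + 1))

    all_sets = [frozenset(c) for r in range(n + 1)
                for c in combinations(range(1, n + 1), r)]
    family = [A for A in all_sets if len(base - A) <= l]   # misses <= l of [m]

    for tup in combinations(family, s):    # any s distinct members
        assert len(frozenset.intersection(*tup)) >= t
    print(f"checked all {s}-tuples from a family of {len(family)} sets")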

6b. Extend the definition of $S_{ij}$ to sets in the natural way. It's obvious that if $A$ is in $\mathcal{F}_{s,t,\ell}$ then $S_{ij}(A)$ is too. Hence shifting maps $\mathcal{F}_{s,t,\ell}$ into itself, i.e., it has no effect on it.

6c. This one definitely takes much more explanation, but I don't think it's actually tricky -- you just have to follow your nose all the way through. However, you do have to avoid the following two extremely common mistakes:

"$S_{ij}\mathcal{F}$ is given by taking $S_{ij}(A)$ for all $A \in \mathcal{F}$." -- Not true.

"For any $A' \in S_{ij}\mathcal{F}$, there is an $A \in \mathcal{F}$ such that $S_{ij}(A) = A'$." -- False false false! Consider $i = 1$, $j = 2$, $n = 2$, $\mathcal{F} = \{ \{1\}, \{2\} \}$. Then $S_{12} \mathcal{F} = \mathcal{F}$, and yet for $\{2\} \in S_{ij} \mathcal{F}$, there is no set $A \in \mathcal{F}$ such that $S_{12}(A) = \{2\}$!

Here is my two-part proof:

Part 1.
Suppose by way of contradiction that there are $s$ sets $B_1, ..., B_s$ in $S_{ij} \mathcal{F}$ whose intersection is of size less than $t$. Let $A_1, ..., A_s \in \mathcal{F}$ be the sets they "came from". So for each $k$, either $B_k = S_{ij}(A_k)$ or $B_k = A_k$, the latter occurring when $S_{ij}(A_k)$ is also in $\mathcal{F}$. Clearly, the $A_k$'s are all distinct, and so the intersection of the $A_k$'s has size at least $t$.

How could the $A_k$'s have intersection at least $t$ and yet the $B_k$'s have intersection less than $t$? Certainly nothing changes on coordinates other than $i$ or $j$. It's also clear $S_{ij}$ can't cause coordinate $i$ to leave the intersection. This means that it must be the case that coordinate $j$ left the intersection. In particular, $j$ is in the intersection of the $A_k$'s and not in the intersection of the $B_k$'s.

Further, since we know that $j$ is in the intersection of the $A_k$'s, it follows that $i$ is not in the intersection of the $A_k$'s. Otherwise, we have both $i$ and $j$ in each $A_k$, which surely means that $i$ and $j$ are both in each $B_k$. But we've already established that $j$ is not in the intersection of the $B_k$'s.

Finally, since $i$ is not in the intersection of the $A_k$'s, and since shifting caused the intersection size to go down, $i$ had better not be in the intersection of the $B_k$'s.

To summarize, $j$ but not $i$ is in the intersection of the $A_k$'s, and neither $i$ nor $j$ is in the intersection of the $B_k$'s. Further, since the two intersections agree on all coordinates other than $i$ and $j$, the $B_k$'s intersection is exactly one element smaller; as it has size less than $t$, it must be the case that the $A_k$'s have intersection size exactly $t$.

Part 2.
Focus on the coordinates $i$ and $j$ in the $A_k$'s now. For each $A_k$ with a $0$ in the $i$ coordinate and a $1$ in the $j$ coordinate (and there is at least one such $A_k$, since $j$ is in the intersection of the $A_k$'s while $i$ is not), how come all those $1$'s didn't shift over into the $i$th coordinate, causing all the $B_k$'s to have $1$'s in their $i$th coordinate?

It must be because there is at least one $A_k$ -- say $A_1$ WLOG -- such that $S_{ij}(A_1)$ was already in $\mathcal{F}$. So let's look at the intersection of the following $s$ distinct sets in $\mathcal{F}$: $S_{ij}(A_1), A_2, ..., A_s$. These $s$ sets must also have intersection size at least $t$. Now the intersection of these is very similar to the intersection of $A_1, ..., A_s$; however, $S_{ij}(A_1)$ has a $0$ in the $j$th coordinate. This means $j$ is not in the intersection of $S_{ij}(A_1), A_2, ..., A_s$.

But this is a risky situation, because we concluded in Part 1 that $A_1, ..., A_s$ had intersection size only $t$, and this included the fact that $j$ was in the intersection. The only way $S_{ij}(A_1), A_2, ..., A_s$ could still have intersection size $t$ is if miraculously, putting in $S_{ij}(A_1)$ causes $i$ to be in the intersection, where it wasn't before.

But the only way for this to happen is if $A_1$ was the only set among $A_1, ..., A_s$ with the pattern $0, 1$ in coordinates $i, j$. In this case, every other $A_k$ has the pattern $1, 1$, which certainly implies $B_k = A_k$. Further, we have $B_1 = A_1$ by assumption (since $S_{ij}(A_1)$ was already in $\mathcal{F}$). So in fact all $B_k$'s equal their $A_k$'s. This is a contradiction because in Part 1 we concluded that $j$ was in the intersection of the $A_k$'s but not in the intersection of the $B_k$'s.
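
And here is an end-to-end brute-force check of 6c on tiny families (a sketch with made-up small parameters): every $s$-wise $t$-intersecting family over a small ground set should stay $s$-wise $t$-intersecting after shifting.

    from itertools import combinations

    def shift_family(F, i, j):             # same operation as in the earlier sketch
        out = set()
        for A in F:
            B = (A - {j}) | {i}
            out.add(B if (j in A and i not in A and B not in F) else A)
        return out

    def s_wise_t_intersecting(F, s, t):
        return all(len(frozenset.intersection(*tup)) >= t
                   for tup in combinations(F, s))

    n, s, t = 3, 2, 1
    all_sets = [frozenset(c) for r in range(n + 1)
                for c in combinations(range(1, n + 1), r)]

    for size in range(s, 5):
        for F in map(set, combinations(all_sets, size)):
            if s_wise_t_intersecting(F, s, t):
                assert s_wise_t_intersecting(shift_family(F, 1, 2), s, t), F
    print("shifting preserved the property on every small family tried")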

Tuesday, February 26, 2008

Lecture 13: The Hardness Endgame

Here are the final steps to the hardness of Priority Steiner tree. The details are in the paper On the Approximability of Network Design Problems, J. Chuzhoy, A. Gupta, J. Naor and A. Sinha, SODA 2006.

What We Saw Today in Class. Recall that we assumed that for some $h$, we had set cover instances with a universe of size $u$ and $s$ sets, each set of size $z$, where
  • the Yes instances had a solution using at most $X$ sets, but
  • the No instances had the property that even picking $h\cdot X$ sets would leave a $2^{-c \cdot h}$-fraction of the universe uncovered. (Let us call this the $h$-factor property.)
The Yes instances mapped to instances of PST where the solution cost was at most $X$.

In the No instances, we proved a claim that if the total cost of edges bought in levels $1, 2, \ldots, j-1$ was at most $hX/2$, then the cost incurred at level $j$ itself would be at least $(u/4z)\cdot 2^{-ch}$. In particular, if $k$ were (at least) $(4zhX/u)\cdot 2^{ch}$, this claim implies that the total cost in the No instance is at least $hX/2$.

And hence we have a gap of $h/2$ between the Yes and the No instances.

Parameters (The Questions). So how do we set the parameters so that
  1. the set cover instance satisfies the $h$-factor property,

  2. the number of levels $k$ is at least $(4zhX/u) \cdot 2^{ch}$, and

  3. the size of the construction $N \approx (2z)^k \cdot s$ is reasonable?
Our Friend the Set Cover Instance. The construction of the set cover instance is the usual one, as discussed in Lecture 11 (or in Homework 3): note that we can start from a SAT instance of size $n$, and get a label cover instance with $n'$ nodes in each partition, each node participating in $d'$ constraints, label set $L'$, and key set $K'$, where $n' = n^{\log(1/\eta)}$ and $d', |L'|, |K'|$ are all $\mathrm{poly}(1/\eta)$.

In turn, this gives us a set cover instance with universe size $u = n'd'2^{|K'|}$, number of sets $s = n'|L'|$, each set of size $z = d' 2^{|K'| - 1}$, and optimal solution size $X = 2n'$; it satisfies the $h$-factor property above if $h^2 < O(1/\eta)$.

Parameters (The Answers). To answer the questions above:
  1. the set cover instance satisfies the $h$-factor property as long as $h^2 < O(1/\eta)$. This is satisfied if we set $1/\eta \approx h^2$, with $h = \Theta(\log \log n)$.

  2. Note that the parameters derived above satisfy $zX/u = 4$, and hence $k = 16h 2^{ch} \approx \Theta(\log n)$ is sufficient.

  3. the size of the construction is $N \approx s (2z)^k = n^{\log(1/\eta)} \cdot \mathrm{poly}(1/\eta)^{2^h} \approx n^{\log \log \log n}$.
Finally, the gap between the Yes and No instances is $h = \Omega(\log \log n) = \Omega(\log \log N)$ as well. This completes the proof.

Questions and comments about today's lecture go here, as usual.

Homework 3 correction -- Problem 4

Hi -- as many of you noticed, the last part of Problem 4 does not look so doable :)

The result is in fact true -- indeed, $\Omega(\log n)$ hardness for Set-Cover is known even subject to P $\neq$ NP. However the method illustrated probably won't cut it.

Please do one of the following instead:

1. Use this method to prove $f(n)$-hardness for Set-Cover, for as large a function $f(n)$ as you can -- subject to NP not in DTIME($2^{o(n)}$) or an even better DTIME assumption.

OR

2. Prove $\Omega(\log n)$-hardness for Set-Cover assuming NP not in BPTIME($n^{polylog n}$) -- using any method.

Tuesday, February 19, 2008

Integrality Gap for Group Steiner

The integrality gap of $\Omega(\log^2 k)$ was proved in Integrality Ratio for Group Steiner Trees and Directed Steiner Trees, E. Halperin, G. Kortsarz, R. Krauthgamer, A. Srinivasan, and N. Wang. SIAM Journal on Computing, Vol. 36, 1494-1511, 2007. Here's the idea.
  1. Create a balanced $\delta$-ary tree of height $H$ (i.e., with $H+1$ levels), where the edge costs decrease exponentially: edges at level $h$ cost $1/2^h$. (Hence edges incident to the root cost $1/2$, and edges incident to the leaves cost $1/2^{H}$.)

  2. The arity of the tree is $\delta = c \log n$ for some constant $c$. So the height is $H = \log n/\log \delta$. The number of leaves is $n$, the total number of nodes is $n(1+o(1))$. The number of groups/colors/elements is $k = 2^{2H}$.

  3. Do the following independently for each of the $k$ elements $x$. The element $x$ starts off at the root. For each of the $\delta$ child edges, it flips a fair coin independently: if the coin comes up heads, a copy of $x$ slides down that edge, where it begins such a random process anew. Note that the expected number of copies of $x$ at level $h$ is $(\delta/2)^h$. The group for $x$ is the set of all the leaves that contain a copy of $x$.
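
Here is a quick Monte Carlo sketch of that process (with made-up toy parameters), checking that the number of copies at level $h$ does concentrate around $(\delta/2)^h$; it tracks only the counts, not the tree itself.

    import random

    delta, H = 8, 4        # toy parameters; the construction has delta = c log n

    def copies_per_level(rng):
        counts = [1]                       # one copy at the root (level 0)
        for h in range(1, H + 1):
            # each current copy flips a fair coin for each of its delta child edges
            counts.append(sum(rng.random() < 0.5
                              for _ in range(counts[-1] * delta)))
        return counts

    rng = random.Random(0)
    trials = [copies_per_level(rng) for _ in range(200)]
    for h in range(H + 1):
        avg = sum(t[h] for t in trials) / len(trials)
        print(h, round(avg, 1), (delta / 2) ** h)   # empirical vs. (delta/2)^h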
Small fractional solution. It is not that difficult to show that this instance has a fractional solution of value $O(H)$ with high probability (where the probability is over the random choice of groups above):
  • Consider some node at level $h$ in this tree. Conditioned on this node getting a copy of $x$, the expected number of leaves containing $x$ below it is $\mu_h = (\delta/2)^{H-h}$. With high probability, the actual number lies in $(\mu_h/3, 3\mu_h)$. (Use Chernoff bounds!)

  • For an edge $e$ going from level $h-1$ to level $h$, set $x_e = 9/\mu_h$ --- using the above facts, this is a feasible solution (whp) with cost $9H$.
No small integer solution. Next, we show that "with high probability, all integer solutions have cost $\Omega(H^2 \log k) = \Omega(H \log^2 k)$." This proves the $\Omega(\log^2 k)$ integrality gap for the problem.
  • Since each leaf-edge costs $1/2^H$, no tree of cost $C$ can use more than $C2^H$ of these. There are $\delta^H$ leaf-edges, so the number of possible trees of cost $C$ is at most $\binom{\delta^H}{C 2^H} \leq \exp(C 2^H H \log \delta)$. Call this value $M(C)$.

  • Let $Fail(C)$ be the probability that a fixed tree of cost $C$ misses the group for $x$ created above. By independence of the groups, the probability that this fixed tree hits all the groups is at most $(1 - Fail(C))^k \leq \exp(-k \times Fail(C)) = Hitall(C)$.

  • Hence, taking a trivial union bound over all trees, the chance that there exists some tree that hits all the groups is at most $M(C) \times Hitall(C)$.

  • If this product is $o(1)$ for some $C$, then with probability $1 - o(1)$ no tree of cost $C$ is feasible. We would like to show this holds true for some constant $\epsilon > 0$ and $C = \epsilon H^2 \log k$.
Now for the most technical part.

One needs to prove that for a fixed tree of cost at most $\epsilon H^2 \log k$ and a fixed $x$, the probability that none of the leaves of this tree is hit by the random process above (i.e., $Fail(C)$) is large enough. The proof strategy seems natural (in hindsight), but is also somewhat delicate: details in Section 2.3.1 of the paper above. Question: can one get a slightly worse bound but without the case analysis at the end? That may be useful to know...

This analysis of the integrality gap (and an extension of it) is actually used for the proof of soundness in the hardness reduction we saw today! I will write/say some more about this later.

Monday, February 18, 2008

Tomorrow's lecture

For tomorrow, could you brush up on what you know about Max $k$-ary Consistent Labeling(K,L) from HW1 problem #7 (also defined in the list of problems)? Thanks!

In particular, consider the $2$-ary (also called "binary" case): the constraint hyperedges now become normal edges, and the hypergraph becomes a normal graph. (Note that "strongly" and "weakly" satisfying mean the same thing now.)

Suppose such an instance is a bipartite graph $(U,V,E)$, with $|U| = |V| = n$; moreover, the instance is "regular" (i.e., the number of constraints involving each vertex is some $d$). Remind yourself that for such instances, it is NP-hard to infer whether the value is $1$ or less than $\eta < 1$.

Thursday, February 14, 2008

Notes on the group Steiner tree proof

A few notes on the rounding proof (and how to complete it):
  1. When you round down the $x_e$ values to the powers of $2$ below them, note that you might violate the minimality of the solution: e.g., if an edge has value $x_e = 3/4$, and has two children of value $x_{e_1} = 1/2$ and $x_{e_2} = 1/4$, then rounding down would give $x'_e = 1/2$ but would maintain $x'_{e_1} = 1/2$ and $x'_{e_2} = 1/4$.

    This kind of situation would cause some concern if we want to "merge" edges with identical values in the last step of the preprocessing: how would we merge $e$ and $e_1$ in this case? The answer is: we would just "contract" the edge $e_1$, and hence any children of $e_1$ would now become children of $e$. The key observation is that the random process on this new contracted graph connects the same set of leaves to the root as on the original uncontracted graph, since $x'_{e_1} = x'_e$.

    (Aside: the procedure is fairly robust, so you could alter the procedure and add a step to regain the minimality. Or you could just forget about the last "merging" step entirely. Note that we now no longer have a tree of "height" $\log |g|$, and the closing arguments would now change slightly.)

  2. The argument to compute $\Delta$: suppose the height of the "merged" tree is $H$.

    For some vertex $u \in g$, let us first show that $\Delta_u = \sum_{w \sim u} Pr[E_u \cap E_w] \le O(H) \times x'_u$. Summing this over all $u \in g$ gives us $O(H)$, since the $x'_u$'s sum up to at most $1$, and hence completes the proof. (The whole computation is consolidated in the display after this list.)

    For any $w \in g$, let $e$ be the lowest edge shared by the path from $u$ to the root and the path from $w$ to the root. A moment's thought tells us that $Pr[ E_u ] = x'_u$ and $Pr[ E_w | E_u ] = x'_w/x'_e$. Hence, $Pr[E_u \cap E_w] = x'_u x'_w / x'_e$.

    Now sum over all $w$ such that the paths from $u$ to the root and from $w$ to the root have the edge $e$ as their least common ancestor edge, as above, and use the fact that $\sum_{\text{such } w} x'_w \leq O(x'_e)$, since all the flow going to such $w$'s must have been supported on the edge $e$ before rounding down. Hence, the sum of $Pr[E_u \cap E_w]$ over all the relevant $w$ for a single edge $e$ is $O(x'_u)$.

    Finally, the tree is of height $H$, which means there are at most $H$ such terms, giving us $O(H) x'_u$ as claimed.

  3. Aravind Srinivasan has some notes on Janson's inequality: they explain how the "simple" structure of the events $E_i$ (and the independence of the random variables determining these events) allows us to prove such a strong inequality. Joel Spencer also has slides on the proof: in fact, he states it in exactly the form we needed today, with his "bad" events $B_i$ being our "good" events $E_v$!
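
To consolidate the computation in item 2 in a single display (notation as above: $e$ ranges over the at most $H$ edges on the path from $u$ to the root, and "$w \sim_e u$" means $e$ is the least common ancestor edge of the $u$-root and $w$-root paths):

$$\Delta_u = \sum_{w \sim u} Pr[E_u \cap E_w] = \sum_{e} \sum_{w \sim_e u} \frac{x'_u x'_w}{x'_e} = \sum_{e} \frac{x'_u}{x'_e} \sum_{w \sim_e u} x'_w \leq \sum_{e} \frac{x'_u}{x'_e} \cdot O(x'_e) = O(H)\, x'_u.$$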
If anything else was unclear, please post questions, comments, or clarifications!

Tuesday, February 12, 2008

Lecture 9: Finishing Ek-Indep. Set hardness; AKC hardness

Any questions about Lecture 9 can go here.


I wanted to include a niggling detail about the hardness for Independent Set. Recall that in our reduction, our hyperedges involved $k/2$ distinct sets from $v$'s copy of $\{0,1\}^L$ and $k/2$ distinct sets from $v'$'s copy, whenever $(u,v)$ and $(u,v')$ were both in $E$.

It is important (why?) to allow the case $v' = v$. However, this requires us to take a little care. First of all, in this case, we need to take the hyperedges to be of the form $\{A_1^v, ..., A_k^v\}$ where all $k$ sets are distinct. This will still be a hyperedge iff $\pi_{v \to u}(\cap_{i=1}^{k/2} A_i)$ is disjoint from $\pi_{v \to u}(\cap_{i=k/2+1}^{k} A_i)$.

Suppose that $v$ is a "good" vertex. We know that $\mathcal{F}_v$ is too big to be $k/2$-wise $t$-intersecting, so it contains some "good" sets $A_1^{v}, ..., A_{k/2}^v$ with intersection size less than $t$. We took Sugg($v$) to be this intersection. As we saw in class, if $v$ has a neighbor $u$, and $v'$ is another good neighbor of $u$, then Sugg($v$) and Sugg($v'$) are "consistent" label sets (i.e., when projected to $u$, they agree on at least one key). In particular, Sugg($v$) and Sugg($v'$) must certainly be nonempty.

The tiny tricky part now is the following: This argument relies on $v' \neq v$! If $v' = v$ then you only have $k/2$ "good" sets in play, and you can't make any sort of argument based on size-$k$ hyperedges. In particular, if $v$ has no other "good" distance-2 neighbors, Sugg($v$) may well be empty!


But: it's not like it's a problem. In this case, just put an arbitrary element into Sugg($v$). Then the soundness argument is fine; any neighbor $u$ of $v$ must have $v$ as its buddy, and then of course the $(u,v)$ constraint will be satisfied with probability $1$!

Saturday, February 9, 2008

Lecture notes

for the previous week (Lec5 and Lec6) are up. Tuesday's lecture (Lec7) will be up shortly. Sorry for the delay...

Friday, February 8, 2008

Left-Shifting

Could use a quick reality check on the definition of left-shift (q6b); there seems to be more than one consistent interpretation.

What would the following set become with i=1, j=3?
{ 011, 101, 111, 001 }

Thanks. :-)

Thursday, February 7, 2008

Lecture 8: Hardness of Max-Ek-Independent-Set

Any questions about today's lecture? I will repeat here the hardness reduction:

Given: Max-Label-Cover(K,L) instance $G = (U,V,E)$ with projection constraints $\pi_{v \to u}$.

Reduction to a weighted k-uniform hypergraph $H$:
. The vertex set is $V \times \{0,1\}^L$ (there are $|V| 2^{|L|}$ vertices). We call each $\{v\} \times \{0,1\}^L$ the "block" of vertices corresponding to $v \in V$. We name the strings/sets in this block $A^{(v)}$.
. The weight on each vertex $A^{(v)}$ is its $p$-biased weight, $p = 1 - 2/k - \delta$. The total weight in each block is $1$ and the overall total weight is $|V|$.
. For each pair of vertices $v, v' \in V$ sharing a common $u$ (i.e., with $(u,v), (u,v') \in E$), we will put some hyperedges on vertices in the union of the $v$ and $v'$ blocks.
. Specifically, we put a hyperedge on the $k$-set $\{A_1^{(v)}, ..., A_{k/2}^{(v)}, B_{1}^{(v')}, ... B_{k/2}^{(v')}\}$ iff the following condition holds:

$\pi_{v\to u}(\cap A_{i}^{(v)})$ is disjoint from $\pi_{v' \to u}(\cap B_{i}^{(v')})$.

(Supertechnical note. The A's should all be distinct, and the B's should all be distinct. Also, we allow $v = v'$, in which case the A's and B's should be mutually distinct.)


This construction is polynomial time (since $k$, $|K|$, $|L|$ are all constants; supersupertechnical note: we assume $\delta$ is rational).

You should again try to convince yourself of the "completeness" part of the proof: Opt(G) = 1 implies Opt(H) $\geq p = 1 - 2/k - \delta$.
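
Here is a toy sketch of that completeness check on a miniature made-up instance (one $u$, two right vertices, $|L| = 3$, $k = 4$): given a perfectly satisfying labeling, the vertices $(v, A)$ with label($v$) $\in A$ have $p$-biased weight exactly $p$ in every block, and no hyperedge lies among them.

    from itertools import combinations

    L = [0, 1, 2]                          # label set (made up)
    k, delta = 4, 0.1
    p = 1 - 2.0 / k - delta                # p = 0.4

    pi = {"v1": {0: 0, 1: 0, 2: 1},        # made-up projections pi_{v -> u}
          "v2": {0: 1, 1: 0, 2: 0}}
    label = {"v1": 0, "v2": 1}             # a perfectly satisfying labeling
    assert pi["v1"][label["v1"]] == pi["v2"][label["v2"]]

    subsets = [frozenset(c) for r in range(len(L) + 1) for c in combinations(L, r)]

    def weight(A):                         # p-biased weight of A within its block
        return p ** len(A) * (1 - p) ** (len(L) - len(A))

    chosen = {v: [A for A in subsets if label[v] in A] for v in pi}
    for v in pi:                           # each block's chosen sets weigh exactly p
        print(v, sum(weight(A) for A in chosen[v]))

    # No hyperedge among the chosen vertices: both projected intersections
    # contain the common key pi(label(v)), so they are never disjoint.
    # (Hyperedges inside a single block are ruled out the same way.)
    for As in combinations(chosen["v1"], k // 2):
        for Bs in combinations(chosen["v2"], k // 2):
            projA = {pi["v1"][x] for x in frozenset.intersection(*As)}
            projB = {pi["v2"][x] for x in frozenset.intersection(*Bs)}
            assert projA & projB
    print("independent: no hyperedge among the chosen vertices")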

Wednesday, February 6, 2008

Solution to 7a on Homework 1

Here is a solution to problem 7a on homework 1. To get the full $1 - 1/(k-1) - \delta$ vs. $\epsilon$ hardness result for Max-Ek-Independent-Set (a problem we will discuss in tomorrow's class), you need to use something similar, and that something similar might be on the next homework...

Monday, February 4, 2008

How to do a hardness reduction

Just a quick reminder on how to do a hardness reduction. It may seem obvious, but these reductions are going to get even more complicated later, and there was already some confusion on the homework.

To show c' vs. s' NP-hardness for problem B, based on known c vs. s NP-hardness for problem A:

1. Design a reduction R mapping A-instances into B-instances.
2. Show that R can be carried out in polynomial time.
3. Show that for any A-instance X, Opt(X) $\geq$ c implies Opt(R(X)) $\geq$ c'.
4. Show that for any A-instance X, Opt(X) $<$ s implies Opt(R(X)) $<$ s'.

Done.

Comments on each of the steps:

1. This is generally the second-hardest part.

2. This is generally trivial.

3. This is generally true "by design". One typically shows that given any solution x to A-instance X with $val_X(x) \geq c$, one can convert it to a solution y for R(X) with $val_{R(X)}(y) \geq c'$.

Usually this conversion is explicit and easy, but please don't bother to note whether or not it is polynomial-time -- it doesn't matter. Noting this only confuses things.

Furthermore, especially in the case that c = c' = 1, it is usually easy to prove some kind of "reverse", like, if Opt(R(X)) = 1 then Opt(X) must also be 1. But again, this is irrelevant and it's best not to write it without having a good reason.

4. This is generally the hardest part. One almost always proves it in the contrapositive: if Opt(R(X)) $\geq s'$, then Opt(X) must be $\geq s$. Generally, one does this by showing how to convert any solution y for R(X) with $val_{R(X)}(y) \geq s'$ into a solution x for X with $val_X(x) \geq s$.

Again, it does not matter if this conversion is polynomial-time; like in step 3, one only needs to show an existence result. (Indeed, often the conversion is randomized.)

Saturday, February 2, 2008

Homework 1 graded

I'll bring them around on Tuesday. The average was 80%.

Some comments on the problems:

1. Some people made the unwarranted assumption that A running in a reasonable amount of time is independent of A outputting a solution of value at least s.

2. Most people got this completely right; the only interesting part was to see how slickly you could prove (b). People were occasionally a little unclear on why their random assignment in (d) satisfied a constraint with probability $\geq 1/|K|$.

3. Throughout, people were sometimes unclear on what exactly one needs to do to prove a hardness reduction. I think I will post a short note about this in a future blog entry.

Also, in the first part of (a) you needed to be showing, say, 1/3 vs. $(1/3)(1-\epsilon_0)$ hardness for Independent-Set, not 1 vs. $1 - \epsilon_0$. The latter decision problem is completely trivial -- just output 'yes' if the input graph has no edges!

People also frequently failed to clearly explain what graph product they were using.

4. Most people did pretty well on this one -- thanks to a couple of people for pointing out I should have said 'reduce from Hamiltonian-Cycle'. In part (b), a couple of people added a star rather than what I and most others thought of -- disconnected edges; cute. Only one person was heroic and honest enough to point out that you have to deal with integrality issues vis-a-vis $1/\delta$ when you're adding your stars/edges. I didn't take off any points for the rest of you here.

5a) One cute aspect here is that although from the analysis (repeat $Opt \cdot \ln(n/Opt)$ times to get down to $Opt$ elements left, then pick $Opt$ more times) it looks like the algorithm might have to "guess" $Opt$, it doesn't actually -- although the analysis changes, the actual algorithm is the same. (See the sketch at the end of this post.)

6. No one explained why the sum of the prices equals the cost of the algorithm's solution. It's only about a sentence, but it's still important. I ended up not taking off points for this though.

7. Only one person tried 7a (and got it somewhat right), even though I think it's much shorter than 7b (which does have a lot of painful arithmetic in it). I'll post a solution for it later.
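
Re 5a, here is the greedy algorithm as a minimal sketch (on a made-up instance), just to underline the point: $Opt$ never appears in the code, only in its analysis.

    def greedy_set_cover(universe, sets):
        """Repeatedly take the set covering the most uncovered elements.
        Assumes the sets do cover the universe."""
        uncovered, chosen = set(universe), []
        while uncovered:
            best = max(sets, key=lambda S: len(S & uncovered))
            chosen.append(best)
            uncovered -= best
        return chosen

    universe = range(1, 10)
    sets = [set(range(1, 6)), set(range(5, 10)), {1, 3, 5, 7, 9}, {2, 4, 6, 8}]
    print(greedy_set_cover(universe, sets))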