Friday, February 29, 2008

Homework 2: Notes and Comments

Here are solution sketches and comments on the rest of the problems in HW2.
  1. For the naive algorithm on unweighted graphs, several answers said: "if an edge is uncovered, the algorithm will pick two vertices when the optimal algorithm picks one vertex". While this is true (and is indeed the basis for the proof), you need to show that we are not over-counting. One way is to clearly state a lower bound on OPT (the size of any maximal matching), and state why the algorithm is at most twice the lower bound.
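To make the accounting concrete, here is a minimal Python sketch of the naive algorithm together with the matching-based lower bound (the edge-list representation and names are just for illustration):

```python
def naive_vertex_cover(edges):
    """Pick both endpoints of any currently uncovered edge.

    The picked edges form a maximal matching M.  Any vertex cover
    (in particular an optimal one) needs at least one endpoint per
    edge of M, so OPT >= |M|, while our cover has size exactly 2|M|.
    """
    cover, matching = set(), []
    for (u, v) in edges:
        if u not in cover and v not in cover:
            matching.append((u, v))
            cover.update((u, v))
    return cover, matching

# A 4-cycle: OPT = 2, and the algorithm picks 2|M| = 4 vertices.
cover, matching = naive_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0)])
assert len(cover) == 2 * len(matching)
```

The assertion is exactly the over-counting check: the cover's size is charged edge-by-edge to a maximal matching, which lower-bounds OPT.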

    For the randomized VC algorithm, see the solutions to problem 2b from this previous semester's course.

    For the primal-dual algorithm, we write down the dual, which maximizes Σ_e y_e subject to the constraints that for each vertex v, Σ_{e : v∈e} y_e ≤ c_v, and that the y_e values are non-negative. We start with the zero primal and dual solutions, and raise all the y_e's uniformly until some constraint becomes tight (i.e., for some vertex v, Σ_{e : v∈e} y_e = c_v). Pick that vertex into the primal solution, and "freeze" all the edge variables involved in this constraint. Now continue raising all the unfrozen edge variables, picking vertices and freezing edges when other constraints become tight, etc., until all the edge variables are frozen: we now have a feasible primal vertex cover C. (Why?) Now the cost of this solution is Σ_{v∈C} c_v = Σ_{v∈C} Σ_{e : v∈e} y_e ≤ Σ_e |e ∩ C|·y_e ≤ 2·Σ_e y_e, which is at most twice the dual value. (The last inequality is because each edge contains at most 2 vertices.) Hence the vertex cover costs at most twice a feasible dual solution, and so at most twice OPT.
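A toy implementation of this primal-dual scheme (a sketch assuming strictly positive costs; the discrete "raise until some vertex is tight" events stand in for the continuous process):

```python
def primal_dual_vertex_cover(costs, edges):
    """Raise all unfrozen y_e uniformly until some vertex constraint
    sum_{e: v in e} y_e <= c_v becomes tight; then buy that vertex
    and freeze its incident active edges."""
    y = {e: 0.0 for e in edges}
    slack = dict(costs)          # remaining slack c_v - sum of y_e at v
    frozen = set()
    cover = []
    while len(frozen) < len(edges):
        active = [e for e in edges if e not in frozen]
        degree = {v: sum(1 for e in active if v in e) for v in costs}
        # largest uniform raise before some vertex constraint is tight
        delta = min(slack[v] / degree[v] for v in costs if degree[v] > 0)
        for e in active:
            y[e] += delta
            for v in e:
                slack[v] -= delta
        for v in costs:
            if degree[v] > 0 and slack[v] <= 1e-12:   # became tight
                cover.append(v)
                frozen.update(e for e in active if v in e)
    return cover, y
```

On the path 0-1-2 with unit costs it buys only the middle vertex: cost 1, while Σ_e y_e = 1, comfortably within the factor-2 bound.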

    For the greedy algorithm, if you have a star with center cost 1+ε, and n leaves with costs 1, 1/2, 1/3, …, 1/n, then you will pick all the leaves instead of the center, and hence only get an H_n approximation. An O(log m) = O(log n) upper bound follows from the standard set cover analysis.
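A quick script exhibiting the bad star instance (here "greedy" means repeatedly buying the vertex with minimum cost per newly covered edge; all names are illustrative):

```python
def greedy_weighted_vc(costs, edges):
    """Repeatedly pick the vertex minimizing cost / (# newly covered edges)."""
    uncovered = set(edges)
    cover = []
    while uncovered:
        def ratio(v):
            deg = sum(1 for e in uncovered if v in e)
            return costs[v] / deg if deg else float('inf')
        v = min(costs, key=ratio)
        cover.append(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

# Star: center 'c' costs 1 + eps, leaf i costs 1/i, one edge per leaf.
n, eps = 50, 0.01
costs = {'c': 1 + eps}
costs.update({i: 1.0 / i for i in range(1, n + 1)})
edges = [('c', i) for i in range(1, n + 1)]
cover = greedy_weighted_vc(costs, edges)
# Leaf i always beats the center: 1/i < (1+eps)/i, so greedy buys
# every leaf and pays H_n, while the center alone would cost 1+eps.
assert 'c' not in cover and len(cover) == n
```

Greedy pays H_50 ≈ 4.5 here while the optimum is 1.01, matching the H_n lower bound above.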

  2. One point often missed: this problem defined C* and F* to be the facility and connection costs for the optimal integer solution, and not for the optimal LP solution.

    Also, if you claim that after filtering you can use the standard Set Cover randomized rounding to obtain a solution, you need to show how the LP solution for facility location can be mapped to an LP solution for Set Cover.

  3. No surprises here.

  4. Build a bipartite graph on clauses and variables, and use Hall's theorem to infer the existence of a perfect matching.
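If you want to sanity-check Hall's condition on small instances, an augmenting-path search finds the matching constructively (a sketch; the clause/variable names are made up):

```python
def bipartite_matching(adj):
    """Augmenting-path bipartite matching; adj maps each left vertex
    (clause) to the variables it contains."""
    match = {}                     # variable -> clause
    def try_augment(c, seen):
        for v in adj[c]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_augment(match[v], seen):
                match[v] = c
                return True
        return False
    for c in adj:
        if not try_augment(c, set()):
            return None            # only possible if Hall's condition fails
    return {c: v for v, c in match.items()}

# Each clause gets matched to a private variable it contains:
adj = {'C1': ['x', 'y'], 'C2': ['y', 'z'], 'C3': ['x', 'z']}
m = bipartite_matching(adj)
assert m is not None and all(m[c] in adj[c] for c in adj)
```

By König/Hall duality, when every set of clauses has enough distinct variables, this search can never get stuck.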

Thursday, February 28, 2008

Homework 2 graded

Sorry for the delay. We'll hand it back today. Overall average was 84%.

Solutions to Homework 2, #6

7a. Take any s sets in the family 𝓑_{s,t}. By definition, each misses at most one element from [t+s]. Thus their intersection misses at most s elements from [t+s], and thus has size at least t.

7b. Extend the definition of S_ij to families of sets in the natural way. It's obvious that if A is in 𝓑_{s,t}, then S_ij(A) is too. Hence shifting has no effect on 𝓑_{s,t}.

7c. This one definitely takes much more explanation, but I don't think it's actually tricky -- you just have to follow your nose all the way through. However, you do have to avoid the following two extremely common mistakes:

"S_ij(𝓕) is given by taking S_ij(A) for all A ∈ 𝓕." -- Not true.

"For any A′ ∈ S_ij(𝓕), there is an A ∈ 𝓕 such that S_ij(A) = A′." -- False false false! Consider i=1, j=2, n=2, 𝓕 = {{1},{2}}. Then S_12(𝓕) = 𝓕, and yet for {2} ∈ S_12(𝓕), there is no set A ∈ 𝓕 such that S_12(A) = {2}!
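Both mistakes come from conflating the set-level and family-level shifts. A small Python sketch of the family-level operation (a toy implementation; the inner shift_set is the unconditional replacement, and the membership check implements the family-level rule):

```python
def shift_family(F, i, j):
    """Left-shift S_ij applied to a family of frozensets: replace j by i
    in A, unless i is already in A, j is not in A, or the shifted set is
    already in F (in which case A stays put)."""
    def shift_set(A):
        if j in A and i not in A:
            return (A - {j}) | {i}
        return A
    return {shift_set(A) if shift_set(A) not in F else A for A in F}

# The counterexample family: {2} cannot move since {1} is already there.
F = {frozenset({1}), frozenset({2})}
assert shift_family(F, 1, 2) == F
```

Running it on the family from the second mistake confirms the shift fixes the family while no single set's (unconditional) shift equals {2}.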

Here is my two-part proof:

Part 1.
Suppose by way of contradiction that there are s sets B_1,...,B_s in S_ij(𝓕) whose intersection is of size less than t. Let A_1,...,A_s be the sets they "came from". So for each k, either B_k = S_ij(A_k) or B_k = A_k, the latter occurring when S_ij(A_k) is also in 𝓕. Clearly, the A_k's are all distinct, and so (since 𝓕 is s-wise t-intersecting) the intersection of the A_k's has size at least t.

How could the A_k's have intersection size at least t and yet the B_k's have intersection size less than t? Certainly nothing changes on coordinates other than i or j. It's also clear that S_ij can't cause coordinate i to leave the intersection. So it must be that coordinate j left the intersection. In particular, j is in the intersection of the A_k's and not in the intersection of the B_k's.

Further, since we know that j is in the intersection of the A_k's, it follows that i is not in the intersection of the A_k's. Otherwise, we would have both i and j in each A_k, which surely means that i and j are both in each B_k. But we've already established that j is not in the intersection of the B_k's.

Finally, since i is not in the intersection of the A_k's, and since shifting caused the intersection size to go down, i had better not be in the intersection of the B_k's.

To summarize, j but not i is in the intersection of the A_k's, and neither i nor j is in the intersection of the B_k's. Further, it must be the case that the A_k's have intersection size exactly t.

Part 2.
Focus on the coordinates i and j in the A_k's now. For each A_k with a 0 in the ith coordinate and a 1 in the jth coordinate (and there is at least one such A_k), how come all those 1's didn't shift over into the ith coordinate, causing all the B_k's to have 1's in their ith coordinate?

It must be because there is at least one A_k -- say A_1 WLOG -- such that S_ij(A_1) was already in 𝓕. So let's look at the intersection of the following s distinct sets in 𝓕: S_ij(A_1), A_2, ..., A_s. These s sets must also have intersection size at least t. Now the intersection of these is very similar to the intersection of A_1,...,A_s; however, S_ij(A_1) does not have a 1 in the jth coordinate. This means j is not in the intersection of S_ij(A_1), A_2, ..., A_s.

But this is a risky situation, because we concluded in Part 1 that A_1,...,A_s had intersection size only t, and this included the fact that j was in the intersection. The only way S_ij(A_1), A_2, ..., A_s could still have intersection size at least t is if, miraculously, putting in S_ij(A_1) causes i to be in the intersection, where it wasn't before.

But the only way for this to happen is if A_1 was the only set among A_1,...,A_s with the pattern 0,1 in coordinates i,j. In this case, every other A_k has the pattern 1,1, which certainly implies B_k = A_k. Further, we have B_1 = A_1 by assumption (since S_ij(A_1) was already in 𝓕). So in fact all the B_k's equal their A_k's. This is a contradiction, because in Part 1 we concluded that j was in the intersection of the A_k's but not in the intersection of the B_k's.

Tuesday, February 26, 2008

Lecture 13: The Hardness Endgame

Here are the final steps to the hardness of Priority Steiner tree. The details are in the paper On the Approximability of Network Design Problems, J. Chuzhoy, A. Gupta, J. Naor and A. Sinha, SODA 2006.

What we saw Today in Class. Recall that we assumed that for some h, we had set cover instances with universe of size u, s sets, each set of size z, where
  • the Yes instances had a solution using at most X sets, but
  • the No instances had the property that even picking hX sets would leave a 2^{-ch} fraction of the universe uncovered. (Let us call this the h-factor property.)
The Yes instances mapped to instances of PST where the solution cost was at most X.

In the No instances, we proved a claim that if the total cost of edges bought in levels 1, 2, …, j−1 was at most hX/2, then the cost incurred at level j itself would be at least (u/4z)·2^{-ch}. In particular, if k were (at least) (4zhX/u)·2^{ch}, this claim implies that the total cost in the No instance is at least hX/2.

And hence we have a gap of h/2 between the Yes and the No instances.

Parameters (The Questions). So how do we set the parameters so that
  1. the set cover instance satisfies the h-factor property,

  2. the number of levels k is at least (4zhX/u)·2^{ch}, and

  3. the size of the construction N ≈ (2z)^k·s is reasonable.
Our Friend the Set Cover Instance. The construction of the set cover instance is the usual thing as discussed in Lecture 11 (or in Homework 3): note that we can start from a SAT instance of size n, and get a label cover instance with n′ nodes in each partition, each node participating in d′ constraints, with label set L′ and key set K′, where n′ = n^{log(1/η)}, and d′, |L′|, |K′| are all poly(1/η).

In turn, this gives us a set cover instance with universe size u = n′d′·2^{|K′|}, number of sets s = n′·|L′|, size of each set z = d′·2^{|K′|−1}, and optimal solution size X = 2n′; it satisfies the h-factor property above if h² ≤ O(1/η).

Parameters (The Answers). To answer the questions above:
  1. the set cover instance satisfies the h-factor property as long as h² < O(1/η). This is satisfied if we set η ≈ 1/h², with h = Θ(log log n).

  2. Note that the parameters derived above satisfy zX/u = Θ(1), and hence k = Θ(h·2^{ch}) = Θ(log n) is sufficient.

  3. the size of the construction is N ≈ s·(2z)^k = n^{log(1/η)}·poly(1/η)·(2z)^k = n^{Θ(log log log n)}.
Finally, the gap between the Yes and No instances is h = Ω(log log n) = Ω(log log N) as well. This completes the proof.

Questions and comments about today's lecture go here, as usual.

Homework 3 correction -- Problem 4

Hi -- as many of you noticed, the last part of Problem 4 does not look so doable :)

The result is in fact true -- indeed, Ω(log n) hardness for Set-Cover is known even assuming only P ≠ NP. However, the method illustrated probably won't cut it.

Please do one of the following instead:

1. Use this method to prove f(n)-hardness for Set-Cover, for as large a function f(n) as you can -- subject to NP not in DTIME(2^{o(n)}) or an even better DTIME assumption.

OR

2. Prove Ω(log n)-hardness for Set-Cover assuming NP not in BPTIME(n^{polylog(n)}) -- using any method.

Tuesday, February 19, 2008

Integrality Gap for Group Steiner

The integrality gap of Ω(log²k) was proved in Integrality Ratio for Group Steiner Trees and Directed Steiner Trees, E. Halperin, G. Kortsarz, R. Krauthgamer, A. Srinivasan, and N. Wang. SIAM Journal on Computing, Vol. 36, 1494-1511, 2007. Here's the idea.
  1. Create a balanced δ-ary tree with H+1 levels, where the edge costs decrease exponentially: edges at level h cost 1/2^h. (Hence edges incident to the root cost 1/2, and edges incident to the leaves cost 1/2^H.)

  2. The arity of the tree is δ = c·log n for some constant c, so the height is H = log n / log δ. The number of leaves is n, and the total number of nodes is n(1+o(1)). The number of groups/colors/elements is k = 2^{2H}.

  3. Do the following independently for each of the k elements x. The element x starts off at the root. For each of the δ child edges, it flips a fair coin independently: if the coin comes up heads, a copy of x slides down that edge, where it begins such a random process anew. Note that the expected number of copies of x at level i is (δ/2)^i. The group for x is the set of all the leaves that contain a copy of x.
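A toy simulation of this random process (parameters shrunk for illustration only -- not the paper's actual settings):

```python
import random

def build_groups(delta, H, k, seed=0):
    """Each element starts at the root; at every node reached, each of
    the delta child edges independently keeps a copy with probability
    1/2.  The group of an element is the set of leaves it reaches;
    leaves are addressed by their root-to-leaf path of child indices."""
    rng = random.Random(seed)
    groups = []
    for _ in range(k):
        frontier = [()]                       # copies at the current level
        for _ in range(H):
            frontier = [path + (c,)
                        for path in frontier
                        for c in range(delta) if rng.random() < 0.5]
        groups.append(set(frontier))
    return groups

# Expected copies at level i is (delta/2)^i: with delta = 8, H = 4,
# a group typically has around (8/2)^4 = 256 of the 8^4 = 4096 leaves.
groups = build_groups(delta=8, H=4, k=5)
```

This is only the group-generation step; the gap argument is about how well trees can hit all k such random groups simultaneously.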
Small fractional solution. It is not that difficult to show that this instance has a fractional solution of value O(H) with high probability (where the probability is over the random choice of groups above):
  • Consider some node at level h in this tree. Conditioned on this node getting a copy of x, the expected number of leaves containing x below it is μ_h = (δ/2)^{H−h}. With high probability, the actual number lies in (μ_h/3, 3μ_h). (Use Chernoff bounds!)

  • For an edge e going from level h−1 to level h, set x_e = 9μ_h/μ_0 = 9(2/δ)^h --- using the above facts, this is a feasible solution (whp) with cost 9H.
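As a sanity check of the 9H figure: if the fractional solution puts x_e = 9μ_h/μ_0 = 9(2/δ)^h on each level-h edge (a value reconstructed from the surrounding estimates, so treat the constants as illustrative), then the per-level cost is constant:

```latex
\text{cost} \;=\; \sum_{h=1}^{H}
  \underbrace{2^{-h}}_{\text{edge cost}} \cdot
  \underbrace{\delta^{h}}_{\#\,\text{level-}h\text{ edges}} \cdot
  \underbrace{9\,(2/\delta)^{h}}_{x_e}
  \;=\; \sum_{h=1}^{H} 9 \;=\; 9H .
```

Each level contributes exactly a constant, which is where the O(H) total comes from.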
No small integer solution. Next, we show that "with high probability, all integer solutions have cost Ω(H²·log k) = Ω(H·log²k)." Since the fractional solution costs only O(H), this proves the Ω(log²k) integrality gap for the problem.
  • Since each leaf-edge costs 1/2^H, no tree of cost C can use more than C·2^H of these. There are δ^H leaf-edges, so the number of possible trees of cost C is at most (δ^H choose C·2^H) ≤ exp(C·2^H·H·log δ). Call this value M(C).

  • Suppose Fail(C) is the probability that a fixed tree of cost C misses the group for x created above. By independence of the groups, the probability that this fixed tree hits all the groups is at most (1 − Fail(C))^k ≤ exp(−k·Fail(C)) =: Hitall(C).

  • Hence, taking a trivial union bound over all trees, the chance that there exists some tree that hits all the groups is at most M(C)×Hitall(C).

  • If this product is o(1) for some C, then with probability 1 − o(1), no tree of cost C is feasible. We would like to show this holds for some constant ε > 0 and C = εH²·log k.
Now for the most technical part.

One needs to prove that any fixed tree of cost at most εH²·log k misses the group of a fixed element x with substantial probability -- that is, to lower-bound Fail(C). The proof strategy seems natural (in hindsight), but also somewhat delicate: details in Section 2.3.1 of the paper above. Question: can one get a slightly worse bound but without the case analysis at the end? That may be useful to know...

This analysis of the integrality gap (and an extension of it) is actually used for the proof of soundness in the hardness reduction we saw today! I will write/say some more about this later.

Monday, February 18, 2008

Tomorrow's lecture

For tomorrow, could you brush up on what you know about Max k-ary Consistent-Labeling(K,L) from HW1 problem #7 (also defined in the list of problems)? Thanks!

In particular, consider the 2-ary (also called "binary") case: the constraint hyperedges now become normal edges, and the hypergraph becomes a normal graph. (Note that "strongly" and "weakly" satisfying mean the same thing now.)

Suppose such an instance is a bipartite graph (U,V,E), with |U| = |V| = n; moreover, the instance is "regular" (i.e., the number of constraints involving each vertex is some fixed d). Remind yourself that for such instances, it is NP-hard to distinguish whether the value is 1 or less than η, for a suitable constant η < 1.

Thursday, February 14, 2008

Notes on the group Steiner tree proof

A few notes on the rounding proof (and how to complete it):
  1. When you round down the x_e values to the power of 2 below them, note that you might violate the minimality of the solution: e.g., if an edge e has value x_e = 3/4, and has two children e1, e2 with values x_{e1} = 1/2 and x_{e2} = 1/4, then rounding down would give x′_e = 1/2 but would maintain x′_{e1} = 1/2 and x′_{e2} = 1/4.

    This kind of situation would cause some concern if we want to "merge" edges with identical values in the last step of the preprocessing: how would we merge e and e1 in this case? The answer is: we would just "contract" the edge e1, and hence any children of e1 would now become children of e. The key observation is that the random process on this new contracted graph connects the same set of leaves to the root as on the original uncontracted graph, since x′_{e1} = x′_e.

    (Aside: the procedure is fairly robust, so you could alter it and add a step to regain minimality. Or you could just forget about the last "merging" step entirely. Note that we would then no longer have a tree of "height" log g, and the closing arguments would change slightly.)

  2. The argument to compute Δ: suppose the height of the "merged" tree is H.

    For some vertex u ∈ g, let us first show that Δ_u = Σ_{w∈g, w≠u} Pr[E_u ∧ E_w] ≤ O(H)·x′_u. Summing this over all u ∈ g gives us O(H), since the x′_u's sum up to at most 1, and hence completes the proof.

    For any w ∈ g, let e be the lowest edge shared by the path from u to the root and the path from w to the root. A moment's thought tells us that Pr[E_u] = x′_u and Pr[E_w | E_u] = x′_w/x′_e. Hence, Pr[E_u ∧ E_w] = x′_u·x′_w/x′_e.

    Now sum over all w ∈ g such that the paths from u to the root and from w to the root have the edge e as their least common ancestor edge, as above. Use the fact that Σ_{such w} x′_w ≤ O(x′_e), since all the flow going to such w's must have been supported on the edge e before rounding down. Hence, the sum of Pr[E_u ∧ E_w] over all the relevant w for some fixed edge e is O(x′_u).

    Finally, the tree is of height H, which means there are at most H such edges e, giving us O(H)·x′_u as claimed.

  3. Aravind Srinivasan has some notes on Janson's inequality: they explain how the "simple" structure of the events E_i (and the independence of the random variables determining these events) allows us to prove such a strong inequality. Joel Spencer also has slides on the proof: in fact, he states it in exactly the form we needed today, with his "bad" events B_i being our "good" events E_v!
If anything else was unclear, please post questions, comments, or clarifications!

Tuesday, February 12, 2008

Lecture 9: Finishing Ek-Indep. Set hardness; AKC hardness

Any questions about Lecture 9 can go here.


I wanted to include a niggling detail about the hardness for Independent Set. Recall that in our reduction, our hyperedges involved k/2 distinct sets from the block {v} × {0,1}^L and k/2 distinct sets from the block {v′} × {0,1}^L, whenever (u,v) and (u,v′) were both in E.

It is important (why?) to allow the case v′ = v. However, this requires us to take a little care. First of all, in this case, we need to take the hyperedges to be of the form {A_1^v, ..., A_k^v} where all k sets are distinct. This will still be a hyperedge iff π_{v→u}(⋂_{i=1}^{k/2} A_i) is disjoint from π_{v→u}(⋂_{i=k/2+1}^{k} A_i).

Suppose that v is a "good" vertex. We know that v's block (i.e., the family of sets picked there) is too big to be k/2-wise t-intersecting, so it contains some "good" sets A_1^v, ..., A_{k/2}^v with intersection size less than t. We took Sugg(v) to be this intersection. As we saw in class, if v has a neighbor u, and v′ is another good neighbor of u, then Sugg(v) and Sugg(v′) are "consistent" label sets (i.e., when projected to u, they agree on at least one key). In particular, Sugg(v) and Sugg(v′) must certainly be nonempty.

The tiny tricky part now is the following: this argument relies on v′ ≠ v! If v′ = v, then you only have k/2 "good" sets in play, and you can't make any sort of argument based on size-k hyperedges. In particular, if v has no other "good" distance-2 neighbors, Sugg(v) may well be empty!


But this is not actually a problem. In this case, just put an arbitrary element into Sugg(v). Then the soundness argument is fine; any neighbor u of v must have v as its buddy, and then of course the (u,v) constraint will be satisfied with probability 1!

Saturday, February 9, 2008

Lecture notes

for the previous week (Lec5 and Lec6) are up. Tuesday's lecture (Lec7) will be up shortly. Sorry for the delay...

Friday, February 8, 2008

Left-Shifting

Could use a quick reality check on the definition of left-shift (q6b); there seems to be more than one consistent interpretation.

What would the following set become with i=1, j=3?
{ 011, 101, 111, 001 }

Thanks. :-)

Thursday, February 7, 2008

Lecture 8: Hardness of Max-Ek-Independent-Set

Any questions about today's lecture? I will repeat here the hardness reduction:

Given: Max-Label-Cover(K,L) instance G=(U,V,E) with projection constraints πvu.

Reduction to a weighted k-uniform hypergraph H:
. The vertex set is V × {0,1}^L (there are |V|·2^{|L|} vertices). We call each {v} × {0,1}^L the "block" of vertices corresponding to v ∈ V. We name the strings/sets in this block A(v).
. The weight on each vertex A(v) is its p-biased weight, where p = 1 − 2/k − δ. The total weight in each block is 1, and the overall total weight is |V|.
. For each pair of vertices v,vʹV sharing a common u (i.e., with (u,v),(u,vʹ)E), we will put some hyperedges on vertices in the union of the v and vʹ blocks.
. Specifically, we put a hyperedge on the k-set {A_1(v), ..., A_{k/2}(v), B_1(v′), ..., B_{k/2}(v′)} iff the following condition holds:

π_{v→u}(⋂_{i=1}^{k/2} A_i(v)) is disjoint from π_{v′→u}(⋂_{i=1}^{k/2} B_i(v′)).

(Supertechnical note. The A's should all be distinct, and the B's should all be distinct. Also, we allow v=vʹ, in which case the A's and B's should be mutually distinct.)


This construction is polynomial time (since k, |K|, |L| are all constants; supersupertechnical note: we assume δ is rational).

You should again try to convince yourself of the "completeness" part of the proof: Opt(G) = 1 implies Opt(H) ≥ p = 1 − 2/k − δ.

Wednesday, February 6, 2008

Solution to 7a on Homework 1

Here is a solution to problem 7a on homework 1. To get the full (1 − 1/(k−1) − δ) vs. ε hardness result for Max-Ek-Independent-Set (a problem we will discuss in tomorrow's class), you need to use something similar, and that something similar might be on the next homework...

Monday, February 4, 2008

How to do a hardness reduction

Just a quick reminder on how to do a hardness reduction. It may seem obvious, but these reductions are going to get even more complicated later, and there was already some confusion on the homework.

To show c' vs. s' NP-hardness for problem B, based on known c vs. s NP-hardness for problem A:

1. Design a polynomial time reduction R mapping A-instances into B-instances.
2. Show that R can be carried out in polynomial time.
3. Show that for any A-instance X, Opt(X) ≥ c implies Opt(R(X)) ≥ c'.
4. Show that for any A-instance X, Opt(X) < s implies Opt(R(X)) < s'.

Done.

Comments on each of the steps:

1. This is generally the second-hardest part.

2. This is generally trivial.

3. This is generally true "by design". One typically shows that given any solution x to A-instance X with val_X(x) ≥ c, one can convert it to a solution y for R(X) with val_{R(X)}(y) ≥ c′.

Usually this conversion is explicit and easy, but please don't bother to note whether or not it is polynomial-time -- it doesn't matter. Noting this only confuses things.

Furthermore, especially in the case that c = c' = 1, it is usually easy to prove some kind of "reverse", like, if Opt(R(X)) = 1 then Opt(X) must also be 1. But again, this is irrelevant and it's best not to write it without having a good reason.

4. This is generally the hardest part. One almost always proves it in the contrapositive: if Opt(R(X)) ≥ s′, then Opt(X) must be ≥ s. Generally, one does this by showing how to convert any solution y for R(X) with val_{R(X)}(y) ≥ s′ into a solution x for X with val_X(x) ≥ s.

Again, it does not matter if this conversion is polynomial-time; like in step 3, one only needs to show an existence result. (Indeed, often the conversion is randomized.)

Saturday, February 2, 2008

Homework 1 graded

I'll bring them around on Tuesday. The average was 80%.

Some comments on the problems:

1. Some people made the unwarranted assumption that the event that A runs in a reasonable amount of time is independent of the event that A outputs a solution of value at least s.

2. Most people got this completely right; the only interesting part was to see how slickly you could prove (b). People were occasionally a little unclear on why their random assignment in (d) satisfied a constraint with probability ≥ 1/|K|.

3. Throughout, people were sometimes unclear on what exactly one needs to do to prove a hardness reduction. I think I will post a short note about this in a future blog entry.

Also, in the first part of (a) you needed to show, say, 1/3 vs. (1/3)(1 − ε₀) hardness for Independent-Set, not 1 vs. 1 − ε₀. The latter decision problem is completely trivial -- just output 'yes' if the input graph has no edges!

People also frequently failed to clearly explain what graph product they were using.

4. Most people did pretty well on this one -- thanks to a couple of people for pointing out I should have said 'reduce from Hamiltonian-Cycle'. In part (b), a couple of people added a star rather than what I and most others thought of -- disconnected edges; cute. Only one person was heroic and honest enough to point out that you have to deal with integrality issues vis-a-vis 1/δ when you're adding your stars/edges. I didn't take off any points for the rest of you here.

5a) One cute aspect here is that although from the analysis (repeat Opt·ln(n/Opt) times to get down to Opt elements left, then pick Opt more times) it looks like the algorithm might have to "guess" Opt, it doesn't actually -- although the analysis changes, the actual algorithm is the same.
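To see this concretely: Opt appears only in the analysis, never in the code (a sketch; the instance is made up):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most uncovered elements.

    The analysis charges the first Opt*ln(n/Opt) picks to shrinking the
    uncovered part down to Opt elements, and at most Opt further picks
    to finishing -- but the algorithm itself never refers to Opt."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("sets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

cover = greedy_set_cover(range(6),
                         [{0, 1, 2}, {3, 4}, {5}, {0, 3}, {1, 4, 5}])
assert set().union(*cover) >= set(range(6))
```

The same code is run whatever Opt happens to be; only the bound you can prove about len(chosen) changes.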

6. No one explained why the sum of the prices equals the cost of the algorithm's solution. It's only about a sentence, but it's still important. I ended up not taking off points for this though.

7. Only one person tried 7a (and got it somewhat right), even though I think it's much shorter than 7b (which does have a lot of painful arithmetic in it). I'll post a solution for it later.