Sunday, January 20, 2008

Why decision and search are different for approximation problems

We don't usually think too much about the distinction between decision and search problems for exact NP-optimization problems. But the distinction is relevant when approximation is involved. Here is a problem & solution excised from Homework 1, illustrating the distinction:

1. Decision to search.

a) Suppose the exact decision version of Max-$3$Sat is in P; i.e., for any $c$, the $c$ vs. $c$ decision problem is in P. Show that the exact search version of Max-$3$Sat is also in P; i.e., there is a polynomial-time algorithm which, given a $3$Sat formula $\phi$, finds an assignment satisfying a maximum number of clauses.

b) Explain the difficulty in showing an analogous reduction from a $.9$-factor decision algorithm (i.e., $c$ vs. $.9c$ decision, for any $c$) to a $.9$-factor search algorithm.

Solution.

a) If $\phi$ has $m$ clauses, it's easy to use binary search to identify the value $Opt$ of the optimum using $\log_2 m$ calls to the decision algorithm. (Or, one could be naive and simply use $m$ calls.) Having found $Opt$, we now find an optimum solution as follows: Run the decision algorithm on $(\phi_0, Opt - s_0)$ and $(\phi_1, Opt - s_1)$, where $\phi_b$ denotes the $3$Sat formula gotten by fixing $x_1$ to $b$ ($b = 0, 1$) and simplifying, and $s_b$ counts the clauses already satisfied by that fixing. At least one of these must answer `yes'; say $b_1$. Fix $x_1 = b_1$, replace $\phi$ by $\phi_{b_1}$, and now repeat the process with $x_2$, $x_3$, etc. After $2n$ calls to the decision algorithm we get an assignment $(b_1, ..., b_n)$ satisfying $Opt$ of the clauses.
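The reduction above can be sketched in Python. Everything here is illustrative rather than from the homework: `decide(phi, c)` stands in for the assumed exact decision algorithm (instantiated by brute force below, just so the sketch runs), and clauses are written DIMACS-style as lists of signed variable indices.

```python
from itertools import product

def simplify(phi, v, b):
    """Fix x_v = b: drop clauses the fixing satisfies, delete falsified
    literals from the rest.  Returns (new formula, #clauses satisfied)."""
    lit = v if b else -v
    out, newly_sat = [], 0
    for clause in phi:
        if lit in clause:
            newly_sat += 1
        else:
            out.append([l for l in clause if l != -lit])
    return out, newly_sat

def exact_search(phi, n, decide):
    """Max-3Sat search from an exact decision oracle decide(phi, c),
    which answers whether some assignment satisfies >= c clauses."""
    m = len(phi)
    # Binary search for the optimum value over 0..m.
    lo, hi = 0, m
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if decide(phi, mid):
            lo = mid
        else:
            hi = mid - 1
    opt = lo
    # Fix variables one by one, with no backtracking.
    assignment, satisfied = {}, 0
    for v in range(1, n + 1):
        for b in (False, True):
            phi_b, s_b = simplify(phi, v, b)
            # The residual target drops by the clauses already satisfied.
            if decide(phi_b, opt - satisfied - s_b):
                assignment[v] = b
                phi, satisfied = phi_b, satisfied + s_b
                break
    return opt, assignment

def brute_decide_for(n):
    """A stand-in exact decision oracle (exponential time, demo only)."""
    def decide(phi, c):
        if c <= 0:
            return True
        for bits in product([False, True], repeat=n):
            sat = sum(1 for cl in phi
                      if any(bits[l - 1] if l > 0 else not bits[-l - 1]
                             for l in cl))
            if sat >= c:
                return True
        return False
    return decide

# Example: clauses (x1), (not x1), (x2); the optimum is 2.
phi = [[1], [-1], [2]]
opt, assignment = exact_search(phi, 2, brute_decide_for(2))
```

On this example the sketch finds $Opt = 2$ and a corresponding assignment ($x_2 = 1$, with $x_1$ free), using only yes/no answers from the oracle.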

b) When you call the $.9$-factor decision algorithm with target $\alpha$, a `yes' answer only implies the optimum is at least $.9\alpha$, while a `no' answer implies the optimum is less than $\alpha$. By searching over targets, you can find a value $\beta$ such that $.9\beta \leq Opt \leq \beta$. The hope would be to use the same reduction as before to find an assignment with value in this range. Let's assume we fix the variables in some order with no backtracking -- if we were to use backtracking, there's no way we could guarantee polynomial time. For notational simplicity, assume that as before we first try fixing $x_1$. We now might as well try running the $.9$-factor decision algorithm on $(\phi_0, \alpha)$ and $(\phi_1, \alpha)$ for all possible $\alpha$.

It is perfectly possible that (measuring values as fractions of the $m$ clauses) $Opt = .9\beta$, every assignment achieving $.9\beta$ requires $x_1 = 0$, and yet there exists an assignment with $x_1 = 1$ satisfying $.9\beta m - 1$ clauses. Further, in this case, the decision algorithm is allowed to say `no' to $(\phi_0, \alpha)$ for all $\alpha > .9\beta$ whereas it says `yes' to $(\phi_1, \alpha)$ for all $\alpha < .99\beta$ (for $m$ sufficiently large). Given such answers, how could we avoid making the wrong choice, $x_1 = 1$?
