Optimally Repurposing Existing Algorithms to Obtain Exponential-Time Approximations

06/27/2023
by Barış Can Esmer, et al.

The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size n (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an α-approximate solution in time c^k · n^O(1) whenever a solution of size k exists (and, more generally, an extension algorithm that provides the same approximation guarantee whenever a given set can be extended to a solution with k further elements). Our goal is to obtain a β-approximation algorithm for the problem running in time d^n · n^O(1) with d as small as possible. That is, for every fixed α, c, β ≥ 1, we would like to determine the smallest possible d that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the α-approximate extension algorithm. Our results completely resolve this question: (1) For every fixed α, c, β ≥ 1, a simple algorithm (“approximate monotone local search”) achieves the optimal value of d. (2) Given α, c, β ≥ 1, we can efficiently compute the optimal d up to any precision ε > 0. Earlier work presented algorithms (but no lower bounds) for the special case α = β = 1 [Fomin et al., J. ACM 2019] and for the special case α = β > 1 [Esmer et al., ESA 2022]. Our work generalizes these results and, in particular, confirms that the earlier algorithms are optimal in these special cases.
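To make the “sample-then-extend” idea behind approximate monotone local search concrete, the following is a minimal Python sketch of the access model described above: the algorithm only checks feasibility and invokes the α-approximate extension oracle on uniformly sampled subsets of the universe. The names and parameters (`is_feasible`, `extend`, `subset_size`, `budget`, `repetitions`) are illustrative placeholders, not taken from the paper; in the actual algorithm, the sample sizes, budgets, and number of repetitions are chosen by the paper's analysis to attain the optimal base d.

```python
import random


def approx_monotone_local_search(universe, is_feasible, extend,
                                 subset_size, budget, repetitions,
                                 rng=random):
    """Illustrative 'sample-then-extend' sketch (placeholder parameters).

    universe      -- list of ground-set elements
    is_feasible   -- callable(frozenset) -> bool, the feasibility check
    extend        -- callable(frozenset, int) -> frozenset or None; the
                     alpha-approximate extension oracle: given a partial
                     set X and a budget k, it returns a feasible superset
                     of X when X extends to a solution with at most k
                     further elements (and may return None otherwise)
    subset_size   -- size t of each random sample (placeholder)
    budget        -- budget k handed to the extension oracle (placeholder)
    repetitions   -- number of independent samples (placeholder)
    """
    # For a monotone problem, the whole universe is feasible iff any
    # solution exists; use it as a fallback candidate.
    best = frozenset(universe) if is_feasible(frozenset(universe)) else None

    for _ in range(repetitions):
        # Sample a uniformly random subset and try to extend it to a
        # feasible solution via the approximate extension oracle.
        sample = frozenset(rng.sample(universe, subset_size))
        candidate = extend(sample, budget)
        if candidate is not None and is_feasible(candidate):
            if best is None or len(candidate) < len(best):
                best = candidate
    return best
```

Roughly speaking, the total work of such a scheme is the number of repetitions times the cost of each oracle call (c^budget · n^O(1)); the paper's contribution is to balance these quantities optimally for the given α, c, β and to prove a matching lower bound in this oracle model.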
