On the Hardness of Compressing Weights

07/06/2021
by Bart M. P. Jansen, et al.

We investigate computational problems involving large weights through the lens of kernelization, a framework of polynomial-time preprocessing aimed at compressing the instance size. Our main focus is the weighted Clique problem, where we are given an edge-weighted graph and the goal is to detect a clique of total weight equal to a prescribed value. We show that the weighted variant, parameterized by the number of vertices n, is significantly harder than the unweighted problem: assuming NP ⊈ coNP/poly, it admits no kernel of O(n^(3-ε)) bits for any ε > 0.

This lower bound is essentially tight. We show that the problem can be reduced to the case with weights bounded by 2^O(n), which yields a randomized kernel of O(n^3) bits. We generalize these results to the weighted d-Uniform Hyperclique problem, Subset Sum, and weighted variants of Boolean Constraint Satisfaction Problems (CSPs).

We also study weighted minimization problems and show that weight compression is easier when we only want to preserve the collection of optimal solutions. Namely, we show that for node-weighted Vertex Cover on bipartite graphs it is possible to maintain the set of optimal solutions using integer weights from the range [1, n], but if we want to maintain the ordering of the weights of all inclusion-minimal solutions, then weights as large as 2^Ω(n) are necessary.
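
To make the problem definition concrete, here is a minimal brute-force sketch in Python. It only illustrates the decision question from the abstract; the function name and graph encoding are my own, not from the paper, and the search is exponential in n.

```python
from itertools import combinations


def has_weighted_clique(n, edge_weight, target):
    """edge_weight maps frozenset({u, v}) -> integer weight of edge uv.
    Decide whether some clique's edge weights sum exactly to `target`."""
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            pairs = [frozenset(p) for p in combinations(subset, 2)]
            # `subset` induces a clique iff every pair of its vertices is an edge.
            if all(p in edge_weight for p in pairs):
                if sum(edge_weight[p] for p in pairs) == target:
                    return True
    return False


# Tiny usage example: a triangle with edge weights 1, 2, 4.
w = {frozenset({0, 1}): 1, frozenset({1, 2}): 2, frozenset({0, 2}): 4}
print(has_weighted_clique(3, w, 7))  # True: the full triangle weighs 1 + 2 + 4
print(has_weighted_clique(3, w, 5))  # False: no clique has total weight 5
```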

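The abstract states the randomized reduction to weights bounded by 2^O(n) without giving the construction. A standard trick that achieves this kind of compression, which may or may not match the paper's actual argument, is to hash every weight modulo a random prime with Θ(n) bits: a subset whose weight equals the target still matches it after reduction, while each subset whose weight differs collides for only a small fraction of such primes, so a union bound over all 2^n subsets preserves the answer with high probability. Below is an illustrative sketch for Subset Sum (also covered by the paper); it produces a modular Subset Sum instance, and the helper names and the use of sympy's prime sampler are my own choices.

```python
import random
from itertools import combinations

from sympy import randprime  # returns a random prime in [a, b)


def reduce_weights_mod_prime(weights, target, c=3):
    """Randomized weight reduction for Subset Sum, sketched.

    Replaces each weight and the target by residues modulo a random prime p
    of roughly c*n bits. A subset summing exactly to `target` still matches
    mod p; for any subset that does not, the nonzero difference is divisible
    by only a small fraction of Theta(n)-bit primes, so a union bound over
    the 2^n subsets keeps the answer unchanged with high probability.
    """
    n = len(weights)
    bits = max(c * n, 16)  # Theta(n)-bit modulus; floor avoids tiny toy primes
    p = randprime(2 ** (bits - 1), 2 ** bits)
    return [w % p for w in weights], target % p, p


def modular_subset_sum_bruteforce(residues, t, p):
    """Exhaustively check whether some subset of residues sums to t mod p."""
    idx = range(len(residues))
    return any(
        sum(residues[i] for i in s) % p == t
        for k in range(len(residues) + 1)
        for s in combinations(idx, k)
    )


# Toy usage: four 200-bit weights shrink to ~16-bit residues here.
ws = [random.getrandbits(200) for _ in range(4)]
t = ws[0] + ws[2]  # plant a YES instance
small_ws, small_t, p = reduce_weights_mod_prime(ws, t)
print(modular_subset_sum_bruteforce(small_ws, small_t, p))  # True: YES preserved
```

Note the one-sided error: a YES instance is always preserved, while a NO instance can turn into a false YES only with small probability over the choice of prime.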