Alpha-expansion is Exact on Stable Instances

11/06/2017
by Hunter Lang, et al.

Approximate algorithms for structured prediction problems, such as the popular alpha-expansion algorithm (Boykov et al. 2001) in computer vision, typically perform far better on real-world instances than their worst-case theoretical guarantees suggest, often finding solutions that are very close to optimal. The goal of this paper is to partially explain the performance of alpha-expansion on MAP inference in Ferromagnetic Potts models (FPMs). Our main results use the connection between energy minimization in FPMs and the Uniform Metric Labeling problem to give a stability condition under which the alpha-expansion algorithm provably recovers the optimal MAP solution. This theoretical result complements the numerous empirical observations of alpha-expansion's performance. Additionally, we give a different stability condition under which an LP-based algorithm recovers the optimal solution.
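For readers unfamiliar with the algorithm, the following is an illustrative sketch (not taken from the paper) of the alpha-expansion outer loop on a small ferromagnetic Potts model, assuming a simple list-based encoding of unary costs and nonnegative edge weights. In practice each expansion move is solved exactly and efficiently with a graph cut (Boykov et al. 2001); this sketch brute-forces the move, which is only feasible for tiny instances but keeps the move semantics explicit.

```python
# Illustrative sketch of alpha-expansion on a ferromagnetic Potts model.
# The expansion-move subproblem is brute-forced here for clarity; the real
# algorithm solves it exactly with a min-cut/max-flow construction.
import itertools

def potts_energy(labels, unary, edges, weights):
    """Potts energy: unary costs plus w_ij for every edge whose endpoints
    disagree (all w_ij >= 0, i.e. the ferromagnetic case)."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(w for (i, j), w in zip(edges, weights) if labels[i] != labels[j])
    return e

def alpha_expansion(unary, edges, weights, num_labels):
    n = len(unary)
    # Start from the labeling that minimizes each unary term independently.
    labels = [min(range(num_labels), key=lambda k: unary[i][k]) for i in range(n)]
    improved = True
    while improved:
        improved = False
        for alpha in range(num_labels):
            # Expansion move: each node either keeps its label or switches to
            # alpha.  Enumerating all 2^n moves stands in for the cut solver.
            best, best_e = labels, potts_energy(labels, unary, edges, weights)
            for mask in itertools.product([False, True], repeat=n):
                cand = [alpha if m else l for m, l in zip(mask, labels)]
                e = potts_energy(cand, unary, edges, weights)
                if e < best_e:
                    best, best_e, improved = cand, e, True
            labels = best
    return labels
```

The stability conditions studied in the paper concern instances of exactly this energy minimization problem, viewed through its equivalence with Uniform Metric Labeling.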
