Practical Algorithms for Learning Near-Isometric Linear Embeddings

01/01/2016
by   Jerry Luo, et al.

We propose two practical non-convex approaches for learning near-isometric, linear embeddings of finite sets of data points. Given a set of training points X, we consider the secant set S(X), which consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. The problem can be formulated as finding a symmetric and positive semi-definite matrix Ψ that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. Motivated by non-negative matrix factorization, we reformulate our problem as a Frobenius norm minimization problem, which we solve using the Alternating Direction Method of Multipliers (ADMM), yielding an algorithm we call FroMax. Our second method solves for a projection matrix Ψ by minimizing the restricted isometry property (RIP) constant directly over the set of symmetric, positive semi-definite matrices. Applying ADMM and a Moreau decomposition on a proximal mapping, we develop another algorithm for dimensionality reduction, NILE-Pro. FroMax converges faster for smaller δ, while NILE-Pro converges faster for larger δ. We then demonstrate empirically that both non-convex approaches are more computationally efficient than prior convex approaches for a number of applications in machine learning and signal processing.
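To make the problem setup concrete, here is a minimal NumPy sketch of the objects the abstract describes: the secant set S(X), a check of the near-isometry condition |sᵀΨs − 1| ≤ δ over all secants s, and recovery of a low-dimensional embedding Φ from a factorization Ψ = ΦᵀΦ. The function names (`secant_set`, `max_distortion`, `embedding_from_psi`) are illustrative, not from the paper, and the ADMM solvers that actually produce Ψ (FroMax, NILE-Pro) are not reproduced here.

```python
import numpy as np

def secant_set(X):
    """Construct S(X): all pairwise difference vectors of the columns
    of X, normalized to lie on the unit sphere.

    X : (n, p) array whose p columns are the training points.
    Returns an (n, m) array with m <= p*(p-1)/2 unit-norm secants.
    """
    n, p = X.shape
    secants = []
    for i in range(p):
        for j in range(i + 1, p):
            v = X[:, i] - X[:, j]
            norm = np.linalg.norm(v)
            if norm > 0:                     # skip duplicate points
                secants.append(v / norm)
    return np.column_stack(secants)

def max_distortion(Psi, S):
    """Largest deviation of s^T Psi s from 1 over all secants s
    (the columns of S). A symmetric PSD matrix Psi is a
    delta-near-isometry on S(X) exactly when this value <= delta.
    """
    quad_forms = np.einsum('ik,ij,jk->k', S, Psi, S)
    return np.max(np.abs(quad_forms - 1.0))

def embedding_from_psi(Psi, tol=1e-8):
    """Factor a symmetric PSD Psi as Phi^T Phi via its
    eigendecomposition; the numerical rank r of Psi gives the
    embedding dimension, and Phi is the (r, n) linear embedding.
    """
    w, V = np.linalg.eigh(Psi)
    keep = w > tol
    return (V[:, keep] * np.sqrt(w[keep])).T
```

A quick sanity check under these assumptions: the identity matrix is an exact isometry, so its distortion on any secant set should be zero up to round-off.

```python
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))    # 50 points in R^20
S = secant_set(X)
print(max_distortion(np.eye(20), S)) # ~0: identity preserves norms
```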
