Small Transformers Compute Universal Metric Embeddings

09/14/2022
by Anastasis Kratsios, et al.

We study representations of data from an arbitrary metric space 𝒳 in the space of univariate Gaussian mixtures equipped with a transport metric (Delon and Desolneux, 2020). We derive embedding guarantees for feature maps implemented by small neural networks called probabilistic transformers. Our guarantees are of memorization type: we prove that a probabilistic transformer of depth about n log(n) and width about n^2 can bi-Hölder embed any n-point dataset from 𝒳 with low metric distortion, thus avoiding the curse of dimensionality. We further derive probabilistic bi-Lipschitz guarantees, which trade off the amount of distortion against the probability that a randomly chosen pair of points embeds with that distortion. If 𝒳's geometry is sufficiently regular, we obtain stronger, bi-Lipschitz guarantees for all points in the dataset. As applications, we derive neural embedding guarantees for datasets from Riemannian manifolds, metric trees, and certain types of combinatorial graphs. When embedding into multivariate Gaussian mixtures instead, we show that probabilistic transformers can compute bi-Hölder embeddings with arbitrarily small distortion.
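The codomain above is the space of univariate Gaussian mixtures under the mixture-Wasserstein metric MW2 of Delon and Desolneux (2020), which reduces to a finite optimal-transport problem between mixture components, with the closed-form 2-Wasserstein distance between Gaussians as ground cost. Below is a minimal numerical sketch of this setup in Python (assuming NumPy/SciPy), not taken from the paper: mw2_sq computes the squared MW2 distance between two univariate mixtures, and make_toy_mixture_head is a hypothetical random feature map standing in for the paper's probabilistic transformer architecture.

```python
import numpy as np
from scipy.optimize import linprog


def w2_sq_gaussian_1d(mu0, sigma0, mu1, sigma1):
    # Closed-form squared 2-Wasserstein distance between the univariate
    # Gaussians N(mu0, sigma0^2) and N(mu1, sigma1^2).
    return (mu0 - mu1) ** 2 + (sigma0 - sigma1) ** 2


def mw2_sq(w0, mu0, s0, w1, mu1, s1):
    # Squared mixture-Wasserstein distance MW2^2 (Delon and Desolneux, 2020):
    # discrete optimal transport between the component weight vectors, with
    # the Gaussian W2^2 above as ground cost.
    k, l = len(w0), len(w1)
    cost = np.array([[w2_sq_gaussian_1d(mu0[i], s0[i], mu1[j], s1[j])
                      for j in range(l)] for i in range(k)])
    # Transport plan P >= 0 (flattened row-major), row sums w0, column sums w1.
    A_eq = np.zeros((k + l, k * l))
    for i in range(k):
        A_eq[i, i * l:(i + 1) * l] = 1.0   # row-marginal constraints
    for j in range(l):
        A_eq[k + j, j::l] = 1.0            # column-marginal constraints
    res = linprog(cost.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([w0, w1]),
                  bounds=(0, None), method="highs")
    return res.fun


def make_toy_mixture_head(d, k=4, hidden=32, seed=0):
    # Hypothetical stand-in for the paper's probabilistic transformer: a fixed
    # random two-layer map from a point x in R^d to the parameters of a
    # k-component univariate Gaussian mixture, i.e. a point of the codomain.
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((hidden, d)) / np.sqrt(d)
    W2 = rng.standard_normal((3 * k, hidden)) / np.sqrt(hidden)

    def embed(x):
        out = W2 @ np.tanh(W1 @ x)
        logits = out[:k] - out[:k].max()
        w = np.exp(logits) / np.exp(logits).sum()   # softmax -> mixture weights
        mu = out[k:2 * k]                           # component means
        sigma = np.log1p(np.exp(out[2 * k:]))       # softplus -> positive stds
        return w, mu, sigma

    return embed


# Embed two points and measure their distance in the mixture space.
embed = make_toy_mixture_head(d=3)
x, y = np.array([0.1, -0.4, 1.3]), np.array([0.9, 0.2, -0.7])
print(np.sqrt(mw2_sq(*embed(x), *embed(y))))
```

Because the components are univariate Gaussians, the ground cost is available in closed form, so distances in this embedding space stay cheap to evaluate: only a small linear program over the component weights is needed to recover the transport plan.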
