On Sketching the q to p norms
We initiate the study of data dimensionality reduction, or sketching, for the q→p norms. Given an n × d matrix A, the q→p norm, denoted ‖A‖_{q→p} = sup_{x ∈ R^d, x ≠ 0} ‖Ax‖_p / ‖x‖_q, is a natural generalization of several matrix and vector norms studied in the data stream and sketching models, with applications to data mining, hardness of approximation, and oblivious routing. We say a distribution S on random linear maps L : R^{n×d} → R^k is a (k, α)-sketching family if, from L(A), one can approximate ‖A‖_{q→p} up to a factor of α with constant probability. We provide upper and lower bounds on the sketching dimension k for every p, q ∈ [1, ∞], and in a number of cases our bounds are tight. While we mostly focus on constant α, we also consider large approximation factors α, as well as other variants of the problem, such as the case in which A has low rank.
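To make the definition concrete, here is a small illustrative sketch (not from the paper) of the q→p norm in NumPy. For q = 1 the supremum is attained at a standard basis vector, so ‖A‖_{1→p} is exactly the largest ℓ_p norm of a column; for general q, p the function `qp_norm_lower_bound` below is a simple Monte Carlo lower-bound estimate, not an exact computation. The function names and sampling scheme are my own choices for illustration.

```python
import numpy as np

def one_to_p_norm(A, p):
    # ||A||_{1->p} = max_j ||A e_j||_p: the extreme points of the
    # l1 unit ball are +/- e_j, so the sup is attained at a column.
    return max(np.linalg.norm(A[:, j], ord=p) for j in range(A.shape[1]))

def qp_norm_lower_bound(A, q, p, trials=2000, seed=0):
    # Crude lower bound on ||A||_{q->p}: evaluate ||Ax||_p / ||x||_q
    # at random Gaussian directions x and keep the best ratio.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        best = max(best, np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x, ord=q))
    return best
```

For q = p = 2 the quantity being bounded is the spectral norm, so the estimate can be checked against `np.linalg.norm(A, 2)`.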