We discussed the Johnson-Lindenstrauss lemma in class, where we mapped n points in d dimensions to points in a k-dimensional space (with k = O(log n)) while approximately preserving pairwise distances.
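A minimal sketch of the classical construction, assuming the entries of G are i.i.d. Gaussians scaled by 1/sqrt(k) (one of several standard choices):

```python
import numpy as np

def jl_project(points, k, seed=None):
    """Map n points in R^d to R^k by multiplying with a dense Gaussian
    matrix G. With k = O(log n / eps^2), all pairwise distances are
    preserved up to a (1 +/- eps) factor with high probability."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    # Entries ~ N(0, 1/k), so squared norms are preserved in expectation.
    G = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
    return points @ G.T

# Example: 100 points in 1000 dimensions mapped down to 50 dimensions.
X = np.random.default_rng(0).normal(size=(100, 1000))
Y = jl_project(X, k=50, seed=1)
```

Note that each projection costs a full dense matrix-vector product, i.e. O(kd) time per point, which motivates the faster constructions below.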
However, this mapping, which multiplied a point u by a matrix G, is expensive to compute since almost all entries of G are non-zero. There has been a lot of research on designing mappings that can be computed much faster. In particular, one approach is to construct a sparse matrix G', so that the mapping still preserves distances approximately, but the matrix-vector product G'u can be computed much faster.
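As one concrete instance of the sparse approach, here is a sketch of Achlioptas-style sparsification, where the entries of G' are +sqrt(3/k), 0, or -sqrt(3/k) with probabilities 1/6, 2/3, 1/6 (the numpy-dense representation below is for illustration; in practice one would store G' in a sparse format to realize the speedup):

```python
import numpy as np

def sparse_jl_project(points, k, seed=None):
    """Sparse JL map: roughly two-thirds of the entries of G' are zero,
    so computing G' u takes about one-third as many multiplications as
    the dense Gaussian map, while still preserving distances
    approximately."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    # Each entry is -1, 0, or +1 with probabilities 1/6, 2/3, 1/6,
    # then scaled by sqrt(3/k) so squared norms are preserved in expectation.
    signs = rng.choice([-1.0, 0.0, 1.0], size=(k, d), p=[1/6, 2/3, 1/6])
    G = np.sqrt(3.0 / k) * signs
    return points @ G.T
```

Later constructions (such as the Fast JL Transform and sparser embeddings) push the cost down further.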
Here is an excellent talk summarizing many of the recent developments in this area: