
More on Expanders

We discussed expanders and their applications in the last class. Today I will cover an explicit construction by Spielman, which is simpler than the well-known combinatorial construction based on the zigzag product by Reingold, Vadhan and Wigderson.

Here’s a series of two talks by Avi Wigderson explaining much more about the applications of expanders and also the zigzag construction if you are interested:



Higher-Order Cheeger Inequalities

We discussed Cheeger’s inequality in the last lecture and I briefly mentioned some generalizations to higher eigenvalues. Here is a nice talk by Luca Trevisan giving some intuition for one of the generalizations I mentioned:

The talk is very accessible and does not assume any background in spectral graph theory.
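To make the quantities in Cheeger's inequality concrete, here is a small sketch (my own illustrative example, not from the talk or the lecture): it builds the normalized Laplacian of a cycle graph C_n, computes its eigenvalues, and checks that the conductance of the cycle sits between the two Cheeger bounds lambda_2 / 2 and sqrt(2 * lambda_2).

```python
# Illustrative sketch: Cheeger's inequality on the cycle graph C_n.
# The graph, sizes, and library choices here are mine.
import numpy as np

def normalized_laplacian_cycle(n):
    """Normalized Laplacian I - D^{-1/2} A D^{-1/2} of the n-cycle."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = 1
        A[(i + 1) % n, i] = 1
    d = A.sum(axis=1)                      # every degree is 2
    D_inv_sqrt = np.diag(1 / np.sqrt(d))
    return np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

n = 20
eigs = np.sort(np.linalg.eigvalsh(normalized_laplacian_cycle(n)))
lam2 = eigs[1]                             # second-smallest eigenvalue

# Cutting the cycle into two arcs of n/2 vertices cuts 2 edges, and each
# side has volume n, so the conductance is phi = 2/n.
phi = 2 / n

# Cheeger's inequality: lam2/2 <= phi <= sqrt(2*lam2).
print(lam2 / 2, "<=", phi, "<=", np.sqrt(2 * lam2))
```

For the cycle the eigenvalues are known in closed form (1 - cos(2*pi*k/n)), so lambda_2 is roughly 2*pi^2/n^2, which makes the quadratic gap between the two sides of the inequality visible as n grows.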

Fast Dimension Reduction

We discussed the Johnson-Lindenstrauss lemma in class, where we mapped n points in d dimensions to points in a k-dimensional space (with k = O(log n)) while approximately preserving the distances.

However, the mapping, which multiplies a point u by a matrix G, is expensive to compute, since G is a matrix in which almost all entries are non-zero. There has been a lot of research on coming up with mappings that can be computed much faster. In particular, one way is to come up with a sparse matrix G’, so that the mapping still preserves distances approximately, but the matrix-vector product G’u can be computed much faster.
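As a quick illustration of the baseline dense mapping (the point counts, dimensions, and the 1/sqrt(k) scaling below are my choices for the sketch; a sparse G’ would simply replace the Gaussian G here):

```python
# Sketch of the Johnson-Lindenstrauss mapping with a dense Gaussian matrix G.
# Sizes and seed are illustrative, not from the lecture.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300          # the lemma needs k = O(log n / eps^2)
X = rng.normal(size=(n, d))      # n points in R^d
G = rng.normal(size=(k, d)) / np.sqrt(k)   # scaled Gaussian matrix
Y = X @ G.T                      # mapped points in R^k

def max_distortion(X, Y):
    """Worst relative change in pairwise distance under the mapping."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            orig = np.linalg.norm(X[i] - X[j])
            new = np.linalg.norm(Y[i] - Y[j])
            worst = max(worst, abs(new / orig - 1))
    return worst

print(max_distortion(X, Y))      # small for these sizes
```

Note that computing Y costs one dense k-by-d matrix-vector product per point; the fast variants mentioned above aim to cut exactly this cost, either by sparsifying G or by using structured (e.g. Fourier-based) matrices.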

Here is an excellent talk summarizing many of the recent developments in this area:

Patterns in Coin Tosses

We discussed the following problem in class today:

Consider an infinite sequence of independent coin tosses of a fair coin and define the following two random variables:

Z1 = Number of coin tosses till the pattern HTT first appears.

Z2 = Number of coin tosses till the pattern HTH first appears.

Then, somewhat counterintuitively, E[Z1] and E[Z2] are not the same! It’s a common error to assume that since the probability of 3 given tosses matching either pattern is 1/8, the expected time of first occurrence is also the same. But the time of first occurrence also depends on whether different occurrences of the string can overlap. Here’s the talk by Peter Donnelly that I mentioned – it discusses this problem and a bunch of other interesting ones.
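You can also see the gap between E[Z1] and E[Z2] empirically. Here is a small Monte Carlo sketch (the trial count and seed are my choices):

```python
# Monte Carlo estimate of the expected first-occurrence times of HTT and HTH
# in fair coin tosses. Simulation parameters are illustrative.
import random

def first_occurrence(pattern, rng):
    """Number of tosses until `pattern` first appears."""
    window = ""
    count = 0
    while True:
        count += 1
        window = (window + rng.choice("HT"))[-len(pattern):]
        if window == pattern:
            return count

def estimate(pattern, trials=200_000, seed=1):
    rng = random.Random(seed)
    return sum(first_occurrence(pattern, rng) for _ in range(trials)) / trials

print(estimate("HTT"))   # close to 8
print(estimate("HTH"))   # close to 10
```

The estimates concentrate near 8 for HTT and near 10 for HTH: HTH can overlap with itself (a failed match ending in H is already progress toward the next attempt is *not* true here – rather, a completed HTH leaves a trailing H that restarts a partial match), and this self-overlap pushes its expected first-occurrence time up.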

As an exercise, you can try using the idea of conditional expectations to compute E[Z1] and E[Z2].
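One way to organize the conditional-expectation computation (this state-machine framing is my own packaging of the hint, not something we set up in class) is to track the longest prefix of the pattern matched so far: conditioning on the next toss gives one linear equation per state, and solving the small system yields the expected waiting time.

```python
# Sketch: expected first-occurrence time of a pattern via conditional
# expectations, one equation per "longest matched prefix" state.
import numpy as np

def next_state(pattern, s, toss):
    """Longest suffix of (matched prefix + toss) that is a prefix of pattern."""
    t = pattern[:s] + toss
    while t and not pattern.startswith(t):
        t = t[1:]
    return len(t)

def expected_time(pattern):
    L = len(pattern)
    # e[s] = 1 + 0.5*e[next(s,'H')] + 0.5*e[next(s,'T')], with e[L] = 0.
    A = np.zeros((L, L))
    b = np.ones(L)
    for s in range(L):
        A[s, s] += 1
        for toss in "HT":
            t = next_state(pattern, s, toss)
            if t < L:
                A[s, t] -= 0.5
    return np.linalg.solve(A, b)[0]   # expected time starting from no match

print(expected_time("HTT"), expected_time("HTH"))   # 8 and 10, up to rounding
```

Working the same equations by hand for the three states of each pattern reproduces these values, so the solver is only shortcutting the algebra in the suggested exercise.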