Expander walk sampling

In the mathematical discipline of graph theory, the expander walk sampling theorem intuitively states that sampling vertices in an expander graph by performing a relatively short random walk can simulate sampling the vertices independently from a uniform distribution.

The earliest version of this theorem is due to {{harvtxt|Ajtai|Komlós|Szemerédi|1987}}, and the more general version is typically attributed to {{harvtxt|Gillman|1998}}.

Statement

Let G=(V,E) be an n-vertex expander graph with positively weighted edges, and let A\subset V. Let P denote the stochastic matrix of the graph, and let \lambda_2 be the second largest eigenvalue of P. Let y_0, y_1, \ldots, y_{k-1} denote the vertices encountered in a (k-1)-step random walk on G starting at vertex y_0, and let \pi (A):= \lim_{k\rightarrow\infty} \frac{1}{k} \sum_{i = 0}^{k-1} \mathbf{1}_A(y_i), where

:\mathbf{1}_A(y)=\begin{cases} 1, & \text{if }y \in A \\ 0, & \text{otherwise.}\end{cases}

(It is well known{{Cite book|last=Doob|first=J.L.|title=Stochastic Processes|publisher=Wiley|year=1953|location=Theorem 6.1}} that for almost every trajectory y_0, y_1, \ldots the time average \frac{1}{k} \sum_{i=0}^{k-1} \mathbf{1}_A(y_i) converges to a limiting value \pi (A) as k \rightarrow \infty.)

The theorem states that for a weighted graph G=(V,E) and a random walk y_0, y_1, \ldots, y_{k-1}, where y_0 is chosen according to an initial distribution \mathbf{q}, for all \gamma > 0 we have the following bound:

:\Pr\left[\bigg| \frac{1}{k} \sum_{i=0}^{k-1} \mathbf{1}_A(y_i) - \pi(A)\bigg| \geq \gamma\right] \leq C e^{-\frac{1}{20} (\gamma^2 (1-\lambda_2) k)}.

where C is a constant depending on \mathbf{q}, G, and A.

The theorem gives a bound on the rate of convergence to \pi(A) with respect to the length of the random walk, hence giving a more efficient method of estimating \pi(A) than sampling the vertices of G independently.
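For intuition, here is a minimal Python sketch, not drawn from the cited sources: the graph (a 4-regular circulant, a simple stand-in for an expander), the set A, and the walk length are all arbitrary illustrative choices. Because the graph is regular, the stationary measure is uniform and \pi(A)=|A|/n, so the walk's empirical visit frequency can be compared against it directly.

<syntaxhighlight lang="python">
import random

# A 4-regular circulant graph on n vertices: v is adjacent to v +- 1 and v +- 7 (mod n).
# This is only a stand-in for an expander; any constant-degree expander would do.
n = 101
offsets = [1, -1, 7, -7]
A = set(range(n // 4))            # target set A (here about a quarter of V)

random.seed(0)
y = random.randrange(n)           # y_0 drawn uniformly (the initial distribution q)
k = 100_000                       # number of walk steps
visits = 0
for _ in range(k):
    visits += y in A              # count visits to A along y_0, ..., y_{k-1}
    y = (y + random.choice(offsets)) % n

print(f"walk estimate of pi(A): {visits / k:.4f}")
print(f"exact pi(A) = |A|/n:    {len(A) / n:.4f}")
</syntaxhighlight>

Increasing k tightens the estimate, in line with the exponentially small failure probability in the theorem.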

Proof

In order to prove the theorem, we provide a few definitions followed by three lemmas.

Let \it{w}_{xy} be the weight of the edge xy\in E(G) and let \it{w}_x=\sum_{y:xy\in E(G)}\it{w}_{xy}. Define \pi(x):=\it{w}_x/\sum_{y\in V} \it{w}_y. Let \frac{\mathbf{q}}{\sqrt\pi} be the vector with entries \frac{\mathbf{q}(x)}{\sqrt{\pi(x)}}, and let N_{\pi,\mathbf{q}}=\left\|\frac{\mathbf{q}}{\sqrt\pi}\right\|_{2}.

Let D=\text{diag}(1/\it{w}_i ) and M=(\it{w}_{ij}). Let P(r)=PE_r, where P is the stochastic matrix, E_r=\text{diag}(e^{r\mathbf{1}_A}), and r \ge 0. Then:

:P = \sqrt{D}S\sqrt{D^{-1}} \qquad \text{and} \qquad P(r) = \sqrt{DE_r^{-1}}S(r)\sqrt{E_rD^{-1}}

where S:=\sqrt{D}M\sqrt{D} \text{ and } S(r) := \sqrt{DE_r}M\sqrt{DE_r}. As S and S(r) are symmetric, they have real eigenvalues. Therefore, since the eigenvalues of S(r) and P(r) are equal, the eigenvalues of P(r) are real. Let \lambda(r) and \lambda_2(r) be the first and second largest eigenvalues of P(r), respectively.
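These relations are easy to check numerically. The following sketch, an illustration with arbitrary random weights rather than part of the original proof, verifies P=\sqrt{D}S\sqrt{D^{-1}} and confirms that the eigenvalues of P(r) coincide with those of the symmetric matrix S(r), hence are real:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 5
U = np.triu(rng.random((n, n)), 1)
M = U + U.T                                  # symmetric weight matrix (w_ij)
w = M.sum(axis=1)                            # weighted degrees w_i
D = np.diag(1.0 / w)
P = D @ M                                    # stochastic matrix of the walk

r = 0.3                                      # arbitrary tilt parameter
one_A = np.array([1, 1, 0, 0, 0])            # indicator of A = {0, 1}
E_r = np.diag(np.exp(r * one_A))
P_r = P @ E_r

sqrtD = np.sqrt(D)
S = sqrtD @ M @ sqrtD                        # symmetric
S_r = np.sqrt(D @ E_r) @ M @ np.sqrt(D @ E_r)

# P = sqrt(D) S sqrt(D)^{-1}: a similarity, so P and S share eigenvalues.
assert np.allclose(P, sqrtD @ S @ np.linalg.inv(sqrtD))

# Eigenvalues of P(r) equal those of the symmetric S(r), hence are real.
assert np.allclose(np.sort(np.linalg.eigvals(P_r).real),
                   np.sort(np.linalg.eigvalsh(S_r)))
print("largest two eigenvalues of P(r):", np.sort(np.linalg.eigvalsh(S_r))[-2:])
</syntaxhighlight>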

For convenience of notation, let t_k=\frac{1}{k} \sum_{i=0}^{k-1} \mathbf{1}_A(y_i), let \epsilon=\lambda-\lambda_2 and \epsilon_r=\lambda(r)-\lambda_2(r) (where \lambda=\lambda(0)=1 is the largest eigenvalue of P), and let \mathbf{1} be the all-ones vector.

Lemma 1

\Pr\left[t_k- \pi(A) \ge \gamma\right] \leq e^{-rk(\pi(A)+\gamma)+k\log\lambda(r)}(\mathbf{q}P(r)^k\mathbf{1})/\lambda(r)^k

Proof:

By Markov's inequality,

:\Pr\left[t_k \ge \pi(A) +\gamma\right] =\Pr\left[e^{rkt_k}\ge e^{rk(\pi(A)+\gamma)}\right]\leq e^{-rk(\pi(A)+\gamma)}E_\mathbf{q}e^{rkt_k}

where E_\mathbf{q} denotes expectation when y_0 is chosen according to the probability distribution \mathbf{q}. The expectation can be computed by summing over all possible trajectories y_0, y_1, \ldots, y_k:

:E_{\mathbf{q}}e^{rkt_k}=\sum_{y_0,y_1,\ldots,y_k}e^{rkt_k}\,\mathbf{q}(y_0)\prod_{i=1}^kp_{y_{i-1}y_i}=\mathbf{q}P(r)^k\mathbf{1}

Combining the two results proves the lemma.
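The trajectory-sum identity can be checked by brute force on a tiny chain. In the sketch below, with arbitrary illustrative values, visits are counted at steps 1 through k, the convention under which the matrix-product form \mathbf{q}P(r)^k\mathbf{1} is exact; the choice of endpoint convention affects only the constant C in the theorem.

<syntaxhighlight lang="python">
import itertools
import numpy as np

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])    # stochastic matrix of a 3-vertex path
q = np.array([1.0, 0.0, 0.0])        # the walk starts at vertex 0
one_A = np.array([1, 0, 0])          # A = {0}
r, k = 0.7, 4                        # arbitrary tilt and walk length

# Matrix-product form: q P(r)^k 1 with P(r) = P E_r.
E_r = np.diag(np.exp(r * one_A))
matrix_form = q @ np.linalg.matrix_power(P @ E_r, k) @ np.ones(3)

# Direct expectation: enumerate every trajectory y_0, y_1, ..., y_k.
total = 0.0
for y in itertools.product(range(3), repeat=k + 1):
    prob = q[y[0]] * np.prod([P[y[i - 1], y[i]] for i in range(1, k + 1)])
    total += prob * np.exp(r * sum(one_A[y[i]] for i in range(1, k + 1)))

assert np.isclose(matrix_form, total)
print(matrix_form, total)            # the two computations agree
</syntaxhighlight>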

Lemma 2

For 0\le r \le 1,

:(\mathbf{q}P(r)^k\mathbf{1})/\lambda(r)^k\le (1+r)N_{\pi,\mathbf{q}}

Proof:

Since the eigenvalues of P(r) and S(r) are equal,

:\begin{align}
(\mathbf{q}P(r)^k\mathbf{1})/\lambda(r)^k&= \left(\mathbf{q}\sqrt{DE_r^{-1}}S(r)^k\sqrt{E_rD^{-1}}\mathbf{1}\right)/\lambda(r)^k\\
&\le e^{r/2}\left\|\frac{\mathbf{q}}{\sqrt{\pi}}\right\|_2\|S(r)^k\|_2\|\sqrt{\pi}\|_2/\lambda(r)^k\\
&\le e^{r/2}N_{\pi,\mathbf{q}}\\
&\le (1+r)N_{\pi,\mathbf{q}}\qquad \square
\end{align}

(the last two steps use \|S(r)^k\|_2\le\lambda(r)^k, \|\sqrt{\pi}\|_2=1, and e^{r/2}\le 1+r for 0\le r\le 1).

Lemma 3

If r is a real number such that 0\le e^r-1\le \epsilon/4,

:\log\lambda(r)\le r\pi(A)+5r^2/\epsilon

Proof summary:

We Taylor expand \log \lambda(r) about the point r=z to get:

:\log\lambda(r)= \log\lambda(z)+m_z(r-z)+(r-z)^2\int_0^1 (1-t)V_{z+(r-z)t}dt

where m_x \text{ and } V_x are the first and second derivatives of \log \lambda(r) at r=x. We show that m_0=\lim_{k \to \infty}t_k=\pi(A). We then prove (i) \epsilon_r\ge 3\epsilon/4 by matrix manipulation, and (ii) V_r\le 10/\epsilon using (i) together with Cauchy's estimate from complex analysis.

The results combine to show that

:\log\lambda(r)= \log\lambda(0)+m_0r+r^2\int_0^1 (1-t)V_{rt}\,dt \le r\pi(A)+5r^2/\epsilon

:A line-by-line proof can be found in Gillman (1998)[https://epubs.siam.org/doi/pdf/10.1137/S0097539794268765#page23]

Proof of theorem

Combining Lemma 1 with Lemmas 2 and 3, we get that

:\Pr[t_k-\pi(A)\ge \gamma]\le(1+r)N_{\pi,\mathbf{q}}e^{-k(r\gamma-5r^2/\epsilon)}

Interpreting the exponent on the right-hand side of the inequality as a quadratic in r and choosing r to minimise the bound, we see that

:\Pr[t_k-\pi(A)\ge \gamma]\le(1+\gamma\epsilon/10)N_{\pi,\mathbf{q}}e^{-k\gamma^2\epsilon/20}
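Explicitly, the exponent r\gamma-5r^2/\epsilon is maximised, and the bound therefore minimised, at r=\gamma\epsilon/10:

:\frac{d}{dr}\left(r\gamma-\frac{5r^2}{\epsilon}\right)=\gamma-\frac{10r}{\epsilon}=0 \implies r=\frac{\gamma\epsilon}{10},\qquad r\gamma-\frac{5r^2}{\epsilon}=\frac{\gamma^2\epsilon}{10}-\frac{\gamma^2\epsilon}{20}=\frac{\gamma^2\epsilon}{20}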

A similar bound

:\Pr[t_k-\pi(A)\le - \gamma]\le (1+\gamma\epsilon/10)N_{\pi,\mathbf{q}}e^{-k\gamma^2\epsilon/20}

holds, hence setting C=2(1+\gamma\epsilon/10)N_{\pi,\mathbf{q}} gives the desired result.

Uses

This theorem is useful in randomness reductions in the study of derandomization,{{cite arXiv | eprint=2501.07752 | last1=Anand | first1=Emile | title=Towards the Pseudorandomness of Expander Random Walks for Read-Once ACC0 circuits | date=2025 | class=cs.CC }} showing that expander random walks are good pseudorandom generators against various classes of test functions. Sampling from an expander walk is an example of a randomness-efficient sampler: drawing k independent uniform vertices requires k \log n random bits, whereas sampling via a walk on an infinite family of constant-degree expanders costs only \log n + O(k) bits. Such families exist and are efficiently constructible, e.g. the Ramanujan graphs of Lubotzky–Phillips–Sarnak.
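As a back-of-the-envelope comparison, with arbitrary example numbers, the two randomness budgets can be computed directly; here d is the degree of the expander, so each walk step consumes only \log_2 d=O(1) random bits:

<syntaxhighlight lang="python">
import math

# Arbitrary example: n-vertex graph, d-regular expander, k samples.
n, d, k = 2**20, 8, 100

independent_bits = k * math.log2(n)          # k independent uniform vertices
walk_bits = math.log2(n) + k * math.log2(d)  # one uniform start, then k steps

print(f"independent sampling: {independent_bits:.0f} bits")  # 2000 bits
print(f"expander walk:        {walk_bits:.0f} bits")         # 320 bits
</syntaxhighlight>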

References

{{Reflist}}

{{refbegin}}

  • {{Cite conference|first1=M.|conference=STOC '87|last1=Ajtai|first2=J.|last2=Komlós|first3=E.|last3=Szemerédi|title=Deterministic simulation in LOGSPACE|book-title=Proceedings of the nineteenth annual ACM symposium on Theory of computing|pages=132–140|year=1987|work=ACM|doi=10.1145/28395.28410|isbn=0897912217|doi-access=free}}
  • {{cite journal | first=D. | last=Gillman | title=A Chernoff Bound for Random Walks on Expander Graphs | journal=SIAM Journal on Computing | volume=27 | issue=4 | pages=1203–1220 | year=1998 | publisher=Society for Industrial and Applied Mathematics | doi=10.1137/S0097539794268765 | s2cid=26319459}}

  • {{Citation|last=Doob|first=J.L.|title=Stochastic Processes|at=Theorem 6.1|year=1953|publisher=Wiley}}
  • {{cite arXiv| eprint=2501.07752 | last1=Anand | first1=Emile | title=Towards the Pseudorandomness of Expander Random Walks for Read-Once ACC0 circuits | date=2025 | class=cs.CC }}

{{refend}}

Category:Sampling (statistics)