\documentclass[11pt]{article}

\usepackage{latexsym}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{hyperref}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{complexity}
\usepackage{graphicx}
\usepackage{palatino} %my favourite font
\usepackage{makeidx}
\makeindex

\newcommand{\handout}[5]{
  \noindent
  \begin{center}
    \framebox[\textwidth]{
      \vbox{
        \hbox to \textwidth { {\bf #2}\hfill {\bf Computational Complexity }
       }
        \vspace{4mm}
        \hbox to \textwidth { {\Large \hfill #5  \hfill} }
        \vspace{2mm}
        \hbox to \textwidth { {\em #3 \hfill #4} }
      }
    }
  \end{center}
  \vspace*{4mm}
}

\newcommand{\lecture}[4]{\handout{#1}{#2}{Instructor: #3}{Scribe:
    #4}{#1}}

\newtheorem{theorem}{Theorem}
\newtheorem*{theorem*}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{observation}[theorem]{Observation}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{subclaim}[theorem]{Subclaim}

% my custom commands
\newcommand{\inparen}[1]{\left(#1\right)}             %\inparen{x+y}  is (x+y)
\newcommand{\inbrace}[1]{\left\{#1\right\}}           %\inbrace{x+y}  is {x+y}
\newcommand{\insquar}[1]{\left[#1\right]}             %\insquar{x+y}  is [x+y]
\newcommand{\inangle}[1]{\left\langle#1\right\rangle} %\inangle{A}    is <A>
\newcommand{\abs}[1]{\left|#1\right|}                 %\abs{x}        is |x|
\newcommand{\norm}[1]{\left\Vert#1\right\Vert}        %\norm{x}       is ||x||
\newcommand{\union}{\cup}
\newcommand{\Union}{\bigcup}
\newcommand{\intersection}{\cap}
\newcommand{\Intersection}{\bigcap}
\newcommand{\super}[2]{#1^{\inparen{#2}}}             %\super{G}{i-1} is G^{(i-1)}
\newcommand{\setdef}[2]{\inbrace{{#1}\ : \ {#2}}}
\newcommand{\inv}[1]{#1^{-1}}
\newcommand{\inrpdt}[2]{\left\langle{#1},{#2}\right\rangle}%\inrpdt{x}{y} is <x,y>.
\newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\iso}{\cong}
\newcommand{\bea}[1]{\begin{eqnarray*} #1 \end{eqnarray*}}
\newcommand{\legendre}[2]{\inparen{\frac{#1}{#2}}}
\newcommand{\zo}{\inbrace{0,1}}
\newcommand{\Rot}{\text{Rot}}
\newcommand{\tensor}{\otimes}


% Commands specific to this file
% TODO: Find the right way to typeset group index
\DeclareMathOperator{\Sym}{Sym}
\newcommand{\gpidx}[2]{\insquar{#1 : #2}}             %\gpidx{H}{K}   is [H : K]
\newcommand{\gpigs}[2]{\gpidx{\super{G}{#1}}{\super{G}{#2}}} %Group index of g super ...
\newcommand{\llhd}{\!\!\lhd\!\!\lhd}
\newcommand{\roundoff}[1]{\left\lfloor #1 \right\rceil}


% \newcommand{\ceil}[1]{\lceil #1 \rceil}
\newcommand{\floor}[1]{\lfloor #1 \rfloor}
\newcommand{\F}{\mathbb{F}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\Z}{\mathbb{Z}}
\renewcommand{\R}{\mathbb{R}}
\renewcommand{\C}{\mathbb{C}}

%for expected value
\newcommand{\Ex}{{\bf E}}

%for algorithms
\renewcommand{\algorithmicrequire}{\textbf{Input:}}

%Dirac's notation
\newcommand{\bra}[1]{\left\langle#1\right|}
\newcommand{\ket}[1]{\left|#1\right\rangle}

%ZigZag Product
\newcommand{\zigzag}{\textcircled{z}}

\begin{document}

\lecture{Expanders: Vertex and Spectral Expansion}{CS640}{Manindra Agrawal}{Ramprasad Saptharishi}

Last class we saw three very powerful and useful combinatorial
objects: extractors, expanders and pseudo-random generators. With the
promise that we shall get back to extractors and PRGs in a while, let
us take a closer look at expander graphs.

\section{Two Notions of Expansion}

\subsection{Vertex Expansion}

The definition of expansion we saw last class is called {\em vertex
expansion}.\\


\index{expanders!vertex expansion}
A $d$-regular graph $G$ on $n$ vertices is called an
$(n,d,\epsilon)$-vertex expander if for every subset $S$ of at most
$n/2$ vertices, the number of edges going out of $S$ is at least
$\epsilon d|S|.$ That is, $|E(S,V\setminus S)|\geq \epsilon d|S|.$\\
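For a feel of the parameter $\epsilon$, consider two standard extremes,
both easily verified from the definition. In the complete graph $K_n$,
where $d = n-1$, any set $S$ with $|S| = k \leq n/2$ has
$|E(S, V\setminus S)| = k(n-k) \geq \frac{(n-1)k}{2}$, so $K_n$ is an
$(n, n-1, 1/2)$-vertex expander. In the cycle $C_n$, where $d = 2$, a
contiguous arc $S$ of $k$ vertices has only $|E(S, V\setminus S)| = 2$
outgoing edges, forcing $\epsilon \leq 1/k$; cycles are very poor
expanders.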

Another definition is what is known as {\em spectral expansion}.

\subsection{Spectral Expansion}

Let $G$ be an $n$-vertex $d$-regular graph. Let $A$ be the normalized
adjacency matrix: $A_{ij} = \frac{1}{d}$ if $(i,j)$ is an edge and $0$
otherwise; that is, the usual adjacency matrix divided by $d$.

The reason we do this is that since $G$ is $d$-regular, each row has
exactly $d$ non-zero entries of value $\frac{1}{d}$, and so does each
column. Therefore, each row and each column sums to $1.$ Further, $A$
is a symmetric matrix.

There is a theorem in linear algebra that states the following:
\begin{theorem}\index{linear algebra!spectral theorem}
Let $A$ be an $n\times n$ symmetric matrix with real entries. Then
there exists a basis $\inbrace{v_1, v_2, \cdots, v_n}$ of $\R^n$ such
that each $v_i$ is an eigenvector of $A$ with a real eigenvalue
$\lambda_i$. Further, the basis is orthonormal, that is, $v_i\cdot v_j
= 0$ if $i\neq j$ and $v_i\cdot v_i = 1.$
\end{theorem}

Therefore, our normalized adjacency matrix has an orthonormal basis of
eigenvectors. Without loss of generality, we can assume that the
eigenvalues are in non-increasing order:
$\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n.$
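For instance, for the triangle $K_3$ (so $d = 2$),
$$
A = \frac{1}{2}\begin{pmatrix} 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{pmatrix},
$$
and since $A = \frac{1}{2}(J - I)$, where $J$ is the all-ones matrix
with eigenvalues $\inbrace{3,0,0}$, the eigenvalues of $A$ in this
order are $\lambda_1 = 1$ and $\lambda_2 = \lambda_3 = -\frac{1}{2}.$
We shall reuse this small example below.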

\begin{claim}
$\lambda_1 = 1.$
\end{claim}
\begin{proof}
Firstly, we need to show that $1$ is in fact an eigenvalue. That is
pretty straightforward to see: consider the vector of all $1$s, often
denoted by $\bar{1}.$

Note that $A\bar{1} = \bar{1}$ since each row of $A$ adds up to $1.$
Therefore $\bar{1}$ is an eigenvector corresponding to the eigenvalue
$1.$ We now need to show that every eigenvalue is at most $1.$ \\

Let $\lambda$ be any eigenvalue and $x$ an eigenvector corresponding
to $\lambda.$ Look at the product $Ax.$ Without loss of generality
(negating $x$ if necessary), assume that the first coordinate of $x$
is the largest entry in $x$ and is positive. Then we have
\bea{
Ax & = & \lambda x\\
\implies (Ax)_1 & = & \lambda x_1
}
Now note that the LHS is merely the average of the $d$ coordinates of
$x$ corresponding to the neighbours of vertex $1$, and the average of
$d$ coordinates can never be larger than the maximum value. Hence
$\lambda \leq 1.$ In fact, running the same argument on the coordinate
of largest absolute value shows that $|\lambda| \leq 1.$

Therefore all eigenvalues lie between $-1$ and $1$, and $1$ is the
largest eigenvalue.
\end{proof}

\begin{definition}\index{expanders!$\lambda_2(G)$}
The second largest eigenvalue of a graph $G$, denoted $\lambda_2(G)$,
is the largest magnitude of an eigenvalue of the normalized adjacency
matrix $A$ other than $\lambda_1$. Since the eigenvalues are ordered,
$$
\lambda_2(G) = \max\inbrace{|\lambda_2|,|\lambda_n|}
$$
\end{definition}
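For the triangle $K_3$ above, $\lambda_2(K_3) =
\max\inbrace{\abs{-\tfrac{1}{2}}, \abs{-\tfrac{1}{2}}} =
\tfrac{1}{2}.$ Taking magnitudes matters: for any $d$-regular
bipartite graph, the vector that is $+1$ on one side and $-1$ on the
other is an eigenvector with eigenvalue $-1$, so $\lambda_2(G) = 1$
no matter how well-connected the graph is.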

With this, we can define our spectral expansion. 
\begin{definition}\index{expanders!spectral expansion}
An $n$-vertex $d$-regular graph $G$ is called an $(n,d,\lambda)$-spectral
expander if $\lambda = \lambda_2(G).$
\end{definition}

The use of this notion of expansion is that it allows us to apply
linear algebra to analyse such graphs and their properties. We shall
soon see that the two notions of expansion are essentially
equivalent. To ease the notation used there, we shall use {\em Dirac's
notation} for vectors; the argument is more intuitive in that setting.

\section{Dirac's Notation}

Dirac's notation for representing vectors, inner products and outer
products is widely used in physics, and especially in quantum
mechanics, where equations start looking messy with ordinary vectors,
transposes, etc. Dirac's notation cleans them up and makes them a lot
easier to understand.

\begin{itemize}
\item A column vector $v$ is written as $\ket{v}$, pronounced ``ket
$v$.'' The corresponding row vector $v^T$ is written as $\bra{v}$ and
pronounced ``bra $v$.'' \index{Dirac's notation!bra,ket vectors}
\item In this setting, the usual inner product $u^Tv$ translates to
$\bra{u}\ket{v}$, which is ``bra-ket (bracket).''\index{Dirac's
notation!inner and outer products}
\item The outer product $vv^T$, which gives an $n\times n$ matrix, is
written as $\ket{v}\bra{v}.$ This is used less often than the inner
product but is very useful. For example, the adjacency matrix of a
graph can be expressed as
$$
\sum_{(u,v)\in \text{Edges}}\ket{u}\bra{v}
$$
where $\ket{v}$ is the characteristic vector of $v$ that has $1$ in
the $v$-th position and $0$ everywhere else.
\end{itemize}
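As a quick check on how these pieces compose: for characteristic
vectors of single vertices, $\bra{i}\ket{j}$ is $1$ if $i = j$ and $0$
otherwise, and hence an outer product applied to a ket gives a scaled
ket: $\inparen{\ket{u}\bra{v}}\ket{w} = \bra{v}\ket{w}\,\ket{u}.$
These two rules are all we shall need in the calculations below.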

This notation really cleans up a whole mess of arrows, transposes,
dot products, etc. We shall see the power of this notation by using it
to prove the connection between vertex and spectral expansion.

\section{Vertex Expansion $\Leftrightarrow$ Spectral Expansion}

The following theorem shows that the two notions of expansion are
essentially the same.

\begin{theorem}
Vertex expansion $\Leftrightarrow$ Spectral expansion:
\begin{itemize}
\item If $G$ is an $(n,d,\epsilon)$ vertex expander, then $G$ is also
an $(n,d,\lambda)$ spectral expander where $\lambda \leq 1 - (\epsilon^2/4).$
\item If $G$ is an $(n,d,\lambda)$ spectral expander, then $G$ is also
an $(n,d,\epsilon)$ vertex expander where $\epsilon \geq (1 - \lambda)/2.$
\end{itemize}
Thus if $\epsilon$ is large then $\lambda$ is small, and vice versa.
\end{theorem}
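To get a feel for the parameters: an $(n,d,1/2)$ spectral expander is
an $(n,d,\epsilon)$ vertex expander with $\epsilon \geq 1/4$, while an
$(n,d,1/2)$ vertex expander is an $(n,d,\lambda)$ spectral expander
with $\lambda \leq 1 - \frac{1}{16} = \frac{15}{16}.$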

We shall prove just the second part of the theorem. 
\begin{proof}
Let $G$ be an $(n,d,\lambda)$ spectral expander. Let $S$ be any subset
of vertices such that $|S| \leq n/2.$ We want to estimate the number
of edges going out of $S.$ Let $T = V\setminus S$, the set of vertices
not in $S.$ We need to estimate $|E(S,T)|.$

The first step is to write $|E(S,T)|$ as a product of vectors and
matrices. Let $u_S$ be the characteristic vector of $S$, the vector
which has a $1$ on exactly those coordinates that belong to $S.$ In
Dirac's notation, $\ket{u_S} = \sum_{i\in S}\ket{i}$, and similarly
for $\ket{u_T}.$ Now look at the term $u_S^TAu_T$:
\bea{
u_S^TAu_T  = \bra{u_S}A\ket{u_T}& = & \inparen{\sum_{i\in S}\bra{i}}A\inparen{\sum_{j\in
T}\ket{j}}\\
& = & \frac{1}{d}\inparen{\sum_{i\in S}\bra{i}}\inparen{\sum_{(u,v)\in
E}\ket{u}\bra{v}}\inparen{\sum_{j\in T}\ket{j}}\\
& = & \frac{1}{d}\sum_{(i,j)\in
E(S,T)}\bra{i}\ket{i}\bra{j}\ket{j} \qquad\text{(all other terms
become $0$)}\\
& = & \frac{1}{d}|E(S,T)|
}
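As a sanity check on the triangle $K_3$, with $S = \inbrace{1}$ and
$T = \inbrace{2,3}$: $u_S^TAu_T = A_{12} + A_{13} = \frac{1}{2} +
\frac{1}{2} = 1$, and indeed $\frac{1}{d}|E(S,T)| = \frac{2}{2} = 1.$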
Now suppose $\inbrace{v_1, v_2, \cdots, v_n}$ is the orthonormal
eigenbasis of $A$, and write
\bea{
\ket{u_S} & = &\sum_{i=1}^n \alpha_i \ket{v_i}\\
\ket{u_T} & = &\sum_{i=1}^n \beta_i \ket{v_i}
}
We can therefore calculate $\bra{u_S}A\ket{u_T}$ as
\bea{
\bra{u_S}A\ket{u_T} & = & \inparen{\sum_{i=1}^n\alpha_i\bra{v_i}}\inparen{A\inparen{\sum_{j=1}^n\beta_j\ket{v_j}}}\\
& =
& \inparen{\sum_{i=1}^n\alpha_i\bra{v_i}}\inparen{\sum_{j=1}^n\lambda_j\beta_j\ket{v_j}}\\
& = & \sum_{i=1}^n\alpha_i\beta_i\lambda_i \qquad \text{(since $\bra{v_i}\ket{v_j} = 0$ for $i\neq
j$ and $\bra{v_i}\ket{v_i} = 1$)}
}

We know that $\lambda_1 = 1$ and that $v_1 = \frac{1}{\sqrt{n}}\bar{1}
= \frac{1}{\sqrt{n}}\sum_{i=1}^n\ket{i}.$ Computing
$\bra{u_S}\ket{v_1}$ in two ways,
\bea{
\bra{u_S}\ket{v_1} & = & \sum_{i=1}^n\alpha_i\bra{v_i}\ket{v_1} \;=\; \alpha_1\\
\bra{u_S}\ket{v_1} & = & \frac{1}{\sqrt{n}}\sum_{i=1}^n\bra{u_S}\ket{i} \;=\; \frac{|S|}{\sqrt{n}}
}
so $\alpha_1 = \frac{|S|}{\sqrt{n}}$, and similarly $\beta_1 = \frac{|T|}{\sqrt{n}}.$

Therefore, since $\alpha_1\beta_1\lambda_1 = \frac{|S||T|}{n}$,
\bea{
\bra{u_S}A\ket{u_T} - \frac{|S||T|}{n} & =
&\sum_{i=2}^n \alpha_i\beta_i\lambda_i\\
\implies \abs{\bra{u_S}A\ket{u_T} - \frac{|S||T|}{n}}
& \leq &\abs{\sum_{i=2}^n\alpha_i\beta_i\lambda_i}\\
  & \leq & \sum_{i=2}^n|\alpha_i||\beta_i||\lambda_i|\\
& \leq & \lambda \sum_{i=2}^n|\alpha_i||\beta_i|\\
& \leq & \lambda \sqrt{\sum_{i=2}^n\alpha_i^2}\sqrt{\sum_{i=2}^n\beta_i^2} \qquad\text{(Cauchy-Schwarz)}
}

Now let us go back to the definition of $\ket{u_S}.$ We know that
$\norm{u_S}^2 = |S| = \bra{u_S}\ket{u_S}.$ Evaluating this as a linear
combination of eigenvectors, we have
\bea{
|S| = \bra{u_S}\ket{u_S} & =
&\inparen{\sum_{i=1}^n\alpha_i\bra{v_i}}\inparen{\sum_{i=1}^n\alpha_i\ket{v_i}} =  \sum_{i=1}^n\alpha_i^2\\
\implies |S| - \alpha_1^2 &= &\sum_{i=2}^n\alpha_i^2\\
\implies \sum_{i=2}^n\alpha_i^2 & = & |S| - \frac{|S|^2}{n} =
|S|\inparen{\frac{n - |S|}{n}} = \frac{|S||T|}{n}\\
\text{Similarly }\sum_{i=2}^n\beta_i^2 & = & \frac{|S||T|}{n}
}

Hence the above inequality becomes
\bea{
\abs{\bra{u_S}A\ket{u_T} - \frac{|S||T|}{n}} &\leq& \lambda\frac{|S||T|}{n}\\
\implies \abs{|E(S,T)| - \frac{d|S||T|}{n}} & \leq &
  d\lambda\frac{|S||T|}{n} \qquad\text{(multiplying throughout by $d$)}\\
\implies |E(S,T)| & \geq & \frac{d|S||T|}{n}\inparen{1 - \lambda}
}

Dividing throughout by $d|S|$, the expansion of the set $S$ is
\bea{
\frac{|E(S,T)|}{d|S|} & \geq & \frac{|T|}{n}\inparen{1 -\lambda}\\
& \geq & \frac{1}{2}\inparen{1 - \lambda} \qquad\text{(since $|S|\leq n/2$ and
therefore $|T|\geq n/2$)}
}

Therefore, any graph with spectral expansion $\lambda$ has vertex
expansion at least $\frac{1}{2}\inparen{1 - \lambda}.$
\end{proof}
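Continuing the running example, the triangle has $\lambda_2(K_3) =
\frac{1}{2}$, so the theorem guarantees vertex expansion $\epsilon
\geq \frac{1}{4}$; and indeed, for $S = \inbrace{1}$ we computed
$|E(S,T)| = 2 = d|S|$, an expansion of $1 \geq \frac{1}{4}.$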

In essence, both notions of expansion are equivalent. It is easier to
work with the spectral definition since it allows us to use a lot of
linear algebra techniques in the analysis.
\end{document}