## Eigenfaces

In this post we’ll talk about the application of principal component analysis (PCA) to face recognition.

## Eigenvectors as directions of variation

Given a $d$-dimensional dataset $A$ with $M$ samples, each sample a $\sqrt{d} \times \sqrt{d}$ face photo flattened into a row vector, we would like to find the unit vectors in $R^d$ along which the dataset varies the most around its mean $\mu$.

To simplify the matter let’s assume that the dataset has been standardized as $x_i \leftarrow x_i - \mu$.

Suppose we have such a unit vector $u$. The projection of a sample $x_i$ onto $u$ is $(x_i \cdot u)\, u$, so the coefficient is $x_i \cdot u$.
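As a minimal sketch (with a made-up two-dimensional sample rather than a face photo), the coefficient and the projection look like this:

```python
import numpy as np

# Made-up 2-D sample and a unit direction u.
x = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

coeff = x @ u            # the coefficient x . u
projection = coeff * u   # the projection (x . u) u
# coeff is 3.0 and projection is [3.0, 0.0]
```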

The variance along $u$:

$$
\mathrm{Var}(u) = \sum_{i=1}^{M} (x_i \cdot u)^2 = (Au)^T (Au) = u^T \underbrace{A^T A}_{\Theta} u = u^T \Theta u
$$

where $\Theta$ is the covariance matrix of the dataset (we drop the constant $1/M$ factor, which doesn’t change the eigenvectors).

To maximize $Var(u)$, $u$ needs to be the eigenvector that corresponds to the largest eigenvalue of $\Theta$.
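As a quick numerical check (with a random stand-in dataset rather than real photos; `eigh` is used because $\Theta$ is symmetric), the variance along the top eigenvector equals the largest eigenvalue:

```python
import numpy as np

# Random stand-in dataset: M=100 samples, d=16 dimensions.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 16))
A -= A.mean(axis=0)        # standardize: x_i <- x_i - mu

Theta = A.T @ A            # covariance matrix (constant factor dropped)

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(Theta)
u = eigvecs[:, -1]         # unit eigenvector with the largest eigenvalue

var_u = u @ Theta @ u      # variance along u, equals the top eigenvalue
```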

## Dimensionality trick

With large images, $d$ is going to be large, which poses a numerical difficulty when solving for the eigenvectors. There is a neat trick to overcome this.

We have $\underbrace{\Theta}_{d \times d} = A^T \underbrace{A}_{M \times d}$ and want to find the eigenvectors of $\Theta$. Considering that $d \gg M$, we find the eigenvectors of the much smaller $M \times M$ matrix $AA^T$ first:

$$
A A^T v = \lambda v \implies A^T A (A^T v) = \lambda (A^T v)
$$

Thus we take an eigenvector $v$ of $AA^T$, transform it by $A^T$, and $A^T v$ is an eigenvector of $A^TA$ with the same eigenvalue (normalize it to unit length).
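A sketch of the trick with random stand-in data (the sizes $M = 20$, $d = 400$ are made up): we solve the small $M \times M$ eigenproblem, then map each eigenvector back with $A^T$.

```python
import numpy as np

# Random stand-in dataset with d >> M.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 400))
A -= A.mean(axis=0)

# Small M x M problem instead of the d x d one.
S = A @ A.T                      # M x M
vals, V = np.linalg.eigh(S)      # eigenvectors v of A A^T

# Map each v back: u = A^T v is an eigenvector of A^T A = Theta.
U = A.T @ V                      # d x M, columns are (unnormalized) eigenvectors
U /= np.linalg.norm(U, axis=0)   # renormalize to unit length

Theta = A.T @ A
u, lam = U[:, -1], vals[-1]      # top eigenvector and eigenvalue
```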

## Eigenfaces

The eigenvectors thus obtained have $d$ components each, so they can be reshaped back into $\sqrt{d} \times \sqrt{d}$ images; viewed this way they look like faces themselves, hence the name *eigenfaces*.
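Since an eigenvector is just $d$ pixel values, folding it back into image shape is a one-liner; a tiny sketch assuming hypothetical $64 \times 64$ photos:

```python
import numpy as np

# Hypothetical photo size: 64 x 64, so d = 4096.
side = 64
u = np.random.default_rng(0).normal(size=side * side)
u /= np.linalg.norm(u)  # a stand-in unit eigenvector

# Reshape the d pixel values to view the eigenvector as a photo.
eigenface = u.reshape(side, side)
```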

## Face reconstruction from eigenfaces

To reconstruct the face photos (approximately), do this:

$$
\hat{A} = (A U) U^T
$$

where each column in $U$ is an eigenface, $AU$ gives you the coefficients, and adding $\mu$ back to each row of $\hat{A}$ yields the photos.
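A sketch of the reconstruction with random stand-in data (sizes and the choice of keeping $N = 10$ eigenfaces are made up): keeping more eigenfaces improves the approximation, and keeping all $M$ makes it exact.

```python
import numpy as np

# Random stand-in dataset and eigenfaces, as in the previous section.
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 400))
A -= A.mean(axis=0)

vals, V = np.linalg.eigh(A @ A.T)
U = A.T @ V
U /= np.linalg.norm(U, axis=0)
N = 10
U = U[:, -N:]           # keep the N strongest eigenfaces

coeffs = A @ U          # M x N projection coefficients
A_hat = coeffs @ U.T    # approximate reconstruction (add mu back for photos)

# Relative reconstruction error: nonzero since N < M, but below 1.
err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```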

## Face space

But we don’t have to reconstruct the faces in order to analyze them: $AU$ gives us a new dataset with its dimensionality reduced from $R^d$ to $R^N$, where $N$ is the number of eigenfaces kept; the latter space is called the face space.

Given a new sample $x$, a simplified face recognition procedure would be:

• Project into the face space: $x \leftarrow (x - \mu) U$.
• Find the most similar row for $x$ in $AU$, the reduced training data (one nearest neighbor).
• That’s it!
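The steps above can be sketched end to end with random stand-in data (the `recognize` helper and all sizes are hypothetical): a noisy copy of a training face should match that same face.

```python
import numpy as np

# Random stand-in training set and eigenface basis.
rng = np.random.default_rng(3)
M, d, N = 20, 400, 10
A = rng.normal(size=(M, d))
mu = A.mean(axis=0)
A -= mu

vals, V = np.linalg.eigh(A @ A.T)
U = A.T @ V
U /= np.linalg.norm(U, axis=0)
U = U[:, -N:]               # keep the N strongest eigenfaces

train = A @ U               # reduced training data in the face space (M x N)

def recognize(x):
    """Return the index of the nearest training face for a new sample x."""
    w = (x - mu) @ U                          # project into the face space
    dists = np.linalg.norm(train - w, axis=1)
    return int(np.argmin(dists))              # one nearest neighbor

# A slightly noisy copy of training face 7 (mean added back).
query = (A[7] + mu) + 0.01 * rng.normal(size=d)
```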