## Install opencv3 with CUDA support on a mac

Install from source (recommended):

```shell
sudo xcode-select --install   # if not done already
brew tap homebrew/science
brew install cmake pkg-config jpeg libpng libtiff openexr eigen tbb
cd ~/CppProjects/
git clone --depth 1 https://github.com/opencv/opencv
git clone --depth 1 https://github.com/opencv/opencv_contrib

# anaconda only ships libpython3.5m.dylib; give cmake the name it expects
cd /Users/kaiyin/anaconda3/envs/tensorflow/lib/
ln -s libpython3.5m.dylib libpython3.5.dylib
```

```shell
# install for python 3.5
# tensorflow is an anaconda python3.5 environment on my machine created for tensorflow
export ENV_TENSORFLOW=/Users/kaiyin/anaconda3/envs/tensorflow
export PREFIX=/opt/local
export PY_DYLIB="$ENV_TENSORFLOW/lib/libpython3.5.dylib"
export OPENCV_CONTRIB=~/CppProjects/opencv_contrib/modules
export PY_INCLUDE="$ENV_TENSORFLOW/include/python3.5m"
export PY_BINARY="$ENV_TENSORFLOW/bin/python3.5"

# configure from a separate build directory (the trailing .. points cmake at the source)
cd ~/CppProjects/opencv
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=$PREFIX \
      -D OPENCV_EXTRA_MODULES_PATH="$OPENCV_CONTRIB" \
      -D PYTHON3_LIBRARY="$PY_DYLIB" \
      -D PYTHON3_INCLUDE_DIR="$PY_INCLUDE" \
      -D PYTHON3_EXECUTABLE="$PY_BINARY" \
      -D BUILD_opencv_python2=OFF \
      -D BUILD_opencv_python3=ON \
      -D INSTALL_PYTHON_EXAMPLES=ON \
      -D INSTALL_C_EXAMPLES=OFF \
      -D BUILD_EXAMPLES=ON ..

make -j8   # use 8 jobs for compiling
sudo make install
cp $PREFIX/lib/python3.5/site-packages/cv2.cpython-35m-darwin.so "$ENV_TENSORFLOW/lib/python3.5/site-packages/"
```

```shell
# install for python 2.7
# tf27 is an anaconda python2.7 environment on my machine created for tensorflow
export ENV_TENSORFLOW=/Users/kaiyin/anaconda3/envs/tf27
export PREFIX=/opt/local
export PY_DYLIB="$ENV_TENSORFLOW/lib/libpython2.7.dylib"
export OPENCV_CONTRIB=~/CppProjects/opencv_contrib/modules
export PY_INCLUDE="$ENV_TENSORFLOW/include/python2.7"
export PY_BINARY="$ENV_TENSORFLOW/bin/python2.7"

# start from a clean build directory
cd ~/CppProjects/opencv
rm -rf build && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=$PREFIX \
      -D OPENCV_EXTRA_MODULES_PATH="$OPENCV_CONTRIB" \
      -D PYTHON2_LIBRARY="$PY_DYLIB" \
      -D PYTHON2_INCLUDE_DIR="$PY_INCLUDE" \
      -D PYTHON2_EXECUTABLE="$PY_BINARY" \
      -D BUILD_opencv_python2=ON \
      -D BUILD_opencv_python3=OFF \
      -D INSTALL_PYTHON_EXAMPLES=ON \
      -D INSTALL_C_EXAMPLES=OFF \
      -D BUILD_EXAMPLES=ON ..

make -j8   # use 8 jobs for compiling
sudo make install
cp $PREFIX/lib/python2.7/site-packages/cv2.so "$ENV_TENSORFLOW/lib/python2.7/site-packages/"
```

Verify your installation in python 2.7:

```
# source activate tf27
(tf27) kaiyin@kaiyins-mbp 21:11:12 | /opt/local/lib/python3.5/site-packages =>
ipython
Python 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:05:08)

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

IPython profile: kaiyin

In [1]: import cv2

In [2]: cv2.__version__
Out[2]: '3.2.0-dev'
```

Verify your installation in python 3.5:

```
# source activate tensorflow
(tensorflow) kaiyin@kaiyins-mbp 21:13:13 | /opt/local/lib/python3.5/site-packages =>
ipython
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:52:12)

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

IPython profile: kaiyin

In [1]: import cv2; cv2.__version__
Out[1]: '3.2.0-dev'
```

## Examples of linear convolutional filters

The sharpening filter perhaps needs a bit of explanation. Suppose the pixel under the center of the filter has value $v = x + d$, while all the pixels around it have value $v_a = x$. For the amplification factor of $17/9$ below to hold, the kernel must have center weight $17/9$ and neighbor weights $-1/9$ (so the weights sum to $1$); then after filtering:

$$v' = \frac{17(x+d) - 8x}{9} = x + \frac{17}{9}d$$

Obviously, when the difference between the pixel and its environment is zero, the filter will not have any effect, but when there is a difference, it’s amplified by a factor of $17/9$. Therefore this is called a sharpening filter.
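This amplification can be checked numerically. A minimal sketch in plain NumPy, using the kernel weights implied by the $17/9$ factor (center $17/9$, neighbors $-1/9$):

```python
import numpy as np

# sharpening kernel implied by the 17/9 amplification factor
kernel = np.array([[-1, -1, -1],
                   [-1, 17, -1],
                   [-1, -1, -1]]) / 9.0

# a flat 3x3 patch: the center equals its surroundings (d = 0)
flat = np.full((3, 3), 10.0)
# a patch whose center differs from its surroundings by d = 9
bump = flat.copy()
bump[1, 1] += 9.0

# the filter response at the center pixel is the elementwise product summed
out_flat = (kernel * flat).sum()   # unchanged: 10.0
out_bump = (kernel * bump).sum()   # 10 + (17/9) * 9 = 27.0
```

A flat region passes through untouched, while the center difference of 9 grows to 17, exactly the $17/9$ factor.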

## Dodge and burn

Dodging has the over-exposure effect, where light pixels tend to be pushed to white.
Burning is the opposite, where dark pixels tend to be pushed to black.

```python
import numpy as np
import cv2

def dodge(image, mask):
    """Brighten the image: light mask regions are pushed toward white."""
    return cv2.divide(image, 255 - mask, scale=256)

def burn(image, mask):
    """Darken the image: dark mask regions are pushed toward black."""
    return 255 - cv2.divide(255 - image, 255 - mask, scale=256)
```

```python
class PencilSketch:
    def __init__(self, width, height, bg_gray="pencilsketch_bg.jpg"):
        self.width = width
        self.height = height
        # optional background canvas; cv2.imread returns None if the file is missing
        self.canvas = cv2.imread(bg_gray, cv2.IMREAD_GRAYSCALE)
        if self.canvas is not None:
            self.canvas = cv2.resize(self.canvas, (width, height))

    def render(self, img_rgb):
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
        img_gray_inv = 255 - img_gray
        img_blur = cv2.GaussianBlur(img_gray_inv, (21, 21), 0, 0)
        img_blend = dodge(img_gray, img_blur)
        return cv2.cvtColor(img_blend, cv2.COLOR_GRAY2RGB)
```

```python
import matplotlib.pyplot as plt

# load a test image (the filename here is just a placeholder)
img_rgb = cv2.cvtColor(cv2.imread("lena.jpg"), cv2.COLOR_BGR2RGB)
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
# print(img_rgb.shape)
plt.imshow(img_rgb)
plt.imshow(img_gray, cmap="gray")
img_dodge = dodge(img_gray, img_gray)
plt.imshow(img_dodge, cmap="gray")
img_burn = burn(img_gray, img_gray)
img_blur = cv2.GaussianBlur(img_gray, (21, 21), 3)
plt.clf(); plt.imshow(img_blur, cmap="gray")
# effect of dodging: pixels brighter than a certain threshold are pushed to 255
# (except that a mask value of 255 makes the divisor zero, which cv2.divide maps to 0)
plt.clf(); plt.scatter(img_gray.flatten(), img_dodge.flatten())
plt.clf(); plt.scatter(img_gray.flatten(), img_burn.flatten())
img_dodge1 = dodge(img_gray, img_blur)
img_burn1 = burn(img_gray, img_blur)
plt.clf(); plt.scatter(img_gray.flatten(), img_dodge1.flatten())
plt.clf(); plt.scatter(img_gray.flatten(), img_burn1.flatten())
plt.clf(); plt.imshow(img_dodge1, cmap="gray")
plt.clf(); plt.imshow(img_burn1, cmap="gray")
```

## Eigenfaces

In this post we’ll talk about the application of principal component analysis in face recognition.

## Eigen vectors as directions of variation

Given a $d$-dimensional dataset $A$ with $M$ samples, each sample being a $\sqrt{d} \times \sqrt{d}$ face photo, we would like to find the unit vectors in $R^d$ along which the dataset varies the most around its mean $\mu$.

To simplify the matter, let’s assume that the dataset has been centered: $x_i \leftarrow x_i - \mu$.

Suppose we have such a unit vector $u$; then the projection of a sample $x_i$ on $u$ is $(x_i \cdot u) u$, so the coefficient is $x_i \cdot u$.

The variance along $u$:

$$Var(u) = \frac{1}{M} \sum_{i=1}^{M} (x_i \cdot u)^2 = u^T \Theta u$$

where $\Theta$ is the covariance matrix of the dataset.

To maximize $Var(u)$, $u$ needs to be the eigenvector that corresponds to the largest eigenvalue of $\Theta$.
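This can be verified numerically. A small NumPy sketch on a toy 3-dimensional dataset (the dimensions and scales are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy dataset: 200 samples in 3 dimensions, with unequal variance per axis
X = rng.standard_normal((200, 3)) * np.array([3.0, 1.0, 0.3])
X -= X.mean(axis=0)                # center the data
Theta = X.T @ X / len(X)           # covariance matrix

vals, vecs = np.linalg.eigh(Theta) # eigenvalues in ascending order
u = vecs[:, -1]                    # eigenvector of the largest eigenvalue
var_u = u @ Theta @ u              # variance along u, equals vals[-1]

# any other unit vector captures less variance
r = rng.standard_normal(3)
r /= np.linalg.norm(r)
var_r = r @ Theta @ r              # <= var_u
```

The variance along the top eigenvector equals the largest eigenvalue, and no other direction beats it.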

## Dimensionality trick

With large images $d$ is going to be large, which poses a numerical difficulty when you solve for the eigenvectors. There is a neat trick to overcome this.

We have $\underbrace{\Theta}_{d \times d} = A^T \underbrace{A}_{M \times d}$ and want to find the eigenvectors of $\Theta$. Since $d \gg M$, we find the eigenvectors of the much smaller $M \times M$ matrix $AA^T$ first:

$$AA^T v = \lambda v \implies A^T A (A^T v) = \lambda (A^T v)$$

Thus if $v$ is an eigenvector of $AA^T$, transforming it by $A^T$ gives an eigenvector of $A^TA$ with the same eigenvalue.
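The trick is easy to check in NumPy. A sketch with made-up sizes ($M = 5$ samples, $d = 100$ dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 5, 100                     # few samples, high dimension (d >> M)
A = rng.standard_normal((M, d))
A -= A.mean(axis=0)               # center the data

# eigendecomposition of the small M x M matrix instead of the d x d one
vals, vecs = np.linalg.eigh(A @ A.T)
v = vecs[:, -1]                   # eigenvector for the largest eigenvalue

# transform by A^T: u is an eigenvector of A^T A with the same eigenvalue
u = A.T @ v
check = (A.T @ A) @ u             # should equal vals[-1] * u
```

Solving a $5 \times 5$ problem instead of a $100 \times 100$ one, yet recovering a genuine eigenvector of $\Theta$.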

## Eigenfaces

The eigenvectors obtained this way are themselves $d$-dimensional and can be displayed as $\sqrt{d} \times \sqrt{d}$ face photos:

## Face reconstruction from eigenfaces

To reconstruct the face photos (approximately) from the first $N$ eigenfaces, do this:

$$\hat{A} = (AU)U^T$$

where each column in $U$ is an eigenface and $AU$ gives you the coefficients.

## Face space

But we don’t have to reconstruct the faces in order to analyze them: $AU$ gives us a new dataset with its dimensionality reduced from $R^d$ to $R^N$, the latter now called the face space.

Given a new sample $x$, a simplified face recognition procedure would be:

• Project into the face space: $x \leftarrow xU$.
• Find the most similar row for $x$ in $AU$, the reduced training data (one nearest neighbor).
• That’s it!
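The whole procedure fits in a few lines of NumPy. This is a sketch under the notation above, not any library’s API; `fit_eigenfaces` and `recognize` are hypothetical helper names:

```python
import numpy as np

def fit_eigenfaces(A, n_components):
    """A: (M, d) centered training data. Returns U (d, N) with eigenfaces
    as columns, and the projected training set AU (M, N)."""
    vals, vecs = np.linalg.eigh(A @ A.T)       # small M x M problem
    idx = np.argsort(vals)[::-1][:n_components]
    U = A.T @ vecs[:, idx]                     # dimensionality trick
    U /= np.linalg.norm(U, axis=0)             # make each eigenface a unit vector
    return U, A @ U

def recognize(x, U, coeffs):
    """Project x into face space and return the index of the nearest
    training sample (one nearest neighbor)."""
    proj = x @ U
    return int(np.argmin(np.linalg.norm(coeffs - proj, axis=1)))
```

For a (centered) training sample the projection matches its own row of $AU$ exactly, so it is recognized as itself; a genuinely new face lands nearest to the most similar training face.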

## Polar representation for lines

Let’s take an arbitrary point $(x_0,y_0)$ in the xy-plane, and consider all the possible lines passing through it: $y_0 = mx_0 + b$. Each of these lines can be represented as $(m, b)$ subject to $b = -x_0m + y_0$ in the mb-plane. The mb-plane is called the Hough space:

The problem is that when the line is vertical, $m$ is infinite and completely off the chart, not a nice property if you ask me. The solution is to use the polar representation of lines instead of the slope-intercept representation.

In the figure below, AB is the line we want to represent with polar coordinates $(\theta, d)$, where $d$ is the distance from the origin to the line ($CD$) and $\theta$ is the angle between the x-axis and the normal vector of the line ($\angle BCD$).

Let E be an arbitrary point on AB; then $CF = GE = x$, $CG = y$, $GH = y / \tan \theta$, $CH = y / \sin \theta$, $HD = d - y / \sin \theta$, $HE = \frac{d - y / \sin \theta}{\cos \theta}$. Using the fact that $GH + HE = x$, you can derive $x\cos \theta + y\sin \theta = d$, the polar representation we were looking for.
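A quick numerical sanity check of $x\cos\theta + y\sin\theta = d$ ($\theta$ and $d$ are arbitrary values chosen for the test):

```python
import numpy as np

theta, d = 0.6, 5.0      # arbitrary polar parameters for the check

# the line's unit normal is (cos(theta), sin(theta)); the foot of the
# perpendicular from the origin is d times that normal
foot = d * np.array([np.cos(theta), np.sin(theta)])
direction = np.array([-np.sin(theta), np.cos(theta)])  # runs along the line

# any point foot + t * direction lies on the line, and plugging it into
# x*cos(theta) + y*sin(theta) recovers d regardless of t
p = foot + 2.7 * direction
value = p[0] * np.cos(theta) + p[1] * np.sin(theta)    # equals d
```

This is exactly why the representation is used in the Hough transform: every point on the line votes for the same $(\theta, d)$ cell, vertical lines included.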