Saturday, January 28, 2017

Install OpenCV 3 with CUDA support on a Mac

Install from source (recommended):

sudo xcode-select --install # if not done already
brew tap homebrew/science
brew install cmake pkg-config jpeg libpng libtiff openexr eigen tbb
cd ~/CppProjects/
git clone --depth 1 https://github.com/opencv/opencv
git clone --depth 1 https://github.com/opencv/opencv_contrib

# the build below points PYTHON3_LIBRARY at libpython3.5.dylib, but anaconda
# only ships libpython3.5m.dylib, so create a symlink with the expected name
cd /Users/kaiyin/anaconda3/envs/tensorflow/lib/
ln -s libpython3.5m.dylib libpython3.5.dylib

# install for python 3.5
# tensorflow is an anaconda python3.5 environment on my machine created for tensorflow
export ENV_TENSORFLOW=/Users/kaiyin/anaconda3/envs/tensorflow
export PREFIX=/opt/local
export PY_DYLIB="$ENV_TENSORFLOW/lib/libpython3.5.dylib"
export OPENCV_CONTRIB=~/CppProjects/opencv_contrib/modules
export PY_INCLUDE="$ENV_TENSORFLOW/include/python3.5m"
export PY_BINARY="$ENV_TENSORFLOW/bin/python3.5"
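The cmake invocation below assumes you are in a fresh build directory inside the opencv checkout (a step not shown above), something like:

cd ~/CppProjects/opencv
mkdir build
cd build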
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=$PREFIX \
    -D OPENCV_EXTRA_MODULES_PATH="$OPENCV_CONTRIB" \
    -D PYTHON3_LIBRARY="$PY_DYLIB" \
    -D PYTHON3_INCLUDE_DIR="$PY_INCLUDE" \
    -D PYTHON3_EXECUTABLE="$PY_BINARY" \
    -D BUILD_opencv_python2=OFF \
    -D BUILD_opencv_python3=ON \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D INSTALL_C_EXAMPLES=OFF \
    -D BUILD_EXAMPLES=ON ..
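Note that the cmake invocations in this post do not enable CUDA by themselves; on OpenCV 3.x the CUDA modules are switched on with extra -D flags, something like the following (assuming a working CUDA toolkit install):

-D WITH_CUDA=ON -D WITH_CUBLAS=1 -D CUDA_FAST_MATH=1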

make -j8 # use 8 jobs for compiling
sudo make install
cp $PREFIX/lib/python3.5/site-packages/cv2.cpython-35m-darwin.so  $ENV_TENSORFLOW/lib/python3.5/site-packages


# install for python 2.7
# tf27 is an anaconda python2.7 environment on my machine created for tensorflow
export ENV_TENSORFLOW=/Users/kaiyin/anaconda3/envs/tf27
export PREFIX=/opt/local
export PY_DYLIB="$ENV_TENSORFLOW/lib/libpython2.7.dylib"
export OPENCV_CONTRIB=~/CppProjects/opencv_contrib/modules
export PY_INCLUDE="$ENV_TENSORFLOW/include/python2.7"
export PY_BINARY="$ENV_TENSORFLOW/bin/python2.7"
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=$PREFIX \
    -D OPENCV_EXTRA_MODULES_PATH="$OPENCV_CONTRIB" \
    -D PYTHON2_LIBRARY="$PY_DYLIB" \
    -D PYTHON2_INCLUDE_DIR="$PY_INCLUDE" \
    -D PYTHON2_EXECUTABLE="$PY_BINARY" \
    -D BUILD_opencv_python2=ON \
    -D BUILD_opencv_python3=OFF \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D INSTALL_C_EXAMPLES=OFF \
    -D BUILD_EXAMPLES=ON ..


make -j8 # use 8 jobs for compiling
sudo make install
cp $PREFIX/lib/python2.7/site-packages/cv2.so $ENV_TENSORFLOW/lib/python2.7/site-packages/

Verify your installation in Python 2.7:

# source activate tf27
(tf27) kaiyin@kaiyins-mbp 21:11:12 | /opt/local/lib/python3.5/site-packages =>
ipython
Python 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:05:08)
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

IPython profile: kaiyin

In [1]: import cv2
In [2]: cv2.__version__
Out[2]: '3.2.0-dev'

Verify your installation in Python 3.5:

# source activate tensorflow
(tensorflow) kaiyin@kaiyins-mbp 21:13:13 | /opt/local/lib/python3.5/site-packages =>
ipython
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  2 2016, 17:52:12)
Type "copyright", "credits" or "license" for more information.

IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

IPython profile: kaiyin

In [1]: import cv2; cv2.__version__
Out[1]: '3.2.0-dev'

Saturday, January 21, 2017

Examples of linear convolutional filters

[Figure: examples of linear convolutional filter kernels]
Mean filter

The sharpening filter perhaps needs a bit of explanation. Take the 3x3 kernel with $9$ in the center and $-1$ everywhere else. Suppose the pixel under the center of the filter has value $v$, while all the pixels around it have value $w$; then after filtering the center becomes

$$9v - 8w = v + 8(v - w).$$

Obviously, when the difference between the pixel and its environment is zero, the filter will not have any effect, but when there is a difference, it's amplified by a factor of $8$. Therefore this is called a sharpening filter.
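A minimal sketch of applying this kernel with cv2.filter2D (the image path is a placeholder):

import cv2
import numpy as np

# 3x3 sharpening kernel: the weights sum to 1, so flat regions pass
# through unchanged while local differences are amplified 8-fold
kernel = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]], dtype=np.float32)
img = cv2.imread("tree.jpg")
sharpened = cv2.filter2D(img, -1, kernel)  # -1: keep the input depth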

Dodge and burn

Dodging has an over-exposure effect, where light pixels tend to be pushed to white.
Burning is the opposite, where dark pixels tend to be pushed to black.

import numpy as np
import cv2


def dodge(image, mask):
    # color dodge: divide the image by the inverted mask; pixels where
    # the mask is bright get pushed towards 255
    return cv2.divide(image, 255 - mask, scale=256)


def burn(image, mask):
    # color burn: the dual of dodge; pixels where the mask is dark get
    # pushed towards 0
    return 255 - cv2.divide(255 - image, 255 - mask, scale=256)


class PencilSketch:
    def __init__(self, width, height, bg_gray="pencilsketch_bg.jpg"):
        self.width = width
        self.height = height
        # load the paper texture as grayscale
        self.canvas = cv2.imread(bg_gray, cv2.IMREAD_GRAYSCALE)
        if self.canvas is not None:
            self.canvas = cv2.resize(self.canvas, (width, height))

    def render(self, img_rgb):
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
        img_gray_inv = 255 - img_gray
        img_blur = cv2.GaussianBlur(img_gray_inv, (21, 21), 0, 0)
        # dodging a gray image by its own blurred inverse gives the sketch look
        img_blend = dodge(img_gray, img_blur)
        if self.canvas is not None:
            # multiply in the paper texture loaded in __init__
            img_blend = cv2.multiply(img_blend, self.canvas, scale=1.0 / 256)
        return cv2.cvtColor(img_blend, cv2.COLOR_GRAY2RGB)
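A minimal usage sketch (the path and frame size are placeholders; cv2.imread returns BGR rather than RGB, which here only changes the grayscale channel weights slightly):

sketcher = PencilSketch(640, 480)
frame = cv2.imread("portrait.jpg")
frame = cv2.resize(frame, (640, 480))
sketch = sketcher.render(frame)
cv2.imwrite("portrait_sketch.jpg", sketch)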


import matplotlib.pyplot as plt
# img = plt.imread("/Users/kaiyin/PycharmProjects/opencv3blueprints/chapter01/tree.jpg")
img_bgr = cv2.imread("/Users/kaiyin/PycharmProjects/opencv3blueprints/chapter01/tree.jpg", -1)
# cv2.imread returns BGR; convert explicitly for display and grayscale
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
# print(img_rgb.shape)
plt.imshow(img_rgb)
plt.imshow(img_gray, cmap="gray")
img_dodge = dodge(img_gray, img_gray)
plt.imshow(img_dodge, cmap="gray")
img_burn = burn(img_gray, img_gray)
img_blur = cv2.GaussianBlur(img_gray, (21, 21), 3)
plt.clf(); plt.imshow(img_blur, cmap="gray")
# effect of dodging: pixels brighter than a certain threshold are pushed to 255
# (255 itself degenerates to 0, because the divisor 255 - mask becomes zero)
plt.clf(); plt.scatter(img_gray.flatten(), img_dodge.flatten())
plt.clf(); plt.scatter(img_gray.flatten(), img_burn.flatten())
img_dodge1 = dodge(img_gray, img_blur)
img_burn1 = burn(img_gray, img_blur)
plt.clf(); plt.scatter(img_gray.flatten(), img_dodge1.flatten())
plt.clf(); plt.scatter(img_gray.flatten(), img_burn1.flatten())
plt.clf(); plt.imshow(img_dodge1, cmap="gray")
plt.clf(); plt.imshow(img_burn1, cmap="gray")

Dodge

Burn

Sunday, January 8, 2017

Eigenfaces

In this post we'll talk about the application of principal component analysis in face recognition.

Eigenvectors as directions of variation

Given a $d$-dimensional dataset $X$ with $n$ samples, each sample being a face photo, we would like to find the unit vectors in $\mathbb{R}^d$ along which the dataset varies the most around the mean $\mu$.

To simplify the matter let's assume that the dataset has been centered: $X \leftarrow X - \mu$.

Suppose we have such a unit vector $u$; then the projection of a sample $x$ on $u$ would be $(x^T u)\,u$, so the coefficient is $x^T u$.

The variance along $u$:

$$\sigma^2(u) = \frac{1}{n} \sum_{i=1}^n (x_i^T u)^2 = u^T \left( \frac{1}{n} X^T X \right) u = u^T \Sigma u,$$

where $\Sigma = \frac{1}{n} X^T X$ is the covariance matrix of the dataset.

To maximize $u^T \Sigma u$ subject to $\|u\| = 1$, $u$ needs to be the eigenvector that corresponds to the largest eigenvalue of $\Sigma$: a Lagrange-multiplier argument gives $\Sigma u = \lambda u$ at any constrained extremum, and the objective value there is exactly $\lambda$.

Dimensionality trick

With large images $\Sigma$ is a huge $d \times d$ matrix, which poses a numerical difficulty when you solve for the eigenvectors. There is a neat trick to overcome this.

We want the eigenvectors of $X^T X$ (which has the same eigenvectors as $\Sigma$), a $d \times d$ matrix; considering that $n \ll d$, we try to find the eigenvectors of the much smaller $n \times n$ matrix $X X^T$ first:

$$(X X^T)\,v = \lambda v \implies X^T X\,(X^T v) = \lambda\,(X^T v).$$

Thus we find an eigenvector $v$ of $X X^T$, transform it by $X^T$, and get an eigenvector $X^T v$ of $X^T X$ with the same eigenvalue (renormalize it to unit length).
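A minimal numpy sketch of the trick, with random data standing in for face photos:

import numpy as np

n, d = 100, 10000                     # few samples, many pixels
X = np.random.randn(n, d)             # rows are centered sample vectors
eigvals, V = np.linalg.eigh(X @ X.T)  # eigendecompose the small n x n matrix
U = X.T @ V                           # columns are eigenvectors of X^T X
U /= np.linalg.norm(U, axis=0)        # renormalize to unit length
u, lam = U[:, -1], eigvals[-1]        # top eigenpair
# sanity check: X^T X u = lambda u
assert np.allclose(X.T @ (X @ u), lam * u, atol=1e-6)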

Eigenfaces

The eigenvectors thus obtained above are also face photos:

[Figure: the leading eigenvectors displayed as images (eigenfaces)]

Face reconstruction from eigenfaces

To reconstruct the face photos (approximately), do this:

$$\hat{X} = A W^T + \mu, \qquad A = X W,$$

where each column in $W$ is an eigenface, $A$ gives you the coefficients, and $\mu$ is added back because $X$ was centered.

Face space

But we don't have to reconstruct the faces in order to analyze them: $A = X W$ gives us a new dataset with its dimensionality reduced from $d$ to $k$, and the reduced space is called the face space.

Given a new sample $x$, a simplified face recognition procedure would be:

  • Project $x$ into the face space: $a = x W$.
  • Find the most similar row to $a$ in $A$, the reduced training data (one nearest neighbor); see the sketch after this list.
  • That's it!
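A minimal numpy sketch of the procedure (train, W and labels are assumed to come from the training stage above):

import numpy as np

def recognize(x, train, W, labels):
    # train:  (n, d) centered training faces, one per row
    # W:      (d, k) top-k eigenfaces as columns
    # labels: length-n identity labels
    # x:      (d,) centered query face
    A = train @ W                     # project the training data into face space
    a = x @ W                         # project the query
    dists = np.linalg.norm(A - a, axis=1)
    return labels[int(np.argmin(dists))]  # one nearest neighbor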

Monday, January 2, 2017

Polar representation for lines

Let's take an arbitrary point $(x_0, y_0)$ in the xy-plane, and consider all the possible lines passing through it. Each of these lines can be represented as $y = mx + b$ subject to $y_0 = m x_0 + b$, which traces out a line in the mb-plane. The mb-plane is called the Hough space.

The problem is that when the line is vertical, $m$ is infinite and completely off the chart, not a nice property if you ask me. The solution is to use the polar representation of lines instead of the slope-intercept representation.

In the figure below, AB is the line we want to represent with polar coordinates $(\rho, \theta)$, where $\rho$ is the distance from the origin to the line and $\theta$ is the angle between the x-axis and the normal vector of the line.

Let E be an arbitrary point on AB, with polar coordinates $(r, \varphi)$: $x = r \cos\varphi$, $y = r \sin\varphi$, and the projection of $OE$ onto the normal direction has length $r \cos(\varphi - \theta) = \rho$. Using the fact $\cos(\varphi - \theta) = \cos\varphi \cos\theta + \sin\varphi \sin\theta$, you can derive that

$$\rho = x \cos\theta + y \sin\theta,$$

the polar representation we were looking for.
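This is exactly the representation cv2.HoughLines returns. A minimal sketch of using it and drawing the detected $(\rho, \theta)$ lines back in the xy-plane (file names are placeholders):

import cv2
import numpy as np

img = cv2.imread("building.jpg")
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
# accumulator resolution: 1 pixel for rho, 1 degree for theta
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
if lines is not None:
    for rho, theta in lines[:, 0]:
        # (x0, y0) is the foot of the normal; (-sin, cos) points along the line
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        p1 = (int(x0 - 1000 * b), int(y0 + 1000 * a))
        p2 = (int(x0 + 1000 * b), int(y0 - 1000 * a))
        cv2.line(img, p1, p2, (0, 0, 255), 2)
cv2.imwrite("building_lines.jpg", img)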