Based on the book Computer Vision: A Modern Approach, page 425, I attempted to use eigenvectors for image segmentation.
http://dl.dropbox.com/u/1570604/tmp/comp-vis-modern-segment.pdf
The author mentions that image pixel affinities can be captured in a matrix A. We can then maximize the product w^T A w, where the w's are cluster membership weights, subject to w^T w = 1. Introducing a Lagrange multiplier and differentiating leads to Aw = \lambda w, so finding w amounts to finding eigenvectors of A. The best cluster then corresponds to the eigenvector with the largest eigenvalue, and the entries of that eigenvector are the cluster membership values. A small toy sketch of the idea is below, followed by the code I wrote.
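As a sanity check of the idea, here is a tiny toy sketch (my own made-up intensities, not from the book); since the affinity matrix is symmetric I use np.linalg.eigh here, which returns the eigenvalues in ascending order:

import numpy as np

# Four made-up "pixel" intensities: two dark, two bright
pix = np.array([10.0, 11.0, 200.0, 203.0])
# Pairwise affinities A_ij = exp(-(I_i - I_j)^2), the same form as in my code below
A = np.exp(-(pix[:, None] - pix[None, :]) ** 2)
# A is symmetric, so eigh applies; eigenvalues come back in ascending order
vals, vecs = np.linalg.eigh(A)
w = vecs[:, -1]         # eigenvector of the largest eigenvalue
print(np.round(w, 3))   # entries with large magnitude mark one cluster's pixels

The large-magnitude entries pick out the two dark pixels as one cluster. Applied to the whole image, my code is: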
import matplotlib.pyplot as plt
import numpy as np
Img = plt.imread("twoObj.jpg")
(n, dummy) = Img.shape
Img2 = Img.flatten().astype(float)  # work in float so intensity differences don't wrap around in uint8
(nn,) = Img2.shape

# Affinity matrix: Gaussian of the intensity difference for every pair of pixels
A = np.zeros((nn, nn))
for i in range(nn):
    for j in range(nn):
        N = Img2[i] - Img2[j]
        A[i, j] = np.exp(-(N ** 2))

# eig returns (eigenvalues, eigenvectors); the eigenvectors are the *columns* of D
V, D = np.linalg.eig(A)
V = np.real(V)          # A is symmetric, so the eigenvalues are real up to round-off
a = np.real(D[:, 1])    # second eigenvector (a column, not a row)

threshold = 1e-10       # filter, chosen by trial and error
a = np.reshape(a, Img.shape)
Img[a < threshold] = 255
plt.imshow(Img)
plt.show()
The input image and the best result I could get from this are shown below; I have a feeling the results can be better.
I assumed the eigenvalues returned by NumPy are sorted from largest to smallest; I tried the first eigenvector, which did not work, and then the second one, which produced the result shown. The threshold value was chosen by trial and error. Any ideas on how this algorithm can be improved?
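One more thought: as far as I know np.linalg.eig does not guarantee any particular ordering of the eigenvalues, so maybe I should sort the eigenpairs explicitly instead of relying on their order. A minimal sketch of what I mean, reusing the A and Img from the code above:

# Sketch only: eig gives no ordering guarantee, so sort the eigenpairs explicitly
vals, vecs = np.linalg.eig(A)
order = np.argsort(vals.real)[::-1]   # eigenvalue indices, largest first
w = vecs[:, order[0]].real            # eigenvector of the largest eigenvalue
seg = w.reshape(Img.shape)            # back to image shape for thresholding

(np.linalg.eigh would also work here, since A is symmetric, and it returns the eigenvalues already sorted in ascending order.)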