
PCA and K-Means Implementation in MATLAB



I. MATLAB Basics

 

- imread: reads image data from a file.

- axis: axis scaling. axis([xmin xmax ymin ymax zmin zmax cmin cmax]) sets the x-, y- and z-axis limits and the color scaling limits (see caxis). v = axis returns a row vector containing the current axis limits; v has 4 or 6 elements depending on whether the current axes are 2-D or 3-D, and the values returned are the XLim, YLim and ZLim properties of the current axes. axis auto restores MATLAB's default behaviour of computing the axis limits automatically from the minimum and maximum of the x, y and z data; this automatic behaviour can be restricted to particular axes, e.g. axis 'auto x' recomputes only the x-axis limits, and axis 'auto yz' recomputes only the y- and z-axis limits.

- bsxfun: applies an element-by-element binary operation to two arrays, with implicit (singleton) expansion.

Example: subtract each column's mean from the corresponding column of matrix A using bsxfun:

A = [1 2 10; 1 4 20;1 6 15] ;
C = bsxfun(@minus, A, mean(A))
C =
 
     0    -2    -5
     0     0     5
     0     2     0

The binary function handles that bsxfun accepts include:

@plus: addition
@minus: subtraction
@times: array multiplication
@rdivide: array right division
@ldivide: array left division
@power: array power
@max: binary maximum
@min: binary minimum
@rem: remainder after division
@mod: modulus after division
@atan2: four-quadrant inverse tangent, result in radians
@atan2d: four-quadrant inverse tangent, result in degrees
@hypot: square root of sum of squares
@eq: equal
@ne: not equal
@lt: less than
@le: less than or equal
@gt: greater than
@ge: greater than or equal
@and: element-wise logical AND
@or: element-wise logical OR
@xor: element-wise logical exclusive OR
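As a hedged illustration (the same @minus/@rdivide pattern that featureNormalize uses later in this post), the columns of A can be standardized like this:

A = [1 2 10; 1 4 20; 1 6 15];
A_centered = bsxfun(@minus, A, mean(A));            % subtract each column's mean
A_standard = bsxfun(@rdivide, A_centered, std(A));  % divide by each column's standard deviation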

- Singular value decomposition:

The svd command computes the singular value decomposition of a matrix.

s = svd(X) returns a vector of singular values.

[U,S,V] = svd(X) produces a diagonal matrix S with the same dimensions as X, whose non-negative diagonal elements are in decreasing order, and unitary matrices U and V such that X = U*S*V'.

[U,S,V] = svd(X,0) produces the economy-size decomposition. If X is m-by-n with m > n, svd computes only the first n columns of U, and S is n-by-n.

[U,S,V] = svd(X,'econ') also produces an economy-size decomposition. If X is m-by-n with m >= n, it is equivalent to svd(X,0); for m < n, only the first m columns of V are computed and S is m-by-m.

Example:

对于矩阵

X =
     1    2
     3    4
     5    6
     7    8

The statement

[U,S,V] = svd(X)

produces

U =
    -0.1525   -0.8226   -0.3945   -0.3800
    -0.3499   -0.4214    0.2428    0.8007
    -0.5474   -0.0201    0.6979   -0.4614
    -0.7448    0.3812   -0.5462    0.0407
 
S =
     14.2691         0
           0    0.6268
           0         0
           0         0
 
V =
    -0.6414     0.7672
    -0.7672    -0.6414
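As a quick check, a hedged sketch of the economy-size call on the same X; the factors are smaller but still reconstruct X:

X = [1 2; 3 4; 5 6; 7 8];
[U0, S0, V0] = svd(X, 0);    % economy size: U0 is 4x2, S0 is 2x2, V0 is 2x2
norm(X - U0*S0*V0')          % essentially zero: the decomposition still reconstructs X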
 

 

 

II. K-Means

Clustering is unsupervised learning: the samples carry no labels.

The basic idea of the K-Means algorithm is to start from K randomly chosen cluster centers and assign every sample to its nearest center. Each cluster's centroid is then recomputed as the mean of its members, giving new centers. This is iterated until the centers move less than some given tolerance.
 

The K-Means clustering algorithm has three main steps:
(1) Choose initial cluster centers for the points to be clustered.
(2) Compute the distance from every point to each cluster center and assign each point to the nearest cluster.
(3) Compute the mean of the coordinates of all points in each cluster and take it as the new cluster center.
Repeat (2) and (3) until the cluster centers no longer move significantly or the allowed number of iterations is reached.
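In MATLAB, one pass of steps (2) and (3) is just two calls to the exercise's functions, repeated until convergence (the full driver, runkMeans.m, is listed later in this post); a minimal sketch:

centroids = initial_centroids;
for iter = 1:max_iters
    idx = findClosestCentroids(X, centroids);   % step (2): assign every point to its nearest centroid
    centroids = computeCentroids(X, idx, K);    % step (3): move each centroid to the mean of its points
end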

 

 

 

The concrete K-Means procedure for the data:

1: Set the centroids: initialization.

2: Assign every point to its nearest cluster (a sketch of this function is given right after this list):

idx = findClosestCentroids(X, initial_centroids);

3: Using the assignments, gather the training data belonging to each cluster i, and recompute each cluster's centroid.

4: Iterate the centroid computation until the centroids essentially stop moving; the iterative training yields the final cluster centers.

Finally, validate on the test set and the validation set.
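A minimal sketch of one way to implement findClosestCentroids (the exercise leaves the body to the reader; other implementations are equally valid):

function idx = findClosestCentroids(X, centroids)
% For every row of X, return the index (1..K) of the nearest centroid,
% measured by squared Euclidean distance.
K = size(centroids, 1);
m = size(X, 1);
idx = zeros(m, 1);
for i = 1:m
    d = sum(bsxfun(@minus, centroids, X(i, :)).^2, 2);  % squared distance to each of the K centroids
    [~, idx(i)] = min(d);
end
end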

III. Dimensionality Reduction with PCA

Typical applications are face recognition and data compression (e.g. reducing 100-dimensional data to 10 dimensions, a compression ratio of 90%).

- Suppose the dimensionality of the data is to be reduced from R^N to R^3 (data compression); the concrete PCA steps are as follows.

- Mean normalization; then compute the sample covariance matrix of X:

[X_norm, mu, sigma] = featureNormalize(X);

% Inside featureNormalize: compute each feature's mean and standard deviation
mu = mean(X);                              % per-feature mean
X_norm = bsxfun(@minus, X, mu);
sigma = std(X_norm);                       % per-feature standard deviation (not the covariance)
X_norm = bsxfun(@rdivide, X_norm, sigma);

- Singular value decomposition of the covariance matrix:

Sigma = X_norm' * X_norm / m;   % n x n covariance matrix of the normalized data
[U, S, V] = svd(Sigma);

- Project onto the top K principal components (a one-dimensional subspace when K = 1); sketches of pca.m, projectData.m and recoverData.m follow this list:

Z = X_norm * U(:, 1:K);

Project every data point perpendicularly onto the reduced subspace and draw the projection lines:

hold on;

plot(X_rec(:, 1), X_rec(:, 2), 'ro');

for i = 1:size(X_norm, 1)

    drawLine(X_norm(i,:), X_rec(i,:), '--k', 'LineWidth', 1);

end

hold off
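Hedged sketches of the three helper functions the exercise asks for (each would normally go in its own file: pca.m, projectData.m, recoverData.m):

function [U, S] = pca(X)
% Compute the principal components of X (X is assumed already normalized, m x n).
[m, n] = size(X);
Sigma = (X' * X) / m;       % n x n covariance matrix
[U, S, V] = svd(Sigma);     % columns of U are the principal directions, diag(S) the variances
end

function Z = projectData(X, U, K)
% Project each row of X onto the top K principal components.
Z = X * U(:, 1:K);
end

function X_rec = recoverData(Z, U, K)
% Map the projected data back to the original space (an approximation).
X_rec = Z * U(:, 1:K)';
end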

Extracting and reducing the data in the bird image:

1: Read the data and normalize it:

A = double(imread('bird_small.png'));

A = A / 255;

img_size = size(A);

X = reshape(A, img_size(1) * img_size(2), 3);

2: Train K-Means on the pixel features, then project the 3-D colour data onto a 2-D surface:

sel = floor(rand(1000, 1) * size(X, 1)) + 1;

palette = hsv(K);

colors = palette(idx(sel), :);

3: Plot the 3-D and 2-D views.

Face recognition:

- Basic principle:

1. The idea of principal component analysis (PCA) is to project a high-dimensional vector x, through a special eigenvector matrix U, into a low-dimensional vector space and represent it as a low-dimensional vector y, losing only some minor information. In other words, from the low-dimensional representation and the eigenvector matrix, the corresponding original high-dimensional vector can be largely reconstructed.

In face recognition the eigenvector matrix U is called the eigenface space; each eigenvector u_i, rendered as an image, shows the outline of a face, as the experiments below illustrate.

Take face recognition as an example of how PCA is applied.

Suppose there are N training face samples, each sample being a vector x_i of pixel gray values. The dimensionality of x_i is the number of pixels in the image, M = width*height, and the training set is {x_1, x_2, ..., x_N}.

The mean vector of the sample set is:

mu = (1/N) * (x_1 + x_2 + ... + x_N)

The mean vector is also called the mean face.

 

The covariance matrix of the sample set is:

C = (1/N) * sum_{i=1..N} (x_i - mu) * (x_i - mu)^T

Compute the eigenvectors u_i of the covariance matrix and the corresponding eigenvalues lambda_i. The matrix U formed from these eigenvectors is an orthogonal basis of the face space, and any face image in the sample set can be reconstructed as a linear combination of them (if this is unclear, see summary 2 below). Moreover, the image information is concentrated in the eigenvectors with large eigenvalues, so discarding the eigenvectors with small eigenvalues barely affects image quality.

Sort the eigenvalues of the covariance matrix in decreasing order: lambda_1 >= lambda_2 >= ... >= lambda_M. The eigenvectors whose eigenvalues exceed a chosen threshold form the principal components, and together they make up the transformation matrix:

U = [u_1, u_2, ..., u_d]

Every face image can then be projected into the eigenface subspace spanned by U, whose size is M x d. With this reduced subspace, any face image can be projected onto it to obtain a set of coordinate coefficients, i.e. the low-dimensional vector y of dimension d x 1, called the K-L expansion coefficients. These coefficients describe where the image lies in the subspace and can therefore serve as the basis for face recognition.


Some readers may wonder: when the K-L transform was introduced in the first part, the eigenvectors and eigenvalues of the correlation matrix R were computed, so why is the covariance matrix C used here?

In fact the covariance matrix is

C = (1/N) * sum_{i=1..N} (x_i - mu) * (x_i - mu)^T,

and replacing x_i in the correlation matrix R = (1/N) * sum_i x_i * x_i^T with the centered vector x_i - mu gives exactly C. That is, it amounts to subtracting the mean vector from every sample first, so the two are essentially the same computation; the covariance matrix is likewise a real symmetric matrix.


To summarize:

1. In face recognition, for an input test sample x, compute its deviation from the mean face, x - mu. The projection of this deviation onto the eigenface space U can then be expressed as the coefficient vector y:

y = U^T * (x - mu)

U is M x d, (x - mu) is M x 1, and y is d x 1. If M is 200*200 = 40000 and 200 principal components (200 eigenvectors) are kept, the projected coefficient vector y is reduced to just 200 dimensions.

2. From the expression in 1 it follows that:

x ≈ mu + U * y

Here x is the face image reconstructed from the projection coefficient vector y; some image information is lost, but the visual quality is largely preserved.
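A minimal MATLAB sketch of the formulas above (assuming X holds one vectorized M-pixel training face per row and x is one vectorized test face; the variable names are illustrative):

N  = size(X, 1);
mu = mean(X, 1);                 % mean face (1 x M)
Xc = bsxfun(@minus, X, mu);      % centred training faces
C  = (Xc' * Xc) / N;             % M x M covariance matrix (for very large M, work with svd(Xc) instead)
[U, S] = svd(C);                 % columns of U are the eigenfaces; diag(S) holds the eigenvalues
d  = 200;                        % number of principal components kept
Ud = U(:, 1:d);                  % M x d eigenface basis
y     = Ud' * (x(:) - mu(:));    % projection coefficients of the test face (d x 1)
x_rec = mu(:) + Ud * y;          % approximate reconstruction of the test face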

 

OpenCV helper functions (C++ API):

Some of the functions are described below.

Mat Mat::reshape(int cn, int rows=0) const

This function changes the shape of a Mat while keeping the total size (rows * columns * channels) constant. The first parameter is the number of channels after the change; 0 means the channel count stays the same. The second parameter is the number of rows after the change; 0 means the row count stays the same. The function does not copy the underlying data.

 

void Mat::convertTo(OutputArray m, int rtype, double alpha=1, double beta=0) const

This function applies a linear transformation to every element of the original Mat. The first parameter is the destination matrix, the second is the destination matrix's type, and the third and fourth are the coefficients of the transformation:

m(x,y) = saturate_cast<rtype>( alpha * (*this)(x,y) + beta )

 

PCA::PCA(InputArray data, InputArray mean, int flags, int maxComponents=0)

The first parameter of this constructor is the input Mat on which PCA is performed; the second is that Mat's mean vector; the third describes how the samples are stored in the input matrix: CV_PCA_DATA_AS_ROW means each row of the input Mat is one sample, and CV_PCA_DATA_AS_COL means each column is one sample. The last parameter is the maximum number of principal components to keep; with the default value, all components are kept.

 

Mat PCA::project(InputArray vec) const

This function projects the input data vec (the original data from which the PCA features were computed) into the PCA principal-component space and returns a matrix whose rows are the principal-component features of the samples. Because PCA reduces the dimensionality, every sample in the original data set ends up with fewer dimensions, and the set of transformed samples is the return value of this function.

 

Mat PCA::backProject(InputArray vec) const

backProject() is normally called after project(): its argument vec is the reduced-dimensional matrix produced by the PCA projection. backProject() therefore reconstructs the original data set from vec (in essence it applies the formula in summary 2 above).

The PCA class also has member variables such as mean, eigenvectors and eigenvalues, which hold the mean of the original data and the eigenvectors and eigenvalues of its covariance matrix.

 

Concrete steps:

1) Load the images:

load ('ex7faces.mat')

displayData(X(1:100, :))

2) Reduce the dimensionality:

[X_norm, mu, sigma] = featureNormalize(X);

%  Run PCA

[U, S] = pca(X_norm);

3) Cluster the reduced features so that faces with matching features are recognized as the same person; repeat to classify, and reconstruct with:

X_rec = Z * U(:, 1:K)';
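A hedged sketch of the matching step (x_test, z_test, Z and best are illustrative names, not part of the exercise code): normalize a test face with the training statistics, project it with the same U, and pick the nearest training face in the reduced space.

% Assumes: mu, sigma from featureNormalize(X); U from pca(X_norm);
% Z = projectData(X_norm, U, K) holds the projected training faces (N x K);
% x_test is one vectorized test face (1 x M).
x_norm = bsxfun(@rdivide, bsxfun(@minus, x_test, mu), sigma);  % normalize the test face
z_test = x_norm * U(:, 1:K);                                   % project into the K-dim eigenface space
dists  = sum(bsxfun(@minus, Z, z_test).^2, 2);                 % squared distance to every training face
[~, best] = min(dists);                                        % index of the most similar training face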

 

 

 

Script 1: K-Means driver script
%% Machine Learning Online Class
%  Exercise 7 | Principle Component Analysis and K-Means Clustering
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  exercise. You will need to complete the following functions:
%
%     pca.m
%     projectData.m
%     recoverData.m
%     computeCentroids.m
%     findClosestCentroids.m
%     kMeansInitCentroids.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% ================= Part 1: Find Closest Centroids ====================
%  To help you implement K-Means, we have divided the learning algorithm 
%  into two functions -- findClosestCentroids and computeCentroids. In this
%  part, you should complete the code in the findClosestCentroids function. 
%
fprintf('Finding closest centroids.\n\n');

% Load an example dataset that we will be using
load('ex7data2.mat');

% Select an initial set of centroids
K = 3; % 3 Centroids                     % define three clusters
initial_centroids = [3 3; 6 2; 8 5];     % initial centroid of each cluster

% Find the closest centroids for the examples using the
% initial_centroids   
idx = findClosestCentroids(X, initial_centroids);

fprintf('Closest centroids for the first 3 examples: \n')
fprintf(' %d', idx(1:3));
fprintf('\n(the closest centroids should be 1, 3, 2 respectively)\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ===================== Part 2: Compute Means =========================
%  After implementing the closest centroids function, you should now
%  complete the computeCentroids function.
%
fprintf('\nComputing centroid means.\n\n');

%  Compute means based on the closest centroids found in the previous part.
centroids = computeCentroids(X, idx, K);

fprintf('Centroids computed after initial finding of closest centroids: \n')
fprintf(' %f %f \n' , centroids');
fprintf('\n(the centroids should be\n');
fprintf('   [ 2.428301 3.157924 ]\n');
fprintf('   [ 5.813503 2.633656 ]\n');
fprintf('   [ 7.119387 3.616684 ]\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;


%% =================== Part 3: K-Means Clustering ======================
%  After you have completed the two functions computeCentroids and
%  findClosestCentroids, you have all the necessary pieces to run the
%  kMeans algorithm. In this part, you will run the K-Means algorithm on
%  the example dataset we have provided. 
%
fprintf('\nRunning K-Means clustering on example dataset.\n\n');

% Load an example dataset
load('ex7data2.mat');

% Settings for running K-Means
K = 3;
max_iters = 10;

% For consistency, here we set centroids to specific values
% but in practice you want to generate them automatically, such as by
% setting them to be random examples (as can be seen in
% kMeansInitCentroids).
initial_centroids = [3 3; 6 2; 8 5];

% Run K-Means algorithm. The 'true' at the end tells our function to plot
% the progress of K-Means (each iteration finds the nearest centroid for
% every training example, then moves the centroids).
[centroids, idx] = runkMeans(X, initial_centroids, max_iters, true);
fprintf('\nK-Means Done.\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ============= Part 4: K-Means Clustering on Pixels ===============
%  In this exercise, you will use K-Means to compress an image. To do this,
%  you will first run K-Means on the colors of the pixels in the image and
%  then you will map each pixel onto its closest centroid.
%  
%  You should now complete the code in kMeansInitCentroids.m
%

fprintf('\nRunning K-Means clustering on pixels from an image.\n\n');

%  Load an image of a bird
A = double(imread('bird_small.png'));

% If imread does not work for you, you can try instead
%   load ('bird_small.mat');

A = A / 255; % Divide by 255 so that all values are in the range 0 - 1 
% (i.e. normalize the pixel values)

% Size of the image
img_size = size(A);

% Reshape the image into an Nx3 matrix where N = number of pixels.
% Each row will contain the Red, Green and Blue pixel values
% This gives us our dataset matrix X that we will use K-Means on.
X = reshape(A, img_size(1) * img_size(2), 3);

% Run your K-Means algorithm on this data
% You should try different values of K and max_iters here
K = 16; 
max_iters = 10;

% When using K-Means, it is important to initialize the centroids
% randomly. 
% You should complete the code in kMeansInitCentroids.m before proceeding
initial_centroids = kMeansInitCentroids(X, K);

% Run K-Means
[centroids, idx] = runkMeans(X, initial_centroids, max_iters);

fprintf('Program paused. Press enter to continue.\n');
pause;


%% ================= Part 5: Image Compression ======================
%  In this part of the exercise, you will use the clusters of K-Means to
%  compress an image. To do this, we first find the closest clusters for
%  each example. After that, we replace every pixel with the value of the
%  centroid it was assigned to.

fprintf('\nApplying K-Means to compress an image.\n\n');

% Find closest cluster members
idx = findClosestCentroids(X, centroids);

% Essentially, now we have represented the image X in terms of the
% indices in idx. 

% We can now recover the image from the indices (idx) by mapping each pixel
% (specified by its index in idx) to the centroid value
X_recovered = centroids(idx,:);

% Reshape the recovered image into proper dimensions
X_recovered = reshape(X_recovered, img_size(1), img_size(2), 3);

% Display the original image 
subplot(1, 2, 1);
imagesc(A); 
title('Original');

% Display compressed image side by side
subplot(1, 2, 2);
imagesc(X_recovered)
title(sprintf('Compressed, with %d colors.', K));


fprintf('Program paused. Press enter to continue.\n');
pause;


Script 2: PCA driver script
%% Machine Learning Online Class
%  Exercise 7 | Principle Component Analysis and K-Means Clustering
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  exercise. You will need to complete the following functions:
%
%     pca.m
%     projectData.m
%     recoverData.m
%     computeCentroids.m
%     findClosestCentroids.m
%     kMeansInitCentroids.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% ================== Part 1: Load Example Dataset  ===================
%  We start this exercise by using a small dataset that is easy to
%  visualize
%
fprintf('Visualizing example dataset for PCA.\n\n');

%  The following command loads the dataset. You should now have the 
%  variable X in your environment
load ('ex7data1.mat');

%  Visualize the example dataset
plot(X(:, 1), X(:, 2), 'bo');
axis([0.5 6.5 2 8]); 
axis square;

fprintf('Program paused. Press enter to continue.\n');
pause;


%% =============== Part 2: Principal Component Analysis ===============
%  You should now implement PCA, a dimension reduction technique. You
%  should complete the code in pca.m
%
fprintf('\nRunning PCA on example dataset.\n\n');

%  Before running PCA, it is important to first normalize X
[X_norm, mu, sigma] = featureNormalize(X);

%  Run PCA
[U, S] = pca(X_norm);

%  Compute mu, the mean of each feature

%  Draw the eigenvectors centered at mean of data. These lines show the
%  directions of maximum variations in the dataset.
hold on;
drawLine(mu, mu + 1.5 * S(1,1) * U(:,1)', '-k', 'LineWidth', 2);
drawLine(mu, mu + 1.5 * S(2,2) * U(:,2)', '-k', 'LineWidth', 2);
hold off;

fprintf('Top eigenvector: \n');
fprintf(' U(:,1) = %f %f \n', U(1,1), U(2,1));
fprintf('\n(you should expect to see -0.707107 -0.707107)\n');

fprintf('Program paused. Press enter to continue.\n');
pause;


%% =================== Part 3: Dimension Reduction ===================
%  You should now implement the projection step to map the data onto the 
%  first k eigenvectors. The code will then plot the data in this reduced 
%  dimensional space.  This will show you what the data looks like when 
%  using only the corresponding eigenvectors to reconstruct it.
%
%  You should complete the code in projectData.m
%
fprintf('\nDimension reduction on example dataset.\n\n');

%  Plot the normalized dataset (returned from pca)
plot(X_norm(:, 1), X_norm(:, 2), 'bo');
axis([-4 3 -4 3]); axis square

%  Project the data onto K = 1 dimension
K = 1;
Z = projectData(X_norm, U, K);
fprintf('Projection of the first example: %f\n', Z(1));
fprintf('\n(this value should be about 1.481274)\n\n');

X_rec  = recoverData(Z, U, K);
fprintf('Approximation of the first example: %f %f\n', X_rec(1, 1), X_rec(1, 2));
fprintf('\n(this value should be about  -1.047419 -1.047419)\n\n');

%  Draw lines connecting the projected points to the original points
hold on;
plot(X_rec(:, 1), X_rec(:, 2), 'ro');
for i = 1:size(X_norm, 1)
    drawLine(X_norm(i,:), X_rec(i,:), '--k', 'LineWidth', 1);
end
hold off

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Loading and Visualizing Face Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  The following code will load the dataset into your environment
%
fprintf('\nLoading face dataset.\n\n');

%  Load Face dataset
load ('ex7faces.mat')

%  Display the first 100 faces in the dataset
displayData(X(1:100, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =========== Part 5: PCA on Face Data: Eigenfaces  ===================
%  Run PCA and visualize the eigenvectors which are in this case eigenfaces
%  We display the first 36 eigenfaces.
%
fprintf(['\nRunning PCA on face dataset.\n' ...
         '(this might take a minute or two ...)\n\n']);

%  Before running PCA, it is important to first normalize X by subtracting 
%  the mean value from each feature
[X_norm, mu, sigma] = featureNormalize(X);

%  Run PCA
[U, S] = pca(X_norm);

%  Visualize the top 36 eigenvectors found
displayData(U(:, 1:36)');

fprintf('Program paused. Press enter to continue.\n');
pause;


%% ============= Part 6: Dimension Reduction for Faces =================
%  Project images to the eigen space using the top k eigenvectors 
%  If you are applying a machine learning algorithm 
fprintf('\nDimension reduction for face dataset.\n\n');

K = 100;
Z = projectData(X_norm, U, K);

fprintf('The projected data Z has a size of: ')
fprintf('%d ', size(Z));

fprintf('\n\nProgram paused. Press enter to continue.\n');
pause;

%% ==== Part 7: Visualization of Faces after PCA Dimension Reduction ====
%  Project images to the eigen space using the top K eigen vectors and 
%  visualize only using those K dimensions
%  Compare to the original input, which is also displayed

fprintf('\nVisualizing the projected (reduced dimension) faces.\n\n');

K = 100;
X_rec  = recoverData(Z, U, K);

% Display normalized data
subplot(1, 2, 1);
displayData(X_norm(1:100,:));
title('Original faces');
axis square;

% Display reconstructed data from only k eigenfaces
subplot(1, 2, 2);
displayData(X_rec(1:100,:));
title('Recovered faces');
axis square;

fprintf('Program paused. Press enter to continue.\n');
pause;


%% === Part 8(a): Optional (ungraded) Exercise: PCA for Visualization ===
%  One useful application of PCA is to use it to visualize high-dimensional
%  data. In the last K-Means exercise you ran K-Means on 3-dimensional 
%  pixel colors of an image. We first visualize this output in 3D, and then
%  apply PCA to obtain a visualization in 2D.

close all; close all; clc

% Re-load the image from the previous exercise and run K-Means on it
% For this to work, you need to complete the K-Means assignment first
A = double(imread('bird_small.png'));

% If imread does not work for you, you can try instead
%   load ('bird_small.mat');

A = A / 255;
img_size = size(A);
X = reshape(A, img_size(1) * img_size(2), 3);
K = 16; 
max_iters = 10;
initial_centroids = kMeansInitCentroids(X, K);
[centroids, idx] = runkMeans(X, initial_centroids, max_iters);

%  Sample 1000 random indexes (since working with all the data is
%  too expensive. If you have a fast computer, you may increase this.
sel = floor(rand(1000, 1) * size(X, 1)) + 1;

%  Setup Color Palette
palette = hsv(K);
colors = palette(idx(sel), :);

%  Visualize the data and centroid memberships in 3D
figure;
scatter3(X(sel, 1), X(sel, 2), X(sel, 3), 10, colors);
title('Pixel dataset plotted in 3D. Color shows centroid memberships');
fprintf('Program paused. Press enter to continue.\n');
pause;

%% === Part 8(b): Optional (ungraded) Exercise: PCA for Visualization ===
% Use PCA to project this cloud to 2D for visualization

% Subtract the mean to use PCA
[X_norm, mu, sigma] = featureNormalize(X);

% PCA and project the data to 2D
[U, S] = pca(X_norm);
Z = projectData(X_norm, U, 2);

% Plot in 2D
figure;
plotDataPoints(Z(sel, :), idx(sel), K);
title('Pixel dataset plotted in 2D, using PCA for dimensionality reduction');
fprintf('Program paused. Press enter to continue.\n');
pause;




Script 3: runkMeans.m
function [centroids, idx] = runkMeans(X, initial_centroids, ...
                                      max_iters, plot_progress)
%RUNKMEANS runs the K-Means algorithm on data matrix X, where each row of X
%is a single example
%   [centroids, idx] = RUNKMEANS(X, initial_centroids, max_iters, ...
%   plot_progress) runs the K-Means algorithm on data matrix X, where each 
%   row of X is a single example. It uses initial_centroids as the
%   initial centroids. max_iters specifies the total number of iterations 
%   of K-Means to execute. plot_progress is a true/false flag that 
%   indicates if the function should also plot its progress as the 
%   learning happens. This is set to false by default. runkMeans returns 
%   centroids, a Kxn matrix of the computed centroids and idx, a m x 1 
%   vector of centroid assignments (i.e. each entry in range [1..K])
%

% Set default value for plot progress
if ~exist('plot_progress', 'var') || isempty(plot_progress)
    plot_progress = false;
end

% Plot the data if we are plotting progress
if plot_progress
    figure;
    hold on;
end

% Initialize values
[m n] = size(X);
K = size(initial_centroids, 1);
centroids = initial_centroids;
previous_centroids = centroids;
idx = zeros(m, 1);

% Run K-Means
for i=1:max_iters
    
    % Output progress
    fprintf('K-Means iteration %d/%d...\n', i, max_iters);
    if exist('OCTAVE_VERSION')
        fflush(stdout);
    end
    
    % For each example in X, assign it to the closest centroid
    idx = findClosestCentroids(X, centroids);  % call the nearest-centroid search on every iteration until training ends
    
    % Optionally, plot progress here
    if plot_progress
        plotProgresskMeans(X, centroids, previous_centroids, idx, K, i);
        previous_centroids = centroids;
        fprintf('Press enter to continue.\n');
        pause;
    end
    
    % Given the memberships, compute new centroids
    centroids = computeCentroids(X, idx, K);
end

% Hold off if we are plotting progress
if plot_progress
    hold off;
end

end

Script 4: computeCentroids.m
function centroids = computeCentroids(X, idx, K)  
%COMPUTECENTROIDS returns the new centroids by computing the means of the   
%data points assigned to each centroid.  
%   centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by   
%   computing the means of the data points assigned to each centroid. It is  
%   given a dataset X where each row is a single data point, a vector  
%   idx of centroid assignments (i.e. each entry in range [1..K]) for each  
%   example, and K, the number of centroids. You should return a matrix  
%   centroids, where each row of centroids is the mean of the data points  
%   assigned to it.  
%  
  
% Useful variables  
[m n] = size(X);  
  
% You need to return the following variables correctly.  
centroids = zeros(K, n);  
  
  
% ====================== YOUR CODE HERE ======================  
% Instructions: Go over every centroid and compute mean of all points that  
%               belong to it. Concretely, the row vector centroids(i, :)  
%               should contain the mean of the data points assigned to  
%               centroid i.  
%  
% Note: You can use a for-loop over the centroids to compute this.  
%  
  
for i = 1:K
    k = find(idx == i);                     % indices of the examples assigned to centroid i (note: ==, not a single =)
    num = size(k, 1);                       % number of examples in this cluster
    centroids(i,:) = sum(X(k,:), 1) / num;  % new centroid = mean of the examples in the cluster
end
  
  
% =============================================================  
  
  
end  
Script 5: displayData.m
function [h, display_array] = displayData(X, example_width)
%DISPLAYDATA Display 2D data in a nice grid
%   [h, display_array] = DISPLAYDATA(X, example_width) displays 2D data
%   stored in X in a nice grid. It returns the figure handle h and the 
%   displayed array if requested.

% Set example_width automatically if not passed in
if ~exist('example_width', 'var') || isempty(example_width) 
	example_width = round(sqrt(size(X, 2)));
end

% Gray Image
colormap(gray);

% Compute rows, cols
[m n] = size(X);
example_height = (n / example_width);

% Compute number of items to display
display_rows = floor(sqrt(m));
display_cols = ceil(m / display_rows);

% Between images padding
pad = 1;

% Setup blank display
display_array = - ones(pad + display_rows * (example_height + pad), ...
                       pad + display_cols * (example_width + pad));

% Copy each example into a patch on the display array
curr_ex = 1;
for j = 1:display_rows
	for i = 1:display_cols
		if curr_ex > m, 
			break; 
		end
		% Copy the patch
		
		% Get the max value of the patch
		max_val = max(abs(X(curr_ex, :)));
		display_array(pad + (j - 1) * (example_height + pad) + (1:example_height), ...
		              pad + (i - 1) * (example_width + pad) + (1:example_width)) = ...
						reshape(X(curr_ex, :), example_height, example_width) / max_val;
		curr_ex = curr_ex + 1;
	end
	if curr_ex > m, 
		break; 
	end
end

% Display Image
h = imagesc(display_array, [-1 1]);

% Do not show axis
axis image off

drawnow;

end

Script 6: featureNormalize.m
function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X 
%   FEATURENORMALIZE(X) returns a normalized version of X where
%   the mean value of each feature is 0 and the standard deviation
%   is 1. This is often a good preprocessing step to do when
%   working with learning algorithms.

mu = mean(X);
X_norm = bsxfun(@minus, X, mu);

sigma = std(X_norm);
X_norm = bsxfun(@rdivide, X_norm, sigma);


% ============================================================

end


Script 8: kMeansInitCentroids.m
function centroids = kMeansInitCentroids(X, K)
%KMEANSINITCENTROIDS This function initializes K centroids that are to be 
%used in K-Means on the dataset X
%   centroids = KMEANSINITCENTROIDS(X, K) returns K initial centroids to be
%   used with the K-Means on the dataset X
%

% You should return this values correctly
centroids = zeros(K, size(X, 2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should set centroids to randomly chosen examples from
%               the dataset X
%








% =============================================================

end
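The body above is left blank as part of the exercise. A minimal sketch of one common completion (choose K distinct random examples with randperm):

% One possible body for kMeansInitCentroids (a sketch, not the only correct answer):
randidx = randperm(size(X, 1));   % randomly reorder the example indices
centroids = X(randidx(1:K), :);   % take the first K reordered examples as initial centroids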

 


Original article: http://www.cnblogs.com/meng-qing/p/4623029.html
