Life is like a play!!!!
I. Theory
Input: the number of clusters k and a data set containing n objects.
Output: k clusters that minimize the squared-error criterion (written out below the steps).
Algorithm steps:
1. Choose an initial center for each cluster, giving K initial cluster centers.
2. Assign every sample in the data set to the nearest cluster, according to the minimum-distance rule.
3. Recompute each cluster center as the mean of the samples currently assigned to it.
4. Repeat steps 2 and 3 until the cluster centers no longer change.
5. Stop; the result is K clusters.
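For reference, the squared-error criterion mentioned above can be written as follows (standard textbook notation, not spelled out in the original post), where C_j is the j-th cluster and mu_j its mean:

E = \sum_{j=1}^{k} \sum_{x \in C_j} \lVert x - \mu_j \rVert^2,
\qquad
\mu_j = \frac{1}{|C_j|} \sum_{x \in C_j} x .

Step 2 is optimal for fixed centers and step 3 is optimal for fixed assignments, so E never increases and the iteration converges to a local minimum.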
II. Implementation
Main script (save as main.m; the console output in Section III shows it being run as "main"):

% K-means main script
k = 4;
x = [ 1.2126  2.1338  0.5115  0.2044
     -0.9316  0.7634  0.0125 -0.2752
     -2.9593  0.1813 -0.8833  0.8505
      3.1104 -2.5393 -0.0588  0.1808
     -3.1141 -0.1244 -0.6811  0.9891
     -3.2008  0.0024 -1.2901  0.9748
     -1.0777  1.1438  0.1996  0.0139
     -2.7213 -0.1909  0.1184  0.1013
     -1.1467  1.3820  0.1427 -0.2239
      1.1497  1.9414 -0.3035  0.3464
      2.6993 -2.2556  0.1637 -0.0139
     -3.0311  0.1417  0.0888  0.1791
     -2.8403 -0.1809 -0.0965  0.0817
      1.0118  2.0372  0.1638 -0.0349
     -0.8968  1.0260 -0.1013  0.2369
      1.1112  1.8802 -0.0291 -0.1506
      1.1907  2.2041 -0.1060  0.2167
     -1.0114  0.8029 -0.1317  0.0153
     -3.1715  0.1041 -0.3338  0.0321
      0.9718  1.9634  0.0305 -0.3259
     -1.0377  0.8889 -0.2834  0.2301
     -0.8989  1.0185 -0.0289  0.0213
     -2.9815 -0.4798  0.2245  0.3085
     -0.8576  0.9231 -0.2752 -0.0091
     -3.1356  0.0026 -1.2138  0.7733
      3.4470 -2.2418  0.2014 -0.1556
      2.9143 -1.7951  0.1992 -0.2146
      3.4961 -2.4969 -0.0121  0.1315
     -2.9341 -0.1071 -0.7712  0.8911
     -2.8105 -0.0884 -0.0287 -0.1279
      3.1006 -2.0677 -0.2002 -0.1303
      0.8209  2.1724  0.1548  0.3516
     -2.8500  0.3196  0.1359 -0.1179
     -2.8679  0.1365 -0.5702  0.7626
     -2.8245 -0.1312  0.0881 -0.1305
     -0.8322  1.3014 -0.3837  0.2400
     -2.6063  0.1431  0.1880  0.0487
     -3.1341 -0.0854 -0.0359 -0.2080
      0.6893  2.0854 -0.3250 -0.1007
      1.0894  1.7271 -0.0176  0.6553
     -2.9851 -0.0113  0.0666 -0.0802
      1.0371  2.2724  0.1044  0.3982
     -2.8032 -0.2737 -0.7391  1.0277
     -2.6856  0.0619 -1.1066  1.0485
     -2.9445 -0.1602 -0.0019  0.0093
      1.2004  2.1302 -0.1650  0.3413
      3.2505 -1.9279  0.4462 -0.2405
     -1.2080  0.8222  0.1671  0.1576
     -2.8274  0.1515 -0.9636  1.0675
      2.8190 -1.8626  0.2702  0.0026
      1.0507  1.7776 -0.1421  0.0999
     -2.8946  0.1446 -0.1645  0.3071
     -1.0105  1.0973  0.0241  0.1628
     -2.9138 -0.3404  0.0627  0.1286
     -3.0646 -0.0008  0.3819 -0.1541
      1.2531  1.9830 -0.0774  0.2413
      1.1486  2.0440 -0.0582 -0.0650
     -3.1401 -0.1447 -0.6580  0.9562
     -2.9591  0.1598 -0.6581  1.1937
     -2.9219 -0.3637 -0.1538 -0.2085
      2.8948 -2.2745  0.2332 -0.0312
     -3.2972 -0.0219 -0.0288 -0.1436
     -1.2737  0.7648  0.0643  0.0858
     -1.0690  0.8108 -0.2723  0.3231
     -0.5908  0.7508 -0.5456  0.0190
      0.5808  2.0573 -0.1658  0.1709
      2.8227 -2.2461  0.2255 -0.3684
      0.6174  1.7654 -0.3999  0.4125
      3.2587 -1.9310  0.2021  0.0800
      1.0999  1.8852 -0.0475 -0.0585
     -2.7395  0.2585 -0.8441  0.9987
     -1.2223  1.0542 -0.2480 -0.2795
     -2.9212 -0.0605 -0.0259  0.2591
      3.1598 -2.2631  0.1746  0.1485
      0.8476  1.8760 -0.2894 -0.0354
      2.9205 -2.2418  0.4137 -0.2499
      2.7656 -2.1768  0.0719 -0.1848
     -0.8698  1.0249 -0.2084 -0.0008
     -1.1444  0.7787 -0.4958  0.3676
     -1.0711  1.0450 -0.0477 -0.4030
      0.5350  1.8110 -0.0377  0.1622
      0.9076  1.8845 -0.1121  0.5700
     -2.7887 -0.2119  0.0566  0.0120
     -1.2567  0.9274  0.1104  0.1581
     -2.9946 -0.2086 -0.8169  0.6662
      1.0536  1.9818 -0.0631  0.2581
     -2.8465 -0.2222  0.2745  0.1997
     -2.8516  0.1649 -0.7566  0.8616
     -3.2470  0.0770  0.1173 -0.1092
     -2.9322 -0.0631 -0.0062 -0.0511
     -2.7919  0.0438 -0.1935 -0.5023
      0.9894  1.9475 -0.0146 -0.0390
     -2.9659 -0.1300  0.1144  0.3410
     -2.7322 -0.0427 -1.0758  0.9718
     -1.4852  0.8592 -0.0503 -0.1373
      2.8845 -2.1465 -0.0533 -0.1044
     -3.1470  0.0536  0.1073  0.3323
      2.9423 -2.1572  0.0505  0.1180
     -3.0683  0.3434 -0.6563  0.8960
      1.3215  2.0951 -0.1557  0.3994
     -0.7681  1.2075 -0.2781  0.2372
     -0.6964  1.2360 -0.3342  0.1662
     -0.6382  0.8204 -0.2587  0.3344
     -3.0233 -0.1496 -0.2607 -0.0400
     -0.8952  0.9872  0.0019  0.3138
     -0.8172  0.6814 -0.0691  0.1009
     -3.3032  0.0571 -0.0243 -0.1405
      0.7810  1.9013 -0.3996  0.7374
     -0.9030  0.8646 -0.1498  0.1112
     -0.8461  0.9261 -0.1295 -0.0727
      2.8182 -2.0818 -0.1430 -0.0547
      2.9295 -2.3846 -0.0244 -0.1400
      1.0587  2.2227 -0.1250  0.0957
      3.0755 -1.7365 -0.0511  0.1500
     -1.3076  0.8791 -0.3720  0.0331
     -2.8252 -0.0366 -0.6790  0.7374
     -2.6551 -0.1875  0.3222  0.0483
     -2.9659 -0.1585  0.4013 -0.1402
     -3.2859 -0.1546  0.0104 -0.1781
     -0.6679  1.1999  0.1396 -0.3195
     -1.0205  1.2226  0.1850  0.0050
     -3.0091 -0.0186 -0.9111  0.9663
     -3.0339  0.1377 -0.9662  1.0664
      0.8952  1.9594 -0.3221  0.3579
     -2.8481  0.1963 -0.1428  0.0382
      1.0796  2.1353 -0.0792  0.6491
     -0.8732  0.8985 -0.0049  0.0068
      1.0620  2.1478 -0.1275  0.3553
      3.4509 -1.9975  0.1285 -0.1575
     -3.2280 -0.0640 -1.1513  0.8235
     -0.6654  0.9402  0.0577 -0.0175
     -3.2100  0.2762 -0.1053  0.0626
      3.0793 -2.0043  0.2948  0.0411
      1.3596  1.9481 -0.0167  0.3958
     -3.1267  0.1801  0.2228  0.1179
     -0.7979  0.9892 -0.2673  0.4734
      2.5580 -1.7623 -0.1049 -0.0521
     -0.9172  1.0621 -0.0826  0.1501
     -0.7817  1.1658  0.1922  0.0803
      3.1747 -2.1442  0.1472 -0.3411
      2.8476 -1.8056 -0.0680  0.1536
     -0.6175  1.4349 -0.1970 -0.1085
      0.7308  1.9656  0.2602  0.2801
     -1.0310  1.0553 -0.2928 -0.1647
     -2.9251 -0.2095  0.0582 -0.1813
     -0.9827  1.2720 -0.2225  0.2563
     -1.0830  1.1158 -0.0405 -0.1181
     -2.8744  0.0195 -0.3811  0.1455
      3.1663 -1.9241  0.0455  0.1684
     -1.0734  0.7681 -0.4725 -0.1976 ];
[n,d] = size(x);
bn = max(1, round(n/k*rand));   % random index within roughly the first 1/k of the rows (max(1,...) added so the index cannot be 0)
% inside the brackets, semicolons separate rows and commas separate columns
nc = [x(bn,:); x(2*bn,:); x(3*bn,:); x(4*bn,:)];   % initial cluster centers
% x(bn,:) picks one whole row of the data as a cluster center

% x: data matrix, k: number of clusters, nc: the k initial cluster centers
% cid: cluster label of every sample, nr: size of each cluster, centers: final cluster centers
[cid,nr,centers] = kmeans(x,k,nc)   % call the kmeans function; no trailing semicolon, so the results are echoed (see Section III)
% note from the original post: the loop bound should not be hard-coded as 150; it should be n = size(x,1)
for i = 1:n
    if cid(i) == 1
        plot(x(i,1),x(i,2),'r*')    % cluster 1 in red
        % plot(x(i,2),'r*')         % alternative kept from the original: plot the 2nd coordinate only
        hold on
    elseif cid(i) == 2
        plot(x(i,1),x(i,2),'b*')    % cluster 2 in blue
        % plot(x(i,2),'b*')
        hold on
    elseif cid(i) == 3
        plot(x(i,1),x(i,2),'g*')    % cluster 3 in green
        % plot(x(i,2),'g*')
        hold on
    elseif cid(i) == 4
        plot(x(i,1),x(i,2),'k*')    % cluster 4 in black
        % plot(x(i,2),'k*')
        hold on
    end
end
strt = ['red * = cluster 1; blue * = cluster 2; green * = cluster 3; black * = cluster 4'];
text(-4,-3.6,strt);
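As an aside, the nested branches in the plotting loop above can be collapsed by indexing a list of marker colors with cid. A minimal sketch under the same variable names (my rewrite, not part of the original post):

colors = ['r' 'b' 'g' 'k'];                       % one marker color per cluster
figure; hold on
for i = 1:n
    plot(x(i,1), x(i,2), [colors(cid(i)) '*'])    % choose the color from the cluster label
end
text(-4, -3.6, 'red * = cluster 1; blue * = cluster 2; green * = cluster 3; black * = cluster 4');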
Clustering routine (the original post labels it BasicKMeans.m, but since the main script calls kmeans(x,k,nc) the file must actually be saved as kmeans.m, where it shadows the Statistics Toolbox function of the same name if that toolbox is installed):

% x: data matrix, k: number of clusters, nc: the k initial cluster centers
% cid: cluster label of every sample, nr: size of each cluster, centers: final cluster centers
function [cid,nr,centers] = kmeans(x,k,nc)
[n,d] = size(x);
% cid holds the cluster assignment of every sample
cid = zeros(1,n);
% Make this different to get the loop started.
oldcid = ones(1,n);
% The number in each cluster.
nr = zeros(1,k);
% Set up maximum number of iterations.
maxgn = 100;
iter = 1;
% Assign every sample to its nearest center, then recompute the centers. Ideally the loop
% would stop once the centers (almost) stop changing; this version simply runs a fixed
% number of passes.
while iter < maxgn
    for i = 1:n
        % repmat(A,m,n) tiles A into an m-by-n block matrix;
        % a.^b is element-wise power; sum(X,2) sums each row (sum(X) sums each column).
        dist = sum((repmat(x(i,:),k,1)-nc).^2,2);   % squared distance to every center
        [m,ind] = min(dist);                        % index of the nearest center
        cid(i) = ind;                               % store the current assignment
    end
    % Collect the samples of each cluster and use their mean as the next center
    for i = 1:k
        ind = find(cid==i);          % indices of the samples currently in cluster i
        nc(i,:) = mean(x(ind,:));    % mean(a) averages down the columns; mean(a,2) across rows
        nr(i) = length(ind);         % number of samples in cluster i
    end
    iter = iter + 1;
end

% Now check each observation to see if the error can be minimized some more.
% Loop through all points.
maxiter = 2;
iter = 1;
move = 1;
% move ~= 0 is a logical test: keep looping while the previous pass moved at least one point
while iter < maxiter & move ~= 0
    move = 0;
    % re-examine every sample, looking for a reassignment that lowers the error
    for i = 1:n
        dist = sum((repmat(x(i,:),k,1)-nc).^2,2);
        r = cid(i);   % the cluster the sample currently belongs to
        % nr holds the size of each cluster; ./ and .* are element-wise division and
        % multiplication. The adjustment the original post found unclear works like this:
        % moving a point x into a cluster of size n_j with mean c_j increases that cluster's
        % SSE by n_j/(n_j+1)*||x - c_j||^2, so the squared distances are weighted by nr./(nr+1).
        dadj = nr./(nr+1).*dist';    % adjusted (SSE-increase) distance to every cluster
        [m,ind] = min(dadj);         % cluster with the smallest adjusted distance
        if ind ~= r                  % if it is not the current cluster, move the point
            cid(i) = ind;               % record the new assignment
            ic = find(cid == ind);      % and recompute the center of the cluster it joined
            nc(ind,:) = mean(x(ic,:));
            move = 1;
        end
    end
    iter = iter + 1;
end
centers = nc;
if move == 0
    disp('No points were moved after the initial clustering procedure.')
else
    disp('Some points were moved after the initial clustering procedure.')
end
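A comment in the original code recalls that the iteration ought to stop once the assignments essentially stop changing, and oldcid looks intended for exactly that, but it is never used: the first phase always runs the full number of passes. A minimal sketch of such a convergence test, reusing the same variable names (my guess at the intent, not the original code):

iter = 1;
while iter < maxgn && any(cid ~= oldcid)    % stop as soon as no assignment changed
    oldcid = cid;                           % remember the previous assignment
    for i = 1:n
        dist = sum((repmat(x(i,:),k,1)-nc).^2,2);
        [~,cid(i)] = min(dist);             % nearest center
    end
    for j = 1:k
        ind = find(cid==j);
        if ~isempty(ind)                    % guard against an empty cluster
            nc(j,:) = mean(x(ind,:),1);
            nr(j) = length(ind);
        end
    end
    iter = iter + 1;
end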
III. Results
The console printed the result below on its own, which puzzled me at first; it happens because the call [cid,nr,centers] = kmeans(x,k,nc) in the main script is not terminated with a semicolon, so MATLAB echoes the returned values.
>> main
No points were moved after the initial clustering procedure.
cid =
  Columns 1 through 22
    2  3  1  4  1  1  3  1  3  2  4  1  1  2  3  2  2  3  1  2  3  3
  Columns 23 through 44
    1  3  1  4  4  4  1  1  4  2  1  1  1  3  1  1  2  2  1  2  1  1
  Columns 45 through 66
    1  2  4  3  1  4  2  1  3  1  1  2  2  1  1  1  4  1  3  3  3  2
  Columns 67 through 88
    4  2  4  2  1  3  1  4  2  4  4  3  3  3  2  2  1  3  1  2  1  1
  Columns 89 through 110
    1  1  1  2  1  1  3  4  1  4  1  2  3  3  3  1  3  3  1  2  3  3
  Columns 111 through 132
    4  4  2  4  3  1  1  1  1  3  3  1  1  2  1  2  3  2  4  1  3  1
  Columns 133 through 150
    4  2  1  3  4  3  3  4  4  3  2  3  1  3  3  1  4  3
nr =
    55    30    40    25
centers =
   -2.962918181818183  -0.023009090909091  -0.297021818181818   0.341136363636364
    0.995233333333333   1.997873333333334  -0.078486666666667   0.229650000000000
   -0.956882500000000   0.997800000000000  -0.123667500000000   0.049320000000000
    3.023444000000000  -2.098592000000001   0.102096000000000  -0.050580000000000
The figure looks different on every run, and strangest of all, in the second run two of the plotted clusters even overlapped. This is most likely a consequence of the random initial centers: each run can converge to a different local optimum, and a poor initialization can split one true group across two clusters, so their points overlap in the plot.
IV. Analysis
K-means is not well suited to discrete (categorical) attributes, but it clusters continuous attributes well.
Different initial values can lead to different results. A common remedy is to try several different initializations and keep the best, though this costs extra time and computation (see the sketch below).
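A minimal sketch of such a multi-restart run, built on the kmeans function from Section II (the restart count and the SSE bookkeeping are my additions, not part of the original post):

best_sse = inf;
for t = 1:10                                    % 10 random restarts (arbitrary choice)
    perm = randperm(n);
    nc0 = x(perm(1:k),:);                       % k distinct random rows as initial centers
    [cid_t,nr_t,centers_t] = kmeans(x,k,nc0);
    sse = 0;                                    % total within-cluster squared error
    for j = 1:k
        ind = find(cid_t==j);
        sse = sse + sum(sum((x(ind,:) - repmat(centers_t(j,:),length(ind),1)).^2));
    end
    if sse < best_sse                           % keep the best run so far
        best_sse = sse;
        cid = cid_t; nr = nr_t; centers = centers_t;
    end
end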
The number of clusters K is not known in advance. Algorithms such as ISODATA arrive at a reasonable K by automatically merging and splitting clusters. What ISODATA shares with K-means: the cluster centers are still obtained by iteratively recomputing sample means. Where it differs: during the iterations a cluster may be split in two or two clusters merged into one, i.e. the method is "self-organizing", which gives it a heuristic character. Because it adjusts itself, it needs a number of control parameters, such as the expected number of clusters K, the minimum number of samples per cluster, a between-center distance threshold, the maximum number of cluster pairs L that may be merged per iteration, and the allowed number of iterations I.
This post draws on Lao Wang's lecture slides.
Original post: http://www.cnblogs.com/hxsyl/p/4054583.html