This post follows up on the previous introduction to the decision tree algorithm.
Last time we worked through the whole decision tree method and got a fairly clear picture of how a tree is constructed. This set of reading notes focuses on applying a decision tree and on drawing one with matplotlib.
matplotlib provides an annotation tool, annotations, which is very similar to the one in MATLAB [though personally I find plotting in MATLAB more convenient to operate]. It is a powerful tool.
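As a warm-up, here is a tiny standalone sketch of a typical annotate call (made up purely for illustration, independent of the tree-plotting code that follows): a text label is placed at xytext and an arrow is drawn toward xy.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
# put the label at xytext and draw an arrow pointing at xy
ax.annotate('a point', xy=(0.5, 0.5), xytext=(0.2, 0.8),
            arrowprops=dict(arrowstyle='->'))
plt.show()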
First, let's draw a single node of the decision tree:
# coding=utf-8
import matplotlib.pyplot as plt

# box styles for decision nodes and leaf nodes, plus the arrow style
decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    # draw nodeTxt in a box at centerPt, with an arrow coming from parentPt
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType,
                            arrowprops=arrow_args)
def createPlot():
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    createPlot.ax1 = plt.subplot(111, frameon=False)
    plt.xlabel('abs')   # axis labels for the demo figure
    plt.ylabel('c')
    plotNode('a decision node', (0.5, 0.1), (0.1, 0.5), decisionNode)
    plotNode('a leaf node', (0.8, 0.1), (0.3, 0.8), leafNode)
    plt.show()
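To reproduce the figure, the script can simply be run with a call to createPlot() at the bottom of the file, for example:
if __name__ == '__main__':
    createPlot()   # draws one decision node and one leaf node, each with an arrow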
Running the above in PowerShell produces the following figure.
Next we need to construct the full annotated tree.
The natural idea is to build each node recursively, and that is indeed how it works.
First we compute the depth of the annotated tree and the number of leaf nodes, which makes it easy to start and stop the recursion.
def getNumLeafs(myTree):
    # count the number of leaf nodes in the tree
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        # check whether the current child is itself a subtree; if so, recurse
        if type(secondDict[key]).__name__ == 'dict':
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    # compute the number of split levels below (and including) this node
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        # check whether the current child is itself a subtree; if so, recurse
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth
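As a quick sanity check, the two functions can be tried on a small tree written by hand in the nested-dict format from the previous post (the tree below is just a sample for illustration):
# a hand-written sample tree: keys are splitting features,
# values are either class labels (leaves) or nested sub-trees
myTree = {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}

print(getNumLeafs(myTree))    # 3 leaf nodes
print(getTreeDepth(myTree))   # depth 2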
Below is the main code for drawing the annotated tree:
def plotMidText(cntrPt, parentPt, txtString):
    # plot the text label (the feature value) on the edge between parent and child
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):
    # the first key tells you what feature was split on
    numLeafs = getNumLeafs(myTree)   # this determines the x width of this tree
    depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]   # the text label for this node
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            # the child is another dictionary, i.e. a subtree: recurse
            plotTree(secondDict[key], cntrPt, str(key))
        else:
            # the child is a leaf node: plot it directly
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD

def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)   # no ticks
    #createPlot.ax1 = plt.subplot(111, frameon=False)   # ticks for demo purposes
    plotTree.totalW = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()
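With the same sample tree as above, the whole figure can be drawn in one call; totalW and totalD are what spread the leaves evenly along the x axis and the split levels along the y axis:
createPlot(myTree)   # lays out and draws the sample tree defined earlier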
An ophthalmologist decides which type of contact lenses to fit based on a patient's characteristics. This is an expert-system style problem, and a decision tree can be used to classify the cases and derive the decision criteria.
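A rough sketch of that example, assuming the ID3 code from the previous post (createTree and friends) lives in a module called trees, and that the book's tab-separated lenses.txt data file sits in the working directory:
import trees   # hypothetical module holding the ID3 code from the previous post

fr = open('lenses.txt')
lenses = [inst.strip().split('\t') for inst in fr.readlines()]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
lensesTree = trees.createTree(lenses, lensesLabels)
createPlot(lensesTree)   # visualize the resulting tree with the code above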
Since building a decision tree is time-consuming, there is no need to regenerate it every time we want to use it. The book introduces pickle serialization as a way to store an already constructed tree.
The basic usage of the pickle module is shown below.
Storing the decision tree with the pickle module:
def storeTree(inputTree, filename):
    # serialize the tree to disk so it does not have to be rebuilt every time
    import pickle
    fw = open(filename, 'wb')
    pickle.dump(inputTree, fw)
    fw.close()

def grabTree(filename):
    # load a previously stored tree back from disk
    import pickle
    fr = open(filename, 'rb')
    return pickle.load(fr)
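Using these two helpers might look like this, with the sample tree from before and an arbitrary file name:
storeTree(myTree, 'classifierStorage.txt')   # write the tree out once
print(grabTree('classifierStorage.txt'))     # load it back later and reuse it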
The tree is written to the given file in pickle's serialized form.
Next time the stored tree can be loaded and used directly for classification tasks. The tree-construction algorithm in this chapter is ID3; more algorithms will be studied later. Keep it up~
Original article: http://blog.csdn.net/iboxty/article/details/45083425