Iris Flowers: Neural Networks Explained in Detail


The dense interconnection of neurons working in parallel necessarily makes the network as a whole highly nonlinear. In the real world, the relationship between a system's inputs and outputs is often complex and nonlinear, and such systems are hard to model with traditional mathematical methods. A well-designed neural network, by learning automatically from input-output sample pairs, can approximate any complex nonlinear mapping to arbitrary precision. This property lets a neural network serve as a general-purpose model for multivariate nonlinear functions. The model has no analytical expression: the mapping between inputs and outputs is extracted automatically during the learning phase and stored, in distributed form, across all of the network's connections. Networks with this nonlinear-mapping ability are used in almost every field.

Dataset

The dataset used in this article is the well-known iris dataset (iris.csv). Its features are:

  • sepal_length - Continuous variable measured in centimeters.
  • sepal_width - Continuous variable measured in centimeters.
  • petal_length - Continuous variable measured in centimeters.
  • petal_width - Continuous variable measured in centimeters.
  • species - Categorical. Two species of iris flowers: Iris-virginica or
    Iris-versicolor.
import pandas
import matplotlib.pyplot as plt
import numpy as np
iris = pandas.read_csv("iris.csv")
# shuffle rows
shuffled_rows = np.random.permutation(iris.index)
iris = iris.loc[shuffled_rows,:]
print(iris.head())
'''
    sepal_length  sepal_width  petal_length  petal_width          species
80           7.4          2.8           6.1          1.9   Iris-virginica
84           6.1          2.6           5.6          1.4   Iris-virginica
33           6.0          2.7           5.1          1.6  Iris-versicolor
81           7.9          3.8           6.4          2.0   Iris-virginica
93           6.8          3.2           5.9          2.3   Iris-virginica
'''
# There are 2 species
print(iris.species.unique())
'''
['Iris-virginica' 'Iris-versicolor']
'''
iris.hist()
plt.show()
  • The distribution of each feature's values is shown below.

    [Figure: histograms of each feature in the iris dataset]
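
  • Before training, it is also worth checking that the two classes are balanced (a minimal check, not part of the original tutorial, using pandas' value_counts):

print(iris.species.value_counts())
# the two-species file should show roughly equal counts of Iris-virginica and Iris-versicolor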

Neurons

  • So far, the problems we have discussed have all been linear; for example, in a two-dimensional case we could find a boundary that cleanly separates the data. Some data, however, is not linearly separable, for example:

    [Figure: a two-dimensional dataset that is not linearly separable]

  • Neither linear regression nor logistic regression can build a function that separates this data, so we need a model that can handle nonlinear data: a neural network. Such models are composed of a series of neurons that together produce the predicted output. A neuron takes some inputs, applies a transformation function, and returns an output. Below is an example of a single neuron with five inputs: one bias unit (similar to the intercept in a linear model) and four features.

    [Figure: a single neuron with one bias unit and four feature inputs]

  • These inputs feed into the activation function h. Using the logistic function g as the activation, the inputs are transformed into an output that is a probability between 0 and 1:

    $g(z) = \frac{1}{1 + e^{-z}}$
    $h_\Theta(x) = g(\Theta^T x) = \frac{1}{1 + e^{-\Theta^T x}}$

Look closely and you will see that the logistic regression function we studied earlier can serve as a single neuron here.

  • Computing a neuron's output: the activation function is the logistic function and the parameters are initialized randomly. a1 is the output for the first row of data.
# Variables to test sigmoid_activation
iris["ones"] = np.ones(iris.shape[0])
X = iris[['ones', 'sepal_length', 'sepal_width', 'petal_length', 'petal_width']].values
y = (iris.species == 'Iris-versicolor').values.astype(int)

# The first observation
x0 = X[0]

# Initialize thetas randomly 
theta_init = np.random.normal(0,0.01,size=(5,1))
def sigmoid_activation(x, theta):
    x = np.asarray(x)
    theta = np.asarray(theta)
    return 1 / (1 + np.exp(-np.dot(theta.T, x)))

a1 = sigmoid_activation(x0, theta_init)
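
  • As a quick sanity check (a small addition, not in the original code), the neuron's output for the first observation should be a single value between 0 and 1; with near-zero random weights it sits close to 0.5:

# a1 holds one activation for the first observation
print(a1)
# with theta drawn from N(0, 0.01) the activation should be close to 0.5
assert 0 < a1[0] < 1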

Cost Function

  • The cost function is defined below. When yi = 1 and h(xi) is close to 1, log(h(xi)) is close to 0, meaning the error is nearly zero. Since the log terms are all negative, a minus sign is placed in front, so we are still minimizing the error:
    $J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[\, y_i \log(h_\Theta(x_i)) + (1 - y_i) \log(1 - h_\Theta(x_i)) \,\right]$

  • Compute the cost for a single observation:

# First observation's features and target
x0 = X[0]
y0 = y[0]

# Initialize parameters, we have 5 units and just 1 layer
theta_init = np.random.normal(0,0.01,size=(5,1))
def singlecost(X, y, theta):
    # Compute activation
    h = sigmoid_activation(X.T, theta)
    # Take the negative average of target*log(activation) + (1-target) * log(1-activation)
    cost = -np.mean(y * np.log(h) + (1-y) * np.log(1-h))
    return cost

first_cost = singlecost(x0, y0, theta_init)
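
  • Printing the single-observation cost is another sanity check (a small addition, not in the original code): with near-zero random weights the activation is close to 0.5, so the cost should land near -log(0.5) ≈ 0.69:

# with h(x0) close to 0.5, the cross-entropy cost is roughly -log(0.5) ≈ 0.69
print(first_cost)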

Compute The Gradients

  • To derive the gradient descent update we need the partial derivatives of the cost. Because the cost is a function of the activation, applying the chain rule makes this slightly more involved:

    $\frac{\partial J(\Theta)}{\partial \Theta_j} = \frac{\partial J}{\partial h_\Theta(x)} \cdot \frac{\partial h_\Theta(x)}{\partial (\Theta^T x)} \cdot \frac{\partial (\Theta^T x)}{\partial \Theta_j}$

  • The final result is shown below; note that (yi - hΘ(xi)) · hΘ(xi) · (1 - hΘ(xi)) is a scalar while xi is a vector, so δi is a vector:

    $\delta_i = (y_i - h_\Theta(x_i)) \, h_\Theta(x_i) \, (1 - h_\Theta(x_i)) \, x_i$

  • We then compute δi for each observation (each δi is a vector of length 5) and take the average of these gradient vectors:

# Initialize parameters
theta_init = np.random.normal(0,0.01,size=(5,1))

# Store the updates into this array
grads = np.zeros(theta_init.shape)

# Number of observations 
n = X.shape[0]
for j, obs in enumerate(X):
    # compute the prediction for this observation
    h = sigmoid_activation(obs, theta_init)
    # compute this observation's gradient contribution
    delta = (y[j]-h) * h * (1-h) * obs
    # accumulate the running average of the gradients
    grads += delta[:,np.newaxis]/X.shape[0]
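
  • A quick check of the result, plus a preview of how it is used (a small sketch; the 0.1 learning rate here is an assumed value matching the next section): grads holds one averaged component per parameter, and a single gradient step moves theta in that direction:

# one averaged gradient component per parameter (bias + 4 features)
print(grads.shape)   # expected: (5, 1)
# a single parameter update, matching the rule theta += grads * learning_rate used below
theta_step = theta_init + 0.1 * grads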

Two Layer Network

  • With the cost and its gradient defined, we can now run gradient descent for this two-layer network (the input layer feeding a single logistic output neuron), updating Θ each epoch until the drop in cost falls below a convergence threshold:

theta_init = np.random.normal(0,0.01,size=(5,1))

# set a learning rate
learning_rate = 0.1
# maximum number of iterations for gradient descent
maxepochs = 10000       
# convergence threshold: stop when (prevcost - cost) < convergence_thres
convergence_thres = 0.0001  

def learn(X, y, theta, learning_rate, maxepochs, convergence_thres):
    costs = []
    cost = singlecost(X, y, theta)  # compute initial cost
    costprev = cost + convergence_thres + 0.01  # set an initial costprev to get past the first convergence check
    counter = 0  # add a counter
    # Loop through until convergence
    for counter in range(maxepochs):
        grads = np.zeros(theta.shape)
        for j, obs in enumerate(X):
            h = sigmoid_activation(obs, theta)   # Compute activation
            delta = (y[j]-h) * h * (1-h) * obs   # Get delta
            grads += delta[:,np.newaxis]/X.shape[0]  # accumulate

        # update parameters 
        theta += grads * learning_rate
        counter += 1  # count
        costprev = cost  # store prev cost
        cost = singlecost(X, y, theta) # compute new cost
        costs.append(cost)
        if np.abs(costprev-cost) < convergence_thres:
            break

    plt.plot(costs)
    plt.title("Convergence of the Cost Function")
    plt.ylabel(r"J($\Theta$)")
    plt.xlabel("Iteration")
    plt.show()
    return theta

theta = learn(X, y, theta_init, learning_rate, maxepochs, convergence_thres)
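
  • To see how well the trained single-neuron model fits the training data, we can threshold its activations at 0.5 (a small addition; the 0.5 cutoff is an assumption, not part of the original tutorial):

# activations for every observation under the learned parameters
probs = sigmoid_activation(X.T, theta)
# class predictions at an assumed 0.5 cutoff
preds = (probs >= 0.5).astype(int)
print("training accuracy:", np.mean(preds == y))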

[Figure: convergence of the cost function J(Θ) over gradient descent iterations]

Neural Network

  • A neural network usually has several layers; the simplest structure has three: an input layer, a hidden layer, and an output layer.

    [Figure: a three-layer network with an input layer, one hidden layer, and an output layer]

  • The value of each hidden-layer neuron is computed as follows, where the Θ parameters are the weights on the edges:

    $a_j^{(2)} = g\!\left(\Theta_{j0}^{(1)} x_0 + \Theta_{j1}^{(1)} x_1 + \Theta_{j2}^{(1)} x_2 + \Theta_{j3}^{(1)} x_3 + \Theta_{j4}^{(1)} x_4\right), \quad j = 1, \dots, 4$

  • The final output is:

    $h_\Theta(x) = g\!\left(\Theta_{10}^{(2)} a_0^{(2)} + \Theta_{11}^{(2)} a_1^{(2)} + \Theta_{12}^{(2)} a_2^{(2)} + \Theta_{13}^{(2)} a_3^{(2)} + \Theta_{14}^{(2)} a_4^{(2)}\right)$

  • Below is the code for the feedforward network, where theta0_init holds the weights from the input layer to the hidden layer and theta1_init holds the weights from the hidden layer to the output layer.

theta0_init = np.random.normal(0,0.01,size=(5,4))
theta1_init = np.random.normal(0,0.01,size=(5,1))
def feedforward(X, theta0, theta1):
    # hidden-layer activations
    a1 = sigmoid_activation(X.T, theta0).T
    # add a bias column of ones to the hidden layer
    a1 = np.column_stack([np.ones(a1.shape[0]), a1])
    # output-layer activation
    out = sigmoid_activation(a1.T, theta1)
    return out

h = feedforward(X, theta0_init, theta1_init)
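
  • A quick shape check (a small addition, not in the original code) confirms that the network produces one probability per observation:

# one activation per row of X, returned as a row vector
print(h.shape)   # expected: (1, n), i.e. (1, 100) if the two-species file has 100 rows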

Multiple Neural Network Cost Function

  • The cost for the multilayer network is computed in the same way, using the network's output hΘ(xi) in place of the single neuron's activation:

    $J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[\, y_i \log(h_\Theta(x_i)) + (1 - y_i) \log(1 - h_\Theta(x_i)) \,\right]$

theta0_init = np.random.normal(0,0.01,size=(5,4))
theta1_init = np.random.normal(0,0.01,size=(5,1))

# X and y are in memory and should be used as inputs to multiplecost()
def multiplecost(X, y, theta0, theta1):
    # feed through network
    h = feedforward(X, theta0, theta1) 
    # compute error
    inner = y * np.log(h) + (1-y) * np.log(1-h)
    # negative of average error
    return -np.mean(inner)

c = multiplecost(X, y, theta0_init, theta1_init)
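
  • With all weights drawn from N(0, 0.01), every activation starts near 0.5, so the initial multilayer cost should again be close to -log(0.5) ≈ 0.69 (a small sanity check, not part of the original code):

# near-zero initial weights give activations around 0.5, so the cost starts near 0.69
print(c)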

Backpropagation

  • Backpropagation: the output-layer error is propagated backwards through the hidden layer to obtain gradients for both weight matrices (theta1 and theta0), which are then used to update the weights each epoch. The class below bundles feedforward, the cost, and backpropagation together.
# Use a class for this model; it's good practice and condenses the code
class NNet3:
    def __init__(self, learning_rate=0.5, maxepochs=1e4, convergence_thres=1e-5, hidden_layer=4):
        self.learning_rate = learning_rate
        self.maxepochs = int(maxepochs)
        self.convergence_thres = convergence_thres
        self.hidden_layer = int(hidden_layer)

    def _multiplecost(self, X, y):
        # feed through network
        l1, l2 = self._feedforward(X) 
        # compute error
        inner = y * np.log(l2) + (1-y) * np.log(1-l2)
        # negative of average error
        return -np.mean(inner)

    def _feedforward(self, X):
        # feedforward to the first layer
        l1 = sigmoid_activation(X.T, self.theta0).T
        # add a column of ones for bias term
        l1 = np.column_stack([np.ones(l1.shape[0]), l1])
        # activation units are then inputted to the output layer
        l2 = sigmoid_activation(l1.T, self.theta1)
        return l1, l2

    def predict(self, X):
        _, y = self._feedforward(X)
        return y

    def learn(self, X, y):
        nobs, ncols = X.shape
        self.theta0 = np.random.normal(0,0.01,size=(ncols,self.hidden_layer))
        self.theta1 = np.random.normal(0,0.01,size=(self.hidden_layer+1,1))

        self.costs = []
        cost = self._multiplecost(X, y)
        self.costs.append(cost)
        costprev = cost + self.convergence_thres+1  # set an initial costprev to get past the first convergence check
        counter = 0  # initialize a counter

        # Loop through until convergence
        for counter in range(self.maxepochs):
            # feedforward through network
            l1, l2 = self._feedforward(X)

            # Start Backpropagation
            # Compute gradients
            l2_delta = (y-l2) * l2 * (1-l2)
            l1_delta = l2_delta.T.dot(self.theta1.T) * l1 * (1-l1)

            # Update parameters by averaging gradients and multiplying by the learning rate
            self.theta1 += l1.T.dot(l2_delta.T) / nobs * self.learning_rate
            self.theta0 += X.T.dot(l1_delta)[:,1:] / nobs * self.learning_rate

            # Store costs and check for convergence
            counter += 1  # Count
            costprev = cost  # Store prev cost
            cost = self._multiplecost(X, y)  # get next cost
            self.costs.append(cost)
            if np.abs(costprev-cost) < self.convergence_thres and counter > 500:
                break

# Set a learning rate
learning_rate = 0.5
# Maximum number of iterations for gradient descent
maxepochs = 10000       
# Costs convergence threshold: stop when (prevcost - cost) < convergence_thres
convergence_thres = 0.00001  
# Number of hidden units
hidden_units = 4

# Initialize model 
model = NNet3(learning_rate=learning_rate, maxepochs=maxepochs,
              convergence_thres=convergence_thres, hidden_layer=hidden_units)
# Train model
model.learn(X, y)

# Plot costs
plt.plot(model.costs)
plt.title("Convergence of the Cost Function")
plt.ylabel(r"J($\Theta$)")
plt.xlabel("Iteration")
plt.show()

[Figure: convergence of the cost function for the three-layer network]

Splitting Data

# First 70 rows (the data was shuffled earlier) go to X_train and y_train
# Last 30 rows go to X_test and y_test
X_train = X[:70]
y_train = y[:70]

X_test = X[-30:]
y_test = y[-30:]
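
A quick check (a small addition, not in the original code) that the split has the expected sizes and that both classes appear in each part:

# expected shapes: (70, 5) and (30, 5)
print(X_train.shape, X_test.shape)
# fraction of Iris-versicolor in each split; both should be strictly between 0 and 1
print(y_train.mean(), y_test.mean())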

Predicting Iris Flowers

from sklearn.metrics import roc_auc_score
# Set a learning rate
learning_rate = 0.5
# Maximum number of iterations for gradient descent
maxepochs = 10000       
# Costs convergence threshold, ie. (prevcost - cost) > convergence_thres
convergence_thres = 0.00001  
# Number of hidden units
hidden_units = 4

# Initialize model 
model = NNet3(learning_rate=learning_rate, maxepochs=maxepochs,
              convergence_thres=convergence_thres, hidden_layer=hidden_units)
model.learn(X_train, y_train)

yhat = model.predict(X_test)[0]

auc = roc_auc_score(y_test, yhat)
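
Printing the AUC (and, as an assumed extra, the accuracy at a 0.5 probability cutoff) shows how well the trained network separates the two species on the held-out rows:

# area under the ROC curve on the 30 held-out observations
print("AUC:", auc)
# accuracy at an assumed 0.5 cutoff
print("accuracy:", np.mean((yhat >= 0.5).astype(int) == y_test))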

Original article: http://blog.csdn.net/zm714981790/article/details/51251759
