Tags: machine learning, logistic, Newton's method, logistic regression, linear classification
In the regression problems discussed above, the outputs were all continuous. What if we need classification instead, i.e. the output is a discrete value?

Hypothesis:

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$$

where:

$$g(z) = \frac{1}{1 + e^{-z}}, \qquad g'(z) = g(z)\bigl(1 - g(z)\bigr)$$

From this we know: when $\theta^T x \ge 0$, $h_\theta(x) \ge 0.5$ and we predict $y = 1$; otherwise we predict $y = 0$.

Deriving the iteration: maximize the log-likelihood

$$\ell(\theta) = \sum_{i=1}^{m} \Bigl[ y^{(i)} \log h_\theta(x^{(i)}) + \bigl(1 - y^{(i)}\bigr) \log\bigl(1 - h_\theta(x^{(i)})\bigr) \Bigr]$$

Result:

$$\theta_j := \theta_j + \alpha \bigl( y^{(i)} - h_\theta(x^{(i)}) \bigr) x_j^{(i)}$$

Note: the iteration here is the incremental (per-example) gradient update.

The iteration above converges slowly. When solving by maximum likelihood we can instead use Newton's method, i.e.

$$\theta := \theta - H^{-1} \nabla_\theta \ell(\theta)$$

Derivation: each step maximizes the second-order Taylor approximation of $\ell(\theta)$ around the current $\theta$.

Definition: $H$ is the Hessian matrix, with entries $H_{jk} = \dfrac{\partial^2 \ell(\theta)}{\partial \theta_j \, \partial \theta_k}$.

Application:
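The post does not implement the Newton update, but it can be sketched in Octave under the same conventions as the listings below (a sketch only: the function name `newtonMethod` is hypothetical, and `x` is assumed to already include the intercept column):

```matlab
function theta = newtonMethod(x, y)
% Newton's method for the logistic regression cost J (a sketch)
[m, n] = size(x);
theta = zeros(n, 1);
for iter = 1:10                               % usually converges in a handful of steps
    h = 1.0 ./ (1.0 + exp(-x * theta));       % sigmoid of all m examples
    grad = (1/m) * x' * (h - y);              % gradient of J
    H = (1/m) * x' * diag(h .* (1 - h)) * x;  % Hessian of J
    theta = theta - H \ grad;                 % Newton update
end
end
```

Because the Hessian is only n-by-n (here 3-by-3), each step is cheap, and far fewer iterations are needed than the 150000 passes used for gradient descent below.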
Write the corresponding loop yourself: given the number of iterations and the learning rate alpha, run incremental gradient descent.
Main functions and their purposes:
%% part0: preparation
data = load('ex2data1.txt');
x = data(:,[1,2]);
y = data(:,3);
pos = find(y==1);   % indices of positive examples
neg = find(y==0);   % indices of negative examples
x1 = x(:,1);
x2 = x(:,2);
plot(x(pos,1),x(pos,2),'r*',x(neg,1),x(neg,2),'co');
pause;
%% part1: gradient descent and computing the cost J
[m,n] = size(x);
x = [ones(m,1),x];   % add the intercept column
theta = zeros(3,1);
J = computeCost(x,y,theta);
theta = gradientDecent(x, y, theta);
X = 25:100;
% decision boundary theta1 + theta2*x1 + theta3*x2 = 0, solved for x2
Y = ( -theta(1,1) - theta(2,1)*X)/theta(3,1);
plot(x(pos,2),x(pos,3),'r*',x(neg,2),x(neg,3),'co', X, Y, 'b');
pause;
function theta = gradientDecent(x, y, theta)
%% gradientDecent: update theta by gradient ascent on the log-likelihood
m = size(x,1);
alpha = 0.001;           % learning rate
for iter = 1:150000      % number of passes
    for j = 1:3          % one update per parameter
        dec = 0;
        for i = 1:m
            dec = dec + (y(i) - sigmoid(x(i,:)*theta))*x(i,j);
        end
        theta(j,1) = theta(j,1) + dec*alpha/m;
    end
end
end
function g = sigmoid(z)
%% SIGMOID: compute the sigmoid function (element-wise)
g = 1.0 ./ (1.0 + exp(-z));
end
function J = computeCost(x, y, theta)
%% computeCost: cross-entropy cost J(theta)
m = size(x,1);
J = 0;
for i = 1:m
    J = J + y(i)*log(sigmoid(x(i,:)*theta)) + (1 - y(i))*log(1 - sigmoid(x(i,:)*theta));
end
J = (-1/m)*J;
end
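The triple loop above is slow in Octave/MATLAB; the same batch update can be vectorized (a sketch, equivalent to the loop version, with the sigmoid inlined element-wise):

```matlab
% Vectorized form of the same batch gradient update (a sketch)
m = size(x,1);
alpha = 0.001;
for iter = 1:150000
    h = 1.0 ./ (1.0 + exp(-x*theta));          % sigmoid of all examples at once
    theta = theta + (alpha/m) * x' * (y - h);  % one update for all parameters
end
```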
This reports the cost.

Method 2: instead of a hand-written loop, use the built-in optimizer fminunc. Main functions and their purposes:
%% part0: preparation
data = load('ex2data1.txt');
x = data(:,[1,2]);
y = data(:,3);
pos = find(y==1);   % indices of positive examples
neg = find(y==0);   % indices of negative examples
x1 = x(:,1);
x2 = x(:,2);
plot(x(pos,1),x(pos,2),'r*',x(neg,1),x(neg,2),'co');
pause;
%% part1: optimize with fminunc and compute the cost J
[m,n] = size(x);
x = [ones(m,1),x];   % add the intercept column
theta = zeros(3,1);
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Run fminunc to obtain the optimal theta
% This function will return theta and the cost
[theta, cost] = ...
    fminunc(@(t)(computeCost(x,y,t)), theta, options);
X = 25:100;
% decision boundary theta1 + theta2*x1 + theta3*x2 = 0, solved for x2
Y = ( -theta(1,1) - theta(2,1)*X)/theta(3,1);
plot(x(pos,2),x(pos,3),'r*',x(neg,2),x(neg,3),'co', X, Y, 'b');
pause;
function g = sigmoid(z)
%% SIGMOID: compute the sigmoid function (element-wise)
g = 1.0 ./ (1.0 + exp(-z));
end
function [J,grad] = computeCost(x, y, theta)
%% computeCost: cross-entropy cost J and its gradient (vectorized)
m = size(x,1);
hx = sigmoid(x * theta);
J = (1.0/m) * sum(-y .* log(hx) - (1.0 - y) .* log(1.0 - hx));
grad = (1.0/m) .* x' * (hx - y);
end
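Once theta has been fitted, predictions follow from the decision rule h(x) >= 0.5, i.e. x*theta >= 0 (a sketch; the accuracy printout assumes the same x and y as above):

```matlab
% Predict class 1 when sigmoid(x*theta) >= 0.5
p = sigmoid(x * theta) >= 0.5;
fprintf('Train accuracy: %f\n', mean(double(p == y)) * 100);
```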
Copyright notice: this article is the blogger's original work; reproduction without the blogger's permission is prohibited.
Classification and logistic regression
Original post: http://blog.csdn.net/neu_chenguangq/article/details/46576109