Sigmoid Function (Logistic Function)
\[ h_\theta(x) = g(\theta^Tx) \]
\[ z = \theta^Tx \]
\[ 0 \le g(z) = \frac{1}{1 + e^{-z}} \le 1 \]
\( h_\theta(x) \) gives the probability that the output is 1:
\( h_\theta(x) = P(y = 1 | x; \theta) \)
\( P(y = 0 | x; \theta) + P(y = 1 | x; \theta) = 1 \)
Setting 0.5 as the decision boundary, \( h_\theta(x) = 0.5 \iff \theta^T x = 0 \).
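A minimal Octave sketch of the hypothesis and this decision rule (the function names sigmoid and predict are illustrative, not from the original notes):
function g = sigmoid(z)
  % element-wise logistic function, 0 < g(z) < 1
  g = 1 ./ (1 + exp(-z));
end
function p = predict(theta, X)
  % predict y = 1 exactly when h_theta(x) >= 0.5, i.e. when theta' * x >= 0
  p = sigmoid(X * theta) >= 0.5;
end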
\(\begin{align*} & Repeat \; \lbrace \newline & \; \theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)}) x_j^{(i)} \newline & \rbrace \end{align*}\)
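The update above is gradient descent applied to the standard logistic regression cost:
\[ J(\theta) = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right] \]
Differentiating \( J(\theta) \) with respect to \( \theta_j \) yields exactly the summand in the update rule.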
Feature scaling can also speed up the convergence of logistic regression.
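As a sketch, the update loop above can be vectorized in Octave as follows (gradientDescent and its argument names are illustrative, not from the original):
function theta = gradientDescent(X, y, theta, alpha, num_iters)
  % X: m x n design matrix, y: m x 1 vector of labels in {0, 1}
  m = length(y);
  for iter = 1:num_iters
    h = 1 ./ (1 + exp(-X * theta));               % h_theta for all m examples
    theta = theta - (alpha / m) * (X' * (h - y)); % simultaneous update of every theta_j
  end
end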
Algorithms for computing \(\theta\): besides gradient descent, advanced optimization methods such as conjugate gradient, BFGS, and L-BFGS can be used.
Advantages:
a. No need to manually pick \(\alpha\).
b. Often faster than gradient descent.
Disadvantages:
More complex.
Using Octave's optimization routines:
% exitFlag: 1 means the algorithm converged
% fminunc requires theta to have dimension >= 2
options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(2, 1);
[optTheta, functionVal, exitFlag] ...
    = fminunc(@costFunction, initialTheta, options);
% costFunction:
function [jVal, gradient] = costFunction(theta)
  jVal = ...              % value of the cost function J(theta)
  gradient = zeros(n, 1); % gradient vector
  gradient(1) = ...
  ...
  gradient(n) = ...
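For concreteness, a filled-in version of this template for logistic regression, using the cost \( J(\theta) \) above (a sketch that assumes the training data X and y are shared as globals; this layout is illustrative, not from the original):
function [jVal, gradient] = costFunction(theta)
  global X y;  % assumed m x n design matrix and m x 1 label vector
  m = length(y);
  h = 1 ./ (1 + exp(-X * theta));  % h_theta(x) for every example
  jVal = -(1 / m) * (y' * log(h) + (1 - y)' * log(1 - h));  % J(theta)
  gradient = (1 / m) * (X' * (h - y));  % partial derivatives dJ/dtheta_j
end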
One-vs-rest (multi-class classification): train one binary classifier \( h_\theta^{(i)}(x) = P(y = i \mid x; \theta) \) per class \( i \), then predict the class that maximizes \( h_\theta^{(i)}(x) \).
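A sketch of one-vs-rest prediction in Octave, assuming the trained parameters are stacked as rows of a matrix all_theta (an illustrative layout, not from the original):
function p = predictOneVsAll(all_theta, X)
  % all_theta: K x n matrix, one row of parameters per class
  h = 1 ./ (1 + exp(-X * all_theta'));  % m x K matrix of class probabilities
  [~, p] = max(h, [], 2);               % for each row, index of the most confident class
end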
Original post: https://www.cnblogs.com/QQ-1615160629/p/03-Logistic-Regression.html