
Writing Linear Regression with an Iterative Method (Gradient Descent)


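The code below fits a line y = w * x + b to 2-D points by gradient descent on the mean squared error. For reference, the loss and the two partial derivatives that the functions below implement are (N is the number of points):

L(b, w) = \frac{1}{N} \sum_{i=1}^{N} \big( y_i - (w x_i + b) \big)^2

\frac{\partial L}{\partial b} = \frac{2}{N} \sum_{i=1}^{N} \big( (w x_i + b) - y_i \big), \qquad \frac{\partial L}{\partial w} = \frac{2}{N} \sum_{i=1}^{N} x_i \big( (w x_i + b) - y_i \big)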

import numpy as np

def compute_error_points(b, w, points):
    total_error = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # accumulate the squared error of this point
        total_error += (y - (w * x + b)) ** 2
    # return the mean squared error (the loss)
    return total_error / float(len(points))
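As a side note (a minimal vectorized sketch, not part of the original post; the helper name compute_error_points_vec is made up here), the same loss can be computed without the Python loop:

def compute_error_points_vec(b, w, points):
    # vectorized mean squared error, equivalent to compute_error_points above
    x, y = points[:, 0], points[:, 1]
    return np.mean((y - (w * x + b)) ** 2)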

def step_gradient(b_current, w_current, points, learning_rate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # partial derivative of the loss with respect to b
        b_gradient += (2 / N) * ((w_current * x + b_current) - y)
        # partial derivative of the loss with respect to w
        w_gradient += (2 / N) * x * ((w_current * x + b_current) - y)

    new_b = b_current - (learning_rate * b_gradient)
    new_w = w_current - (learning_rate * w_gradient)

    # return the updated b and w
    return [new_b, new_w]
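In the same spirit, a vectorized sketch of one gradient step (the helper name step_gradient_vec is hypothetical; it computes the same two partial derivatives as the loop above):

def step_gradient_vec(b_current, w_current, points, learning_rate):
    x, y = points[:, 0], points[:, 1]
    # residuals of the current line
    error = (w_current * x + b_current) - y
    b_gradient = 2 * np.mean(error)      # dL/db
    w_gradient = 2 * np.mean(x * error)  # dL/dw
    return [b_current - learning_rate * b_gradient,
            w_current - learning_rate * w_gradient]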

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    '''
    :param points: data
    :param starting_b: initial b
    :param starting_w: initial w
    :param learning_rate: learning rate
    :param num_iterations: number of iterations
    :return: the final [b, w]
    '''
    b = starting_b
    w = starting_w
    # iteratively update w and b
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

# fit a line to 100 random (x, y) points, starting from b = 0, w = 0
points = np.random.random(size=(100, 2))
b, w = gradient_descent_runner(points, 0, 0, 0.0001, 1000)
print(b, w)
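As a sanity check (a sketch added here, not in the original post), the final loss can be printed with compute_error_points, and the result compared against NumPy's closed-form least-squares fit; with enough iterations and a suitable learning rate the two fits should be close:

# final mean squared error of the fitted line
print(compute_error_points(b, w, points))

# closed-form least-squares fit for comparison (np.polyfit returns [slope, intercept])
w_ls, b_ls = np.polyfit(points[:, 0], points[:, 1], 1)
print(b_ls, w_ls)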



Original source: https://www.cnblogs.com/abc23/p/11020541.html

(0)
(0)
   
举报
评论 一句话评论(0
登录后才能评论!
© 2014 mamicode.com 版权所有  联系我们:gaon5@hotmail.com
迷上了代码!