
Machine Learning Notes (Washington University) - Regression Specialization - Week One


1. Convex and concave functions

A convex function is bow-shaped (it curves upward), and a concave function is an upside-down convex function.
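For reference, a sketch of the standard definition (not spelled out in the original note): a function f is convex if every chord lies on or above its graph,

\[
f\big(\lambda a + (1-\lambda) b\big) \;\le\; \lambda f(a) + (1-\lambda) f(b)
\quad \text{for all } a, b \text{ and } \lambda \in [0,1],
\]

and f is concave exactly when -f is convex.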

 

2. Stepsize

Common choices:

Instead of a fixed stepsize, decrease the stepsize as the iterations go on, for example

alpha_t = alpha / t   or   alpha_t = alpha / t^0.5

and when the magnitude of the derivative falls below a set threshold, we can stop the algorithm.
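A minimal sketch of this procedure in Python (the objective, its derivative, and all parameter values here are illustrative assumptions, not taken from the note):

import numpy as np

def gradient_descent(grad, w_init, alpha=0.1, tolerance=1e-6, max_iter=10000):
    """Gradient descent with a decreasing stepsize alpha_t = alpha / sqrt(t)."""
    w = w_init
    for t in range(1, max_iter + 1):
        g = grad(w)
        if np.abs(g).max() < tolerance:   # stop once the derivative is small enough
            break
        step = alpha / np.sqrt(t)         # decreasing stepsize
        w = w - step * g
    return w

# Example: minimize f(w) = (w - 3)^2, whose derivative is 2(w - 3)
w_opt = gradient_descent(lambda w: 2 * (w - 3), w_init=np.array([0.0]))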

 

3. Approach 1

Set the gradient to 0, so we can solve for w0 and w1,

using those two equations:

[equation image: the two components of the gradient set equal to zero]

and we can get that:

[equation image: the resulting closed-form expressions for w0 and w1]
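The equation images did not survive extraction. For reference, setting the gradient of the residual sum of squares to zero yields the standard closed-form solution for simple linear regression (reconstructed here from the standard result, not copied from the original images):

\[
\hat{w}_1 = \frac{\sum_i x_i y_i - \frac{1}{N}\sum_i x_i \sum_i y_i}{\sum_i x_i^2 - \frac{1}{N}\left(\sum_i x_i\right)^2},
\qquad
\hat{w}_0 = \frac{1}{N}\sum_i y_i - \hat{w}_1 \cdot \frac{1}{N}\sum_i x_i .
\]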

 

4. High leverage points

A high leverage point is one that sits at an extreme x value where there are no other observations. Such a point has the potential to change the least squares line, since the center of mass of the x values is heavily influenced by it.

An influential observation is one whose removal significantly changes the fit.
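A small illustration of the idea (with made-up data, not from the note): fit a least squares line with and without a single extreme-x observation and compare the two fits.

import numpy as np

# Hypothetical data: points roughly on y = x, plus one high leverage point at x = 10
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 0.0])   # the extreme point also has an unusual y

# Least squares slope and intercept with and without the last point
slope_all, intercept_all = np.polyfit(x, y, 1)
slope_wo, intercept_wo = np.polyfit(x[:-1], y[:-1], 1)

print(f"with the point:    y = {slope_all:.2f}x + {intercept_all:.2f}")
print(f"without the point: y = {slope_wo:.2f}x + {intercept_wo:.2f}")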

 

5. Asymmetric cost functions

This means the two types of mistakes (an estimated price that is too high versus one that is too low) are not weighted equally in the cost.
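One illustrative form such a cost could take (an assumed example, not one given in the note) is a squared error whose weight depends on the direction of the mistake:

\[
L(y, \hat{y}) =
\begin{cases}
a\,(y - \hat{y})^2 & \text{if } \hat{y} < y \quad \text{(estimate too low)}\\[2pt]
b\,(y - \hat{y})^2 & \text{if } \hat{y} \ge y \quad \text{(estimate too high)}
\end{cases}
\qquad \text{with } a \ne b.
\]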

 


Original source: http://www.cnblogs.com/climberclimb/p/6792325.html
