In this chapter we will study the family of linear predictors, one of the most useful families of hypothesis classes. Many learning algorithms that are being widely used in practice rely on linear predictors, first and foremost because of the ability to learn them efficiently in many cases. In addition, linear predictors are intuitive, are easy to interpret, and fit the data reasonably well in many natural learning problems.
We will introduce several hypothesis classes belonging to this family – halfspaces, linear regression predictors, and logistic regression predictors – and present relevant learning algorithms: linear programming and the Perceptron algorithm for the class of halfspaces and the Least Squares algorithm for linear regression. This chapter is focused on learning linear predictors using the ERM approach; however, in later chapters we will see alternative paradigms for learning these hypothesis classes.
First, we define the class of affine functions as
$L_d = \{ h_{w,b} : w \in \mathbb{R}^d,\ b \in \mathbb{R} \}$,    (1)
where
$h_{w,b}(x) = \langle w, x \rangle + b = \left( \sum_{i=1}^{d} w_i x_i \right) + b$.    (2)
It will be convenient also to use the notation
$L_d = \{ x \mapsto \langle w, x \rangle + b : w \in \mathbb{R}^d,\ b \in \mathbb{R} \}$,    (3)
which reads as follows: $L_d$ is a set of functions, where each function is parameterized by $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$, and each function takes as input a vector $x$ and returns as output the scalar $\langle w, x \rangle + b$.
The different hypothesis classes of linear predictors are compositions of a function $\phi : \mathbb{R} \to \mathcal{Y}$ on $L_d$. For example, in binary classification, we can choose $\phi$ to be the sign function, and for regression problems, where $\mathcal{Y} = \mathbb{R}$, $\phi$ is simply the identity function.
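As a concrete illustration (a minimal sketch, not taken from the original text; the parameter values and the names `h`, `halfspace`, and `regressor` are assumptions made for the example), composing the same affine map with different choices of $\phi$ yields a halfspace classifier or a linear regression predictor:

```python
import numpy as np

# Affine map h_{w,b}(x) = <w, x> + b (illustrative parameters, d = 3)
w = np.array([0.5, -1.0, 2.0])
b = 0.25
h = lambda x: np.dot(w, x) + b

# Different linear predictors arise by composing phi with the same affine map
halfspace = lambda x: np.sign(h(x))   # phi = sign: binary classification into {-1, +1}
regressor = lambda x: h(x)            # phi = identity: linear regression

x = np.array([1.0, 0.0, 1.0])
print(halfspace(x))   # 1.0, since h(x) = 2.75 > 0
print(regressor(x))   # 2.75
```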
It may be more convenient to incorporate $b$, called the bias, into $w$ as an extra coordinate and add an extra coordinate with a value of $1$ to all $x \in \mathcal{X}$; namely, let $w' = (b, w_1, w_2, \ldots, w_d) \in \mathbb{R}^{d+1}$ and let $x' = (1, x_1, x_2, \ldots, x_d) \in \mathbb{R}^{d+1}$. Therefore,
$h_{w,b}(x) = \langle w, x \rangle + b = \langle w', x' \rangle$.    (4)
It follows that each affine function in $\mathbb{R}^d$ can be rewritten as a homogeneous linear function in $\mathbb{R}^{d+1}$ applied over the transformation that appends the constant $1$ to each input vector. Therefore, whenever it simplifies the presentation, we will omit the bias term and refer to $L_d$ as the class of homogeneous linear functions of the form $h_w(x) = \langle w, x \rangle$.
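To make this transformation concrete, the following sketch (illustrative only; the values of `w`, `b`, and `x` are assumed for the example) checks that the homogeneous form $\langle w', x' \rangle$ reproduces the affine value $\langle w, x \rangle + b$:

```python
import numpy as np

# Original affine parameters (illustrative values)
w = np.array([0.5, -1.0, 2.0])
b = 0.25
x = np.array([1.0, 0.0, 1.0])

# Absorb the bias: w' = (b, w_1, ..., w_d), x' = (1, x_1, ..., x_d)
w_prime = np.concatenate(([b], w))
x_prime = np.concatenate(([1.0], x))

affine_value = np.dot(w, x) + b               # <w, x> + b
homogeneous_value = np.dot(w_prime, x_prime)  # <w', x'>
print(affine_value, homogeneous_value)        # both equal 2.75
```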
Original source: http://www.cnblogs.com/JenifferWu/p/5813242.html