
[ML] 2. Introduction to neural networks


Training an algorithm involves four ingredients:

  • Data
  • Model
  • Objective function: we feed the data into the model and get an output; the value that measures how wrong that output is, called the 'loss', is what we want to minimize.
  • Optimization algorithm: for example, with the linear model y = wx + b, we adjust 'w' and 'b' so as to minimize the 'loss' value.

Repeat the process...
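Putting the four ingredients together, a training loop could look like the following minimal sketch. Everything here (the data rule t = 2x + 1, the learning rate, the step count) is an illustrative assumption, not from the original notes:

import random

def get_data():
    # 1. Data: one made-up sample drawn from the rule t = 2x + 1
    x = random.uniform(0, 10)
    return x, 2 * x + 1

w, b = 0.0, 0.0                  # 2. Model: parameters of f(x) = x*w + b
eta = 0.001                      # learning rate

for step in range(10000):
    x, t = get_data()
    y = x * w + b                # 2. Model: compute the output
    loss = (y - t) ** 2          # 3. Objective function: squared error
    grad_w = 2 * (y - t) * x     # 4. Optimization algorithm: gradient
    grad_b = 2 * (y - t)         #    descent on both parameters...
    w -= eta * grad_w            # ...update them, then repeat
    b -= eta * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0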

 

Three types of machine learning:

Supervised: the algorithm receives feedback on its outputs

  • Classification: outputs are categories, e.g. cat or dog
  • Regression: outputs are numbers

Unsupervised: no feedback; the algorithm finds patterns on its own

Reinforcement: the algorithm learns to act in an environment based on the rewards it receives (just like training your dog)

 

Linear Model:

f(x) = x * w + b

x: input

w: coefficient / weight

b: intercept / bias
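In plain Python the single-input model is one line; the values of w and b below are arbitrary illustration choices:

w = 2.0   # weight / coefficient
b = 0.5   # bias / intercept

def f(x):
    return x * w + b

print(f(3.0))   # 3.0 * 2.0 + 0.5 = 6.5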

 

Linear Model: multiple inputs:

x and w are both vectors:

x: 1 * 2 (a row vector of inputs)

w: 2 * 1 (a column vector of weights)

f(x): 1 * 1 (a scalar)

[figure: the row vector x (1 * 2) multiplied by the column vector w (2 * 1) yields f(x) (1 * 1)]

Notice that the linear model doesn't change; it is still:

f(x) = x * w + b

 

Linear Model: multiple inputs and multiple outputs:

[figure: the linear model with multiple inputs and multiple outputs, Y = XW + B]

For 'W', the first index always matches the input X; the second index always matches the output Y.

If there are K inputs and M outputs, the number of weights is K * M.

The number of biases equals the number of outputs: M.

 

In matrix shapes, with N samples: outputs (N * M) = inputs (N * K) * weights (K * M) + biases (1 * M)

Each model is determined by its weights and biases.
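A quick NumPy check of this shape bookkeeping; the choices of N, K, and M below are arbitrary:

import numpy as np

N, K, M = 4, 3, 2            # 4 samples, 3 inputs, 2 outputs
x = np.random.rand(N, K)     # input matrix
w = np.random.rand(K, M)     # one weight per (input, output) pair: K * M weights
b = np.random.rand(1, M)     # one bias per output: M biases

y = x @ w + b                # b is broadcast across the N sample rows
print(y.shape)               # (4, 2), i.e. N * M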

 

Objective function:

It is the measure used to evaluate how well the model's outputs match the desired correct values.

  • Loss function: the lower the loss, the higher the level of accuracy (supervised learning)
  • Reward function: the higher the reward, the higher the level of accuracy (reinforcement learning)

 

Loss functions for Supervised learning:

  • Regression: L2-NORM

L2-norm loss: L = Σ (y_i - t_i)^2, where y_i is the model's output and t_i the target.

  • Classification: CROSS-ENTROPY

We want the cross-entropy to be as low as possible.

Cross-entropy loss: L = - Σ t_i * ln(y_i)
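Both losses are a few lines of NumPy. This is a hedged sketch: the small epsilon inside the log is an added safeguard against log(0), not part of the notes.

import numpy as np

def l2_norm_loss(outputs, targets):
    # Sum of squared differences between outputs and targets
    return np.sum((outputs - targets) ** 2)

def cross_entropy_loss(outputs, targets):
    # targets: one-hot labels; outputs: predicted probabilities
    return -np.sum(targets * np.log(outputs + 1e-12))

print(l2_norm_loss(np.array([0.9, 2.1]), np.array([1.0, 2.0])))    # 0.02
print(cross_entropy_loss(np.array([0.1, 0.9]), np.array([0, 1])))  # about 0.105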

 

Optimization algorithm: Gradient descent

Update rule: w_(i+1) = w_i - η * dL/dw(w_i), where η (eta) is the learning rate.

Starting from some initial w, we repeatedly step in the direction opposite to the derivative of the loss.

At some point the updates become vanishingly small and the value stops changing: we have reached the minimum.

The picture looks like this:

[figure: gradient descent steps down the loss curve, shrinking as they approach the minimum]

Generally, we want the learning rate to be:

  High enough, so we reach the closest minimum in a reasonable amount of time

  Low enough, so we don't oscillate around the minimum

[figure: a learning rate that is too high overshoots and oscillates, while one that is too low takes very small steps]
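Here is the update rule on a one-parameter toy loss L(w) = (w - 3)^2, whose minimum sits at w = 3; the learning rate and starting point are arbitrary choices:

eta = 0.1                    # learning rate
w = 0.0                      # initial guess

for i in range(100):
    gradient = 2 * (w - 3)   # dL/dw for L(w) = (w - 3)^2
    w = w - eta * gradient   # w_(i+1) = w_i - eta * dL/dw

print(w)                     # converges to 3.0

Setting eta = 1.0 instead makes the update jump between 0 and 6 forever, which is exactly the oscillation described above.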

 

N-parameter gradient descent

In the n-parameter case every parameter gets the same treatment with its own partial derivative: w_j ← w_j - η * ∂L/∂w_j for each weight w_j, and likewise for every bias.
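A NumPy sketch of this for the multi-input linear model with the L2-norm loss; the synthetic data, learning rate, and iteration count are made-up assumptions:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))            # N = 100 samples, K = 2 inputs
true_w = np.array([[1.5], [-2.0]])
true_b = np.array([[0.5]])
t = x @ true_w + true_b                  # targets for M = 1 output

w = np.zeros((2, 1))                     # all parameters start at zero
b = np.zeros((1, 1))
eta = 0.05

for _ in range(500):
    y = x @ w + b                        # forward pass
    grad_w = 2 * x.T @ (y - t) / len(x)  # partial derivative per weight
    grad_b = 2 * np.mean(y - t, keepdims=True)
    w -= eta * grad_w                    # same update rule, applied per parameter
    b -= eta * grad_b

print(w.ravel(), b.ravel())              # approaches [1.5, -2.0] and [0.5]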


Original article: https://www.cnblogs.com/Answer1215/p/12324642.html
