CanChen
ggchen@mail.ustc.edu.cn
Recently I have been following the Optimization course lectured by Prof. Zhouwang Yang at USTC. Optimization lies at the heart of machine learning.
Why did I choose his course? I am familiar with USTC courses and have an inner drive to acquire all the knowledge in USTC course slides.
I have already followed most of the course, so I have decided to write a blog post for each chapter to deepen my understanding and record my learning process along the way.
This first post covers the introduction.
We use x to denote the decision variable, S the feasible region, and f the objective function.
Optimization then means minimizing f(x) over S.
If S is the whole space, this is an unconstrained optimization problem.
Otherwise, it is a linear programming problem if the objective and the constraints defining S are all linear, and a nonlinear programming problem if any of them is nonlinear.
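To make this concrete, here is the standard form in my own notation (the constraint functions g_i and h_j are not named in the lecture text, so this is only an illustrative sketch):
\[
\min_{x \in S} f(x), \qquad
S = \{\, x \in \mathbb{R}^n : g_i(x) \le 0,\ i = 1,\dots,m,\ \ h_j(x) = 0,\ j = 1,\dots,p \,\}.
\]
When S is all of \(\mathbb{R}^n\) the problem is unconstrained; when f, the g_i, and the h_j are all affine it is a linear program; otherwise it is a nonlinear program.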
The optimality conditions are simple: we only need the first-order and/or second-order derivatives of the objective, where they exist.
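For the unconstrained case, these conditions take the familiar textbook form (my own summary, not copied from the slides):
\[
\nabla f(x^*) = 0 \ \ \text{(first-order necessary condition)}, \qquad
\nabla^2 f(x^*) \succeq 0 \ \ \text{(second-order necessary condition)};
\]
if in addition \(\nabla^2 f(x^*) \succ 0\), then \(x^*\) is a strict local minimizer.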
Lagrange multiplier. One thing that confused me when I first saw the Lagrange function is that its derivative with respect to the multiplier is set to zero. Then I realized that this is in fact just the original equality constraint.
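A minimal sketch with a single equality constraint h(x) = 0 (h and the multiplier \(\lambda\) are my notation):
\[
L(x,\lambda) = f(x) + \lambda\, h(x), \qquad
\frac{\partial L}{\partial \lambda} = h(x) = 0,
\]
so setting the derivative with respect to the multiplier to zero simply recovers the original constraint, while \(\nabla_x L = \nabla f(x) + \lambda \nabla h(x) = 0\) gives the stationarity condition.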
Lagrange KKT. The key to understanding the KKT conditions is to determine whether an inequality constraint becomes an equality (i.e., is active) at the optimal point.
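For reference, the KKT conditions for minimizing f(x) subject to \(g_i(x) \le 0\), in standard textbook form (my notation, assuming suitable constraint qualifications):
\[
\nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) = 0, \qquad
g_i(x^*) \le 0, \qquad \mu_i \ge 0, \qquad \mu_i\, g_i(x^*) = 0.
\]
The complementary-slackness condition \(\mu_i\, g_i(x^*) = 0\) captures exactly this question: if a constraint is inactive at the optimum (\(g_i(x^*) < 0\)), its multiplier must be zero; if the multiplier is positive, the constraint must hold with equality.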