\section{Elementary inequalities for the $\sigma_k$ functions (Lecture given by Xinan Ma)}
The elementary symmetric functions appear naturally in many geometric quantities. In order to carry out the analysis, we need to understand the properties of the elementary symmetric functions.
For $1\leq k\leq n$ and $\lambda=(\lambda_1,\lambda_2,...,\lambda_n)\in \mathbb{R}^n$, the $k$-th elementary symmetric function is
defined as
\begin{equation}
\sigma_k(\lambda)=\sum\limits_{1\leq i_1<i_2<...<i_k\leq n}\lambda_{i_1}\lambda_{i_2}...\lambda_{i_k}.\nonumber
\end{equation}
where the sum is taken over all strictly increasing sequences $i_1,...,i_k$ of the indices from
the set $\{1,2,...,n\}$. The definition can be extended to symmetric matrices: denote by $\lambda(W)=(\lambda_1(W),...,\lambda_n(W))$ the eigenvalues of the symmetric matrix $W$, and set
$\sigma_k(W)=\sigma_k(\lambda(W))$.
It is convenient to set
\begin{equation}
\sigma_0(W) = 1,\ \ \ \sigma_k(W)=0\ \ \ \text{for}\ \ k > n.
\end{equation}
It follows directly from the definition that, for any $n\times n$ symmetric matrix $W$, and
$t\in \mathbb{R}$,
\begin{equation}
\sigma_n(I+tW)=\det(I_n+tW)=\sum\limits_{i=0}^n\sigma_i(W)t^i.
\end{equation}
Conversely, (2.2) can also be used to define $\sigma_k(W)$ for all $k=0,...,n$.\\
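For instance, when $n=2$ the expansion reads
$$
\det(I_2+tW)=(1+t\lambda_1)(1+t\lambda_2)=1+(\lambda_1+\lambda_2)t+\lambda_1\lambda_2 t^2=\sigma_0(W)+\sigma_1(W)t+\sigma_2(W)t^2,
$$
so that $\sigma_1(W)=\mathrm{tr}\,W$ and $\sigma_2(W)=\det W$.\\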
An important property of $\sigma_k$ is its divergence-free structure; see the matrix formulation at the end of this section.
We say a function $u\in C^2(\Omega)\cap C^0(\overline{\Omega})$ is
$k$-admissible if
\begin{equation}
\lambda(D^2u)\in \overline{\Gamma}_k,
\end{equation}
where $\Gamma_k$ is an open symmetric convex cone in $\mathbb{R}^n$, with vertex at the origin,
given by
\begin{equation}
\Gamma_k=\{(\lambda_1,\lambda_2,...,\lambda_n)\in\mathbb{R}^n\mid \sigma_j(\lambda)>0, \ \ \forall \ j=1,2,...,k \}
\end{equation}
Clearly $\sigma_k(\lambda)=0$ for $\lambda\in\partial\Gamma_k$, and the cones are nested:
\begin{equation}
\Gamma_n\subset\Gamma_{n-1}\subset...\subset\Gamma_1\nonumber
\end{equation}
$\Gamma_n$ is the positive cone,
$$\Gamma_n=\{(\lambda_1,\lambda_2,...,\lambda_n)\in\mathbb{R}^n\mid \lambda_1>0,\ \lambda_2>0,...,\ \lambda_n>0\}\nonumber
$$
and $\Gamma_1$ is the half space $\{\lambda\in\mathbb{R}^n\mid \sum\limits_{j=1}^n\lambda_{j}>0\}$. A function is $1$-admissible if
and only if it is sub-harmonic, and an $n$-admissible function must be convex.
For any $2\leq k\leq n$, a $k$-admissible function is sub-harmonic, and the set of
all $k$-admissible functions is a convex cone in $C^2(\Omega)$.\\
The cone $\Gamma_k$ may also be equivalently defined as the component $\{\lambda\in\mathbb{R}^n\mid \sigma_k(\lambda)>0\}$
containing the vector $(1,1,...,1)$, or characterized as
$\Gamma_k=\{\lambda\in\mathbb{R}^n\mid 0<\sigma_k(\lambda)\leq\sigma_k(\lambda+\eta),\ \forall\ \eta_i\geq0,\ i=1,2,...,n\}.$\\
Recall that $\sigma_1(\lambda)=\sum\limits_{i=1}^{n}\lambda_i$ and $\sigma_n(\lambda)=\prod\limits_{i=1}^{n}\lambda_i$. In particular, if $n=3$ and $k=2$, we have $\sigma_2(\lambda)=\lambda_1\lambda_2+\lambda_2\lambda_3+\lambda_3\lambda_1$.\\
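For instance, for $n=3$ the vector $\lambda=(3,3,-1)$ belongs to $\Gamma_2$ but not to $\Gamma_3$: indeed $\sigma_1(\lambda)=5>0$ and $\sigma_2(\lambda)=9-3-3=3>0$, while $\lambda_3<0$. Thus for $k<n$ the cone $\Gamma_k$ contains vectors with negative components.\\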
We collect some identities and inequalities related to the polynomial $\sigma_k(\lambda)$,
which are needed in our investigation of the $k$-Hessian equation. Denote $\sigma_0=1$ and $\sigma_k=0$ for $k>n$.\\
Lemma 1.1: for any $\lambda\in\mathbb{R}^n$,\\
(1)$\sigma_{k+1}(\lambda)=\sigma_{k+1}(\lambda| i)+\lambda_i\sigma_{k}(\lambda|i)$, where $(\lambda| i)=(\lambda_1,...,\hat{\lambda}_i,...,\lambda_n)$, i.e.\ the entry $\lambda_i$ is deleted;\\
(2)$\sum\limits_{i=1}^n\lambda_i\sigma_k(\lambda| i)=(k+1)\sigma_{k+1}(\lambda)$;\\
(3)$\sum\limits_{i=1}^n\sigma_k(\lambda|i)=(n-k)\sigma_k(\lambda)$;\\
(4)$\frac{\partial\sigma_{k+1}(\lambda)}{\partial\lambda_i}=\sigma_k(\lambda| i)$;\\
(5)$\sum\limits_{i=1}^n\lambda_i^2\sigma_k(\lambda| i)=\sum\limits_{i=1}^n\lambda_i(\sigma_{k+1}(\lambda)-\sigma_{k+1}(\lambda| i))=\sigma_1(\lambda)\cdot\sigma_{k+1}(\lambda)-(k+2)\sigma_{k+2}(\lambda)$.
The above five identities follow easily from elementary algebraic manipulations.
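As an illustration of (2), take $n=3$ and $k=1$: then
$$
\sum\limits_{i=1}^3\lambda_i\sigma_1(\lambda|i)=\lambda_1(\lambda_2+\lambda_3)+\lambda_2(\lambda_1+\lambda_3)+\lambda_3(\lambda_1+\lambda_2)=2(\lambda_1\lambda_2+\lambda_2\lambda_3+\lambda_3\lambda_1)=2\sigma_2(\lambda),
$$
which agrees with $(k+1)\sigma_{k+1}(\lambda)$.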
(Newton's inequality) $\forall\ k\geq1$,
$$
(n-k+1)(k+1)\sigma_{k+1}(\lambda)\cdot\sigma_{k-1}(\lambda)\leq k(n-k)\sigma_k^2(\lambda),
$$
which is exactly
$$
\frac{\sigma_{k+1}(\lambda)}{C_n^{k+1}}\cdot\frac{\sigma_{k-1}(\lambda)}{C_n^{k-1}}\leq \Big(\frac{\sigma_{k}(\lambda)}{C_n^{k}}\Big)^2.\nonumber
$$
See the reference books: Hardy, Littlewood and P\'olya [Inequalities]; D. S. Mitrinovi\'c [Analytic Inequalities].\\
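As a quick numerical check, take $n=3$ and $\lambda=(1,2,3)$, so that $\sigma_1=6$, $\sigma_2=11$, $\sigma_3=6$. For $k=2$,
$$
\frac{\sigma_{3}(\lambda)}{C_3^{3}}\cdot\frac{\sigma_{1}(\lambda)}{C_3^{1}}=6\cdot2=12\ \leq\ \Big(\frac{\sigma_{2}(\lambda)}{C_3^{2}}\Big)^2=\Big(\frac{11}{3}\Big)^2\approx13.4 .
$$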
G\aa rding's inequality (1958) [see CNS III]:\\
Assume that $\lambda\in\Gamma_k$ and $\mu\in\Gamma_k$; then
$$
\frac{1}{k}\sum\limits_{i=1}^{n}\mu_i\sigma_{k-1}(\lambda| i)\geq\left(\sigma_k(\mu)\right)^{\frac{1}{k}}\cdot\left(\sigma_k(\lambda)\right)^{1-\frac{1}{k}}
$$
A corollary: for $\lambda\in\Gamma_k$,\ $\sigma_k^{\frac{1}{k}}(\lambda)$ is concave with respect to $\lambda$.\\
We only need to show that for any $\lambda\in\Gamma_k$,\ $\mu\in\Gamma_k$,
$$
\sigma^{\frac{1}{k}}_{k}(\mu)\leq\sigma^{\frac{1}{k}}_{k}(\lambda)+\frac{1}{k}\sigma^{\frac{1}{k}-1}_{k}(\lambda)\sum\limits_{i=1}^n\sigma_{k-1}(\lambda| i)(\mu_i-\lambda_i).
$$
By Lemma 1.1(2), we have $\frac{1}{k}\sigma^{\frac{1}{k}-1}_{k}(\lambda)\sum\limits_{i=1}^n\sigma_{k-1}(\lambda|i)\lambda_i=\sigma^{\frac{1}{k}}_{k}(\lambda)$. It follows that the above inequality is equivalent to
$$
\sigma^{\frac{1}{k}}_{k}(\mu)\leq\frac{1}{k}\sigma^{\frac{1}{k}-1}_{k}(\lambda)\sum\limits_{i=1}^n\sigma_{k-1}(\lambda| i)\mu_i.
$$
This is exactly G\aa rding's inequality. We are done!
The following three statements are equivalent:\\
(1)$\Gamma_k=\{(\lambda_1,\lambda_2,...,\lambda_n)\in\mathbb{R}^n\mid \sigma_j(\lambda)>0, \ \ \forall \ j=1,2,...,k \}$;\\
(2)$\Gamma_k=\{\lambda\in\mathbb{R}^n\mid 0<\sigma_k(\lambda)\leq\sigma_k(\lambda+\eta),\ \forall\ \eta=(\eta_1,...,\eta_n),\ \eta_i\geq0\ \ \mbox{for}\ \ i=1,2,...,n\}$;\\
(3)$\Gamma_k$ is the connected component of $\{\lambda\in\mathbb{R}^n\mid \sigma_k(\lambda)>0\}$
containing the vector $(1,1,...,1)$ (equivalently, containing $\Gamma_n$).
(Ellipticity, Lemma 1.4):\\
$\forall\ \lambda\in\Gamma_k$, $\forall\ h\in\{1,2,...,k-1\}$, we have $\sigma_{h}(\lambda|i)>0$,\ $\forall\ i=1,2,...,n$. \\
It follows that
$\frac{\partial\sigma_{k}(\lambda)}{\partial\lambda_i}=\sigma_{k-1}(\lambda| i)>0$, which indicates the ellipticity of the $\sigma_k$ equation.
Lemma 1.4 can be proved by induction on $h$.
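For example, take $n=3$, $k=2$ and $\lambda=(3,3,-1)\in\Gamma_2$ as above; then
$$
\frac{\partial\sigma_2(\lambda)}{\partial\lambda_1}=\sigma_1(\lambda|1)=2,\qquad\frac{\partial\sigma_2(\lambda)}{\partial\lambda_2}=\sigma_1(\lambda|2)=2,\qquad\frac{\partial\sigma_2(\lambda)}{\partial\lambda_3}=\sigma_1(\lambda|3)=6,
$$
all positive even though $\lambda_3<0$.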
(Newton-Maclaurin's inequality)\\
$\forall\ k\geq1$, $\forall$ $\lambda\in\Gamma_k$,
$$
\left(\frac{\sigma_{k}(\lambda)}{C_n^{k}}\right)^{\frac{1}{k}}\leq \left(\frac{\sigma_{l}(\lambda)}{C_n^{l}}\right)^{\frac{1}{l}},\ \ k>l\geq1.
$$
We only need to show
$$
\left(\frac{\sigma_{l}(\lambda)}{C_n^{l}}\right)^{\frac{1}{l}}\leq \left(\frac{\sigma_{l-1}(\lambda)}{C_n^{l-1}}\right)^{\frac{1}{l-1}},\ \ 2\leq l\leq k.
$$
This can be proved by induction on $l$.
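For $\lambda=(1,2,3)$ this chain of inequalities reads
$$
\Big(\frac{\sigma_3}{C_3^3}\Big)^{\frac{1}{3}}=6^{\frac{1}{3}}\approx1.82\ \leq\ \Big(\frac{\sigma_2}{C_3^2}\Big)^{\frac{1}{2}}=\Big(\frac{11}{3}\Big)^{\frac{1}{2}}\approx1.91\ \leq\ \frac{\sigma_1}{C_3^1}=2 .
$$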
(Generalized Newton-Maclaurin's inequality)
$\forall$ $\lambda\in\Gamma_k$, $\forall\ k>l\geq0,\ r>s\geq0,\ k\geq r,\ l\geq s$, we have
$$
\left(\frac{\frac{\sigma_{k}(\lambda)}{C_n^{k}}}{\frac{\sigma_{l}(\lambda)}{C_n^{l}}}\right)^{\frac{1}{k-l}}\leq \left(\frac{\frac{\sigma_{r}(\lambda)}{C_n^{r}}}{\frac{\sigma_{s}(\lambda)}{C_n^{s}}}\right)^{\frac{1}{r-s}},
$$
with equality if and only if $\lambda_1=...=\lambda_n$.
$\forall\ k\geq2$, $\forall$ $\lambda\in\Gamma_k$:\\
(1)$\lambda_i\leq\sigma_1(\lambda)$, $\forall$ $i=1,2,...,n$;\\
(2)$\sum\limits_{i=1}^{n}\frac{\partial\sigma^{\frac{1}{k}}_k(\lambda)}{\partial\lambda_i}\geq\Big(C^k_n\Big)^{\frac{1}{k}}$;\\
(3)$\sum\limits_{i=1}^{n}\frac{\partial\big(\frac{\sigma_k}{\sigma_l}(\lambda)\big)^{\frac{1}{k-l}}}{\partial\lambda_i}\geq\Big(\frac{C^k_n}{C^l_n}\Big)^{\frac{1}{k-l}}$, where $k>l\geq0$.
(1) For $k\geq2$, $\lambda\in\Gamma_k$, by Lemma 1.4, we have $\sigma_1(\lambda|i)>0$ for each $i=1,2,...,n$. Then we have
$$
\sigma_1(\lambda)=\lambda_i+\sigma_1(\lambda|i)>\lambda_i,\ \ \ \forall \ \ i=1,2,...,n.
$$
(2) By Lemma 1.1 and Lemma 1.5(1), we have
\begin{align*}
\sum\limits_{i=1}^{n}\frac{\partial\sigma^{\frac{1}{k}}_k(\lambda)}{\partial\lambda_i}&=\sum\limits_{i=1}^{n}\frac{1}{k}\sigma^{\frac{1}{k}-1}_k(\lambda)\sigma_{k-1}(\lambda| i)\\
&=\frac{n-k+1}{k}\sigma^{\frac{1}{k}-1}_k(\lambda)\cdot\sigma_{k-1}(\lambda)\\
&\geq (C^k_n)^{\frac{1}{k}},
\end{align*}
where the last inequality follows from the Newton-Maclaurin inequality.
$\forall$ $\lambda\in\Gamma_k$ with $\lambda_1\geq\lambda_2\geq...\geq\lambda_n$, \\
(1)$\sigma_{k-1}(\lambda|n)\geq...\geq\sigma_{k-1}(\lambda|1)$;\\
(2)$\sigma_k(\lambda)\leq C^k_n\cdot\prod\limits_{i=1}^k\lambda_i$ and $\lambda_k>0$.
(1) By Lemma 1.1, we have
\begin{align*}
\sigma_{k-1}(\lambda|i)&=\sigma_{k-1}(\lambda| i,j)+\lambda_j\sigma_{k-2}(\lambda|i,j),\\
\sigma_{k-1}(\lambda|j)&=\sigma_{k-1}(\lambda| j,i)+\lambda_i\sigma_{k-2}(\lambda|j,i).
\end{align*}
It follows that
$$
\sigma_{k-1}(\lambda|i)-\sigma_{k-1}(\lambda| j)=(\lambda_j-\lambda_i)\cdot\sigma_{k-2}(\lambda|i,j).
$$
Then (1) follows clearly, since Lemma 1.4 (applied twice) gives $\sigma_{k-2}(\lambda|i,j)>0$.
$\forall$ $\lambda\in\Gamma_k$ with $\lambda_1\geq\lambda_2\geq...\geq\lambda_n$, \\
(1)$\lambda_1\sigma_{k-1}(\lambda|1)\geq\frac{k}{n}\sigma_{k}(\lambda)$, which shows that an upper bound for $\lambda_1$ is crucial for the uniform ellipticity of the equation $\sigma_k=f$;\\
(2)$\sigma_{k-1}(\lambda|k)\geq \theta(k,n)\sigma_{k-1}(\lambda)$ for some positive constant $\theta(k,n)$.
(1) By Lemma 1.1(1), we have $\sigma_k(\lambda)=\lambda_1\cdot\sigma_{k-1}(\lambda| 1)+\sigma_{k}(\lambda|1)$. If $\sigma_{k}(\lambda|1)\leq0$, the inequality follows clearly. Now assume $\sigma_{k}(\lambda|1)>0$; then by Lemma 1.4 (together with this assumption), we have $(\lambda|1)\in \Gamma_k$ as a vector in $\mathbb{R}^{n-1}$. By Lemma 1.5(2), we have
$$
\frac{k}{n-k}\cdot\frac{\sigma_{k}(\lambda|1)}{\sigma_{k-1}(\lambda| 1)}\leq\frac{\sigma_1(\lambda|1)}{n-1}\leq \lambda_1.
$$
Then
$$
\sigma_{k}(\lambda|1)\leq\frac{n-k}{k}\lambda_1\cdot\sigma_{k-1}(\lambda|1).
$$
Combining $\sigma_k(\lambda)=\lambda_1\cdot\sigma_{k-1}(\lambda| 1)+\sigma_{k}(\lambda|1)$ and the above inequality, we have
$$
\sigma_k(\lambda)\leq\lambda_1\cdot\sigma_{k-1}(\lambda| 1)+\frac{n-k}{k}\lambda_1\cdot\sigma_{k-1}(\lambda| 1)=\frac{n}{k}\lambda_1\cdot\sigma_{k-1}(\lambda|1).
$$
This completes the proof of (1).\\
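As a quick check of (1), take $n=3$, $k=2$ and $\lambda=(3,3,-1)\in\Gamma_2$: then $\lambda_1\sigma_1(\lambda|1)=3\cdot2=6$, while $\frac{k}{n}\sigma_k(\lambda)=\frac{2}{3}\cdot3=2$.\\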
(2) By Lemma 1.1, we have
\begin{align*}
\sigma_k(\lambda|1,k)+\lambda_1\sigma_{k-1}(\lambda|1,k)=\sigma_k(\lambda|k)&=\sigma_k(\lambda)-\lambda_k\sigma_{k-1}(\lambda|k)\\
&\geq-\lambda_k\sigma_{k-1}(\lambda|k),\\
\sigma_{k-1}(\lambda|1,k)+\lambda_1\sigma_{k-2}(\lambda| 1,k)&=\sigma_{k-1}(\lambda|k).
\end{align*}
Eliminating $\lambda_1$ from the above formulas, we obtain
\begin{align*}
&\sigma_{k-1}^2(\lambda|1,k)-\sigma_{k-2}(\lambda|1,k)\sigma_{k}(\lambda| 1,k)\\
&\leq\sigma_{k-1}(\lambda|k)\Big(\sigma_{k-1}(\lambda| 1,k)+\lambda_k\sigma_{k-2}(\lambda|1,k)\Big)\\
&=\sigma_{k-1}(\lambda| k)\sigma_{k-1}(\lambda|1)\leq \sigma_{k-1}^2(\lambda| k).
\end{align*}
By Newton's inequality (applied to the vector $(\lambda|1,k)\in\mathbb{R}^{n-2}$), we have
$$
\sigma_{k-1}^2(\lambda|1,k)-\sigma_{k-2}(\lambda|1,k)\sigma_{k}(\lambda| 1,k)\geq\left(1-\frac{(k-1)(n-k-1)}{k(n-k)}\right)\sigma_{k-1}^2(\lambda|1,k).
$$
Combining the two inequalities above and noting that $1-\frac{(k-1)(n-k-1)}{k(n-k)}=\frac{n-1}{k(n-k)}$, we have
$$
\sigma_{k-1}(\lambda| k)\geq\sqrt{1-\frac{(k-1)(n-k-1)}{k(n-k)}}\,\sigma_{k-1}(\lambda|1,k).
$$
It follows that
$$
|\sigma_{k-1}(\lambda| 1,k)|\leq\sqrt{\frac{k(n-k)}{n-1}}\,\sigma_{k-1}(\lambda|k).
$$
Therefore,
\begin{align*}
\sigma_{k-1}(\lambda|k)&=\sigma_{k-1}(\lambda|1,k)+\lambda_1\sigma_{k-2}(\lambda| 1,k)\\
&\geq-C(n,k)\,\sigma_{k-1}(\lambda|k)+\lambda_1\sigma_{k-2}(\lambda| 1,k),
\end{align*}
where $C(n,k)=\sqrt{\frac{k(n-k)}{n-1}}$.\\
Hence,
$$
\sigma_{k-1}(\lambda|k)\geq\frac{\lambda_1}{1+C(n,k)}\sigma_{k-2}(\lambda|1,k).
$$
Then the result follows clearly by recursion.\\
For more inequalities, see\\
Mi Lin and Neil S. Trudinger, On some inequalities for elementary symmetric functions,
Bull. Austral. Math. Soc., 50 (1994), 317--326.
$\frac{\sigma_k(\lambda)}{\sigma_{k-1}(\lambda)}$ is concave with respect to $\lambda\in\Gamma_{k-1}$; here we must make full use of the condition $\lambda\in\Gamma_{k-1}$. More generally, if $0\leq l<k$, then $\Big(\frac{\sigma_k(\lambda)}{\sigma_{l}(\lambda)}\Big)^{\frac{1}{k-l}}$ is concave with respect to $\lambda\in\Gamma_{k}$.
For the case $\lambda\in\Gamma_k$, see the proof in G. Lieberman's book [Second Order Parabolic Differential Equations, 2nd ed.], p.~404, or D. S. Mitrinovi\'c [Analytic Inequalities], p.~102.\\
We only need to show that for any $\lambda,\mu\in\Gamma_k$,
$$
\frac{\sigma_k(\lambda+\mu)}{\sigma_{k-1}(\lambda+\mu)}\geq\frac{\sigma_k(\lambda)}{\sigma_{k-1}(\lambda)}+\frac{\sigma_k(\mu)}{\sigma_{k-1}(\mu)}.
$$
Once the above inequality has been established, it is easy to prove that $\Big(\frac{\sigma_k(\lambda)}{\sigma_{l}(\lambda)}\Big)^{\frac{1}{k-l}}$ is concave.
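Indeed, $\frac{\sigma_k}{\sigma_{k-1}}$ is homogeneous of degree one, so for $\lambda,\mu\in\Gamma_k$ and $t\in(0,1)$ the inequality above gives
$$
\frac{\sigma_k}{\sigma_{k-1}}\big(t\lambda+(1-t)\mu\big)\geq\frac{\sigma_k}{\sigma_{k-1}}(t\lambda)+\frac{\sigma_k}{\sigma_{k-1}}\big((1-t)\mu\big)=t\,\frac{\sigma_k}{\sigma_{k-1}}(\lambda)+(1-t)\,\frac{\sigma_k}{\sigma_{k-1}}(\mu),
$$
which is precisely the concavity of $\frac{\sigma_k}{\sigma_{k-1}}$ on the convex cone $\Gamma_k$.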
\\
Matrix case:
$(w_{ij})_{n\times n}$ is symmetric, and $\lambda(w_{ij})=(\lambda_1,\lambda_2,...,\lambda_n)$.
$$
\sigma_k(D^2w)=\frac{1}{k!}\sum\limits_{i_1,...,i_k;j_1,...,j_k}\delta(i_1,...,i_k;j_1,...,j_k)w_{i_1j_1}...w_{i_kj_k},
$$
where $\delta(\cdot;\cdot)$ is the generalized Kronecker symbol.
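For example, for $k=1$ the formula gives $\sigma_1(D^2w)=\sum\limits_i w_{ii}=\Delta w$, while for $k=2$, using $\delta(i_1,i_2;j_1,j_2)=\delta_{i_1j_1}\delta_{i_2j_2}-\delta_{i_1j_2}\delta_{i_2j_1}$,
$$
\sigma_2(D^2w)=\frac{1}{2}\sum\limits_{i,j}\big(w_{ii}w_{jj}-w_{ij}w_{ji}\big)=\frac{1}{2}\Big((\Delta w)^2-|D^2w|^2\Big).
$$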
$$
\frac{\partial\sigma_k(D^2w)}{\partial w_{ij}}=\frac{1}{(k-1)!}\sum\limits_{i_1,...,i_{k-1};j_1,...,j_{k-1}}\delta(i,i_1,...,i_{k-1};j,j_1,...,j_{k-1})w_{i_1j_1}...w_{i_{k-1}j_{k-1}}.
$$
Similarly, we can define
$$
\frac{\partial^2\sigma_k(D^2w)}{\partial w_{ij}\partial w_{rs}}=\frac{1}{(k-2)!}\sum\limits_{i_1,...,i_{k-2};j_1,...,j_{k-2}}\delta(i,r,i_1,...,i_{k-2};j,s,j_1,...,j_{k-2})w_{i_1j_1}...w_{i_{k-2}j_{k-2}}.
$$
Let $W=(w_{ij})_{n\times n}$ be symmetric with $\lambda(W)=(\lambda_1,\lambda_2,...,\lambda_n)$. \\
If $W$ is diagonal and the eigenvalues $\lambda_i=w_{ii}$ are mutually distinct, then\\
(1)$\frac{\partial\lambda_i}{\partial w_{ii}}=1$, and otherwise $\frac{\partial\lambda_i}{\partial w_{rs}}=0$;\\
(2)$\frac{\partial^2\lambda_i}{\partial w_{ij}\partial w_{ji}}=\frac{1}{\lambda_i-\lambda_j}$ for $i\neq j$, and otherwise $\frac{\partial^2\lambda_i}{\partial w_{rs}\partial w_{pq}}=0$.
Some corollaries: if $W$ is diagonal, then\\
(1)$\frac{\partial\sigma_k(W)}{\partial w_{ii}}=\sigma_{k-1}(\lambda|i)$, and otherwise $\frac{\partial\sigma_k(W)}{\partial w_{ij}}=0$. It is easy to check (by Lemma 1.1(5)) that
$$
\sum\limits_{i,j,m}\frac{\partial\sigma_k(W)}{\partial w_{ij}}w_{im}w_{mj}=\sum\limits_{m}\lambda_m^2\sigma_{k-1}(\lambda|m)=\sigma_1(\lambda)\cdot\sigma_k(\lambda)-(k+1)\sigma_{k+1}(\lambda).
$$
(2) Second derivatives:\\
$\frac{\partial^2\sigma_k(W)}{\partial w_{ij}\partial w_{rs}}=\sigma_{k-2}(\lambda|i,r)$, if $i=j,\ r=s,\ i\neq r$; \\ $\frac{\partial^2\sigma_k(W)}{\partial w_{ij}\partial w_{rs}}=-\sigma_{k-2}(\lambda|i,r)$, if $i=s,\ j=r,\ i\neq r$;\\
otherwise $\frac{\partial^2\sigma_k(W)}{\partial w_{ij}\partial w_{rs}}=0$.\\
Divergence-free structure: for $W=D^2w$, $\sum\limits_{i=1}^n\partial_i\Big(\frac{\partial\sigma_k(D^2w)}{\partial w_{ij}}\Big)=0$ for any $j=1,2,...,n$.\\
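For instance, for $k=2$ the formula above gives $\frac{\partial\sigma_2(D^2w)}{\partial w_{ij}}=\Delta w\,\delta_{ij}-w_{ij}$, and hence
$$
\sum\limits_{i=1}^n\partial_i\Big(\Delta w\,\delta_{ij}-w_{ij}\Big)=\partial_j\Delta w-\partial_j\Delta w=0 .
$$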
The above formulas can be generalized to $F(W)=f(\lambda(W))$. If $W$ is diagonal, then\\
(1)$\frac{\partial F(W)}{\partial w_{ij}}=\frac{\partial f(\lambda)}{\partial\lambda_i}\delta_{ij}$;\\
(2)$\frac{\partial^2 F(W)}{\partial w_{ij}\partial w_{rs}}=\frac{\partial^2 f(\lambda)}{\partial\lambda_i\partial\lambda_r}\delta_{ij}\delta_{rs}+\frac{\frac{\partial f(\lambda)}{\partial\lambda_i}-\frac{\partial f(\lambda)}{\partial\lambda_j}}{\lambda_i-\lambda_j }\delta_{is}\delta_{jr}(1-\delta_{ij})$.\\
Let $g=\log\sigma_k(\lambda)$, $\lambda\in\Gamma_k$; then
$$
\sum\limits_{i=1}^n\Big(g_{ii}+\frac{g_i}{\lambda_i}\Big)\xi_i^2+\sum_{i\neq j}g_{ij}\xi_i\xi_j\geq0.
$$
See Guan and Ma, Inventiones Mathematicae.
For a positive function $f(\lambda)$ which is homogeneous of degree one, $f(\lambda)$ is concave $\Longleftrightarrow$ $\log f(\lambda)$ is concave.
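One direction is immediate, since $\log$ is concave and increasing. For the converse, here is a standard sketch: assume $f>0$ is homogeneous of degree one and $\log f$ is concave; given $\lambda,\mu$ in the (convex cone) domain of $f$, set $t=\frac{f(\lambda)}{f(\lambda)+f(\mu)}$, so that
$$
f(\lambda+\mu)=f\Big(t\,\frac{\lambda}{t}+(1-t)\,\frac{\mu}{1-t}\Big)\geq f\Big(\frac{\lambda}{t}\Big)^{t}f\Big(\frac{\mu}{1-t}\Big)^{1-t}=f(\lambda)+f(\mu).
$$
Thus $f$ is superadditive, and combined with the homogeneity this gives the concavity of $f$, as in the argument above.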