1. Eigendecomposition of a Symmetric Matrix:
(1) Solve the equation $\det(\lambda I-A)=0$ to get eigenvalues $\lambda_1,\lambda_2,...,\lambda_n$;
(2) Solve each $(\lambda_k I-A)\vec{x}=0$ to get eigenvectors $\vec{v}_1,\vec{v}_2,...,\vec{v}_n$;
(3) Orthogonalize and normalize the eigenvectors via the Gram-Schmidt process (eigenvectors belonging to distinct eigenvalues of a symmetric matrix are already orthogonal, so only those within a repeated eigenspace need the orthogonalization step);
(4) $A=V\Lambda V^T$, where $\Lambda=diag(\lambda_1,\lambda_2,...,\lambda_n)$ and $\vec{v}_k$ is the $k$th column of $V$.
Please see my demo program for Orthogonal Diagonalization.
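For a quick illustration (this is a minimal NumPy sketch, not the demo program itself), the symmetric matrix $A$ below is an arbitrary example:
```python
import numpy as np

# A hypothetical symmetric matrix, used only as an example
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])

# np.linalg.eigh is designed for symmetric matrices: it returns real
# eigenvalues (ascending) and already-orthonormal eigenvectors, so it
# covers steps (1)-(3) internally.
eigvals, V = np.linalg.eigh(A)
Lam = np.diag(eigvals)

# Step (4): A = V Lam V^T with V orthogonal
print(np.allclose(A, V @ Lam @ V.T))    # True
print(np.allclose(V.T @ V, np.eye(3)))  # True
```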
2. Singular Value Decomposition (SVD):
(1) Eigendecomposition of $A^T A$ ($A\in\mathbb{R}^{m\times n}$):
eigenvalues $\lambda_1,\lambda_2,...,\lambda_n$ in decreasing order,
and corresponding eigenvectors $\vec{v}_1,\vec{v}_2,...,\vec{v}_n$;
(2) Singular Values: $\sqrt{\lambda_1},\sqrt{\lambda_2},...,\sqrt{\lambda_n}$;
Left singular vectors: $\vec{u}_1,\vec{u}_2,...,\vec{u}_n$, where $\vec{u}_k=\frac{1}{\sqrt{\lambda_k}}A\vec{v}_k$ (valid for $\lambda_k>0$; when $m>n$, extend these vectors to an orthonormal basis of $\mathbb{R}^m$ to fill out $U$);
Right singular vectors: $\vec{v}_1,\vec{v}_2,...,\vec{v}_n$;
(3) $A=U\Lambda V^T$, where $U\in\mathbb{R}^{m\times m}$ and $V\in\mathbb{R}^{n\times n}$ are orthogonal and $\Lambda\in\mathbb{R}^{m\times n}$ carries the singular values on its diagonal.
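A minimal NumPy sketch of this construction, assuming $A$ has full column rank so that every $\lambda_k>0$; the example matrix is arbitrary, and only the thin $m\times n$ version of $U$ is built:
```python
import numpy as np

# Hypothetical m x n example matrix (m = 3, n = 2) with full column rank
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

# (1) eigendecomposition of A^T A, reordered so eigenvalues are decreasing
lam, V = np.linalg.eigh(A.T @ A)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# (2) singular values and left singular vectors u_k = A v_k / sqrt(lambda_k)
sigma = np.sqrt(lam)
U = (A @ V) / sigma              # divides column k by sigma_k

# (3) A = U Sigma V^T (thin SVD; extending U to m x m needs extra
#     orthonormal columns when m > n)
print(np.allclose(A, U @ np.diag(sigma) @ V.T))  # True
```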
3. Principal Component Analysis (PCA):
(1) Preprocessing of Dataset:
$\mu_j = \frac{1}{N}\sum_{n=1}^N X_{nj}$, $\sigma_j=\sqrt{\frac{1}{N-1}\sum_{n=1}^N(X_{nj}-\mu_j)^2}$, $X_{nj}\leftarrow(X_{nj}-\mu_j)/\sigma_j$;
(2) Singular Value Decomposition:
$X_{N\times D}\approx U_{N\times K}\Lambda_{K\times K}V_{K\times D}^T$;
(3) $X_{N\times D}V_{D\times K}\approx U_{N\times K}\Lambda_{K\times K}=\widetilde{X}_{N\times K}$.
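A minimal NumPy sketch of the whole PCA pipeline following steps (1)-(3); the random toy dataset and the choice $K=2$ are assumptions for illustration:
```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 100, 5, 2
X = rng.normal(size=(N, D))          # hypothetical toy dataset

# (1) standardize each column: zero mean, unit sample standard deviation
mu = X.mean(axis=0)
sigma = X.std(axis=0, ddof=1)
X = (X - mu) / sigma

# (2) SVD of the standardized data; rows of Vt are the right singular vectors
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# (3) keep the top-K components; both expressions give the same projection
X_tilde     = X @ Vt[:K].T           # X V_{D x K}
X_tilde_alt = U[:, :K] * s[:K]       # U_{N x K} Lambda_{K x K}
print(np.allclose(X_tilde, X_tilde_alt))  # True
```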
References:
1. LeftNotEasy's blog: http://www.cnblogs.com/LeftNotEasy/archive/2011/01/19/svd-and-applications.html
2. Bishop, Christopher M. Pattern Recognition and Machine Learning. Singapore: Springer, 2006.