Commit 7fc121ec authored by Manuel Grizonnet's avatar Manuel Grizonnet

DOC:update MDMDNMF class documentation

parent 15c40879
@@ -37,79 +37,86 @@ namespace otb
 * A. Huck, M. Guillaume and J. Blanc-Talon, IEEE TGRS, vol. 48, no. 6, pp. 2590-2602, 2010.
* A. Huck and M. Guillaume, in WHISPERS, 2010, Grenoble.
*
* Let \f$ \matR \f$ be the matrix of the hyperspectral data, whose
* \f$ I \f$ columns are the
* spectral pixels and the \f$ L \f$ rows are the vectorial spectral band
 * images. The linear mixing model can be written as follows:
 * \f[
 * \matR=\matA \matS + \matN= \matX + \matN
 * \f]
* The \f$ I \f$ columns of \f$ \matR \f$ contain the spectral pixels
 * and the \f$ I \f$ columns of \f$ \matS \f$ hold their respective sets of abundance
* fractions. The \f$ J \f$ rows of \f$ \matS \f$ are the abundance maps
* corresponding to the respective end-members. The \f$ J \f$ columns of
 * \f$ \matA \f$ are the end-member spectra, and \f$ \matX \f$ is the signal
* matrix. Both \f$ \matA \f$ and \f$ \matS \f$ are unknown.
*
 * The basic NMF formulation is to find two matrices \f$ \hat{\matA} \f$ and
 * \f$ \hat{\matS} \f$ such that:
 * \f[
 * \matX\simeq \hat{\matA} \hat{\matS}
 * \f]
* NMF based algorithms consider the
 * properties of the dual spaces \f$ span^+(\matA') \f$ and
 * \f$ span^+(\matS) \f$, in
 * which \f$ span^+(\mathbf m^1 ...\mathbf m^d)=\{\mathbf v=\sum_i \mathbf
 * m^i\mathbf a_i|\mathbf a\in \mathbb{R}_+^d\} \f$. The
* positiveness is then a fundamental assumption and is exploited to
* restrict the admissible solutions set.
*
 * A commonly used solution is to minimize the reconstruction quadratic
 * error:
 * \f$ RQE({\matA}, {\matS})=\|\matR-{\matA} {\matS}\|^2_F \f$. In order to
* satisfy the sum-to-one constraint for hyperspectral data, a
* regularization term \f$ STU(\matS) \f$ can be added to the objective
* function.
*
 * A generic expression for the optimized function is
 * \f[
 * f(\matA,\matS)=\|\matA \matS-\matR\|_{norm}+\sum_i \lambda_i
 * D_i(\matA) + \sum_j \lambda_j D_j(\matS)
 * \f]
 * in which \f$ \|\matA
 * \matS-\matR\|_{norm} \f$ is the connection-to-the-data term, and
 * \f$ \lambda_{\{i, j\}} \f$ are regularization parameters for end-member and
 * abundance constraints \f$ D_{\{i, j\}} \f$.
 * Huck
 * proposes another regularization term,
 * \f$ D_A(\matA)=Tr(\matA^T\matA)-\frac{1}{L}Tr\left ( \matA^T \1_{LL}\matA
 * \right) \f$, which ensures low spectral dispersion on endmembers.
* The physical
 * motivation is based on the assumption that in most situations, the
 * whole set of pure materials does not appear in each pixel, but
* selectively in multiple piece-wise convex sets. As a consequence, the mean value of the abundance,
* \f$ \frac{1}{J} \f$, is the least likely one. The maximum abundance
* dispersion condition is given by
 * \f$ D_S(\matS)=-\|\matS-\frac{1}{J}\1_{JI}\|^2_F \f$. The MDMD-NMF algorithm
 * minimizes the following function: \f$ f(\matA,\matS) =RQE(\matA,
 * \matS)+\delta\cdot STU(\matS)+\lambda_A D_A(\matA)-\lambda_S D_S(\matS)
 * \f$,
 * with \f$ STU \f$ the sum-to-one constraint.
*
 * In the literature, NMF-based optimization algorithms are mainly based
 * on multiplicative update rules, alternating gradient descent
 * iterations, or alternating least squares methods. In MDMD-NMF, the update rules
 * at each iteration become:
 * \f{eqnarray*}{
 * \matS &\leftarrow &P\left [\matS-\mu_S \left( \bar\matA^T
 * (\bar\matA\matS-\bar\matR)-\lambda_S(\matS-\frac{1}{J}\1_{JI})\right)\right ]\\
 * \matA &\leftarrow &P\left [\matA-\mu_A \left(
 * (\matA\matS-\matR)\matS^T +\lambda_A(\matA-\frac{1}{L}\1_{LL}\matA)\right)\right ]
 * \f}
 * where \f$ \mu_A \f$ and \f$ \mu_S \f$
 * are the step sizes.
 * Huck proposes a
 * multiscale method to determine the coefficients \f$ \mu_A \f$ and
 * \f$ \mu_S \f$. The projection operator \f$ P \f$, with
 * \f$ P(M)_{ij}=0 \f$ if \f$ M_{ij}<0 \f$, at each step ensures the
 * positivity constraint for \f$ \matA \f$ and \f$ \matS \f$, and \f$
 * \bar\matR \f$ and
 * \f$ \bar\matA \f$ include the sum-to-one constraint:
 * \f[
 * \bar\matR=\left[
 * \begin{array}{c} \matR \\
 * \delta\cdot\1_{1I}\end{array}\right],\enspace \bar\matA=\left[
 * \begin{array}{c} \matA \\
 * \delta\cdot\1_{1J}\end{array}\right]
 * \f]
*
* \ingroup ImageFilters
*/