Metrics are probably the most critical element of a registration problem. The metric defines the goal of the process: it measures how well the Target object is matched by the Reference object after the transform has been applied to it. The metric should be selected according to the types of objects to be registered and the expected kind of misalignment. Some metrics have a rather large capture region, which means that the optimizer will be able to find its way to a maximum even if the misalignment is high. Typically, large capture regions are associated with low precision for the location of the maximum. Other metrics can provide high precision for the final registration, but usually require initialization quite close to the optimal value.
Unfortunately there are no clear rules about how to select a metric, other than trying several of them under different conditions. In some cases it can be advantageous to use a particular metric to obtain an initial approximation of the transformation, and then switch to another, more sensitive metric to achieve better precision in the final result.
Metrics depend on the objects they compare. The toolkit currently offers <em> Image To Image </em> and <em> PointSet to Image </em> metrics as follows:
\li <b> Mean Squares </b> Sum of squared differences between intensity values, \f$ \sum_i^n{ ( a_i - b_i )^2 } \f$. It requires the two objects to have intensity values in the same range. A small sketch of this metric and the two that follow appears after this list.
\li <b> Normalized Correlation </b> Correlation between intensity values divided by the square root of the autocorrelation of both target and reference objects: \f$ \frac{\sum_i^n{a_i b_i}}{\sqrt{\sum_i^n{a_i^2} \sum_i^n{b_i^2}}} \f$. This metric makes it possible to register objects whose intensity values are related by a linear transformation.
\li <b> Pattern Intensity </b> Squared differences between intensity values are passed through a function of the form \f$ \frac{1}{1+x} \f$ and then summed. This metric has the advantage of increasing simultaneously when more samples are available and when intensity values are close.
\li <b> Mutual Information </b> Mutual information is based on a concept from information theory. The mutual information between two sets measures how much knowledge of one set reduces the uncertainty about the other. Given a set of values \f$ A=\{a_i\} \f$, its entropy \f$ H(A) \f$ is defined by \f$ H(A) = \sum_i^n{ - p(a_i) \log( p(a_i) ) } \f$, where \f$ p(a_i) \f$ are the probabilities of the values in the set. Entropy can be interpreted as the mean reduction of uncertainty that is obtained when one of the particular values is found during sampling. Given two sets \f$ A=\{a_i\} \f$ and \f$ B=\{b_i\} \f$, their joint entropy is given by the joint probabilities \f$ p(a_i,b_i) \f$ as \f$ H(A,B) = \sum_i^n{ - p(a_i,b_i) \log( p(a_i,b_i) ) } \f$. Mutual information is obtained by subtracting the joint entropy from the sum of the individual entropies, \f$ I(A,B) = H(A) + H(B) - H(A,B) \f$, and indicates how much the uncertainty about one set is reduced by the knowledge of the second set. Mutual information is the metric of choice when images from different modalities need to be registered. A histogram-based sketch of this computation appears after this list.
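
The following is a minimal, toolkit-independent sketch of the three intensity-difference metrics above, evaluated over paired intensity samples \f$ a_i \f$ (target) and \f$ b_i \f$ (transformed reference). The function names, the example data, and the \f$ \sigma \f$ scaling inside the pattern intensity function are illustrative assumptions and do not correspond to the toolkit's metric classes.

\code
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Mean squares: sum_i ( a_i - b_i )^2
double MeanSquares(const std::vector<double> & a, const std::vector<double> & b)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    {
    const double d = a[i] - b[i];
    sum += d * d;
    }
  return sum;
}

// Normalized correlation: sum_i a_i b_i / sqrt( sum_i a_i^2 * sum_i b_i^2 )
double NormalizedCorrelation(const std::vector<double> & a, const std::vector<double> & b)
{
  double ab = 0.0;
  double aa = 0.0;
  double bb = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    {
    ab += a[i] * b[i];
    aa += a[i] * a[i];
    bb += b[i] * b[i];
    }
  return ab / std::sqrt(aa * bb);
}

// Pattern intensity: each squared difference is passed through 1 / (1 + x),
// here scaled by an assumed constant sigma, so the measure increases both
// with the number of samples and with how close the intensities are.
double PatternIntensity(const std::vector<double> & a, const std::vector<double> & b,
                        double sigma = 10.0)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    {
    const double d2 = (a[i] - b[i]) * (a[i] - b[i]);
    sum += 1.0 / (1.0 + d2 / (sigma * sigma));
    }
  return sum;
}

int main()
{
  // Hypothetical paired intensity samples after applying the transform.
  const std::vector<double> target    = { 10, 20, 30, 40, 50 };
  const std::vector<double> reference = { 12, 19, 33, 38, 52 };
  std::cout << "Mean squares           : " << MeanSquares(target, reference) << std::endl;
  std::cout << "Normalized correlation : " << NormalizedCorrelation(target, reference) << std::endl;
  std::cout << "Pattern intensity      : " << PatternIntensity(target, reference) << std::endl;
  return 0;
}
\endcode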
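
The sketch below estimates mutual information from a joint histogram of paired samples, following the definitions above. The number of bins and the toy data are assumptions made for illustration; the metric implementations in the toolkit rely on more elaborate density estimates.

\code
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// I(A,B) = H(A) + H(B) - H(A,B), estimated from a joint histogram of samples
// that have already been quantized into `bins` intensity bins.
double MutualInformation(const std::vector<int> & a, const std::vector<int> & b, int bins)
{
  const double n = static_cast<double>(a.size());
  std::vector<double> pa(bins, 0.0);
  std::vector<double> pb(bins, 0.0);
  std::vector<double> pab(bins * bins, 0.0);

  // Estimate marginal and joint probabilities by counting bin occurrences.
  for (std::size_t i = 0; i < a.size(); ++i)
    {
    pa[a[i]] += 1.0 / n;
    pb[b[i]] += 1.0 / n;
    pab[a[i] * bins + b[i]] += 1.0 / n;
    }

  // H(P) = - sum_i p_i log( p_i ), skipping empty bins.
  auto entropy = [](const std::vector<double> & p)
    {
    double h = 0.0;
    for (double pi : p)
      {
      if (pi > 0.0)
        {
        h -= pi * std::log(pi);
        }
      }
    return h;
    };

  return entropy(pa) + entropy(pb) - entropy(pab);
}

int main()
{
  // Toy samples already quantized into 4 bins (hypothetical data). The second
  // set is a deterministic remapping of the first, so knowing one value
  // removes all uncertainty about the other.
  const std::vector<int> a = { 0, 0, 1, 1, 2, 2, 3, 3 };
  const std::vector<int> b = { 1, 1, 2, 2, 3, 3, 0, 0 };
  std::cout << "Mutual information: " << MutualInformation(a, b, 4) << " nats" << std::endl;
  return 0;
}
\endcode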