diff --git a/Documentation/Cookbook/rst/recipes/pbclassif.rst b/Documentation/Cookbook/rst/recipes/pbclassif.rst
index 0f5a7ce801d13edb9bf6186446c166ff9ea2705c..a8e79cc7a051c1eccd22525443179ef28cd83a14 100644
--- a/Documentation/Cookbook/rst/recipes/pbclassif.rst
+++ b/Documentation/Cookbook/rst/recipes/pbclassif.rst
@@ -104,10 +104,10 @@ There are several strategies to compute those sampling rates:
   of samples, which is user-defined.
 * **Smallest class strategy:** The class with the least number of samples
   will be fully sampled. All other classes will be sampled with the
-same number of samples.
+  same number of samples.
 * **Percent strategy:** Each class will be sampled with a user-defined
   percentage (same value for all classes) of samples available in this
-class.
+  class.
-* **Total strategy:** A global number of samples to generate is
-  divided proportionally among each class (classes proportions are
-  enforced).
+* **Total strategy:** A global number of samples to generate is
+  divided proportionally among the classes (class proportions are
+  enforced).
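+
+As an illustration of these strategies, the following Python sketch is purely
+hypothetical (the class names and sample counts are made up, and it is not
+the OTB implementation); it shows how each rule turns the number of available
+samples per class into a number of selected samples per class::
+
+    # Available samples per class (hypothetical values)
+    available = {"water": 500, "forest": 2000, "urban": 1200}
+
+    def constant(avail, n):
+        # Same user-defined number for every class, capped by availability
+        return {c: min(n, m) for c, m in avail.items()}
+
+    def smallest(avail):
+        # All classes sampled with the size of the least represented class
+        n = min(avail.values())
+        return {c: n for c in avail}
+
+    def percent(avail, p):
+        # A user-defined fraction of each class (same value for all classes)
+        return {c: int(p * m) for c, m in avail.items()}
+
+    def total(avail, t):
+        # A global number of samples divided proportionally among classes
+        s = sum(avail.values())
+        return {c: int(t * m / s) for c, m in avail.items()}
+
+    print(constant(available, 400))   # {'water': 400, 'forest': 400, 'urban': 400}
+    print(smallest(available))        # {'water': 500, 'forest': 500, 'urban': 500}
+    print(percent(available, 0.2))    # {'water': 100, 'forest': 400, 'urban': 240}
+    print(total(available, 1000))     # {'water': 135, 'forest': 540, 'urban': 324}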
@@ -269,7 +269,7 @@ image.
-  - *Mode = equal:* For each image :math:`i` and each class :math:`c`,
-    :math:`N_i( c ) = (total / L) * (\frac{Ti(c)}{sum_k(Ti(k))})` where :math:`total` is the total number of samples specified
-  - *Mode = custom:* For each image :math:`i` and each class :math:`c`,
-  :math:`Ni( c ) = total(i) * (\frac{Ti(c)}{sum_k(Ti(k))})` where :math:`total(i)` is the total number of samples specified for image :math:`i`
+  - *Mode = equal:* For each image :math:`i` and each class :math:`c`,
+    :math:`N_i(c) = \frac{total}{L} \cdot \frac{T_i(c)}{\sum_k T_i(k)}` where :math:`total` is the total number of samples specified
+  - *Mode = custom:* For each image :math:`i` and each class :math:`c`,
+    :math:`N_i(c) = total(i) \cdot \frac{T_i(c)}{\sum_k T_i(k)}` where :math:`total(i)` is the total number of samples specified for image :math:`i`
     
 * **Strategy = smallest class**
       
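+
+As a small worked example of the *custom* mode formula above, with
+hypothetical numbers (this is not the OTB implementation)::
+
+    def custom_mode(T_i, total_i):
+        # N_i(c) = total(i) * T_i(c) / sum_k(T_i(k))
+        s = sum(T_i.values())
+        return {c: round(total_i * t / s) for c, t in T_i.items()}
+
+    # Image i holds 300 'water' and 700 'forest' samples; 100 are requested:
+    print(custom_mode({"water": 300, "forest": 700}, 100))
+    # {'water': 30, 'forest': 70}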
@@ -722,6 +722,7 @@ used to predict output values. The applications to do that are and .
 .. figure:: ../Art/MonteverdiImages/classification_chain_inputimage.jpg
 .. figure:: ../Art/MonteverdiImages/classification_chain_fancyclassif_CMR_input.png
 .. figure:: ../Art/MonteverdiImages/classification_chain_fancyclassif_CMR_3.png
+
-Figure 6: From left to right: Original image, fancy colored classified image and regularized classification map with radius equal to 3 pixels.
+Figure 6: From left to right: original image, fancy-colored classified image, and regularized classification map with a radius of 3 pixels.
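+
+The regularized map in Figure 6 can be produced with the
+*ClassificationMapRegularization* application. Below is a minimal sketch
+using the OTB Python bindings; the file names are placeholders and the
+parameter keys are quoted from memory, so check
+``otbcli_ClassificationMapRegularization -help`` for the exact interface::
+
+    import otbApplication
+
+    # Majority-voting regularization of a classification map
+    app = otbApplication.Registry.CreateApplication(
+        "ClassificationMapRegularization")
+    app.SetParameterString("io.in", "labeled_image.tif")     # input classification
+    app.SetParameterString("io.out", "regularized_map.tif")  # smoothed output
+    app.SetParameterInt("ip.radius", 3)  # neighborhood radius, as in Figure 6
+    app.ExecuteAndWriteOutput()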
 
-The input data set for training must have the following structure :
+The input data set for training must have the following structure: