diff --git a/Documentation/Cookbook/rst/Installation_Linux.txt b/Documentation/Cookbook/rst/Installation_Linux.txt
index f896cde00edadd101ceeea0353d1b09ba331334d..72297d480d6b6c794a6e5cb9048997085c79f456 100644
--- a/Documentation/Cookbook/rst/Installation_Linux.txt
+++ b/Documentation/Cookbook/rst/Installation_Linux.txt
@@ -4,7 +4,7 @@ Download it from `OTB's download page
 <https://www.orfeo-toolbox.org/download>`__.
 
 This package is a self-extractible archive. You may uncompress it with a
-double-click on the file, or with the command line :
+double-click on the file, or with the command line:
 
 .. parsed-literal::
 
@@ -20,27 +20,27 @@ Please note that the resulting installation is not meant to be moved,
 you should uncompress the archive in its final location. Once the
 archive is extracted, the directory structure is made of:
 
--  ``monteverdi.sh`` : A launcher script for Monteverdi
+-  ``monteverdi.sh``: A launcher script for Monteverdi
 
--  ``mapla.sh`` : A launcher script for Mapla
+-  ``mapla.sh``: A launcher script for Mapla
 
--  ``otbenv.profile`` : A script to initialize the environment for OTB
+-  ``otbenv.profile``: A script to initialize the environment for OTB
    executables
 
--  ``bin`` : A folder containing application launchers (otbcli.sh,
+-  ``bin``: A folder containing application launchers (otbcli.sh,
    otbgui.sh), Monteverdi and Mapla.
 
--  ``lib`` : A folder containing all shared libraries and OTB
+-  ``lib``: A folder containing all shared libraries and OTB
    applications.
 
--  ``share`` : A folder containing common resources and copyright
+-  ``share``: A folder containing common resources and copyright
    mentions.
 
 In order to run the command line launchers, this package doesn’t require
 any special library that is not present in most modern Linux
 distributions. There is a small caveat for "expat" though. The binaries depend
 on "libexpat.so", which can be supplied by most package managers (apt, yum, ...).
-If not already present, look for one of the following packages :
+If not already present, look for one of the following packages:
 
 ::
 
@@ -48,7 +48,7 @@ If not already present, look for one of the following packages :
 
 The graphical executable (otbgui launchers, Monteverdi
 and Mapla) use the X11 libraries, which are widely used in a lot of
-distributions :
+distributions:
 
 ::
 
diff --git a/Documentation/Cookbook/rst/Installation_Macx.txt b/Documentation/Cookbook/rst/Installation_Macx.txt
index eac8fb4b7655f77c612a4cbee9c140b4b1b56f09..5cca9511690d1cba8a0f10320504179fef399015 100644
--- a/Documentation/Cookbook/rst/Installation_Macx.txt
+++ b/Documentation/Cookbook/rst/Installation_Macx.txt
@@ -12,17 +12,17 @@ the same directory along with OTB-|release|-Darwin64.run
 
 Contents of OTB-|release|-Darwin64 is briefly listed below:
 
--  ``Monteverdi.app`` : A Mac OSX .app for Monteverdi
+-  ``Monteverdi.app``: A Mac OSX .app for Monteverdi
 
--  ``Mapla.app`` : A Mac OSX .app for Mapla.
+-  ``Mapla.app``: A Mac OSX .app for Mapla.
 
--  ``bin`` : A folder containing application launchers (otbcli.sh,
+-  ``bin``: A folder containing application launchers (otbcli.sh,
    otbgui.sh), monteverdi and mapla binaries.
 
--  ``lib`` : A folder containing all shared libraries and OTB
+-  ``lib``: A folder containing all shared libraries and OTB
    applications.
 
--  ``share`` : A folder containing common resources and copyright
+-  ``share``: A folder containing common resources and copyright
    mentions.
 
 Python bindings
diff --git a/Documentation/Cookbook/rst/Installation_Windows.txt b/Documentation/Cookbook/rst/Installation_Windows.txt
index 82b823a8b4c02531d65c029946652adb4598b55f..d9a72803aa4b5a0d61eb2adb4906e2ee4d50415d 100644
--- a/Documentation/Cookbook/rst/Installation_Windows.txt
+++ b/Documentation/Cookbook/rst/Installation_Windows.txt
@@ -6,17 +6,17 @@ Pick the correct version (32 bit or 64 bit) depending on your system.
 Extract the archive and use one of the launchers, they contain all applications
 and their launchers (both command line and graphical launchers are provided):
 
--  ``monteverdi.bat`` : A launcher script for Monteverdi
+-  ``monteverdi.bat``: A launcher script for Monteverdi
 
--  ``mapla.bat`` : A launcher script for Mapla
+-  ``mapla.bat``: A launcher script for Mapla
 
--  ``otbenv.bat`` : A script to initialize the environment for OTB
+-  ``otbenv.bat``: A script to initialize the environment for OTB
    executables
 
--  ``bin`` : A folder containing application launchers (otbcli.bat,
+-  ``bin``: A folder containing application launchers (otbcli.bat,
    otbgui.bat) and the DLLs.
 
--  ``lib`` : A folder containing application DLLs.
+-  ``lib``: A folder containing application DLLs.
 
 The applications can be launched from the Mapla launcher. If you want to
 use the otbcli and otbgui launchers, you can initialize a command prompt
diff --git a/Documentation/Cookbook/rst/Monteverdi.rst b/Documentation/Cookbook/rst/Monteverdi.rst
index 57614e613d75aa505fa9cce1ce6cbf94f9f481b2..991b4d6e7c5b81ab8ee90fe899aa17de91558734 100644
--- a/Documentation/Cookbook/rst/Monteverdi.rst
+++ b/Documentation/Cookbook/rst/Monteverdi.rst
@@ -56,14 +56,14 @@ The top toolbar is made up of ten icons; from left to right:
 
 #. gives/changes the current projection, used as reference of the view
 
-#. selects the effect to be applied to the selected layer :
+#. selects the effect to be applied to the selected layer:
    chessboard, local constrast, local translucency, normal, spectral
    angle, swipe (horizontal and vertical)
 
-#. a parameter used for the following effects : chessboard, local
+#. a parameter used for the following effects: chessboard, local
    contrast, local translucency, spectral angle
 
-#. a parameter used for the following effects : local constrast,
+#. a parameter used for the following effects: local contrast,
    spectral angle
 
 Image displaying
@@ -74,7 +74,7 @@ the user. There are many nice keyboard shortcuts or mouse tricks that
 let the user have a better experience in navigating throughout the
 loaded images. These shortcuts and tricks are given within the Help item
 of the main menu, by clicking Keymap; here is a short list of the most
-useful ones :
+useful ones:
 
 The classical ones:
 
@@ -106,22 +106,22 @@ In the layer stack part:
 Right side dock
 ~~~~~~~~~~~~~~~
 
-The dock on the right side is divided into four tabs :
+The dock on the right side is divided into four tabs:
 
--  Quicklook : gives the user a degraded view of the whole extent,
+-  Quicklook: gives the user a degraded view of the whole extent,
    letting him/her easily select the area to be displayed
 
--  Histogram : gives the user information about the value distribution
+-  Histogram: gives the user information about the value distribution
    of the selected channels. By clicking the mouse’s left button, user
    can sample their values.
 
--  Color Setup : lets the user map the image channels to the RGB
+-  Color Setup: lets the user map the image channels to the RGB
    channels. Also lets him/her set the alpha parameter (translucency).
 
--  Color dynamics : lets the user change the displaying dynamics of a
+-  Color dynamics: lets the user change the displaying dynamics of a
    selected image. For each RGB channel (each mapped to an image
    channel), the user can decide how the pixel range of a selected image
-   will be shortcut before being rescaled to 0-255 : either by setting
+   will be clipped before being rescaled to 0-255: either by setting
    the extremal values, or by setting the extremal quantiles.
 
 Each tab is represented by the figures below ( [fig:quickhisto]
@@ -139,29 +139,29 @@ loaded images: projection, resolution (if available), name, and effect
 applied to the images (see top toolbar subsection). If the user moves
 the mouse over the displayed images, they will get more information:
 
--  (i,j) : pixel index
+-  (i,j): pixel index
 
--  (Red Green Blue) : original image pixel values from channel mapped to
+-  (Red Green Blue): original image pixel values from channel mapped to
    the RGB ones.
 
--  (X,Y) : pixel position
+-  (X,Y): pixel position
 
 Concerning the six icons, from left to right:
 
--  1st : moves the selected layer to the top of the stack
+-  1st: moves the selected layer to the top of the stack
 
--  2nd : moves the selected layer up within the stack
+-  2nd: moves the selected layer up within the stack
 
--  3rd : moves the selected layer down within the stack
+-  3rd: moves the selected layer down within the stack
 
--  4th : moves the selected layer to the bottom of the stack
+-  4th: moves the selected layer to the bottom of the stack
 
--  5th : use selected layer as projection reference
+-  5th: use selected layer as projection reference
 
--  6th : applies all display settings (color-setup, color-dynamics,
+-  6th: applies all display settings (color-setup, color-dynamics,
    shader and so forth) of selected layer to all other layers
 
-The layer stack is represented in the figure below ( [fig:layerstack]) :
+The layer stack is represented in the figure below ( [fig:layerstack]):
 
 .. figure:: Art/MonteverdiImages/layerstack.png
 
@@ -192,7 +192,7 @@ values in a txt file-, solarillumination.txt -solar illumination values
 in watt/m2/micron for each band in a txt file-, and so on... refer to
 the documentation of the application).
 
--  Note : if OTB (on which is based ) is able to parse the metadata of
+-  Note: if OTB (on which it is based) is able to parse the metadata of
    the image to be calibrated, then some of the fields will be
    automatically filled in.
 
@@ -209,7 +209,7 @@ BandMath application is intended to apply mathematical operations on
 pixels (launch it with shortcut CTRL+A). In this example, we are going
 to use this application to change the dynamics of an image, and check
 the result by looking at histogram tab, in the right side dock. The
-formula used is the following : :math:`\text{im1b1} \times 1000`. In the
+formula used is the following: :math:`\text{im1b1} \times 1000`. In the
 figures below ( [fig:BM]), one can notice that the mode of the
 distribution is located at position :math:`356.0935`, whereas in the
 transformed image, the mode is located at position :math:`354737.1454`,
@@ -256,7 +256,7 @@ effects.
 Polarimetry
 ~~~~~~~~~~~
 
-In this example, we are going to use three applications :
+In this example, we are going to use three applications:
 
 -  the first one is SARDecompositions. This application is used to
    compute the HaA decomposition. It takes as inputs three complex
@@ -277,7 +277,7 @@ In this example, we are going to use three applications :
       a gradient of colors to represent the entropy image.
 
    -  method.continuous.lut = hot. We specify here the kind of gradient
-      to be used : low values in black, high ones in white, and
+      to be used: low values in black, high ones in white, and
       intermediate ones in red/orange/yellow...
 
    -  method.continuous.min = 0 and method.continuous.max = 1. Here, the
@@ -295,7 +295,7 @@ Pansharpening
 ~~~~~~~~~~~~~
 
 Finally, let’s try a last example with the Pansharpening application
-(launch it with shortcut CTRL+A). The fields are quite easy to fill in :
+(launch it with shortcut CTRL+A). The fields are quite easy to fill in:
 this application needs a panchromatic image, a XS image, and an output
 image. These images are represented in the figures below ( [fig:ps12]
 and  [fig:ps3]):
@@ -306,12 +306,12 @@ and  [fig:ps3]):
 
 Now, in order to inspect the result properly, these three images are
 loaded in . The pansharpened image is placed to the top of the stack
-layer, and different layer effects are applied to it :
+layer, and different layer effects are applied to it:
 
--  in figure  [fig:ps4] : chessboard effect, to compare the result with
+-  in figure  [fig:ps4]: chessboard effect, to compare the result with
    the XS image.
 
--  in figure  [fig:ps5] : translucency effect, to compare the result
+-  in figure  [fig:ps5]: translucency effect, to compare the result
    with the panchromatic image.
 
 .. figure:: Art/MonteverdiImages/ps4.png
@@ -324,7 +324,7 @@ Conclusion
 The images used in this documentation can be found in the OTB-Data
 repository (https://git.orfeo-toolbox.org/otb-data.git):
 
--  in OTB-Data/Input :
+-  in OTB-Data/Input:
 
    -  QB\_TOULOUSE\_MUL\_Extract\_500\_500.tif and
       QB\_Toulouse\_Ortho\_XS\_ROI\_170x230.tif (GUI presentation)
@@ -335,4 +335,4 @@ repository (https://git.orfeo-toolbox.org/otb-data.git):
    -  QB\_Toulouse\_Ortho\_PAN.tif QB\_Toulouse\_Ortho\_XS.tif
       (pansharpening example)
 
--  in OTB-Data/Input/mv2-test : QB\_1\_ortho.tif
+-  in OTB-Data/Input/mv2-test: QB\_1\_ortho.tif
diff --git a/Documentation/Cookbook/rst/OTB-Applications.rst b/Documentation/Cookbook/rst/OTB-Applications.rst
index a2d72187bf34f8396aa47db7de8325d9500be158..1e2997ca83cc694ec93821e3b9b79bb1350525cb 100644
--- a/Documentation/Cookbook/rst/OTB-Applications.rst
+++ b/Documentation/Cookbook/rst/OTB-Applications.rst
@@ -47,7 +47,7 @@ results in the following help to be displayed:
 ::
 
     $ otbApplicationLauncherCommandLine
-    Usage : ./otbApplicationLauncherCommandLine module_name [MODULEPATH] [arguments]
+    Usage: ./otbApplicationLauncherCommandLine module_name [MODULEPATH] [arguments]
 
 The ``module_name`` parameter corresponds to the application name. The
 ``[MODULEPATH]`` argument is optional and allows to pass to the launcher
@@ -148,7 +148,7 @@ This launcher needs the same two arguments as the command line launcher
 The application paths can be set with the ``OTB_APPLICATION_PATH``
 environment variable, as for the command line launcher. Also, as for the
 command-line application, a more simple script is generated and
-installed by OTB to ease the configuration of the module path : to
+installed by OTB to ease the configuration of the module path: to
 launch the graphical user interface, one will start the
 ``otbgui_Rescale`` script.
 
@@ -229,7 +229,7 @@ application, changing the algorithm at each iteration.
     import otbApplication
 
     # otbApplication.Registry can tell you what application are available
-    print "Available applications : "
+    print "Available applications: "
     print str( otbApplication.Registry.GetAvailableApplications() )
 
     # Let's create the application with codename "Smoothing"
@@ -242,7 +242,7 @@ application, changing the algorithm at each iteration.
     app.SetParameterString("in", argv[1])
 
     # The smoothing algorithm can be set with the "type" parameter key
-    # and can take 3 values : 'mean', 'gaussian', 'anidif'
+    # and can take 3 values: 'mean', 'gaussian', 'anidif'
     for type in ['mean', 'gaussian', 'anidif']:
 
       print 'Running with ' + type + ' smoothing type'
@@ -505,7 +505,7 @@ Extended filenames
 There are multiple ways to define geo-referencing information. For
 instance, one can use a geographic transform, a cartographic projection,
 or a sensor model with RPC coefficients. A single image may contain
-several of these elements, such as in the “ortho-ready” products : this
+several of these elements, such as in the “ortho-ready” products: this
 is a type of product still in sensor geometry (the sensor model is
 supplied with the image) but it also contains an approximative
 geographic transform that can be used to have a quick estimate of the
@@ -513,7 +513,7 @@ image localisation. For instance, your product may contain a “.TIF” file
 for the image, along with a “.RPB” file that contains the sensor model
 coefficients and an “.IMD” file that contains a cartographic projection.
 
-This case leads to the following question : which geo-referencing
+This case leads to the following question: which geo-referencing
 element should be used when opening this image in OTB. In
 fact, it depends on the users need. For an orthorectification
 application, the sensor model must be used. In order to specify which
@@ -680,14 +680,14 @@ Writer options
 
 -  Available values are:
 
-   -  auto : tiled or stripped streaming mode chosen automatically
+   -  auto: tiled or stripped streaming mode chosen automatically
       depending on TileHint read from input files
 
-   -  tiled : tiled streaming mode
+   -  tiled: tiled streaming mode
 
-   -  stripped : stripped streaming mode
+   -  stripped: stripped streaming mode
 
-   -  none : explicitly deactivate streaming
+   -  none: explicitly deactivate streaming
 
 -  Not set by default
 
@@ -701,12 +701,12 @@ Writer options
 
 -  Available values are:
 
-   -  auto : size is estimated from the available memory setting by
+   -  auto: size is estimated from the available memory setting by
       evaluating pipeline memory print
 
-   -  height : size is set by setting height of strips or tiles
+   -  height: size is set by setting height of strips or tiles
 
-   -  nbsplits : size is computed from a given number of splits
+   -  nbsplits: size is computed from a given number of splits
 
 -  Default is auto
 
@@ -720,11 +720,11 @@ Writer options
 
 -  Value is :
 
-   -  if sizemode=auto : available memory in Mb
+   -  if sizemode=auto: available memory in Mb
 
-   -  if sizemode=height : height of the strip or tile in pixels
+   -  if sizemode=height: height of the strip or tile in pixels
 
-   -  if sizemode=nbsplits : number of requested splits for streaming
+   -  if sizemode=nbsplits: number of requested splits for streaming
 
 -  If not provided, the default value is set to 0 and result in
    different behaviour depending on sizemode (if set to height or
diff --git a/Documentation/Cookbook/rst/recipes/bandmathx.rst b/Documentation/Cookbook/rst/recipes/bandmathx.rst
index d71de421dc6964ebc73183853fe556d9a65a0f47..967948d734c6ec31fe22511ad48ed27b294431a2 100644
--- a/Documentation/Cookbook/rst/recipes/bandmathx.rst
+++ b/Documentation/Cookbook/rst/recipes/bandmathx.rst
@@ -30,8 +30,8 @@ A simple example is given below:
 As we can see, the new band math filter works with the class
 otb::VectorImage.
 
-Syntax : first elements
------------------------
+Syntax: first elements
+----------------------
 
 The default prefix name for variables related to the ith input is
 *im(i+1)* (note the indexing from 1 to N, for N inputs). The user has
@@ -41,14 +41,14 @@ prefix.
 ::
 
 
-    // All variables related to image1 (input 0) will have the prefix im1 
-    filter->SetNthInput(0, image1);         
+    // All variables related to image1 (input 0) will have the prefix im1
+    filter->SetNthInput(0, image1);
 
-    // All variables related to image2 (input 1) will have the prefix  toulouse   
-    filter->SetNthInput(1, image2, "toulouse");   
+    // All variables related to image2 (input 1) will have the prefix  toulouse
+    filter->SetNthInput(1, image2, "toulouse");
 
     // All variables related to anotherImage (input 2) will have the prefix im3
-    filter->SetNthInput(2, anotherImage);      
+    filter->SetNthInput(2, anotherImage);
 
 In this document, we will keep the default convention. Following list
 summaries the available variables for input #0 (and so on for every
@@ -65,15 +65,15 @@ Variables and their descriptions:
 +-----------------------+--------------------------------------------------------------------------------------+----------+
 | im1bjNkxp             | a neighbourhood (”N”) of pixels of the jth component from first input, of size kxp   | Matrix   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
-| im1bjMini             | global statistic : minimum of the jth band from first input                          | Scalar   |
+| im1bjMini             | global statistic: minimum of the jth band from first input                           | Scalar   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
-| im1bjMaxi             | global statistic : maximum of the jth band from first input                          | Scalar   |
+| im1bjMaxi             | global statistic: maximum of the jth band from first input                           | Scalar   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
-| im1bjMean             | global statistic : mean of the jth band from first input                             | Scalar   |
+| im1bjMean             | global statistic: mean of the jth band from first input                              | Scalar   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
-| im1bjSum              | global statistic : sum of the jth band from first input                              | Scalar   |
+| im1bjSum              | global statistic: sum of the jth band from first input                               | Scalar   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
-| im1bjVar              | global statistic : variance of the jth band from first input                         | Scalar   |
+| im1bjVar              | global statistic: variance of the jth band from first input                          | Scalar   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
 | im1PhyX and im1PhyY   | spacing of first input in X and Y directions                                         | Scalar   |
 +-----------------------+--------------------------------------------------------------------------------------+----------+
@@ -345,33 +345,33 @@ Functions and operators summary:
 +----------------+-------------------------------------------------------------------------------+
 | pow and pw     | operators                                                                     |
 +----------------+-------------------------------------------------------------------------------+
-| vnorm          | adapation of an existing function to vectors : one input                      |
+| vnorm          | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vabs           | adapation of an existing function to vectors : one input                      |
+| vabs           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vmin           | adapation of an existing function to vectors : one input                      |
+| vmin           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vmax           | adapation of an existing function to vectors : one input                      |
+| vmax           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vcos           | adapation of an existing function to vectors : one input                      |
+| vcos           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vsin           | adapation of an existing function to vectors : one input                      |
+| vsin           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vtan           | adapation of an existing function to vectors : one input                      |
+| vtan           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vtanh          | adapation of an existing function to vectors : one input                      |
+| vtanh          | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vsinh          | adapation of an existing function to vectors : one input                      |
+| vsinh          | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vcosh          | adapation of an existing function to vectors : one input                      |
+| vcosh          | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vlog           | adapation of an existing function to vectors : one input                      |
+| vlog           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vlog10         | adapation of an existing function to vectors : one input                      |
+| vlog10         | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vexp           | adapation of an existing function to vectors : one input                      |
+| vexp           | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
-| vsqrt          | adapation of an existing function to vectors : one input                      |
+| vsqrt          | adaptation of an existing function to vectors: one input                      |
 +----------------+-------------------------------------------------------------------------------+
 
 [variables]
@@ -391,7 +391,7 @@ of the new band math filter.
     /** Return a pointer on the nth filter input */
     ImageType * GetNthInput(unsigned int idx);
 
-Refer to the section “Syntax : first elements” ([ssec:syntax]) where the
+Refer to the section “Syntax: first elements” ([ssec:syntax]) where the
 two first functions have already been commented. The function
 GetNthInput is quite clear to understand.
 
@@ -404,7 +404,7 @@ Each time the function SetExpression is called, a new expression is
 pushed inside the filter. **There are as many outputs as there are
 expressions. The dimensions of the outputs (number of bands) are totally
 dependent on the dimensions of the related expressions (see also last
-remark of the section “Syntax : first element” -[ssec:syntax]-).** Thus,
+remark of the section “Syntax: first elements” -[ssec:syntax]-).** Thus,
 the filter always performs a pre-evaluation of each expression, in order
 to guess how to allocate the outputs.
 
diff --git a/Documentation/Cookbook/rst/recipes/improc.rst b/Documentation/Cookbook/rst/recipes/improc.rst
index 776fa35ad94722c494e0f508d25ed2a48facf03d..0cfeb5b9f3be8f810325be5b1fa0797ebbaf536f 100644
--- a/Documentation/Cookbook/rst/recipes/improc.rst
+++ b/Documentation/Cookbook/rst/recipes/improc.rst
@@ -44,22 +44,22 @@ image writers. The OTB filters that produce a no-data value are able to
 export this value so that the output file will store it.
 
 An application has been created to manage the no-data value. The
-application has the following features :
+application has the following features:
 
--  Build a mask corresponding to the no-data pixels in the input image :
+-  Build a mask corresponding to the no-data pixels in the input image:
    it gives you a binary image of the no-data pixels in your input
    image.
 
--  Change the no-data value of the input image : it will change all
+-  Change the no-data value of the input image: it will change all
    pixels that carry the old no-data value to the new one and update the
    metadata
 
--  Apply an external mask to the input image as no-data : all the pixels
+-  Apply an external mask to the input image as no-data: all the pixels
    that corresponds have a null mask value are flagged as no-data in the
    output image.
 
 For instance, the following command converts the no-data value of the
-input image to the default value for DEM (which is -32768) :
+input image to the default value for DEM (which is -32768):
 
 ::
 
@@ -276,7 +276,7 @@ Fuzzy Model (requisite)
 
 The *DSFuzzyModelEstimation* application performs the fuzzy model
 estimation (once by use case: descriptor set / Belief support /
-Plausibility support). It has the following input parameters :
+Plausibility support). It has the following input parameters:
 
 -  ``-psin`` a vector data of positive samples enriched according to the
    “Compute Descriptors” part
@@ -311,7 +311,7 @@ First Step: Compute Descriptors
 The first step in the classifier fusion based validation is to compute,
 for each studied polyline, the chosen descriptors. In this context, the
 *ComputePolylineFeatureFromImage* application can be used for a large
-range of descriptors. It has the following inputs :
+range of descriptors. It has the following inputs:
 
 -  ``-in`` an image (of the sudied scene) corresponding to the chosen
    descriptor (NDVI, building Mask…)
@@ -327,7 +327,7 @@ range of descriptors. It has the following inputs :
 The output is a vector data containing polylines with a new field
 containing the descriptor value. In order to add the “NONDVI” descriptor
 to an input vector data (“inVD.shp”) corresponding to the percentage of
-pixels along a polyline that verifies the formula “NDVI >0.4” :
+pixels along a polyline that verifies the formula “NDVI >0.4”:
 
 ::
 
@@ -368,7 +368,7 @@ Second Step: Feature Validation
 The final application (*VectorDataDSValidation* ) will validate or
 unvalidate the studied samples using `the Dempster-Shafer
 theory <http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory>`_ 
-. Its inputs are :
+. Its inputs are:
 
 -  ``-in`` an enriched vector data “VD\_NONDVI\_ROADSA\_NOBUIL.shp”
 
diff --git a/Documentation/Cookbook/rst/recipes/optpreproc.rst b/Documentation/Cookbook/rst/recipes/optpreproc.rst
index a88dceceeb7cc776f78666988b5b3985f6af31ce..03bf6fc348b2e7beb359c9e25580302871968046 100644
--- a/Documentation/Cookbook/rst/recipes/optpreproc.rst
+++ b/Documentation/Cookbook/rst/recipes/optpreproc.rst
@@ -47,7 +47,7 @@ This transformation can be done either with **OTB Applications** or with
 **Monteverdi** . Sensor-related parameters such as gain, date, spectral
 sensitivity and sensor position are seamlessly read from the image
 metadata. Atmospheric parameters can be tuned by the user. Supported
-sensors are :
+sensors are:
 
 -  Pleiades
 
@@ -86,7 +86,7 @@ Pan-sharpening
 --------------
 
 Because of physical constrains on the sensor design, it is difficult to
-achieve high spatial and spectral resolution at the same time : a better
+achieve high spatial and spectral resolution at the same time: a better
 spatial resolution means a smaller detector, which in turns means lesser
 optical flow on the detector surface. On the contrary, spectral bands
 are obtained through filters applied on the detector surface, that
@@ -95,7 +95,7 @@ detector size to achieve an acceptable signal to noise ratio.
 
 For these reasons, many high resolution satellite payload are composed
 of two sets of detectors, which in turns delivers two different kind of
-images :
+images:
 
 -  The multi-spectral (XS) image, composed of 3 to 8 spectral bands
    containing usually blue, green, red and near infra-red bands at a
@@ -115,7 +115,7 @@ multi-spectral one so as to get an image combining the spatial
 resolution of the panchromatic image with the spectral richness of the
 multi-spectral image. This operation is called pan-sharpening.
 
-This fusion operation requires two different steps :
+This fusion operation requires two different steps:
 
 #. The multi-spectral (XS) image is zoomed and registered to the
    panchromatic image,
@@ -131,7 +131,7 @@ described in the above sections.
 The *BundleToPerfectSensor* application allows to perform both steps in
 a row. Seamless sensor modelling is used to perform zooming and
 registration of the multi-spectral image on the panchromatic image. In
-the case of a Pléiades bundle, a different approach is used : an affine
+the case of a Pléiades bundle, a different approach is used: an affine
 transform is used to zoom the multi-spectral image and apply a residual
 translation. This translation is computed based on metadata about the
 geometric processing of the bundle. This zooming and registration of the
@@ -189,7 +189,7 @@ Default value is 256 Mb.
 
 .. figure:: ../Art/MonteverdiImages/monteverdi_QB_XS_pan-sharpened.png
 
-Figure 5 : Pan-sharpened image using Orfeo ToolBox. 
+Figure 5: Pan-sharpened image using Orfeo ToolBox.
 
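The two-step fusion described earlier (resample the XS image onto the panchromatic grid, then fuse) can be sketched with a minimal Brovey-style combination. This is an illustration only, not the exact algorithm used by *BundleToPerfectSensor*; it assumes the XS image has already been zoomed and registered onto the panchromatic grid:

```python
import numpy as np

# Minimal Brovey-style pan-sharpening sketch (illustrative, NOT the
# BundleToPerfectSensor implementation): each XS band is scaled by the
# ratio between the panchromatic value and the XS intensity.
def brovey(xs, pan):
    """xs: (bands, rows, cols) float array, pan: (rows, cols) float array."""
    intensity = xs.mean(axis=0)
    return xs * (pan / (intensity + 1e-12))

# toy 3-band XS patch already resampled to the 2x2 pan grid
xs = np.array([[[10., 20.], [30., 40.]],
               [[20., 10.], [10., 20.]],
               [[30., 30.], [20., 60.]]])
pan = np.array([[25., 18.], [22., 35.]])
fused = brovey(xs, pan)
```

By construction the per-pixel mean of the fused bands reproduces the panchromatic image, while the band ratios (the spectral information) of the XS image are preserved.
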
 Please also note that since registration and zooming of the
 multi-spectral image with the panchromatic image relies on sensor
@@ -226,7 +226,7 @@ both delivered as 1 degree by 1 degree tiles:
    resolution DEM obtained by stereoscopic processing of the archive of
    the ASTER instrument.
 
-The **Orfeo Toolbox** relies on `OSSIM <http://www.ossim.org/>`_ 
+The **Orfeo Toolbox** relies on `OSSIM <http://www.ossim.org/>`_
 capabilities for sensor modelling and DEM handling. Tiles of a given DEM
 are supposed to be located within a single directory. General elevation
 support is also supported from GeoTIFF files.
@@ -238,9 +238,9 @@ files. Subdirectories are not supported.
 
 Depending on the reference of the elevation, you also need to use a
 geoid to manage elevation accurately. For this, you need to specify a
-path to a file which contains the geoid. `Geoid <http://en.wikipedia.org/wiki/Geoid>`_ 
+path to a file which contains the geoid. `Geoid <http://en.wikipedia.org/wiki/Geoid>`_
 corresponds to the equipotential surface that would coincide with the mean ocean surface of
-the Earth . 
+the Earth.
 
 We provide one geoid in the `OTB-Data  <http://hg.orfeo-toolbox.org/OTB-Data/file/4722d9e672c6/Input/DEM/egm96.grd>`_ repository.
 
@@ -314,7 +314,7 @@ Beware of “ortho-ready” products
 
 There are some image products, called “ortho-ready”, that should be
 processed carefully. They are actual products in raw geometry, but their
-metadata also contains projection data :
+metadata also contains projection data:
 
 -  a map projection
 
@@ -335,17 +335,17 @@ projection has to be hidden from **Orfeo Toolbox** .
 
 You can see if a product is an “ortho-ready” product by using ``gdalinfo`` or
 OTB ReadImageInfo application.
-Check if your product verifies following two conditions :
+Check if your product verifies the following two conditions:
 
--  The product is in raw geometry : you should expect the presence of
+-  The product is in raw geometry: you should expect the presence of
    RPC coefficients and a non-empty OSSIM keywordlist.
 
--  The product has a map projection : you should see a projection name
+-  The product has a map projection: you should see a projection name
    with physical origin and spacing.
 
 In that case, you can hide the map projection from the **Orfeo Toolbox**
 by using *extended* filenames. Instead of using the plain input image
-path, you append a specific key at the end :
+path, you append a specific key at the end:
 
 ::
 
@@ -355,7 +355,7 @@ The double quote can be necessary for a successful parsing. More details
 about the extended filenames can be found in the `wiki page <http://wiki.orfeo-toolbox.org/index.php/ExtendedFileName>`_ , and
 also in the `OTB Software Guide <http://orfeo-toolbox.org/SoftwareGuide>`_  .
 
-Ortho-rectification with **OTB Applications** 
+Ortho-rectification with **OTB Applications**
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The *OrthoRectification* application allows to perform
@@ -402,7 +402,7 @@ used (example with *lambert93* map projection):
 Map projections handled by the application are the following (please
 note that the ellipsoid is always WGS84):
 
--  | UTM : ``-map utm``  | The UTM zone and hemisphere can be set by the options ``-map.utm.zone`` and ``-map.utm.northhem``.
+-  | UTM: ``-map utm``  | The UTM zone and hemisphere can be set by the options ``-map.utm.zone`` and ``-map.utm.northhem``.
 
 -  Lambert 2 etendu: ``-map lambert2``
 
@@ -410,9 +410,9 @@ note that the ellipsoid is always WGS84):
 
 -  | TransMercator: ``-map transmercator`` | The related parameters (false easting, false northing and scale factor) can be set by the options    ``-map.transmercator.falseeasting``, ``-map.transmercator.falsenorthing`` and ``-map.transmercator.scale``
 
--  WGS : ``-map wgs``
+-  WGS: ``-map wgs``
 
--  | Any map projection system with an EPSG code : ``-map epsg`` | The EPSG code is set with the option ``-map.epsg.code``
+-  | Any map projection system with an EPSG code: ``-map epsg`` | The EPSG code is set with the option ``-map.epsg.code``
 
 The group ``outputs`` contains parameters to set the origin, size and
 spacing of the output image. For instance, the ground spacing can be
diff --git a/Documentation/Cookbook/rst/recipes/pbclassif.rst b/Documentation/Cookbook/rst/recipes/pbclassif.rst
index 156b8b9c634419b662fcbc6b1a614195a23e7997..320e3e4796e3429d43d74820c70b8b63b04b7b92 100644
--- a/Documentation/Cookbook/rst/recipes/pbclassif.rst
+++ b/Documentation/Cookbook/rst/recipes/pbclassif.rst
@@ -132,7 +132,7 @@ also provide a raster mask, that will be used to discard pixel
 positions, using parameter ``-mask``.
 
 A simple use of the application ``PolygonClassStatistics`` could be as
-follows :
+follows:
 
 ::
 
@@ -313,7 +313,7 @@ image.
 
 * **Strategy = all**
   
-  - Same behavior for all modes proportional, equal, custom : take all samples
+  - Same behavior for all modes (proportional, equal, custom): take all samples
   
 * **Strategy = constant** (let's call :math:`M` the global number of samples per
   class required)
@@ -493,7 +493,7 @@ gray level label image. It allows to get an RGB classification map by
 re-mapping the image values to be suitable for display purposes. One can
 use the *ColorMapping* application. This tool will replace each label
 with an 8-bits RGB color specified in a mapping file. The mapping file
-should look like this :
+should look like this:
 
 ::
 
@@ -502,7 +502,7 @@ should look like this :
 
 In the previous example, 1 is the label and 255 0 0 is a RGB color (this
 one will be rendered as red). To use the mapping tool, enter the
-following :
+following:
 
 ::
 
@@ -511,7 +511,7 @@ following :
                         -method.custom.lut lut_mapping_file.txt
                         -out               RGB_color_image.tif
 
-Other look-up tables (LUT) are available : standard continuous LUT,
+Other look-up tables (LUT) are available: standard continuous LUT,
 optimal LUT, and LUT computed over a support image.
 
 Example
@@ -739,7 +739,7 @@ the regularization. Therefore, those NoData input pixels are invariant
 and keep their NoData label in the output regularized image.
 
 The *ClassificationMapRegularization* application has the following
-input parameters :
+input parameters:
 
 -  ``-io.in`` labeled input image resulting from a previous
    classification process
@@ -808,7 +808,7 @@ Regression
 ----------
 
 The machine learning models in OpenCV and LibSVM also support a
-regression mode : they can be used to predict a numeric value (i.e. not
+regression mode: they can be used to predict a numeric value (i.e. not
 a class index) from an input predictor. The workflow is the same as
 classification. First, the regression model is trained, then it can be
 used to predict output values. The applications to do that are and .
@@ -826,18 +826,18 @@ used to predict output values. The applications to do that are and .
 	 
 Figure 6: From left to right: Original image, fancy colored classified image and regularized classification map with radius equal to 3 pixels. 
 
-The input data set for training must have the following structure :
+The input data set for training must have the following structure:
 
 -  *n* components for the input predictors
 
 -  one component for the corresponding output value
 
-The application supports 2 input formats :
+The application supports two input formats:
 
--  An image list : each image should have components matching the
+-  An image list: each image should have components matching the
    structure detailed earlier (*n* feature components + 1 output value)
 
--  A CSV file : the first *n* columns are the feature components and the
+-  A CSV file: the first *n* columns are the feature components and the
    last one is the output value
 
 If you have separate images for predictors and output values, you can
@@ -853,12 +853,12 @@ Statistics estimation
 
 As in classification, a statistics estimation step can be performed
 before training. It allows to normalize the dynamic of the input
-predictors to a standard one : zero mean, unit standard deviation. The
+predictors to a standard one: zero mean, unit standard deviation. The
 main difference with the classification case is that with regression,
 the dynamic of output values can also be reduced.
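
The normalization described above can be sketched as follows (illustrative only; the application computes and applies these per-component statistics internally, and stores them in the XML statistics file):

```python
import numpy as np

# Zero-mean / unit-standard-deviation normalization of a toy training
# set: rows are samples, the first n columns are predictors and the
# last column is the regression output (made-up values).
train = np.array([[2.0, 10.0, 100.0],
                  [4.0, 20.0, 200.0],
                  [6.0, 30.0, 300.0]])
mean = train.mean(axis=0)
std = train.std(axis=0)
normalized = (train - mean) / std

# at prediction time, a model output trained on normalized values must
# be de-normalized with the same statistics:
denorm_output = normalized[:, -1] * std[-1] + mean[-1]
```

De-normalizing with the same mean and standard deviation recovers the original output values exactly, which is why the same statistics file must be reused at prediction time.
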
 
 The statistics file format is identical to the output file from
-application, for instance :
+application, for instance:
 
 ::
 
@@ -879,14 +879,14 @@ application, for instance :
     </FeatureStatistics>
 
 In the application, normalization of input predictors and output values
-is optional. There are 3 options :
+is optional. There are 3 options:
 
--  No statistic file : normalization disabled
+-  No statistic file: normalization disabled
 
--  Statistic file with *n* components : normalization enabled for input
+-  Statistic file with *n* components: normalization enabled for input
    predictors only
 
--  Statistic file with *n+1* components : normalization enabled for
+-  Statistic file with *n+1* components: normalization enabled for
    input predictors and output values
 
 If you use an image list as training set, you can run application. It
@@ -950,13 +950,13 @@ Once the model is trained, it can be used in application to perform the
 prediction on an entire image containing input predictors (i.e. an image
 with only *n* feature components). If the model was trained with
 normalization, the same statistic file must be used for prediction. The
-behavior of with respect to statistic file is identical to :
+behavior of with respect to statistic file is identical to:
 
--  no statistic file : normalization off
+-  no statistic file: normalization off
 
--  *n* components : input only
+-  *n* components: input only
 
--  *n+1* components : input and output
+-  *n+1* components: input and output
 
 The model to use is read from file (the one produced during training).
 
diff --git a/Documentation/Cookbook/rst/recipes/residual_registration.rst b/Documentation/Cookbook/rst/recipes/residual_registration.rst
index c08ccab46d78ff92862edc45d8d2099953862bd1..5bae14b3d6b46e1a295a36d1338448abff80a41f 100644
--- a/Documentation/Cookbook/rst/recipes/residual_registration.rst
+++ b/Documentation/Cookbook/rst/recipes/residual_registration.rst
@@ -84,7 +84,7 @@ or `SURF <http://en.wikipedia.org/wiki/SURF>`__ keypoints can be
 computed in the application. The band on which keypoints are computed
 can be set independently for both images.
 
-The application offers two modes :
+The application offers two modes:
 
 -  the first is the full mode where keypoints are extracted from the
    full extent of both images (please note that in this mode large image
@@ -198,7 +198,7 @@ estimated sensor model is small, you must achieve a good registration
 now between the 2 rectified images. Normally far better than ’only’
 performing separate orthorectification over the 2 images.
 
-This methodology can be adapt and apply in several cases, for example :
+This methodology can be adapted and applied in several cases, for example:
 
 -  register stereo pair of images and estimate accurate epipolar
    geometry
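
As a toy illustration of the refinement idea, the residual shift between homologous points can be estimated by least squares. The keypoint coordinates below are hypothetical, and a pure translation model is assumed for simplicity (the application can estimate richer transforms):

```python
# Matched keypoint positions in the reference and secondary geometry
# (hypothetical coordinates for the sketch).
ref_pts = [(10.0, 20.0), (55.0, 42.0), (33.0, 78.0), (90.0, 15.0)]
sec_pts = [(12.5, 18.0), (57.5, 40.0), (35.5, 76.0), (92.5, 13.0)]

# For a pure translation, the least-squares estimate is simply the
# mean of the per-point shifts.
n = len(ref_pts)
dx = sum(s[0] - r[0] for r, s in zip(ref_pts, sec_pts)) / n
dy = sum(s[1] - r[1] for r, s in zip(ref_pts, sec_pts)) / n
```

The estimated ``(dx, dy)`` is the residual translation to compensate before (or during) orthorectification.
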
diff --git a/Documentation/Cookbook/rst/recipes/sarprocessing.rst b/Documentation/Cookbook/rst/recipes/sarprocessing.rst
index e6b1ce8921c5e9a8f1be72fe1026c0d5c5bf8ccc..f61bbb4a90e32e0c00ae732d9b2e7cca4fb347b4 100644
--- a/Documentation/Cookbook/rst/recipes/sarprocessing.rst
+++ b/Documentation/Cookbook/rst/recipes/sarprocessing.rst
@@ -11,9 +11,9 @@ The application SarRadiometricCalibration can deal with the calibration
 of data from four radar sensors: RadarSat2, Sentinel1, COSMO-SkyMed and
 TerraSAR-X.
 
-Examples :
+Examples:
 
-If SARimg.tif is a TerraSAR-X or a COSMO-SkyMed image :
+If SARimg.tif is a TerraSAR-X or a COSMO-SkyMed image:
 
 ::
 
@@ -22,7 +22,7 @@ If SARimg.tif is a TerraSAR-X or a COSMO-SkyMed image :
 
 If SARimg.tif is a RadarSat2 or a Sentinel1 image, it ’s possible to
 specify the look-up table (automatically found in the metadata provided
-with such image) :
+with such image):
 
 ::
 
@@ -31,7 +31,7 @@ with such image) :
                                  -out SARimg-calibrated.tif 
 
 For TerraSAR-X (and soon for RadarSat2 and Sentinel1), it is also
-possible to use a noise LUT to derive calibrated noise profiles :
+possible to use a noise LUT to derive calibrated noise profiles:
 
 ::
 
@@ -50,7 +50,7 @@ Frost, Lee, Gamma-MAP and Kuan.
 Figure ([ffig:S1VVdespeckledextract] shows an extract of a SLC Sentinel1
 image, band VV, taken over Cape Verde and the result of the Gamma
 filter. The following commands were used to produce the despeckled
-extract :
+extract:
 
 First, the original image is converted into an intensity one (real part
 corresponds to band 1, and imaginary part to band 2):
@@ -61,7 +61,7 @@ corresponds to band 1, and imaginary part to band 2):
                     -exp im1b1^2+im1b2^2 
                     -out S1-VV-extract-int.tif 
 
-Then the intensity image is despeckled with the Gamma-MAP filter :
+Then the intensity image is despeckled with the Gamma-MAP filter:
 
 ::
 
@@ -107,11 +107,11 @@ where each band is related to their elements. As most of the time SAR
 polarimetry handles symmetric matrices, only the relevant elements are
 stored, so that the images representing them have a minimal number of
 bands. For instance, the coherency matrix size is 3x3 in the monostatic
-case, and 4x4 in the bistatic case : it will thus be stored in a 6-band
+case, and 4x4 in the bistatic case: it will thus be stored in a 6-band
 or a 10-band complex image (the diagonal and the upper elements of the
 matrix).
 
-The Sinclair matrix is a special case : it is always represented as 3 or
+The Sinclair matrix is a special case: it is always represented as 3 or
 4 one-band complex images (for mono- or bistatic case).
 
 There are 13 available conversions, each one being related to the
@@ -414,7 +414,7 @@ For each option parameter, the list below gives the formula used.
 
    #. :math:`Im( T_{xx}.T_{yy}^{*} - T_{xy}.T_{yx}^{*} )`
 
-   With :
+   With:
 
    -  :math:`T_{xx} = -S_{hh}`
 
@@ -434,7 +434,7 @@ For each option parameter, the list below gives the formula used.
 
    #. :math:`DegP_{max}`
 
-Examples :
+Examples:
 
 #. ::
 
@@ -541,9 +541,9 @@ available; it is implemented for the monostatic case (transmitter and
 receiver are co-located). User must provide three one-band complex
 images HH, HV or VH, and VV (HV = VH in monostatic case). The H-alpha-A
 decomposition consists in averaging 3x3 complex coherency matrices
-(incoherent analysis) : The user must provide the size of the averaging
+(incoherent analysis): the user must provide the size of the averaging
 window, thanks to the parameter inco.kernelsize. The applications
-returns a float vector image, made of three channels : H(entropy),
+returns a float vector image, made of three channels: H(entropy),
 Alpha, A(Anisotropy).
 
 Here are the formula used (refer to the previous section about how the
@@ -561,7 +561,7 @@ Where:
 
 -  :math:`\alpha_{i} = \left| SortedEigenVector[i] \right|* \frac{180}{\pi}`
 
-Example :
+Example:
 
 We first extract a ROI from the original image (not required). Here
 imagery\_HH.tif represents the element HH of the Sinclair matrix (and so
@@ -595,9 +595,9 @@ Next we apply the H-alpha-A decomposition:
                  -decomp haa -inco.kernelsize 5 
                              -out haa_extract.tif 
 
-The result has three bands : entropy (0..1) - alpha (0..90) - anisotropy
+The result has three bands: entropy (0..1), alpha (0..90), anisotropy
 (0..1). It is split into 3 mono-band images thanks to following
-command :
+command:
 
 ::
 
@@ -651,16 +651,16 @@ antenna and the receiving antenna respectively. Orientations and
 ellipticity are given in degrees, and are between -90/90 degrees and
 -45/45 degrees respectively.
 
-Four polarization architectures can be processed :
+Four polarization architectures can be processed:
 
-#. HH\_HV\_VH\_VV : full polarization, general bistatic case.
+#. HH\_HV\_VH\_VV: full polarization, general bistatic case.
 
-#. HH\_HV\_VV or HH\_VH\_VV : full polarization, monostatic case
+#. HH\_HV\_VV or HH\_VH\_VV: full polarization, monostatic case
    (transmitter and receiver are co-located).
 
-#. HH\_HV : dual polarization.
+#. HH\_HV: dual polarization.
 
-#. VH\_VV : dual polarization.
+#. VH\_VV: dual polarization.
 
 The application takes a complex vector image as input, where each band
 correspond to a particular emission/reception polarization scheme. User
@@ -680,10 +680,10 @@ the number of bands of the input image.
 #. Finally, the two last architectures (dual-polarization), can’t be
    distinguished only by the number of bands of the input image. User
    must then use the parameters emissionh and emissionv to indicate the
-   architecture of the system : emissionh=1 and emissionv=0 for HH\_HV,
+   architecture of the system: emissionh=1 and emissionv=0 for HH\_HV,
    emissionh=0 and emissionv=1 for VH\_VV.
 
-Note : if the architecture is HH\_HV, khii and psii are automatically
+Note: if the architecture is HH\_HV, khii and psii are automatically
 set to 0/0 degrees; if the architecture is VH\_VV, khii and psii are
 automatically set to 0/90 degrees.
 
@@ -695,7 +695,7 @@ and psir will be forced to psii + 90 degrees and -khii.
 Finally, the result of the polarimetric synthesis is expressed in the
 power domain, through a one-band scalar image.
 
-The final formula is thus : :math:`P=\mid B^T.[S].A\mid^2` , where A ans
+The final formula is thus :math:`P=\mid B^T.[S].A\mid^2`, where A and
 B are two Jones vectors and S is a Sinclair matrix.
 
 The two figures below ([fig:polsynthll] and [fig:polsynthlr]) show the
@@ -704,7 +704,7 @@ polarization and R for right polarization), from a Radarsat-2 image
 taken over Vancouver, Canada. Once the four two-band images imagery\_HH
 imagery\_HV imagery\_VH imagery\_VV were merged into a single four
 complex band image imageryC\_HH\_HV\_VH\_VV.tif, the following commands
-were used to produce the LL and LR images :
+were used to produce the LL and LR images:
 
 ::
 
@@ -799,7 +799,7 @@ Then, we rescale the produced images to intensities ranging from 0 to
        otbcli_Rescale -in VV.tif -out VV_res.png uint8 
 
 Figures below ([fig:hhfrisco] , [fig:hvfrisco] and [fig:vvfrisco]) show
-the images obtained :
+the images obtained:
 
 .. figure:: ../Art/SARImages/RSAT2_HH_Frisco.png
 
@@ -809,7 +809,7 @@ the images obtained :
 
 Now the most interesting step. In order to get a friendly coloration of
 these data, we are going to use the Pauli decomposition, defined as
-follows :
+follows:
 
 -  :math:`a=\frac{|S_{HH}-S_{VV}|}{\sqrt{2}}`
 
@@ -842,7 +842,7 @@ We use the BandMath application again:
 
 Note that sqrt(2) factors have been omitted purposely, since their
 effects will be canceled by the rescaling step. Then, we rescale the
-produced images to intensities ranging from 0 to 255 :
+produced images to intensities ranging from 0 to 255:
 
 -  ::
 
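For a single pixel, the Pauli components can be sketched as below. The sample values are made up, and the ``b`` and ``c`` expressions follow the standard monostatic Pauli basis, assumed here to match the decomposition used in the text:

```python
import math

# Pauli components for one pixel of a monostatic acquisition
# (hypothetical Sinclair matrix values; S_hv = S_vh).
S_hh = complex(3.0, 1.0)
S_hv = complex(0.5, -0.2)
S_vv = complex(1.0, -1.0)

a = abs(S_hh - S_vv) / math.sqrt(2)  # double-bounce-like term
b = abs(S_hh + S_vv) / math.sqrt(2)  # surface-like term (assumed formula)
c = abs(2 * S_hv) / math.sqrt(2)     # volume-like term (assumed formula)
```

Mapping ``a``, ``b`` and ``c`` to the red, green and blue channels after rescaling gives the usual Pauli color composition; dropping the sqrt(2) factors, as noted above, only changes the pre-rescaling dynamic.
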
diff --git a/Documentation/Cookbook/rst/recipes/stereo.rst b/Documentation/Cookbook/rst/recipes/stereo.rst
index 40529b7390df33815862a817cb99b2c200370c12..6bc1c7dcd3f4b44ba18ab6ecf5c11194bc764c54 100644
--- a/Documentation/Cookbook/rst/recipes/stereo.rst
+++ b/Documentation/Cookbook/rst/recipes/stereo.rst
@@ -252,7 +252,7 @@ The command line for the *BlockMatching* application is :
                          -bm.medianfilter.radius 5
                          -bm.medianfilter.incoherence 2.0
 
-The application creates by default a two bands image : the horizontal
+The application creates by default a two-band image: the horizontal
 and vertical disparities.
 
 The *BlockMatching* application gives access to a lot of other powerful
@@ -283,7 +283,7 @@ Here are a few of these functionalities:
                        -out image2_epipolar_mask.tif
                        -exp "if(im1b1<=0,0,255)"
 
--  -mask.variancet : The block matching algorithm has difficulties to
+-  ``-mask.variancet``: The block matching algorithm has difficulties to
    find matches on uniform areas. We can use the variance threshold to
    discard those regions and speed-up computation time.
 
@@ -382,7 +382,7 @@ One application to rule them all in multi stereo framework scheme
 -----------------------------------------------------------------
 
 An application has been added to fuse one or multiple stereo
-reconstruction(s) using all in one approach : *StereoFramework* . It
+reconstruction(s) using an all-in-one approach: *StereoFramework*. It
 computes the DSM from one or several stereo pair. First of all the user
 have to choose his input data and defines stereo couples using
 *-input.co* string parameter. This parameter use the following
@@ -395,7 +395,7 @@ images are processed by pairs (which is equivalent as using “ 0 1,2 3,4
 main parameters have been split in groups detailed below:
 
 Output :
-    output parameters : DSM resolution, NoData value, Cell Fusion
+    output parameters: DSM resolution, NoData value, Cell Fusion
     method,
 
     -  : output projection map selection.
@@ -491,7 +491,7 @@ pair
 
 -  Resample into epipolar geometry with BCO interpolator
 
--  Create masks for each epipolar image : remove black borders and
+-  Create masks for each epipolar image: remove black borders and
    resample input masks
 
 -  Compute horizontal disparities with a block matching algorithm
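
The block matching step can be illustrated in one dimension with a toy SSD (sum of squared differences) search over synthetic rows; the application works on 2-D windows with additional refinements such as sub-pixel estimation and median filtering:

```python
# 1-D sketch of block matching: find, for a window in the left epipolar
# row, the horizontal disparity minimising the SSD in the right row.
left = [1.0, 2.0, 5.0, 3.0, 8.0, 4.0, 9.0, 2.0, 7.0, 1.0, 6.0, 3.0]
# the right row is the left row shifted by 3 pixels
right = [0.0, 0.0, 0.0] + left[:-3]

def ssd(a, b):
    """Sum of squared differences between two equally-sized windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

p, w = 2, 4               # window start and width in the left row
candidates = range(0, 5)  # disparity search range
best = min(candidates, key=lambda d: ssd(left[p:p + w], right[p + d:p + d + w]))
```

The minimum-SSD candidate recovers the synthetic 3-pixel shift; on uniform areas all candidates score similarly, which is exactly why the variance threshold discussed above is useful.
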