Commit cf480361 authored by Victor Poughon

DOC: migrate developer's guide to the cookbook

parent 735a69d9
Pipeline #319 passed in 27 minutes and 34 seconds
import argparse
import re
import os
import os.path
from os.path import join
import subprocess


def sed(content, regex, repl):
    return re.sub(regex, repl, content, flags=re.MULTILINE | re.DOTALL)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(usage="migrate sg tex file")
    parser.add_argument("filename", help="")
    parser.add_argument("output_dir", help="")
    args = parser.parse_args()

    input = args.filename
    output = join(args.output_dir, os.path.basename(args.filename).replace(".tex", ".rst"))

    content = open(input).read()

    content = sed(content,
                  r"\\doxygen\{otb\}\{(.*?)\}",
                  r":doxygen:`\1`")

    content = sed(content,
                  r"\\doxygen\{itk\}\{(.*?)\}",
                  r":doxygen-itk:`\1`")

    content = sed(content,
                  r"\\code\{(.*?)\}",
                  r"\\texttt{\1}")

    content = sed(content, r"cmakecode", r"verbatim")
    content = sed(content, r"cppcode", r"verbatim")

    content = sed(content,
                  r"\\input\{(.*?)\}",
                  r"See example :ref:`\1`")

    content = sed(content,
                  r"\\input (\w+)\n",
                  r"See example \1\n")

    content = sed(content,
                  r"\\begin\{figure\}",
                  r"\\begin{verbatim}\\begin{figure}")

    content = sed(content,
                  r"\\end\{figure\}",
                  r"\\end{figure}\\end{verbatim}")

    open(output, "w").write(content)

    # Convert the pre-processed LaTeX to rst in place, then fix quote characters
    subprocess.check_call("pandoc -f latex -t rst -o {} {}".format(output, output), shell=True)
    subprocess.check_call(['sed', '-i', "s ‘ ` g", output])
    print(output)
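The heavy lifting in this script is the small `sed` helper around `re.sub`. The following standalone sketch (with a made-up input string) shows how the two doxygen-macro substitutions behave:

```python
import re

def sed(content, regex, repl):
    # Same helper as in the migration script: multiline + dotall so
    # patterns can also match across line breaks
    return re.sub(regex, repl, content, flags=re.MULTILINE | re.DOTALL)

tex = r"See \doxygen{otb}{PersistentImageFilter} and \doxygen{itk}{ImageToImageFilter}."
rst = sed(tex, r"\\doxygen\{otb\}\{(.*?)\}", r":doxygen:`\1`")
rst = sed(rst, r"\\doxygen\{itk\}\{(.*?)\}", r":doxygen-itk:`\1`")
print(rst)
# See :doxygen:`PersistentImageFilter` and :doxygen-itk:`ImageToImageFilter`.
```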
@@ -95,9 +95,10 @@ def render_example(filename, otb_root):
         rst_description = ""
 
     # Render the template
+    name = os.path.basename(filename)
     template_example = open("templates/example.rst").read()
     output_rst = template_example.format(
-        label="example-" + root,
+        label=name,
         heading=rst_section(name, "="),
         description=rst_description,
         usage=example_usage,
@@ -108,7 +109,7 @@ def render_example(filename, otb_root):
     return output_rst
 
-if __name__ == "__main__":
+def main():
     parser = argparse.ArgumentParser(usage="Export examples to rst")
     parser.add_argument("rst_dir", help="Directory where rst files are generated")
     parser.add_argument("otb_root", help="OTB repository root")
@@ -130,3 +131,6 @@ if __name__ == "__main__":
             os.makedirs(join(args.rst_dir, "C++", "Examples", tag), exist_ok=True)
             with open(join(args.rst_dir, "C++", "Examples", tag, root + ".rst"), "w") as output_file:
                 output_file.write(render_example(filename, args.otb_root))
+
+if __name__ == "__main__":
+    main()
@@ -360,7 +360,7 @@ def multireplace(string, replacements):
 
 def make_links(text, allapps):
     "Replace name of applications by internal rst links"
-    rep = {appname: ":ref:`{}`".format("app-" + appname) for appname in allapps}
+    rep = {appname: ":ref:`{}`".format(appname) for appname in allapps}
     return multireplace(text, rep)
 
 def render_application(appname, allapps):
@@ -374,7 +374,7 @@ def render_application(appname, allapps):
     application_documentation_warnings(app)
 
     output = template_application.format(
-        label="app-" + appname,
+        label=appname,
         heading=rst_section(app.GetName(), '='),
         description=app.GetDescription(),
         longdescription=make_links(app.GetDocLongDescription(), allapps),
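`make_links` delegates to a `multireplace` helper defined earlier in this file (collapsed on this page). A plausible minimal implementation, shown purely to illustrate the expected behavior (this is not necessarily the actual OTB code), performs all replacements in a single regex pass so earlier substitutions are not re-matched:

```python
import re

def multireplace(string, replacements):
    # Hypothetical sketch: build one alternation pattern, longest keys
    # first, so every replacement happens in a single pass
    if not replacements:
        return string
    keys = sorted(replacements, key=len, reverse=True)
    pattern = re.compile("|".join(re.escape(k) for k in keys))
    return pattern.sub(lambda m: replacements[m.group(0)], string)

def make_links(text, allapps):
    "Replace name of applications by internal rst links"
    rep = {appname: ":ref:`{}`".format(appname) for appname in allapps}
    return multireplace(text, rep)

print(make_links("Use Smoothing before Segmentation.", ["Smoothing", "Segmentation"]))
# Use :ref:`Smoothing` before :ref:`Segmentation`.
```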
@@ -4,5 +4,13 @@ C++ API
 =======
 
 .. toctree::
     :maxdepth: 2
 
+    C++/SystemOverview.rst
+    C++/Iterators.rst
+    C++/Filters.rst
+    C++/StreamingAndThreading.rst
+    C++/PersistentFilters.rst
+    C++/WriteAnApplication.rst
+    C++/AddingNewModules.rst
+    C++/Examples.rst
Persistent filters
==================
Introduction
------------
As presented in chapter :ref:`StreamingAndThreading`, OTB has two main mechanisms
to handle large data: streaming allows processing images piece-wise, and
multi-threading allows processing several pieces of one streaming block
concurrently. Using these concepts, one can easily write pixel-wise or
neighborhood-based filters and insert them into a pipeline which will be
scalable with respect to the input image size.
Yet, sometimes we need to compute global features on the whole image.
One example is determining the mean and variance of the input image in
order to produce a centered and reduced image. The operation of
centering and reducing each pixel is fully compliant with streaming and
threading, but one first has to estimate the mean and variance of the
image. This first step requires walking the whole image once, and the
traditional streaming and multi-threading based filter architecture is
of no help here.
This is because there is a fundamental difference between these two
operations: one supports streaming, and the other needs to drive the
streaming. In fact we would like to stream the whole image piece by
piece through some filter that will collect and keep mean and variance
cumulants, and then synthesize these cumulants to compute the final
mean and variance once the full image has been streamed. Each stream
would also benefit from parallel processing. This is exactly what
persistent filters are for.
Architecture
------------
There are two main objects in the persistent filters framework. The
first is the :doxygen:`PersistentImageFilter`, the second is the
:doxygen:`PersistentFilterStreamingDecorator`.
The persistent filter class
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :doxygen:`PersistentImageFilter` class is a regular
:doxygen-itk:`ImageToImageFilter`, with two additional pure virtual
methods: the ``Synthetize()`` and the ``Reset()`` methods.
Imagine that the ``GenerateData()`` or ``ThreadedGenerateData()``
method progressively computes some global feature of the whole image,
using some members of the class to store intermediate results.
``Synthetize()`` is an additional method designed to be called once the
whole image has been processed, in order to compute the final results
from the intermediate ones. The ``Reset()`` method allows resetting the
intermediate result members so as to start a fresh processing.
Any sub-class of the :doxygen:`PersistentImageFilter` can be used as a
regular :doxygen-itk:`ImageToImageFilter` (provided that both
``Synthetize()`` and ``Reset()`` have been implemented), but the real
interest of these filters is to be used with the streaming decorator
class presented in the next section.
The streaming decorator class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :doxygen:`PersistentFilterStreamingDecorator` is a class designed to
be templated with subclasses of the :doxygen:`PersistentImageFilter`. It
provides the mechanism to stream the whole image through the templated
filter, using a third class called
:doxygen:`StreamingImageVirtualWriter`. When the ``Update()`` method is
called on a :doxygen:`PersistentFilterStreamingDecorator`, a pipeline
plugging the templated subclass of the :doxygen:`PersistentImageFilter`
to an instance of :doxygen:`StreamingImageVirtualWriter` is created. The
latter is then updated, and acts like a regular
:doxygen:`ImageFileWriter` but it does not actually write anything to
the disk: streaming pieces are requested and immediately discarded. The
:doxygen:`PersistentFilterStreamingDecorator` also calls the ``Reset()``
method at the beginning and the ``Synthetize()`` method at the end of
the streaming process. Therefore, it packages the whole mechanism for
the use of a :doxygen:`PersistentImageFilter`:
#. Call the ``Reset()`` method on the filter so as to reset any
temporary result members,
#. Stream the image piece-wise through the filter,
#. Call the ``Synthetize()`` method on the filter so as to compute the
final results.
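Although the framework is C++, the three-step protocol itself is language-agnostic. This toy Python sketch (all names hypothetical, not the OTB API) mimics what the decorator's ``Update()`` does for a mean computation:

```python
class PersistentMeanFilter:
    """Toy stand-in for a persistent mean filter (not the OTB class)."""

    def Reset(self):
        # 1. Reset temporary result members
        self.sum = 0.0
        self.count = 0

    def ProcessPiece(self, piece):
        # Called once per streaming block
        self.sum += sum(piece)
        self.count += len(piece)

    def Synthetize(self):
        # 3. Compute the final result from the intermediate ones
        self.mean = self.sum / self.count if self.count else 0.0

def streaming_update(filt, image, piece_size):
    filt.Reset()
    # 2. Stream the image piece-wise through the filter
    for i in range(0, len(image), piece_size):
        filt.ProcessPiece(image[i:i + piece_size])
    filt.Synthetize()

f = PersistentMeanFilter()
streaming_update(f, list(range(10)), piece_size=3)
print(f.mean)  # 4.5
```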
There are some methods to tune the behavior of the
:doxygen:`StreamingImageVirtualWriter`, allowing one to change the image
splitting method (tiles or strips) or the size of the streams with
respect to some target available amount of memory. Please see the class
documentation for details. The instance of the
:doxygen:`StreamingImageVirtualWriter` can be retrieved from the
:doxygen:`PersistentFilterStreamingDecorator` through the
``GetStreamer()`` method.
Though the internal filter of the
:doxygen:`PersistentFilterStreamingDecorator` can be accessed through
the ``GetFilter()`` method, the class is often derived to package the
streaming-decorated filter and wrap the parameter setters and getters.
An end-to-end example
---------------------
This is an end-to-end example of computing the mean over a full image,
using a streaming- and threading-enabled filter. Please note that only
the details specific to persistent filters are explained here. For more
general information on how to write a filter, please refer to section
:ref:`Filters`.
First step: writing a persistent filter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first step is to write a persistent mean image filter. We need to
include the appropriate header:

.. code-block:: cpp

    #include "otbPersistentImageFilter.h"
Then, we declare the class prototype as follows:
.. code-block:: cpp

    template <class TInputImage>
    class ITK_EXPORT PersistentMeanImageFilter :
      public PersistentImageFilter<TInputImage, TInputImage>
Since the output image will only be used for streaming purposes, we do
not need to declare different input and output template types.
In the *private* section of the class, we will declare a member which
will be used to store temporary results, and a member which will be used
to store the final result.
.. code-block:: cpp

    private:
      // Temporary results container
      std::vector<PixelType> m_TemporarySums;
      // Final result member
      double m_Mean;
Next, we will write the ``Reset()`` method implementation in the
*protected* section of the class. Proper allocation of the temporary
results container with respect to the number of threads is handled here.
.. code-block:: cpp

    protected:
      virtual void Reset()
      {
        // Retrieve the number of threads
        unsigned int numberOfThreads = this->GetNumberOfThreads();

        // Reset the temporary results container
        m_TemporarySums = std::vector<PixelType>(numberOfThreads, itk::NumericTraits<PixelType>::Zero);

        // Reset the final result
        m_Mean = 0.;
      }
Now, we need to write the ``ThreadedGenerateData()`` method (also in
the *protected* section), where temporary results will be computed for
each piece of stream.
.. code-block:: cpp

    virtual void ThreadedGenerateData(const RegionType& outputRegionForThread,
                                      itk::ThreadIdType threadId)
    {
      // Enable progress reporting
      itk::ProgressReporter progress(this, threadId, outputRegionForThread.GetNumberOfPixels());

      // Retrieve the input pointer
      InputImagePointer inputPtr = const_cast<TInputImage *>(this->GetInput());

      // Declare an iterator on the region
      itk::ImageRegionConstIteratorWithIndex<TInputImage> it(inputPtr, outputRegionForThread);

      // Walk the region of the image with the iterator
      for (it.GoToBegin(); !it.IsAtEnd(); ++it, progress.CompletedPixel())
      {
        // Retrieve pixel value
        const PixelType& value = it.Get();

        // Update temporary results for the current thread
        m_TemporarySums[threadId] += value;
      }
    }
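The essential point above is one accumulator slot per thread, so no synchronization is needed. The same idea in a short Python sketch (illustrative only; real OTB filters use the ITK threading machinery):

```python
import threading

def threaded_partial_sums(pixels, number_of_threads):
    # One accumulator per thread, mirroring m_TemporarySums
    partial = [0.0] * number_of_threads

    def work(thread_id, region):
        for value in region:
            # Safe without a lock: each thread writes only its own slot
            partial[thread_id] += value

    chunk = (len(pixels) + number_of_threads - 1) // number_of_threads
    threads = [threading.Thread(target=work, args=(t, pixels[t * chunk:(t + 1) * chunk]))
               for t in range(number_of_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return partial

print(sum(threaded_partial_sums(list(range(100)), 4)))  # 4950.0
```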
Last, we need to define the ``Synthetize()`` method (still in the
*protected* section), which will yield the final results:
.. code-block:: cpp

    virtual void Synthetize()
    {
      // For each thread
      for (unsigned int threadId = 0; threadId < this->GetNumberOfThreads(); ++threadId)
      {
        // Update the final result
        m_Mean += m_TemporarySums[threadId];
      }

      // Complete the computation by dividing by the total number of pixels
      unsigned int nbPixels = this->GetInput()->GetLargestPossibleRegion().GetNumberOfPixels();

      if (nbPixels != 0)
      {
        m_Mean /= nbPixels;
      }
    }
Second step: Decorating the filter and using it
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now, to use the filter, one only has to decorate it with the
:doxygen:`PersistentFilterStreamingDecorator`. The first step is to
include the appropriate headers:

.. code-block:: cpp

    #include "otbPersistentMeanImageFilter.h"
    #include "otbPersistentFilterStreamingDecorator.h"
Then, we decorate the filter with some typedefs:
.. code-block:: cpp

    typedef otb::PersistentMeanImageFilter<ImageType> PersistentMeanFilterType;
    typedef otb::PersistentFilterStreamingDecorator<PersistentMeanFilterType> StreamingMeanFilterType;
Now, the decorated filter can be used like any standard filter:
.. code-block:: cpp

    StreamingMeanFilterType::Pointer filter = StreamingMeanFilterType::New();

    filter->SetInput(reader->GetOutput());
    filter->Update();
Third step: one class to rule them all
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is often convenient to avoid the few typedefs of the previous
section by deriving a new class from the decorated filter:

.. code-block:: cpp

    template <class TInputImage>
    class ITK_EXPORT StreamingMeanImageFilter :
      public PersistentFilterStreamingDecorator<PersistentMeanImageFilter<TInputImage>>
This also allows redefining setters and getters for parameters, so that
one does not need to call the ``GetFilter()`` method to set them.
.. _StreamingAndThreading:
Streaming and Threading
=======================
Streaming and threading are two different things, even if they are
linked to a certain extent. In OTB:

- Streaming describes the ability to process a big image in several
  portions and make the output identical to what you would have
  obtained if the whole image had been processed at once. Streaming is
  compulsory when you're processing gigabyte images.
- Threading is the ability to process different parts of the image
  simultaneously. Threading will give you some benefit only if you have
  a fairly recent (multi-core) processor.

To sum up: streaming is good if you have big images, threading is good
if you have several processing units.
However, these two properties are not unrelated. Both rely on the
filter's ability to process parts of the image and combine the results;
that is what the ``ThreadedGenerateData()`` method does.
Streaming and threading in OTB
------------------------------
For OTB, streaming is pipeline-related while threading is
filter-related. If you build a pipeline in which one filter is not
streamable, the whole pipeline is not streamable: at some point, you
would hold the entire image in memory. In contrast, you will benefit
from a threaded filter even if the rest of the pipeline is made of
non-threadable filters (the processing time will be shorter for this
particular filter).

Even if you use a non-streaming writer, each filter which has a
``ThreadedGenerateData()`` will split the image in two and send each
part to one thread, and you will notice two calls to the function.
If you have some particular requirement and want to use only one
thread, you can call the ``SetNumberOfThreads()`` method on each of
your filters.
When you are writing your own filter, you have to follow some rules to
make your filter streamable and threadable. Some details are provided in
sections :ref:`StreamingLargeData` and :ref:`ThreadedFilterExecution`.
Division strategies
-------------------
The division of the image generally occurs at the writer level.
Different strategies are available and can be specified explicitly. In
OTB, these are referred to as *splitters*. Several available splitters
are:
- :doxygen-itk:`ImageRegionSplitter`
- :doxygen-itk:`ImageRegionMultidimensionalSplitter`
- :doxygen:`ImageRegionNonUniformMultidimensionalSplitter`
You can add your own strategies based on these examples.
To change the splitting strategy of the writer, you can use the
following model:
.. code-block:: cpp

    typedef otb::ImageRegionNonUniformMultidimensionalSplitter<3> SplitterType;

    SplitterType::Pointer splitter = SplitterType::New();
    writer->SetRegionSplitter(splitter);
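To make the difference between strategies concrete, here is a rough Python illustration (not the ITK splitter API) of how a strip-based and a tile-based division of a 2-D image extent differ:

```python
def split_strips(width, height, n):
    """Divide the image into n horizontal strips; the last one takes the remainder."""
    base = height // n
    strips, y = [], 0
    for i in range(n):
        h = base + (height - base * n if i == n - 1 else 0)
        strips.append((0, y, width, h))  # (x, y, region width, region height)
        y += h
    return strips

def split_tiles(width, height, tile):
    """Divide the image into tiles of side `tile` (edge tiles may be smaller)."""
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

print(split_strips(100, 10, 3))  # strips of heights 3, 3 and 4
print(split_tiles(100, 10, 32))  # four 32-pixel-wide tiles (the last one narrower)
```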
@@ -38,6 +38,7 @@ extensions = [
     'sphinx.ext.todo',
     'sphinx.ext.imgmath',
     'sphinx.ext.viewcode',
+    'sphinx.ext.extlinks',
 ]
 
 imgmath_latex='@LATEX_COMMAND@'
@@ -281,3 +282,8 @@ texinfo_documents = [
 # If true, do not generate a @detailmenu in the "Top" node's menu.
 #texinfo_no_detailmenu = False
+
+extlinks = {
+    'doxygen': ("http://www.orfeo-toolbox.org/doxygen/classotb_1_1%s.html", "otb::"),
+    'doxygen-itk': ("http://www.itk.org/Doxygen/html/classitk_1_1%s.html", "itk::")
+}
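With this configuration, a role such as ``:doxygen:`PersistentImageFilter``` is expanded by substituting the role target into the URL pattern, with the second tuple element used as a caption prefix (the extlinks behavior in the Sphinx versions of that era). The substitution itself is plain %-formatting, as this sketch shows:

```python
extlinks = {
    'doxygen': ("http://www.orfeo-toolbox.org/doxygen/classotb_1_1%s.html", "otb::"),
    'doxygen-itk': ("http://www.itk.org/Doxygen/html/classitk_1_1%s.html", "itk::"),
}

def expand(role, target):
    # Mimics what sphinx.ext.extlinks does with each (url, caption) pair
    url_pattern, caption_prefix = extlinks[role]
    return caption_prefix + target, url_pattern % target

print(expand('doxygen', 'PersistentImageFilter'))
```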
@@ -6,7 +6,7 @@
 {usage}
 
-Example source code (`{link_name} <{link_href}>`_):
+Example source code (`{link_name} <{link_href}>`__):
 
 .. code-block:: cpp
\chapter{Applications}
\label{sec:Applications}
This chapter introduces a set of ready-to-use applications. These
applications were designed to perform remote sensing tasks that are
more complex than the simple examples, in order to demonstrate the use
of OTB functions. They were previously known as the OTB-Applications
package but are now part of the OTB library. The new framework is
slightly different from the previous one, but the applications can be
used in pretty much the same way: each application has its set of
inputs, outputs and parameters. The applications can be launched from a
command line interface but also via a Qt GUI. In addition, they can be
wrapped for SWIG and PyQt. For a complete list of these applications,
please refer to the
\href{http://orfeo-toolbox.org/Applications}{applications documentation}.
\section{Example of use}
\label{sec:appExample}
@@ -105,14 +105,12 @@ ${SoftwareGuide_BINARY_DIR}/SoftwareGuideConfiguration.tex
)
SET( Tex_SRCS
Applications.tex
DataRepresentation.tex
Filtering.tex
GUI.tex
ImageInterpolators.tex
Infrastructure.tex
IO.tex
Iterators.tex
Numerics.tex
Registration.tex
StereoReconstruction.tex
@@ -121,14 +119,12 @@ SET( Tex_SRCS
SoftwareProcess.tex
SpatialObjects.tex
Statistics.tex
SystemOverview.tex
Visualization.tex
Watersheds.tex
Fusion.tex
Radiometry.tex
FeatureExtraction.tex
ObjectBasedImageAnalysis.tex
Hyperspectral.tex
)
@@ -48,86 +48,6 @@ a file.
\label{sec:AccessingImageMetadata}
\input{MetadataExample}
\subsection{RGB Images}
The term RGB (Red, Green, Blue) stands for a color representation commonly used
in digital imaging. RGB is a representation of the human physiological
capability to analyze visual light using three spectral-selective
sensors~\cite{Malacara2002,Wyszecki2000}. The human retina possesses different
types of light sensitive cells. Three of them, known as \emph{cones}, are
sensitive to color~\cite{Gray2003} and their regions of sensitivity loosely
match regions of the spectrum that will be perceived as red, green and blue
respectively. The \emph{rods} on the other hand provide no color discrimination
and favor high resolution and high sensitivity\footnote{The human eye is
capable of perceiving a single isolated photon.}. A fifth type of receptors,
the \emph{ganglion cells}, also known as circadian\footnote{The term
\emph{Circadian} refers to the cycle of day and night, that is, events that are
repeated with 24 hours intervals.} receptors are sensitive to the lighting
conditions that differentiate day from night. These receptors evolved as a
mechanism for synchronizing the physiology with the time of the day. Cellular
controls for circadian rhythms are present in every cell of an organism and are
known to be exquisitely precise~\cite{Lodish2000}.
The RGB space has been constructed as a representation of a physiological
response to light by the three types of \emph{cones} in the human eye. RGB is
not a Vector space. For example, negative numbers are not appropriate in a
color space because they will be the equivalent of ``negative stimulation'' on
the human eye. In the context of colorimetry, negative color values are used
as an artificial construct for color comparison in the sense that
\begin{equation}
\label{eqn:ColorSubtraction}
ColorA = ColorB - ColorC
\end{equation}
just as a way of saying that we can produce $ColorB$ by combining $ColorA$ and
$ColorC$. However, we must be aware that (at least in emitted light) it is not
possible to \emph{subtract light}. So when we mention
Equation~\ref{eqn:ColorSubtraction} we actually mean
\begin{equation}
\label{eqn:ColorAddition}
ColorB = ColorA + ColorC
\end{equation}
On the other hand, when dealing with printed color and with paint, as opposed
to emitted light like in computer screens, the physical behavior of color
allows for subtraction. This is because strictly speaking the objects that we
see as red are those that absorb all light frequencies except those in the red
section of the spectrum~\cite{Wyszecki2000}.
The concept of addition and subtraction of colors has to be carefully
interpreted. In fact, RGB has a different definition regarding whether we are
talking about the channels associated to the three color sensors of the human
eye, or to the three phosphors found in most computer monitors or to the color
inks that are used for printing reproduction. Color spaces are usually
non-linear and do not even form a Group. For example, not all visible colors can be
represented in RGB space~\cite{Wyszecki2000}.
ITK introduces the \doxygen{itk}{RGBPixel} type as a support for representing the
values of an RGB color space. As such, the RGBPixel class embodies a different
concept from the one of an \doxygen{itk}{Vector} in space. For this reason, the
RGBPixel class lacks many of the operators that may be naively expected from it. In
particular, there are no defined operations for subtraction or addition.
When you anticipate performing a ``Mean'' operation on an RGB type, you are
assuming that the color space provides a way of finding the color in the
middle of two colors through a linear operation on their numerical
representations. This is unfortunately not the case in color spaces,
due to the fact that they are based on a human physiological
response~\cite{Malacara2002}.
If you decide to interpret RGB images as simply three independent channels then
you should rather use the \doxygen{itk}{Vector} type as pixel type. In this way, you
will have access to the set of operations that are defined in Vector spaces.
The current implementation of the RGBPixel in ITK presumes that RGB color
images are intended to be used in applications where a formal interpretation of
color is desired, therefore only the operations that are valid in a color space
are available in the RGBPixel class.
The following example illustrates how RGB images can be represented in OTB.
\subsection{Vector Images}
\label{sec:DefiningVectorImages}
......
\chapter{Hyperspectral}
\input{HyperspectralUnmixingExample}
\chapter{Persistent filters}
\label{chapter:PersistentFilters}
\section{Introduction}
As presented in chapter~\ref{sec:StreamingAndThreading}, OTB has two
main mechanisms to handle large data efficiently: streaming allows
processing images piece-wise, and multi-threading allows processing
several pieces of one streaming block concurrently. Using these
concepts, one can easily write pixel-wise or neighborhood-based
filters and insert them into a pipeline which will be scalable with
respect to the input image size.
Yet, sometimes we need to compute global features on the whole image. One
example is to determine the mean and variance of the input image in
order to produce a centered and reduced image. The operation of
centering and reducing each pixel is fully compliant with streaming and
threading, but one has to first estimate the mean and variance of the
image. This first step requires walking the whole image once, and
traditional streaming and multi-threading based filter architecture is
of no help here.
This is because there is a fundamental difference between these two
operations: one supports streaming, and the other needs to perform
streaming. In fact we would like to stream the whole image piece by
piece through some filter that will collect and keep mean and variance
cumulants, and then synthesize these cumulants to compute the final
mean and variance once the full image has been streamed. Each
stream would also benefit from parallel processing. This is exactly
what persistent filters are for.
\section{Architecture}
There are two main objects in the persistent filters framework. The
first is the \doxygen{otb}{PersistentImageFilter}, the second is the
\doxygen{otb}{PersistentFilterStreamingDecorator}.
\subsection{The persistent filter class}
The \doxygen{otb}{PersistentImageFilter} class is a regular
\doxygen{itk}{ImageToImageFilter}, with two additional pure virtual
methods: the \verb?Synthetize()? and the \verb?Reset()? methods.
Imagine that the \verb?GenerateData()? or
\verb?ThreadedGenerateData()? progressively computes some global
feature of the whole image, using some member of the class to store
intermediate results. The \verb?Synthetize()? is an additional method
which is designed to be called once the whole image has been processed,
in order to compute the final results from the intermediate
results. The \verb?Reset()? method is designed to allow the reset of
the intermediate results members so as to start a fresh processing.
Any sub-class of the \doxygen{otb}{PersistentImageFilter} can be used
as a regular \doxygen{itk}{ImageToImageFilter} (provided that both
\verb?Synthetize()? and \verb?Reset()? have been implemented), but the
real interest of these filters is to be used with the streaming
decorator class presented in the next section.
\subsection{The streaming decorator class}
The \doxygen{otb}{PersistentFilterStreamingDecorator} is a class
designed to be templated with subclasses of the
\doxygen{otb}{PersistentImageFilter}. It provides the mechanism to
stream the whole image through the templated filter, using a third
class called \doxygen{otb}{StreamingImageVirtualWriter}. When the
\verb?Update()? method is called on a
\doxygen{otb}{PersistentFilterStreamingDecorator}, a pipeline
plugging the templated subclass of the
\doxygen{otb}{PersistentImageFilter} to an instance of
\doxygen{otb}{StreamingImageVirtualWriter} is created. The latter is
then updated, and acts like a regular
\doxygen{otb}{ImageFileWriter} but it does not actually write
anything to the disk: streaming pieces are requested and immediately
discarded. The \doxygen{otb}{PersistentFilterStreamingDecorator}
also calls the \verb?Reset()? method at the beginning and the
\verb?Synthetize()? method at the end of the