Commit 265770f2 authored by Christophe Palmann's avatar Christophe Palmann

ENH: doxygen, added missing files (2)

parent 048d09fb
/**
*
* \mainpage Monteverdi 2
*
* <div align="center"><img src="logoVectoriel.png" alt="logoVectoriel.png"></div>
*
* \section intro Introduction
*
 * Welcome to CNES' Monteverdi 2, open-source software for image visualization
 * and manipulation. It is built on top of the Orfeo ToolBox and OTB-Ice libraries.
*
* \section homepage Home Page
*
* The Home Page of the project can be found at:
*
* http://www.orfeo-toolbox.org
*
* \section howto How to use this documentation
*
* This documentation describes the API of Monteverdi2 components. The overall
* design often uses the Model-View-Controller pattern.
*
* The interface is based on a <a href="http://qt-project.org">Qt</a> framework.
* The OTB-Ice library is in charge of image rendering.
*
*/
/**
\page BloxPage Blox Framework
\section BloxIntroduction Introduction
The itk::BloxImage object is a regular, rectilinear lattice of "blocks"
in n-dimensional space. The word "blox" was chosen to bring to mind a
set of "city blocks" in 2D or "building blocks" in 3D. Being a
regular lattice, itk::BloxImage logically derives from itk::Image. In an
itk::BloxImage, each pixel represents an isometric space-filling block of
geometric space, called an itk::BloxPixel. Each itk::BloxPixel generally
covers many pixels in the underlying image and is used to store a
variable number of image primitives (such as boundary points) or
features (such as medial nodes) gathered within that region of geometric
space. To do this, each itk::BloxPixel contains a linked list.
The itk::BloxImage object facilitates certain forms of analysis by
providing geometric hashing. For example, if boundary points are stored
in an itk::BloxImage, pairs of boundary points that face each other
(called "core atoms") can be found by searching the relatively small
regions of geometric space facing each boundary point for appropriate
mates. Because an itk::BloxImage is rectilinear in geometric space (even
though the underlying image may not be), subsequent analysis can be
invariant to rotation and translation.
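The geometric hashing idea described above can be sketched with a small stand-alone structure (a toy illustration, not the actual itk::BloxImage API): a coarse grid whose cells each hold a linked list of points, so that a proximity query only scans the few cells that can intersect the search region.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <list>
#include <vector>

// Toy 2-D "blox"-style lattice: each cell of a coarse grid stores a
// linked list of the points that fall inside it, so spatial queries
// only touch nearby cells instead of the whole point set.
struct BloxGrid {
  double cellSize;
  int nx, ny;
  std::vector<std::list<std::array<double, 2>>> cells;

  BloxGrid(double size, int nx_, int ny_)
      : cellSize(size), nx(nx_), ny(ny_), cells(nx_ * ny_) {}

  int cellIndex(double x, double y) const {
    int cx = static_cast<int>(x / cellSize);
    int cy = static_cast<int>(y / cellSize);
    return cy * nx + cx;
  }

  void insert(double x, double y) { cells[cellIndex(x, y)].push_back({x, y}); }

  // Count stored points within `radius` of (x, y), scanning only the
  // cells that can intersect the query disk.
  int countNear(double x, double y, double radius) const {
    int count = 0;
    int c0 = std::max(0, static_cast<int>((x - radius) / cellSize));
    int c1 = std::min(nx - 1, static_cast<int>((x + radius) / cellSize));
    int r0 = std::max(0, static_cast<int>((y - radius) / cellSize));
    int r1 = std::min(ny - 1, static_cast<int>((y + radius) / cellSize));
    for (int cy = r0; cy <= r1; ++cy)
      for (int cx = c0; cx <= c1; ++cx)
        for (const auto& p : cells[cy * nx + cx]) {
          double dx = p[0] - x, dy = p[1] - y;
          if (std::sqrt(dx * dx + dy * dy) <= radius) ++count;
        }
    return count;
  }
};
```

Pairing queries such as the core-atom search then run against a handful of cells rather than the full image.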
*/
/**
\page GeometryPage Geometry Concepts
Insight provides basic classes to represent geometrical concepts. Although
the aim of the toolkit is not computational geometry, some of these elements
are necessary for common procedures in image processing.
This document presents the different geometric concepts used in the toolkit,
as well as their relationships and how they can be used effectively.
At the beginning there was the Point.
*/
/**
\page ImageSimilarityMetricsPage Image Similarity Metrics
\section MetricsIntroduction Introduction
It is a common task in image analysis to compare how
similar two images might be. This comparison may be limited to a
particular region of each image. <em>Image Similarity Metrics</em> are
methods that produce a quantitative evaluation of the similarity
between two images or two image regions.
These techniques are used as a basis for registration methods, because
they provide the information that indicates when the registration
process is going in the right direction.
A large number of image similarity metrics have been proposed in the
medical imaging and computer vision communities. There is no single \em right
image similarity metric, but rather a set of metrics that are appropriate for
particular applications. Metrics fit very well the notion of tools
in a toolkit: you need a set of them because no single one can do
the job of all the others.
The following table presents a comparison between image similarity
metrics. This is by no means an exhaustive comparison, but it will at
least provide some guidance as to which metric can be appropriate for
particular problems.
\subsection RegistrationMetrics Similarity Metrics
Metrics are probably the most critical element of a registration problem. The metric defines what the goal of the process is: it measures how well the Target object is matched by the Reference object after the transform has been applied to it. The metric should be selected as a function of the types of objects to be registered and the expected kind of misalignment. Some metrics have a rather large capture region, which means that the optimizer will be able to find its way to a maximum even if the misalignment is high. Typically, large capture regions are associated with low precision for the maximum. Other metrics can provide high precision for the final registration, but usually need to be initialized quite close to the optimal value.
Unfortunately there are no clear rules about how to select a metric, other than trying several of them under different conditions. In some cases it can be an advantage to use a particular metric to get an initial approximation of the transformation, and then switch to another, more sensitive metric to achieve better precision in the final result.
Metrics depend on the objects they compare. The toolkit currently offers <em> Image to Image </em> and <em> PointSet to Image </em> metrics as follows:
\li <b> Mean Squares </b> Sum of squared differences between intensity values. It requires the two objects to have intensity values in the same range.
\li <b> Normalized Correlation </b> Correlation between intensity values divided by the square root of the autocorrelation of both target and reference objects: \f$ \frac{\sum_i^n{a_i b_i}}{\sqrt{\sum_i^n{a_i^2}}\sqrt{\sum_i^n{b_i^2}}} \f$. This metric makes it possible to register objects whose intensity values are related by a linear transformation.
\li <b> Pattern Intensity </b> Squared differences between intensity values, transformed by a function of type \f$ \frac{1}{1+x} \f$ and then summed up. This metric has the advantage of increasing simultaneously when more samples are available and when intensity values are close.
\li <b> Mutual Information </b> Mutual information is based on a concept from information theory. The mutual information between two sets measures how much can be known about one set if only the other set is known. Given a set of values \f$ A=\{a_i\} \f$, its entropy \f$ H(A) \f$ is defined by \f$ H(A) = \sum_i^n{- p(a_i) \log({p(a_i)})} \f$, where \f$ p(a_i) \f$ are the probabilities of the values in the set. Entropy can be interpreted as a measure of the mean uncertainty reduction that is obtained when one of the particular values is found during sampling. Given two sets \f$ A=\{a_i\} \f$ and \f$ B=\{b_i\} \f$, their joint entropy is given by the joint probabilities \f$ p(a_i,b_i) \f$ as \f$ H(A,B) = \sum_i^n{-p(a_i,b_i) \log( p(a_i, b_i) )} \f$. Mutual information is obtained by subtracting the joint entropy from the sum of the individual entropies, as \f$ H(A)+H(B)-H(A,B) \f$, and indicates how much the uncertainty about one set is reduced by knowledge of the second set. Mutual information is the metric of choice when images from different modalities need to be registered.
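The mean squares and mutual information formulas above can be illustrated with short stand-alone functions (illustrative sketches operating on discrete samples, not the ITK metric classes):

```cpp
#include <cmath>
#include <map>
#include <vector>

// Mean squares: mean of squared intensity differences at
// corresponding sample positions.
double meanSquares(const std::vector<double>& a, const std::vector<double>& b) {
  double sum = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) {
    double d = a[i] - b[i];
    sum += d * d;
  }
  return sum / a.size();
}

// Entropy H(A) = -sum p(a) log p(a), with probabilities estimated
// from value frequencies.
double entropy(const std::vector<int>& values) {
  std::map<int, int> hist;
  for (int v : values) ++hist[v];
  double h = 0.0;
  for (const auto& bin : hist) {
    double p = static_cast<double>(bin.second) / values.size();
    h -= p * std::log(p);
  }
  return h;
}

// Mutual information I(A;B) = H(A) + H(B) - H(A,B), with the joint
// entropy estimated from co-occurring value pairs.
double mutualInformation(const std::vector<int>& a, const std::vector<int>& b) {
  std::vector<int> joint(a.size());
  for (std::size_t i = 0; i < a.size(); ++i)
    joint[i] = a[i] * 4096 + b[i];  // pack each (a_i, b_i) pair into one key
  return entropy(a) + entropy(b) - entropy(joint);
}
```

Note how, for two identical sample sets, the mutual information equals the entropy of either set, while an independent pairing yields a value near zero.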
*/
/**
\page ImageIteratorsPage Image Iterators
\section ImageIteratorsIntroduction Introduction
ImageIterators are the mechanism used in ITK for walking through the image
data.
You probably learned image processing with the classical access to the
image data using <b>"for loops"</b> like:
\code
const int nx = 200;
const int ny = 100;
ImageType image(nx,ny);
for(int x=0; x<nx; x++) // for all Columns
  {
  for(int y=0; y<ny; y++) // for all Rows
    {
    image(x,y) = 10;
    }
  }
\endcode
When what you \em really mean is:
\code
ForAllThePixels p in image Do p = 10
\endcode
ImageIterators get you closer to this algorithmic abstraction.
They abstract the low-level processing of images from the particular
implementation of the image class.
Here is how an image iterator is used in ITK:
\code
ImageType::Pointer im = GetAnImageSomeHow();
ImageIterator it( im, im->GetRequestedRegion() );
it.GoToBegin();
while( !it.IsAtEnd() )
  {
  it.Set( 10 );
  ++it;
  }
\endcode
This code can also be written as:
\code
ImageType::Pointer im = GetAnImageSomeHow();
ImageIterator it( im, im->GetRequestedRegion() );
for (it = it.Begin(); !it.IsAtEnd(); ++it)
  {
  it.Set( 10 );
  }
\endcode
One important advantage of ImageIterators is that they provide support for
the N-Dimensional images in ITK. Otherwise it would be impossible (or at
least very hard) to write algorithms that work independently of the
image dimension.
Another advantage of ImageIterators is that they support walking a region
of an image. In fact, one argument of an ImageIterator's constructor
defines the region or portion of an image to traverse.
Iterators know a lot about the internal composition of the image,
relieving the user from these details. Your algorithm can go through
all the pixels of an image without ever knowing the dimension of the image.
\section IteratorTypes Types of Iterators
The order in which the image pixels are visited can be quite important for
some image processing algorithms and may be inconsequential to other
algorithms as long as pixels are accessed as fast as possible.
To address these diverse requirements, ITK implements a set
of ImageIterators, always following the "C" philosophy of:
"You only pay for what you use".
Here is a list of some of the different ImageIterators implemented in ITK:
- itk::ImageRegionIterator
- itk::ImageRegionReverseIterator
Region iterators don't define any specific order in which to walk over the
pixels of the image. The user can be sure, though, that all the pixels inside
the region will be visited.
The following iterators allow walking the image in specific directions:
- itk::ImageLinearIteratorWithIndex Along lines
- itk::ImageSliceIteratorWithIndex Along lines, then along planes
Iterators in general can have <B>Read/Write</B> access to image pixels.
A family of iterators provides <B>Read Only</B> access, in order to
preserve the image content. These iterators are equivalent to "C"
pointers to const data:
\code
const PixelType * iterator;
\endcode
or to STL const_iterators:
\code
vector<PixelType>::const_iterator it;
\endcode
The class name of the iterator makes it clear whether it provides const
access or not. Some of the <TT>const</TT> iterators available are:
- itk::ImageConstIterator
- itk::ImageConstIteratorWithIndex
- itk::ImageLinearConstIteratorWithIndex
- itk::ImageRegionConstIteratorWithIndex
- itk::ImageSliceConstIteratorWithIndex
\subsection NeighbohoodIteratorType Other Types of Iterators
Another group of iterators support a moving neighborhood. Here the
neighborhood can "iterate" over an image and a calculation can iterate
over the neighborhood. This allows N-dimensional implementations of
convolution and finite differences to be written succinctly.
This class of iterators is described in detail on the page
\ref NeighborhoodIteratorsPage.
\subsection STL ImageIterators vs. STL Iterators
Given the breadth and complexity of ImageIterators, they are designed to
operate slightly differently than STL iterators. In STL, you ask a
container for an iterator that will traverse the container. Furthermore,
in STL, you frequently compare an iterator against another iterator.
Here is a loop to walk over an STL vector.
\code
for (it = vec.begin(); it != vec.end(); ++it)
{}
\endcode
ImageIterators, unfortunately, are more complicated than STL iterators.
ImageIterators need to store more state information than STL iterators.
As one example, ImageIterators can walk a region of an image and an
image can have multiple ImageIterators traversing different
regions simultaneously. Thus, each ImageIterator must maintain which region
it traverses. This results in a fairly heavyweight iterator, where
comparing two ImageIterators and constructing iterators are
expensive operations. To address this issue, ImageIterators have a
slightly different API than STL iterators.
First, you do not ask the container (the image) for an iterator. Instead,
you construct an iterator and tell it which image to traverse. Here
is a snippet of code to construct an iterator that will walk a region
of an image:
\code
ImageType::Pointer im = GetAnImageSomeHow();
ImageIterator it( im, im->GetRequestedRegion() );
\endcode
Second, since constructing and comparing ImageIterators is expensive,
ImageIterators know the beginning and end of the region. So you ask the
iterator rather than the container whether the iterator is at the end of
a region.
\code
for (it = it.Begin(); !it.IsAtEnd(); ++it)
  {
  it.Set( 10 );
  }
\endcode
\subsection IteratorsRegions Regions
Iterators are typically defined to walk a region of an image. ImageRegions
are defined to be rectangular prisms. (Insight also has a number of
iterators that can walk a region defined by a spatial function.)
The region for an iterator is defined at constructor time. Regions
are not validated, so the programmer is responsible for assigning a
region that is within the image. Iterator methods Begin() and End()
are defined relative to the region. See below.
\section IteratorAPI Iterator API
\subsection IteratorsPositioning Position
\subsection IteratorsIntervals Half Open Intervals - Begin/End
Like most iterator implementations, ImageIterators walk a half-open
interval. Begin is defined as the first pixel in the region. End is
defined as one pixel past the last pixel in the region (one pixel
past in the same row). So Begin points to a valid pixel in the region
and End points to a pixel that is outside the region.
\subsection IteratorsDereferencing Dereferencing
In order to get access to the image data pointed to by the iterator,
dereferencing is required. This is equivalent to the classical
"C" dereferencing code:
\code
PixelType * p = GetAPixelAddressSomeHow(); // a valid pixel address
*p = 100; // write access to a data
PixelType a = *p; // read access to data
\endcode
Iterators dereference data using <TT>Set()</TT> and <TT>Get()</TT>:
\code
imageIterator.Set( 100 );
PixelType a = imageIterator.Get();
\endcode
\subsection IteratorsOperatorPlusPlus operator++
The ++ operator moves the image iterator to the next pixel,
according to the particular order in which this iterator walks
the image.
\subsection IteratorsOperatorMinusMinus operator--
The -- operator moves the image iterator to the previous pixel,
according to the particular order in which this iterator walks
the image.
\subsection IteratorsIteratorsBegin Begin()
Begin() returns an iterator for the same image and region as the current
iterator but positioned at the first pixel in the region. The current iterator
is not modified.
\subsection IteratorsIteratorsEnd End()
End() returns an iterator for the same image and region as the current
iterator but positioned one pixel past the last pixel in the region.
The current iterator is not modified.
\subsection IteratorsIteratorsGotoBegin GoToBegin()
GoToBegin() repositions the iterator to the first pixel in the region.
\subsection IteratorsGotoEnd GoToEnd()
GoToEnd() repositions the iterator to one pixel past (in the same
row) the last pixel in the region.
\subsection IteratorsIsAtBegin IsAtBegin()
IsAtBegin() returns true if the iterator is positioned at the first
pixel in the region, and false otherwise. IsAtBegin() is faster than
comparing an iterator for equivalence to the iterator returned by Begin().
\code
if (it.IsAtBegin()) {} // Fast
if (it == it.Begin()) {} // Slow
\endcode
\subsection IteratorsIsAtEnd IsAtEnd()
IsAtEnd() returns true if the iterator is positioned one pixel past
the last pixel in the region, and false otherwise. IsAtEnd()
is faster than comparing an iterator for equivalence to the iterator
returned by End().
\code
if (it.IsAtEnd()) {} // Fast
if (it == it.End()) {} // Slow
\endcode
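The conventions described above (half-open interval, GoToBegin(), IsAtEnd(), Set()/Get()) can be mimicked by a minimal iterator over a flat buffer. The names follow the ITK style, but the class below is purely illustrative, not the real itk::ImageIterator implementation:

```cpp
#include <vector>

// Toy region iterator over a flat buffer: the region is the index
// range [begin, end), with End one past the last valid position.
class ToyIterator {
public:
  ToyIterator(std::vector<int>& buffer, std::size_t begin, std::size_t end)
      : m_Buffer(buffer), m_Begin(begin), m_End(end), m_Pos(begin) {}

  void GoToBegin() { m_Pos = m_Begin; }             // first pixel in the region
  bool IsAtEnd() const { return m_Pos == m_End; }   // one past the last pixel
  void Set(int value) { m_Buffer[m_Pos] = value; }  // write access
  int Get() const { return m_Buffer[m_Pos]; }       // read access
  ToyIterator& operator++() { ++m_Pos; return *this; }

private:
  std::vector<int>& m_Buffer;
  std::size_t m_Begin, m_End, m_Pos;
};
```

A walk with this class follows exactly the loop pattern shown earlier: `for (it.GoToBegin(); !it.IsAtEnd(); ++it) it.Set( 10 );` touches every position in the region and nothing outside it.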
\section IteratorFinalComment Final Comments
In general, iterators are not the kind of objects that users of the
toolkit would need to use. They are rather designed to be used by
code developers that add new components to the toolkit, such as
writing a new image filter.
Before starting to write code that uses iterators, users should
verify that the particular operation they intend to apply to the
image is not already available in the form of an existing image filter.
*/
/**
*
* \mainpage Orfeo Toolbox
*
* <div align="center"><img src="logoVectoriel.png" alt="logoVectoriel.png"></div>
*
* \section intro Introduction
*
 * Welcome to CNES' ORFEO Toolbox (OTB). OTB is open-source image processing
 * software, designed for remote sensing applications.
*
* \section homepage Home Page
*
* The Home Page of the project can be found at:
*
* http://www.orfeo-toolbox.org
*
* \section howto How to use this documentation
*
 * This documentation describes the API of the toolbox. You can start your
 * visit with the Classes link above, which details all available classes in OTB. The Modules
* link presents a hierarchy of classes organized according to their
* functionality. The Related Pages link presents design details, in
* particular the use of the data pipeline model and the philosophy of
* iterators.
*
*/
/**
\page RegistrationPage Registration Techniques
\section RegistrationIntroduction Introduction
\b Registration is a technique aimed at aligning two objects by means
of a particular transformation.
A typical example of registration is to have two medical images
from the same patient taken at different dates. It is very likely
that the patient assumed a different position during each acquisition.
A registration procedure would take both images and find
a spatial transformation mapping each pixel of one
image to its corresponding pixel in the other.
Another typical example of registration is to have a geometrical model
of an organ, let's say a bone. This model can be used to find the
corresponding structure in a medical image. In this case, a spatial
transformation is needed to find the correct location of the structure
in the image.
\section RegistrationFramework ITK Registration Framework
The Insight Toolkit takes full advantage of the power provided by
generic programming. Thanks to that, it has been possible to create
an abstraction of the particular problems that the toolkit is intended
to solve.
The registration problem has been decomposed into a set of basic
elements. They are:
\li \b Target: the object that is assumed to be static.
\li \b Reference: the object that will be transformed in order to be superimposed on the \e Target.
\li \b Transform: the mapping that will convert one point from the \e Reference object space to the \e Target object space.
\li \b Metric: a measure that indicates how well the \e Target object matches the \e Reference object after transformation.
\li \b Mapper: the particular technique used for interpolating values when objects are resampled through the \e Transform.
\li \b Optimizer: the method used to find the \e Transform parameters that optimize the \e Metric.
A particular registration method is defined by selecting specific implementations of each one of these basic elements.
In order to determine the registration method appropriate for a particular problem, it will be useful to answer the following questions:
\subsection TargetReference Target and Reference Objects
Currently the Target and Reference objects can be of type \b itkImage and \b itkPointSet. Methods have been instantiated for a variety of <em> Image to Image </em> and <em> PointSet to Image </em> registration cases.
\subsection Transforms Transforms
This is a rapid description of the transforms implemented in the toolkit:
\li \b Affine: The affine transform is N-dimensional. It is composed of an NxN matrix and a translation vector. The affine transform is a linear transformation that can manage rotations, translations, shearing and scaling.
\li \b Rigid3D: This transform is specific to 3D; it supports only rotations and translations. Rotations are represented using \e Quaternions.
\li \b Rigid3DPerspective: A composition of a \e Rigid3D transform followed by a perspective projection. This transformation is intended to be used in applications like X-ray projections.
\li \b Translation: An N-dimensional translation, internally represented as a vector.
\li \b Spline: A kernel-based spline is used to interpolate a mapping from a pair of point sets.
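For the affine case, the mapping amounts to a matrix product followed by a translation. A minimal 2-D sketch (illustrative only, not the itk::AffineTransform API):

```cpp
#include <array>

// Affine mapping for N = 2: p' = M * p + t, where M is a 2x2 matrix
// (rotation/scale/shear) and t is a translation vector.
using Point2 = std::array<double, 2>;
using Matrix2 = std::array<std::array<double, 2>, 2>;

Point2 affineTransform(const Matrix2& m, const Point2& t, const Point2& p) {
  return {m[0][0] * p[0] + m[0][1] * p[1] + t[0],
          m[1][0] * p[0] + m[1][1] * p[1] + t[1]};
}
```

A pure translation is the special case where M is the identity matrix, which is why the \b Translation transform can be stored as just a vector.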
\subsection RegistrationMetrics Similarity Metrics
Metrics are probably the most critical element of a registration problem. The metric defines what the goal of the process is: it measures how well the Target object is matched by the Reference object after the transform has been applied to it. The metric should be selected as a function of the types of objects to be registered and the expected kind of misalignment. Some metrics have a rather large capture region, which means that the optimizer will be able to find its way to a maximum even if the misalignment is high. Typically, large capture regions are associated with low precision for the maximum. Other metrics can provide high precision for the final registration, but usually need to be initialized quite close to the optimal value.
Unfortunately there are no clear rules about how to select a metric, other than trying several of them under different conditions. In some cases it can be an advantage to use a particular metric to get an initial approximation of the transformation, and then switch to another, more sensitive metric to achieve better precision in the final result.
Metrics depend on the objects they compare. The toolkit currently offers <em> Image to Image </em> and <em> PointSet to Image </em> metrics as follows:
\li <b> Mean Squares </b> Sum of squared differences between intensity values. It requires the two objects to have intensity values in the same range.
\li <b> Normalized Correlation </b> Correlation between intensity values divided by the square root of the autocorrelation of both target and reference objects: \f$ \frac{\sum_i^n{a_i b_i}}{\sqrt{\sum_i^n{a_i^2}}\sqrt{\sum_i^n{b_i^2}}} \f$. This metric makes it possible to register objects whose intensity values are related by a linear transformation.
\li <b> Pattern Intensity </b> Squared differences between intensity values, transformed by a function of type \f$ \frac{1}{1+x} \f$ and then summed up. This metric has the advantage of increasing simultaneously when more samples are available and when intensity values are close.
\li <b> Mutual Information </b> Mutual information is based on a concept from information theory. The mutual information between two sets measures how much can be known about one set if only the other set is known. Given a set of values \f$ A=\{a_i\} \f$, its entropy \f$ H(A) \f$ is defined by \f$ H(A) = \sum_i^n{- p(a_i) \log({p(a_i)})} \f$, where \f$ p(a_i) \f$ are the probabilities of the values in the set. Entropy can be interpreted as a measure of the mean uncertainty reduction that is obtained when one of the particular values is found during sampling. Given two sets \f$ A=\{a_i\} \f$ and \f$ B=\{b_i\} \f$, their joint entropy is given by the joint probabilities \f$ p(a_i,b_i) \f$ as \f$ H(A,B) = \sum_i^n{-p(a_i,b_i) \log( p(a_i, b_i) )} \f$. Mutual information is obtained by subtracting the joint entropy from the sum of the individual entropies, as \f$ H(A)+H(B)-H(A,B) \f$, and indicates how much the uncertainty about one set is reduced by knowledge of the second set. Mutual information is the metric of choice when images from different modalities need to be registered.
\subsection RegistrationOptimizers Optimizers
The following optimization methods are available:
\li <b> Gradient Descent </b>: Advances following the direction and magnitude of the gradient, scaled by a learning rate.
\li <b> Regular Step Gradient Descent </b>: Advances following the direction of the gradient and uses a bipartition scheme to compute the step length.
\li <b> Conjugate Gradient </b>: Nonlinear minimization that optimizes search directions using a second-order approximation of the cost function.
\li <b> Levenberg Marquardt </b>: Nonlinear least-squares minimization.
\li <b> LBFGS </b>: Limited-memory Broyden, Fletcher, Goldfarb and Shanno minimization.
\li <b> Amoeba </b>: Nelder-Mead downhill simplex.
\li <b> One Plus One Evolutionary </b>: Strategy that simulates the biological evolution of a set of samples in the search space.
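The first of these strategies can be sketched in a few lines: starting from an initial parameter value, step along the negative gradient scaled by the learning rate. The quadratic cost below is a toy stand-in for a real metric, not an ITK optimizer:

```cpp
// Gradient descent on the 1-D cost f(x) = (x - 3)^2, whose
// minimum is at x = 3. Each step moves against the gradient,
// scaled by the learning rate.
double gradientDescent(double x0, double learningRate, int iterations) {
  double x = x0;
  for (int i = 0; i < iterations; ++i) {
    double gradient = 2.0 * (x - 3.0);  // f'(x)
    x -= learningRate * gradient;
  }
  return x;
}
```

Too large a learning rate makes the iteration diverge, too small a rate makes it slow; schemes such as Regular Step Gradient Descent exist precisely to adapt the step length.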
\section MultiResolutionRegistration Multiresolution
The evaluation of a metric can be very expensive in computing time. An approach often used to improve performance is to first register reduced-resolution versions of the target and reference objects. The resulting transform is then used as the starting point for a second registration process performed on progressively higher resolution objects.
It is usual to first create a sequence of reduced-resolution versions of the objects; this set of objects is called a <em>pyramid representation</em>. A multiresolution method is basically a set of consecutive registration processes, each one performed at a particular level of the pyramid, using as its initial transform the resulting transform of the previous process.
Multiresolution offers the double advantage of increasing performance while at the same time improving the stability of the optimization by smoothing out local minima and increasing the capture region of the process.
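The pyramid idea can be sketched for 1-D signals and a pure translation: estimate the shift on downsampled versions first, then refine it at full resolution in a small window around the upscaled estimate. This is an illustrative toy; ITK's multiresolution framework is far more general:

```cpp
#include <vector>

// Sum of squared differences between a and b shifted by s
// (values outside the signal are treated as zero).
double ssd(const std::vector<double>& a, const std::vector<double>& b, int s) {
  double sum = 0.0;
  int n = static_cast<int>(a.size());
  for (int i = 0; i < n; ++i) {
    int j = i + s;
    double bv = (j >= 0 && j < n) ? b[j] : 0.0;
    double d = a[i] - bv;
    sum += d * d;
  }
  return sum;
}

// Coarser pyramid level: average adjacent pairs of samples.
std::vector<double> downsample2(const std::vector<double>& v) {
  std::vector<double> out;
  for (std::size_t i = 0; i + 1 < v.size(); i += 2)
    out.push_back(0.5 * (v[i] + v[i + 1]));
  return out;
}

// Exhaustive search for the shift minimizing the SSD in [lo, hi].
int bestShift(const std::vector<double>& a, const std::vector<double>& b,
              int lo, int hi) {
  int best = lo;
  for (int s = lo; s <= hi; ++s)
    if (ssd(a, b, s) < ssd(a, b, best)) best = s;
  return best;
}

// Two-level pyramid: a wide search at the coarse level, then a
// narrow refinement at full resolution around the upscaled result.
int coarseToFineShift(const std::vector<double>& a,
                      const std::vector<double>& b) {
  int coarse = bestShift(downsample2(a), downsample2(b), -8, 8);
  return bestShift(a, b, 2 * coarse - 2, 2 * coarse + 2);
}
```

The full-resolution search only covers a window of five shifts instead of the whole range, which is exactly the performance and capture-region benefit described above.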
*/
/**
\page StreamingPage Streaming
\section StreamingIntroduction Introduction
\image html Streaming.gif "Pipelines can be set up to stream data through filters in small pieces."
*/
/**
\page ThreadingPage Threading
\section ThreadingIntroduction Introduction
ITK is designed to run in multiprocessor environments. Many of
ITK's filters are multithreaded. When a multithreading filter
executes, it automatically divides the work amongst multiprocessors
in a shared memory configuration. We call this "Filter Level
Multithreading". Applications built with ITK can also manage their
own execution threads. For instance, an application might use one
thread for processing data and another thread for a user
interface. We call this "Application Level Multithreading".
\image html Threading.gif "Filters may process their data in multiple threads in a shared memory configuration."
\section FilterThreadSafety Filter Level Multithreading
A multithreaded filter provides an implementation of the
ThreadedGenerateData() method (see
itk::ImageSource::ThreadedGenerateData()) as opposed to the
normal single threaded GenerateData() method (see
itk::ImageSource::GenerateData()). A superclass of the filter will
spawn several threads (usually matching the number of processors in
the system) and call ThreadedGenerateData() in each thread
specifying the portion of the output that a given thread is
responsible for generating. For instance, on a dual processor
computer, an image processing filter will spawn two threads, each
processing thread will generate one half of the output image, and
each thread is restricted to writing to separate portions of the
output image. Note that the "entire" input and "entire" output
images (i.e. what would be available normally to the GenerateData()
method, see the discussion on Streaming) are available to each
call of ThreadedGenerateData(). Each thread is allowed to read
from anywhere in the input image but each thread can only write to
its designated portion of the output image.
The output image is a single contiguous block of memory that is
used by all processing threads. Each thread is informed which
pixels it is responsible for producing output values for. All
the threads write to this same block of memory, but a given thread
is only allowed to set specific pixels.
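This write-partitioning scheme can be sketched with plain C++ threads (a stand-in for ThreadedGenerateData(), not the ITK implementation): every thread may read the whole input, but each writes only its own stripe of the shared output.

```cpp
#include <thread>
#include <vector>

// Apply a trivial per-pixel operation (negation) in parallel.
// The input is fully readable by every thread; each thread writes
// only its own contiguous stripe [begin, end) of the shared output.
void threadedNegate(const std::vector<int>& input, std::vector<int>& output,
                    unsigned numThreads) {
  std::vector<std::thread> threads;
  std::size_t n = input.size();
  for (unsigned t = 0; t < numThreads; ++t) {
    std::size_t begin = n * t / numThreads;       // this thread's stripe
    std::size_t end = n * (t + 1) / numThreads;
    threads.emplace_back([&input, &output, begin, end]() {
      for (std::size_t i = begin; i < end; ++i)
        output[i] = -input[i];                    // writes stay in [begin, end)
    });
  }
  for (auto& th : threads) th.join();
}
```

Because the stripes are disjoint, no synchronization is needed for the writes, which is the property the filter framework relies on.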
\subsection FilterMemoryAllocation Memory Management
The GenerateData() method is responsible for allocating the output
bulk data. For an image processing filter, this corresponds to
calling Allocate() on the output image object. If a filter is
multithreaded, then it does not provide a GenerateData() method but
provides a ThreadedGenerateData() method. In this case, a
superclass' GenerateData() method will allocate the output bulk
data and call ThreadedGenerateData() for each thread. If a filter
is not multithreaded, then it must provide its own GenerateData()
method and allocate the bulk output data (for instance, calling
Allocate() on an output image) itself.
\section ApplicationThreadSafety Application Level Multithreading
ITK applications can be written to have multiple execution threads.
This is distinct from a given filter dividing its labor across
multiple execution threads. In the former case, the application is
responsible for spawning the separate execution threads,
terminating threads, and handling all event
mechanisms. (itk::MultiThreader can be used to spawn threads and
terminate threads in a platform independent manner.) In the latter
case, an individual filter will automatically spawn threads, execute
an algorithm, and terminate the processing threads.
Care must be taken in setting up an application to have separate
application level (as opposed to filter level) execution threads.
Individual ITK objects are not guaranteed to be thread safe. By this
we mean that a single instance of an object should only be modified
by a single execution thread. You should not try to modify a single
instance of an object in multiple execution threads.
ITK is designed so that different instances of the same class can be
accessed in different execution threads. But multiple threads should
not attempt to modify a single instance. This granularity of thread
safety was chosen as a compromise between performance and flexibility.
If we allowed ITK objects to be modified in multiple threads, then ITK
would have to mutex every access to every instance variable of a
class. This would severely affect performance.
\section NumericsThreadSafety Thread Safety in the Numerics Library
ITK uses a C++ wrapper around the standard NETLIB distributions
(http://www.netlib.org). These NETLIB distributions were converted
from FORTRAN to C using the standard f2c converter
(http://www.netlib.org/f2c/). A cursory glance at the f2c
generated NETLIB C code yields the impression that the NETLIB code
is not thread safe (due to COMMON blocks being translated to
function scope statics). We are still investigating this matter.
*/