Optimize Cutting step
Enable in-memory processing
The cutting step is currently done after the SARCalibration step, and it uses GDAL to check the 100th row from the start and the 100th row from the end of the calibrated images. The state of these two rows could be tested directly on the S1 tiles, and the cutting could then be done before the calibration.
This would make it possible to perform cutting, calibration, and orthorectification in-memory, without any break in the pipeline.
Optimize Cutting Application
The exact cutting is actually a kind of ExtractBandROI that does not trim the edges but sets them to 0 instead. Using BandMath for this processing is really inefficient. The typical algorithm (in ThreadedGenerateData()) for such a clamping application is:
auto const thr1 = max(requestedRegion.StartY, min(requestedRegion.EndY, threshold));
auto const thr2 = max(thr1, min(requestedRegion.EndY, image.EndY - threshold));
assert(thr1 <= requestedRegion.EndY && "Iterations shall stay within requested region");
assert(thr2 <= requestedRegion.EndY && "Iterations shall stay within requested region");
for (y = requestedRegion.StartY ; y < thr1 ; ++y)
    out.Line(y) = 0;
for (y = thr1 ; y < thr2 ; ++y)
    out.Line(y) = in.Line(y);
for (y = thr2 ; y < requestedRegion.EndY ; ++y)
    out.Line(y) = 0;
TL;DR
- Change step order: cut before calibration
- Provide an OTB Band Cutting Application