
[Quick Read VIII] HDR->LDR: Parsing and Self-Implementation of MATLAB's tonemapfarbman Function.

2024-10-09 12:47:56

A friend recently asked me to implement MATLAB's HDR-to-LDR function tonemapfarbman myself. Taking advantage of the National Day holiday, I browsed through this function and did a small C++ implementation and optimization.

To see the effect of this function, you need MATLAB R2018b or later.

First, we download the paper for this algorithm referenced in the MATLAB help: Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation.

Reading through the paper, it does not so much propose a specific algorithm as a broad framework; its most useful part is Section 3.1, Multi-scale edge-preserving decompositions.

The idea is to use some edge-preserving filter to perform a (K+1)-level decomposition. The decomposition is sequential: each of the K+1 filtering passes uses progressively stronger parameters, so less and less detail survives while edges are preserved as much as possible. Subtracting each level's result from the previous level's result gives that level's detail layer, and the result of the last filtering pass is the base layer. Adding all the detail layers back onto the base layer reconstructs the original data exactly.
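To make the telescoping idea concrete, here is a small C++ sketch of the decomposition and exact reconstruction. It uses a plain box blur as a stand-in for the edge-preserving filter (the real code uses anisotropic diffusion), and the function names are my own:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in smoothing filter (three-tap box blur, borders left untouched);
// the real algorithm uses an edge-preserving filter instead.
std::vector<float> Smooth(const std::vector<float>& u, int passes)
{
    std::vector<float> out = u;
    for (int p = 0; p < passes; ++p) {
        std::vector<float> tmp = out;
        for (std::size_t i = 1; i + 1 < out.size(); ++i)
            tmp[i] = (out[i - 1] + out[i] + out[i + 1]) / 3.0f;
        out = tmp;
    }
    return out;
}

// Decompose `signal` into numScale detail layers plus a base layer,
// reconstruct, and return the maximum reconstruction error (should be ~0).
float ReconstructError(const std::vector<float>& signal, int numScale)
{
    std::vector<float> uPre = signal;
    std::vector<std::vector<float>> details;
    for (int k = 1; k <= numScale; ++k) {
        // Each level filters the original with progressively stronger smoothing.
        std::vector<float> uCurr = Smooth(signal, k);
        std::vector<float> d(signal.size());
        for (std::size_t i = 0; i < signal.size(); ++i)
            d[i] = uPre[i] - uCurr[i]; // detail layer = previous - current
        details.push_back(d);
        uPre = uCurr;
    }
    const std::vector<float>& base = uPre; // last result is the base layer
    float maxErr = 0.0f;
    for (std::size_t i = 0; i < signal.size(); ++i) {
        float rec = base[i];
        for (const auto& d : details) rec += d[i]; // base + all detail layers
        maxErr = std::max(maxErr, std::fabs(rec - signal[i]));
    }
    return maxErr;
}
```

Because the detail layers telescope, the reconstruction is exact no matter what filter is used; the choice of filter only matters once you start re-weighting the detail layers.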

Compared with the earlier post [16-bit RAW Image Processing I]: High-Dynamic-Range (HDR) Image Display Technology Based on the Fast Bilateral Filtering Algorithm, the biggest difference is that multiple layers of edge-preserving filtering are used here, whereas that post used only one.

The core difference from pyramid-based HDR methods is that there is no downsampling; everything is done at full resolution.

The HDR-to-LDR operation then comes down to one point: finding a way to adjust the detail-layer information. The simplest way is to multiply each detail layer by a coefficient; the paper goes on to describe other schemes for different needs. Looking at the core code of MATLAB's implementation of this function, though, the m-file processing is actually quite simple. Here is an excerpt of its core part:

 1 % Compute log luminance
 2 logLum = log(lum);
 3 
 4 % Apply diffusion filter for multi-scale decomposition
 5 % Compute detail layers
 6 % Recombine the layers together while moderately boosting multiple scale detail
 7 uPre = logLum;
 8 comLogLum = zeros(size(HDR,1),size(HDR,2), 'like', HDR);
 9 numItr = 5; % No. of iterations ('NumberOfIterations' of anisotropic diffusion) for the first level decomposition.
10 logLum(~isfinite(logLum)) = (realmax('single')/100); % replaced Inf with a high value.
11 rangeLum = sum([max(logLum(:)) - min(logLum(:)),eps('single')],'omitnan'); % range of the image
12 for scaleInd = 1:numScale
13     % gradThresh is the 'GradientThreshold' parameter of the anisotropic diffusion.
14     % Here, it is taken as 5% of the dynamic range of the image.
15     % gradThresh is increased for each decomposition level (multiplied by scaleInd (No. of iterations))
16     gradThresh = scaleInd*5*(rangeLum)/100;
17     uCurr = imdiffusefilt(logLum,'NumberOfIterations',numItr*scaleInd,'GradientThreshold',gradThresh);
18     detail = uPre - uCurr; % multi-scale decomposition (detail extraction)
19     comLogLum = comLogLum + weight(scaleInd).*detail; % weighted summation
20     uPre = uCurr;
21 end
22 comLogLum = comLogLum + uCurr;
23 
24 % Convert back to RGB
25 % The 99.9% intensities of the compressed image are remapped to give the output image a consistent look
26 newI = exp(comLogLum);
27 sI = sort(newI(:));
28 mx = sI(round(length(sI) * (99.9/100)));
29 newI = newI/mx;
30 
31 % Exposure, RangeCompression and saturation correction
32 expoCorrect = exposure*newI;
33 if size(HDR,3) == 1
34     LDR = expoCorrect.^gamma;
35 else
36     LDR = (expoCorrect .* (rgb .^ sat)).^gamma;
37 end
38 LDR = im2uint8(LDR);
39 end

Line 2 converts the image data into log space, which is essentially the standard first step of an HDR algorithm.

Lines 12 to 22 are the core of the algorithm. In this loop, the imdiffusefilt function serves as the edge-preserving filter; it is an iterative anisotropic diffusion filter with two parameters, a gradient threshold and a number of iterations. Within the loop, the number of iterations grows linearly with the loop index, and the gradient threshold is adjusted correspondingly at each level, yielding a progressively blurrier but still edge-preserving image, as shown below:

[Figure: Original | GradientThreshold = 12, NumberOfIterations = 5 | GradientThreshold = 24, NumberOfIterations = 10 | GradientThreshold = 36, NumberOfIterations = 15]

Line 18 uses detail = uPre - uCurr, i.e., subtracting the current edge-preserving result from the previous one, to get this level's detail layer. Then line 19

comLogLum = comLogLum + weight(scaleInd).*detail; 

multiplies the detail information by a user-specified enhancement factor (greater than 1 boosts the detail at this level, less than 1 suppresses it) before accumulating it onto the previously accumulated detail.

Line 22, comLogLum = comLogLum + uCurr;: at this point uCurr holds the result of the last edge-preserving filtering pass, i.e. the base layer, so adding it to the accumulated detail information gives our processed result.

Line 26 restores the log-space data to linear space via the exp function. In principle, im2uint8 could produce the final LDR image at this point; the detail has indeed been enhanced, but the overall visibility is still mediocre, so the output is then adjusted by two parameters, Exposure and Gamma correction.

Lines 27 to 29 take the 99.9th-percentile intensity of the restored data as a normalization threshold to clip overexposed outliers, and the subsequent code applies the exposure and gamma adjustments.
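For the single-channel path, these steps translate directly to C++. The sketch below mirrors lines 26 to 34 of the excerpt (the function name and default parameter values are my own):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Normalize by the 99.9th-percentile intensity, then apply exposure and
// gamma, mirroring mx = sI(round(length(sI)*0.999)) in the MATLAB excerpt.
std::vector<float> RangeCompress(std::vector<float> newI, float exposure, float gamma)
{
    std::vector<float> sI = newI;
    std::sort(sI.begin(), sI.end());
    // MATLAB is 1-indexed, so subtract 1 for the C++ index.
    std::size_t idx = static_cast<std::size_t>(std::lround(sI.size() * 0.999)) - 1;
    float mx = sI[std::min(idx, sI.size() - 1)];
    for (float& v : newI) {
        v = exposure * (v / mx);                 // normalize, then exposure
        v = std::pow(std::max(v, 0.0f), gamma);  // gamma correction
    }
    return newI;
}
```

Note that values above the 99.9th percentile simply map above 1.0 here; im2uint8 saturates them at the very end, which is what eliminates the overexposed outliers.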

There's nothing particularly complicated about the whole process.

Translating this function into C++ is not very complex; the main work is translating imdiffusefilt. For two-dimensional data, this function calls anisotropicDiffusion2D, which supports both 4-neighborhood and 8-neighborhood connectivity. Taking the 4-neighborhood case as an example, its core code is as follows:

 1  % DiffusionRate is fixed to 1/4 because we considered nearest neighbour
 2         % differences in 4 directions(East,West,North,South)
 3         diffusionRate = 1/4;
 4         diffImgNorth = paddedImg(1:end-1,2:end-1) - paddedImg(2:end,2:end-1);
 5         diffImgEast = paddedImg(2:end-1,2:end) - paddedImg(2:end-1,1:end-1);
 6         switch conductionMethod
 7             % Conduction coefficients
 8             case 'exponential'
 9                 conductCoeffNorth = exp(-(abs(diffImgNorth)/gradientThreshold).^2);
10                 conductCoeffEast = exp(-(abs(diffImgEast)/gradientThreshold).^2);
11             case 'quadratic'
12                 conductCoeffNorth = 1./(1+(abs(diffImgNorth)/gradientThreshold).^2);
13                 conductCoeffEast = 1./(1+(abs(diffImgEast)/gradientThreshold).^2);
14         end
15         fluxNorth = conductCoeffNorth .* diffImgNorth;
16         fluxEast =  conductCoeffEast .* diffImgEast;
17         
18         % Discrete PDE solution
19         I = I + diffusionRate * (fluxNorth(1:end-1,:) - fluxNorth(2:end,:) + ...
20             fluxEast(:,2:end) - fluxEast(:,1:end-1));

Most of the computation there is the exp function, plus the neighborhood differences involved. I recommend not translating it literally, because the MATLAB code is heavily vectorized; understand what it does first, then rewrite it.
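As a sketch of such a rewrite, here is a scalar C++ version of one diffusion iteration with exponential conduction, based on my reading of the MATLAB code above (the function name is mine; border handling uses clamped indices, equivalent to MATLAB's replicate padding):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// One 4-neighborhood anisotropic diffusion iteration (exponential conduction).
// The flux divergence formulation below is equivalent to summing
// conductCoeff * difference over the four neighbors of each pixel.
void DiffuseOnce(std::vector<float>& img, int w, int h, float gradThresh)
{
    const float rate = 0.25f; // diffusionRate fixed to 1/4 for 4 neighbors
    std::vector<float> out(img.size());
    auto at = [&](int y, int x) {          // replicate padding via clamping
        y = std::max(0, std::min(h - 1, y));
        x = std::max(0, std::min(w - 1, x));
        return img[y * w + x];
    };
    auto flux = [&](float diff) {          // conduction coefficient * difference
        float t = diff / gradThresh;
        return std::exp(-t * t) * diff;
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float c = at(y, x);
            float sum = flux(at(y - 1, x) - c) + flux(at(y + 1, x) - c)
                      + flux(at(y, x - 1) - c) + flux(at(y, x + 1) - c);
            out[y * w + x] = c + rate * sum; // discrete PDE update
        }
    img = out;
}
```

A constant region is untouched (all differences are zero), while an isolated spike is strongly averaged down; large differences relative to gradThresh get a near-zero conduction coefficient, which is what preserves edges.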

I loaded a single-channel 16-bit image of about 1700*3700 and tested it in MATLAB with the default parameters (3 levels); processing took about 0.6s, which I consider quite fast, because the algorithm involves a lot of floating-point computation, especially the exp function. My initial C++ version was much slower than MATLAB's, and even after SSE instruction optimization it still needed about 1100ms. Subsequent testing found that MATLAB's code uses multi-threading, while mine was single-threaded; with multi-threading, mine can finish in about 300ms.
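Since the per-pixel conduction computation is independent across rows, one simple way to get a multi-threaded C++ version is an OpenMP parallel for over the rows. This is only an illustrative sketch (the function name is mine): compile with -fopenmp to enable it; without that flag the pragma is ignored and the loop simply runs single-threaded.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Apply exp to every pixel, parallelized across rows. Each row writes a
// disjoint slice of `out`, so no synchronization is needed.
void ExpRows(const std::vector<float>& in, std::vector<float>& out, int w, int h)
{
    #pragma omp parallel for
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[y * w + x] = std::exp(in[y * w + x]);
}
```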

A further means of optimization is to replace exp with an approximate version, e.g. using the Fast exp algorithm; the iterated-squaring version used here can be implemented with the following code:

// Approximate exp(x) as (1 + x/256)^256, computed with eight successive squarings.
inline __m128 _mm_myexp_ps(__m128 x)
{
    __m128 T = _mm_add_ps(_mm_set1_ps(1.0f), _mm_mul_ps(x, _mm_set1_ps(1.0f / 256))); // 1 + x/256
    __m128 TT = _mm_mul_ps(T, T); // T^2
    TT = _mm_mul_ps(TT, TT);      // T^4
    TT = _mm_mul_ps(TT, TT);      // T^8
    TT = _mm_mul_ps(TT, TT);      // T^16
    TT = _mm_mul_ps(TT, TT);      // T^32
    TT = _mm_mul_ps(TT, TT);      // T^64
    TT = _mm_mul_ps(TT, TT);      // T^128
    TT = _mm_mul_ps(TT, TT);      // T^256
    return TT;
}

This is much faster than the standard exp while the accuracy stays roughly comparable for this use; with it, the single-threaded time drops to about 250ms.
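To check that "roughly comparable" claim, a scalar version of the same approximation is easy to compare against std::exp (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Scalar equivalent of the SSE code above: exp(x) ~= (1 + x/256)^256,
// computed with eight squarings instead of a call to pow.
float FastExp(float x)
{
    float t = 1.0f + x / 256.0f;
    for (int i = 0; i < 8; ++i)
        t *= t; // square eight times: (1 + x/256)^256
    return t;
}
```

For the moderate negative arguments the conduction coefficients produce, the relative error stays under about 1%; the approximation degrades for large |x|, but there both values are close to zero anyway.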

I've also integrated this algorithm into my DEMO; its parameter interface is shown below:

Personally, I feel this function is not particularly general; for some images you have to tune the parameters carefully to get reasonable results.

If you want to keep up with my latest articles, you can also follow my WeChat official account.
