[PATCH v6 17/18] libcamera: software_isp: Apply black level compensation

Hans de Goede hdegoede at redhat.com
Sun Apr 21 16:47:30 CEST 2024


Hi Laurent,

On 4/21/24 2:22 PM, Laurent Pinchart wrote:
> Hi Hans,
> 
> On Sun, Apr 21, 2024 at 02:16:06PM +0200, Hans de Goede wrote:
>> On 4/20/24 12:42 PM, Laurent Pinchart wrote:
>>> On Tue, Apr 16, 2024 at 05:31:38PM +0200, Milan Zamazal wrote:
>>>> Laurent Pinchart writes:
>>>>
>>>>> As for other parts of this series, I'm fine starting with this initial
>>>>> implementation and improving it, but I think the black level should
>>>>> eventually be moved before debayering, and ideally the colour gains as
>>>>> well. I understand the need for optimizations to lower the CPU
>>>>> consumption, but at the same time I don't feel comfortable building up
>>>>> on top of an implementation that may work a bit more by chance than by
>>>>> correctness, as that's not very maintainable.
>>>>
>>>> Hi Laurent,
>>>>
>>>> to make sure we understand each other correctly: Can we agree that "be moved
>>>> before debayering" above means primarily separating the level adjustment
>>>> computations from debayering code and building them (together with other stuff)
>>>> on top of libipa/algorithm.h?  Rather than first applying the corrections to the
>>>> data and only then performing debayering, a luxury we cannot afford, at least
>>>> on CPU?  I.e. the first step could be something like
>>>> https://gitlab.freedesktop.org/mzamazal/libcamera-softisp/-/commit/6180d853005bb956b3d2469576b139ff523b76ad ?
>>>
>>> What I meant is subtracting the black level before applying the colour
>>> gains and performing the CFA interpolation (a.k.a. debayering). I'm fine
>>> with the black level subtraction and CFA interpolation being performed
>>> in a single pass by a single function for performance reasons, but the
>>> order needs to be swapped.
>>
>> If we apply the per color lookup table (which does black-level compensation +
>> color-gain + gamma-curve in 1 step) to the raw bayer data then we either need to
>> do this on some extra copy before debayering or we need to do it multiple times
>> per pixel before averaging the surrounding pixels, neither of which is ideal.
> 
> It's not the full lookup table that needs to be applied before
> debayering, it's only the black level subtraction and the colour gains.

The lookup table also gives us clamping for free. If we do an actual
subtract + multiply we need to clamp, which means adding 2 conditional
jumps per pixel, and testing has shown that that really hurts on modern
processors.
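
To illustrate (a sketch only, the names are made up and this is not the
actual SoftwareISP code): the clamp moves into the table construction,
which runs once per 256 entries per frame instead of once per pixel:

#include <algorithm>
#include <array>
#include <stdint.h>

/*
 * Sketch: fold black level subtraction, colour gain and the gamma
 * curve into one per-colour 8-bit lookup table. The clamp happens
 * here, once per table entry, not per pixel.
 */
std::array<uint8_t, 256> buildLut(uint8_t blackLevel, double gain,
                                  const std::array<uint8_t, 256> &gammaTable)
{
        std::array<uint8_t, 256> lut;

        for (unsigned int i = 0; i < lut.size(); i++) {
                int v = static_cast<int>((static_cast<int>(i) - blackLevel) * gain);
                v = std::clamp(v, 0, 255);
                lut[i] = gammaTable[v];
        }

        return lut;
}

The per-pixel inner loop then stays a single unconditional load,
out = lut[raw], with no branches at all.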

> The gamma curve needs to go after debayering, and additional histogram
> stretching could go there too.
> 
> There's also a need to add a CCM matrix to the pipeline, multiplying the
> RGB values obtained by debayering by a 3x3 matrix before applying the
> gamma curve.

I have looked into CCM already, but IMHO that is too heavy to do on
the CPU. We might get away with it on big Intel cores, but at a
significant cost in CPU usage and battery life.
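
To put a number on it, a straight per-pixel CCM multiply is roughly the
sketch below (illustrative only, not proposed code): 9 multiplies,
6 additions and 3 clamps per pixel, on top of everything we already do.

#include <algorithm>
#include <stdint.h>

/* Sketch of a per-pixel 3x3 CCM multiply, applied after debayering. */
void applyCcm(const float ccm[3][3], uint8_t &r, uint8_t &g, uint8_t &b)
{
        float r2 = ccm[0][0] * r + ccm[0][1] * g + ccm[0][2] * b;
        float g2 = ccm[1][0] * r + ccm[1][1] * g + ccm[1][2] * b;
        float b2 = ccm[2][0] * r + ccm[2][1] * g + ccm[2][2] * b;

        r = static_cast<uint8_t>(std::clamp(r2, 0.0f, 255.0f));
        g = static_cast<uint8_t>(std::clamp(g2, 0.0f, 255.0f));
        b = static_cast<uint8_t>(std::clamp(b2, 0.0f, 255.0f));
}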

As for gamma correction, I think we need to live with that being applied
before debayering too (our debayering is a straight averaging of the
surrounding pixels, so the impact of doing this before / after should be
small).
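
As a simplified sketch (not the actual debayer code), reconstructing
green at a red Bayer site is just an average of the four green
neighbours; since those neighbours have already been through the LUT,
the gamma ends up inside the average:

#include <stdint.h>

/*
 * Sketch: green at a red site in an RGGB pattern, averaging the four
 * green neighbours from the rows above/below and the columns to the
 * left/right. The inputs are already LUT-mapped, so gamma-encoded
 * values get averaged; averaging linear values and applying gamma
 * afterwards would differ slightly, but for a plain average the
 * difference is small.
 */
uint8_t greenAtRed(const uint8_t *prev, const uint8_t *curr,
                   const uint8_t *next, int x)
{
        unsigned int sum = prev[x] + next[x] + curr[x - 1] + curr[x + 1];
        return static_cast<uint8_t>(sum / 4);
}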

> With that I think we would have a reasonable image quality already.

I agree that a CCM is the biggest missing item. But I think we can still
get reasonable quality without it, and the CPU SoftwareISP really should
be a fallback for when all else fails.

I would rather see us focus on working on a GPU-based SoftwareISP; then
we can add CCM and proper post-debayer gamma correction there.

> Additional processing, such as lens shading correction, defective pixel
> correction and denoising, is out of scope for CPU processing in my
> opinion. It could be implemented with a GPU soft ISP though.

Ack.

Regards,

Hans



