[libcamera-devel] Colour spaces

Laurent Pinchart laurent.pinchart at ideasonboard.com
Tue Jan 12 03:01:40 CET 2021


Hi David,

On Thu, Jan 07, 2021 at 01:46:04PM +0000, David Plowman wrote:
> Hi everyone
> 
> I've just found myself wondering how I would signal to libcamera that
> I want a particular YUV colour space, such as JFIF for jpeg, BT601 for
> SD video, and so on. I haven't spotted a way to do this... have I
> perhaps missed something? (I notice a PixelFormatInfo class with a
> ColourEncoding enum, but it seems limited.)

You haven't missed anything, it's not there.

> If there really isn't a way to do this, then I suppose all the usual
> questions apply. Should it be a control? Or should it go in the stream
> configuration? And what values - start with the V4L2 ones?
> 
> Anyone have any thoughts on this?

Quite a few (sorry :-)). Please keep in mind that my knowledge is
limited regarding this topic though, so feel free to correct me when I'm
wrong.

First of all, colour space is an umbrella term that is often abused to
mean different things, so I'd like to know which parts you're interested
in (it may well be all of them).

I'm looking at BT.709, which nicely summarizes the colour-related
information in sections 1 and 3 (section 2 is not related to colours;
section 4 contains related information in its quantization levels, but
those also appear in section 3 as far as I can tell):

1. Opto-electronic conversion

This specifies the opto-electronic transfer characteristics and the
chromaticity coordinates (in CIE xyY space) of R, G, B and the D65
reference white.
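For reference, the RGB to CIE XYZ matrix implied by section 1 can be
derived numerically from those chromaticity coordinates alone. A minimal
sketch in pure Python (the primaries and D65 white point are the BT.709
values; the helper names are mine):

```python
# Derive the BT.709 RGB -> CIE XYZ matrix from the chromaticity
# coordinates of the primaries and the D65 reference white.

def xy_to_xyz(x, y):
    """Chromaticity (x, y) to XYZ with Y normalized to 1."""
    return [x / y, 1.0, (1.0 - x - y) / y]

def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

# BT.709 primaries and D65 white point (CIE xy).
red, green, blue = (0.640, 0.330), (0.300, 0.600), (0.150, 0.060)
white = (0.3127, 0.3290)

# Columns of the unscaled matrix are the primaries' XYZ values.
cols = [xy_to_xyz(*c) for c in (red, green, blue)]
m = [[cols[j][i] for j in range(3)] for i in range(3)]

# Scale each column so that R = G = B = 1 maps to the white point.
w = xy_to_xyz(*white)
minv = inv3(m)
s = [sum(minv[i][j] * w[j] for j in range(3)) for i in range(3)]
rgb_to_xyz = [[m[i][j] * s[j] for j in range(3)] for i in range(3)]

# The middle row recovers the familiar BT.709 luma weights:
# Y = 0.2126 R + 0.7152 G + 0.0722 B (approximately).
```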

3. Signal format

This specifies the transfer function (expressed as a gamma value), the
colour encoding and the quantization.
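The transfer function in question is worth spelling out, since "a gamma
value" undersells it slightly. The curve from BT.709 section 1 is
piecewise (the function name below is mine):

```python
# BT.709 opto-electronic transfer function: linear scene light L in
# [0, 1] to a non-linear signal value V.

def bt709_oetf(l):
    if l < 0.018:
        return 4.500 * l                  # linear segment near black
    return 1.099 * l ** 0.45 - 0.099      # power-law segment

# Although often summarized as "gamma 0.45", the linear toe matters:
# it avoids the infinite slope (and noise amplification) that a pure
# power law would have at black.
```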


To obtain BT.709 from a given camera sensor, we need to take the
sensor's colour characteristics into account to calculate the correct
colour transformation matrix (and the tone mapping curve) to obtain the
BT.709 primaries. This could be done inside libcamera with an API to
specify the desired "colour space", but I don't think that's the best
option. Exposing the colour characteristics of the sensor is needed for
correct processing of RAW data, and with manual control of the colour
transformation matrix and tone mapping curve, we could then achieve any
output colour space.
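As a rough sketch of what "manual control of the colour transformation
matrix" amounts to in the pipeline (all gain and matrix values below are
made up for illustration, not calibration data for any real sensor):

```python
# Illustrative pipeline step: per-channel white balance gains followed
# by a colour correction matrix (CCM) taking white-balanced sensor RGB
# to linear BT.709 RGB. All numbers are invented for this sketch.

wb_gains = (1.8, 1.0, 1.5)

ccm = [                                  # each row sums to 1.0 so that
    [ 1.70, -0.55, -0.15],               # white (1, 1, 1) is preserved
    [-0.30,  1.45, -0.15],
    [ 0.05, -0.65,  1.60],
]

def process(rgb):
    r, g, b = (c * gain for c, gain in zip(rgb, wb_gains))
    return [row[0] * r + row[1] * g + row[2] * b for row in ccm]

# A patch that the gains balance to neutral stays neutral through the
# CCM: process((1 / 1.8, 1.0, 1 / 1.5)) -> approximately [1, 1, 1].
```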

We already expose the colour transformation matrix, so we're missing the
tone mapping curve as a control, and the sensor colour characteristics
as a property. The latter are exposed in Android as a combination of a
reference illuminant and a transform matrix. It's actually multiple
matrices, one to map CIE XYZ to the colour space of the reference sensor
(a.k.a. golden module) and one to map the reference sensor colour space
to the colour space of the device's sensor - I assume the latter will be
an identity matrix when devices don't undergo per-device calibration.
There's also a forward matrix that maps white balanced colours (using
the reference illuminant) from the reference sensor colour space to the
CIE XYZ D50 white point. I'm sure I'm missing a few subtle points.
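Structurally, the Android scheme composes those matrices along these
lines (a sketch only; the matrix values are invented, and the variable
names are loosely modelled on the Android camera metadata, not taken
from it):

```python
# Sketch of how the Android-style matrices described above compose.
# Only the structure matters here; the numbers are illustrative.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# CIE XYZ -> reference sensor ("golden module") space, under the
# reference illuminant (made-up numbers).
color_transform = [[1.2, -0.1, 0.0], [-0.2, 1.1, 0.1], [0.0, -0.3, 1.3]]

# Reference sensor -> this device's sensor; identity when no
# per-device calibration has been performed.
calibration_transform = identity

# XYZ -> device sensor is the product of the two, so uncalibrated
# devices simply inherit the reference transform.
xyz_to_sensor = matmul(calibration_transform, color_transform)
```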

This approach makes sense as it's very flexible, but it's also hard to
use, and we really need a helper class to deal with this and compute the
colour correction matrix for the sensor to produce standard preset
colour spaces.

I suppose we can deal with quantization in either the colour
transformation matrix or the RGB to YUV matrix, and I'm not sure what's
best, or even what hardware typically supports. Does the colour
transformation typically have an offset vector, or is it only a
multiplication matrix?
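To make the quantization question concrete: limited-range ("studio
swing") BT.709 YCbCr needs an offset on top of the matrix, which is
exactly what a pure multiplication matrix can't express. A minimal
8-bit sketch (function name mine; coefficients are the BT.709 ones):

```python
# 8-bit limited-range BT.709 RGB -> YCbCr. The 3x3 matrix alone is not
# enough: limited-range quantization adds an offset vector (16 for Y,
# 128 for Cb/Cr) after the multiplication.

def rgb_to_ycbcr_709_limited(r, g, b):
    """r, g, b are non-linear (gamma-encoded) values in [0, 1]."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    # Quantize: Y spans 16..235, Cb and Cr span 16..240 around 128.
    return (round(16 + 219 * y),
            round(128 + 224 * cb),
            round(128 + 224 * cr))

# White maps to (235, 128, 128) and black to (16, 128, 128).
```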

-- 
Regards,

Laurent Pinchart
