[libcamera-devel] [PATCH v6] libcamera: properties: Define pixel array properties

Laurent Pinchart laurent.pinchart at ideasonboard.com
Fri Jul 31 04:21:25 CEST 2020


Hi Jacopo,

Thank you for the patch (and sorry for such a late reply).

First of all, I really like this, I think it's really good work.
Describing the pixel array size may seem like an easy task, but I know
how much variation we can see in camera sensors, and how much research
you've put into this. Coming up with the properties below that are both
flexible and clearly defined was not a simple task.

I have quite a few comments (you know me...), but nothing that requires
significant changes. Since you posted this series, David has submitted
a proposal for transformation (including rotation) support, which needs
to be taken into account as Sakari pointed out. I've proposed in the
review below a way to do so (don't worry, it's not a complete rewrite
:-)), please feel free to tell me if you disagree with some (or all) of
the proposals.

On Mon, Jun 08, 2020 at 12:55:09PM +0200, Jacopo Mondi wrote:
> On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> > On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> > > Add definition of pixel array related properties.
> > >
> > > Signed-off-by: Jacopo Mondi <jacopo at jmondi.org>
> > > ---
> > >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> > >  1 file changed, 263 insertions(+)
> > >
> > > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > > index ce627fa042ba..762d60881568 100644
> > > --- a/src/libcamera/property_ids.yaml
> > > +++ b/src/libcamera/property_ids.yaml
> > > @@ -386,4 +386,267 @@ controls:
> > >                                |                    |
> > >                                |                    |
> > >                                +--------------------+
> > > +
> > > +  - UnitCellSize:
> > > +      type: Size
> > > +      description: |
> > > +        The pixel unit cell physical size, in nanometers.
> > > +
> > > +        The UnitCellSize properties defines the horizontal and vertical sizes
> > > +        of a single pixel unit, including its active and non-active parts.

Would it be useful to add "In other words, it expresses the horizontal
and vertical distance between the top-left corners of adjacent pixels."
?
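To make the relationship between UnitCellSize and PixelArraySize
concrete, here's a quick sketch (plain Python, with hypothetical values
loosely modelled on a 1.12 µm, 8 MP sensor — illustrative numbers, not
from any real datasheet):

```python
# Hypothetical property values (not from a real sensor datasheet).
unit_cell_nm = (1120, 1120)      # UnitCellSize: width, height in nanometers
pixel_array_size = (3280, 2464)  # PixelArraySize: width, height in pixels

# Physical size of the readable pixel array area, in millimeters.
width_mm = unit_cell_nm[0] * pixel_array_size[0] / 1e6
height_mm = unit_cell_nm[1] * pixel_array_size[1] / 1e6
print(f"{width_mm:.3f} x {height_mm:.3f} mm")  # → 3.674 x 2.760 mm
```

Note that this gives the physical size of the *readable* area only,
which is exactly why it can't replace PixelArrayPhysicalSize (or vice
versa), as you point out below.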

> > > +
> > > +        The property can be used to calculate the physical size of the sensor's
> > > +        pixel array area and for calibration purposes.
> >
> > Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> > PixelArraySize?
> 
> Not really, as PixelArrayPhysicalSize reports the physical dimension
> of the full pixel array (readable and non readable pixels) while
> PixelArraySize reports the size in pixel of the largest readable
> image. To sum it up: PixelArrayPhysicalSize reports the chip area,
> which covers more space than the readable PixelArraySize.
> 
> > > +
> > > +  - PixelArrayPhysicalSize:
> > > +      type: Size
> > > +      description: |
> > > +        The camera sensor full pixel array size, in nanometers.
> > > +
> > > +        The PixelArrayPhysicalSize property reports the physical dimensions
> > > +        (width x height) of the full pixel array matrix, including readable
> > > +        and non-readable pixels.
> > > +
> > > +        \todo Rename this property to PhysicalSize once we will have property
> > > +              categories (i.e. Properties::PixelArray::PhysicalSize)

I understand Sakari's confusion regarding the UnitCellSize calculation
from PixelArrayPhysicalSize and PixelArraySize. I also agree with your
answer, but I think we could actually drop the PixelArrayPhysicalSize
property. I don't see what it could be used for; it describes the
physical size of something that can't be accessed at all from the host,
and should thus not matter at all.

If we later find a need for this, we can always add it. At that point, I
also think it would be best to express the property in pixel units
(although depending on the use case that drives addition of this
property, I could very well be wrong).

> > > +
> > > +  - PixelArraySize:
> > > +      type: Size
> > > +      description: |
> > > +        The camera sensor pixel array readable area vertical and horizontal
> > > +        sizes, in pixels.
> > > +
> > > +        The PixelArraySize property defines the size in pixel units of the
> > > +        readable part of full pixel array matrix, including optically black

s/optically/optical/

I've also come across the term "optical dark pixels" which I think is a
bit more accurate, as the pixels are covered and thus dark, but not
fully black due to leakage currents (otherwise there would be no need
for black level correction). "optical black" is used much more often
though, so I think we should keep that term.

> > > +        pixels used for calibration, pixels which are not considered valid for
> > > +        capture and active pixels valid for image capture.

Maybe "and active pixels containing valid image data" ?

> > > +
> > > +        The property describes a rectangle whose top-left corner is placed
> > > +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > > +        by the Size element transported by the property.

I would move this paragraph below to [1].

> > > +
> > > +        The property describes the maximum size of the raw data produced by
> > > +        the sensor, which might not correspond to the physical size of the

Maybe s/produced by the sensor/captured by the camera/, as the CSI-2
receiver could drop data too ? For instance, the invalid lines between
the optical black and valid regions could be sent by the sensor, with a
specific data type, and dropped on the receiving side by a decision of
the pipeline handler that chooses to configure the CSI-2 receiver to
drop the corresponding data type.

> > > +        sensor pixel array matrix, as some portions of the physical pixel
> > > +        array matrix are not accessible and cannot be transmitted out.
> > > +
> > > +        For example, a pixel array matrix assembled as follow

s/a pixel array/let's consider a pixel array/
s/follow/follows/

> > > +
> > > +             +--------------------------------------------------+
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             ...          ...           ...      ...          ...
> > > +
> > > +             ...          ...           ...      ...          ...
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > > +             +--------------------------------------------------+
> > > +
> > > +        composed by two lines of non-readable pixels (x) followed by N lines of

s/composed by/starting with/
s/(x)/(x),/

> > > +        readable data (D) surrounded by two columns of non-readable pixels on
> > > +        each side, only the readable portion is transmitted to the receiving

s/each side,/each side, and ending with two more lines of non-readable pixels./
s/only/Only/

> > > +        side, defining the sizes of the largest possible buffer of raw data

s/sizes/size/

> > > +        that can be presented to applications.
> > > +
> > > +                               PixelArraySize[0]

should this be PixelArraySize.width ? Same for height instead of [1],
and in other locations below.

> > > +               /----------------------------------------------/
> > > +               +----------------------------------------------+ /
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> > > +               ...        ...           ...      ...        ...
> > > +               ...        ...           ...      ...        ...
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > > +               +----------------------------------------------+ /
> > > +

[1], with a small modification:

"This defines a rectangle ... sizes are defined by this property."

and merge it with the following paragraph in a single paragraph.

> > > +        All other rectangles that describes portions of the pixel array, such as

s/describes/describe/

> > > +        the optical black pixels rectangles and active pixel areas are defined

s/areas/areas,/

> > > +        relatively to the rectangle described by this property.
> > > +
> > > +        \todo Rename this property to Size once we will have property
> > > +              categories (i.e. Properties::PixelArray::Size)
> > > +
> > > +  - PixelArrayOpticalBlackRectangles:
> > > +      type: Rectangle
> > > +      size: [1 x n]

This can just be [n], not all sizes need to be bi-dimensional.

> > > +      description: |
> > > +        The raw data buffer regions which contains optical black pixels

s/raw data buffer/pixel array/ for the reason explained below in [2]
s/regions/region(s)/ ?
s/contains/contain/

> > > +        considered valid for calibration purposes.
> > > +
> >
> > How does this interact with the rotation property?
> >
> > If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> > invert the flipping controls. I presume the same would apply to e.g. dark
> > pixels in case they are read out, but that should be something for a driver
> > to handle.
> 
> Right. I think this also depends on how black pixels are read out. I here
> assumed the sensor is capable of reading out the whole PixelArraySize area,
> and the receiver is able to capture its whole content in a single
> buffer, where application can then go and pick the areas of interest
> using the information conveyed by this property. If this model holds,
> then if sensor flipping is enabled, the information reported here is
> indeed misleading, unless the chip architecture is
> perfectly symmetric in vertical and horizontal dimensions (which seems
> unlikely).
> 
> > But if the frame layout isn't conveyed to the user space by the driver,
> > then we need another way to convey the sensor is actually rotated. Oh well.
> 
> Not sure how to interpret this part. Do you mean "convey that a
> rotation is applied by, i.e., setting the canonical V/HFLIP controls,
> which cause the sensor pixel array to be transmitted out with its
> vertical/horizontal read-out directions inverted ?"
> 
> We currently have a read-only property that reports the mounting
> rotation (like the dt-property you have just reviewed :) I assume we
> will have a rotation control that reports instead if a V/HFLIP is
> applied, so that applications know how to compensate for that.
> 
> > Not every sensor that is mounted upside down has flipping controls so the
> > user space will still somehow need to manage the rotation in that case.
> 
> I think it should, and I think we'll provide all information to be
> able to do so.

I think all the properties here should be expressed relative to the
default sensor readout direction. Any transformation (such as h/v flip)
will need to be taken into account by the application if the use cases
require it. That would be the easiest course of action; otherwise these
properties couldn't be static, as they would depend on the selected
transformation.

David has submitted a patch series with a Transform control (actually
specified in CameraConfiguration, not as a control). We will need to
document how the properties in this patch, the Rotation property, and
the Transform control, interact.

How about adding in the PixelArraySize documentation a paragraph that
states

"All the coordinates are expressed relative to the default sensor
readout direction, without any transformation (such as horizontal and
vertical flipping) applied. When mapping them to the raw pixel buffer,
applications shall take any configured transformation into account."

?
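As an illustration of what "take any configured transformation into
account" would mean in practice, here's a small sketch (plain Python,
a hypothetical helper, not a libcamera API) that maps a rectangle
expressed in the default readout direction to the coordinates of a
flipped raw buffer:

```python
def map_rect(rect, array_size, hflip=False, vflip=False):
    """Map a rectangle given in the default sensor readout direction to
    the coordinates of a raw buffer captured with h/v flip applied.
    rect is (x, y, width, height); array_size is (width, height)."""
    x, y, w, h = rect
    if hflip:
        x = array_size[0] - x - w
    if vflip:
        y = array_size[1] - y - h
    return (x, y, w, h)

# A top optical black rectangle on a 3280x2464 readable array becomes a
# bottom rectangle when the sensor is read out with VFLIP enabled.
print(map_rect((0, 0, 3280, 4), (3280, 2464), vflip=True))
# → (0, 2460, 3280, 4)
```

With the static-property model proposed above, this mapping would live
entirely on the application side.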

> > > +        The PixelArrayOpticalBlackRectangles describes (possibly multiple)

s/describes/property describes/ (it feels weird to have a plural name
with a singular verb).

> > > +        rectangular areas of the raw data buffer, where optical black pixels are

s/buffer,/buffer/

> > > +        located and could be accessed for calibration and black level
> > > +        correction.

Maybe just "... of the raw data buffers that contain optical black
pixels." ? That duplicates the previous paragraph a bit, maybe we could
just drop it.

> > > +
> > > +        This property describes the position and size of optically black pixel

s/optically/optical/

> > > +        rectangles relatively to their position in the raw data buffer as stored

s/rectangles/regions/
s/relatively to their position in the raw data buffer/in the raw data buffer/

> > > +        in memory, which might differ from their actual physical location in the
> > > +        pixel array matrix.
> > > +
> > > +        It is important to note, in facts, that camera sensor might

s/facts/fact/
s/sensor/sensors/

> > > +        automatically re-order, shuffle or skip portions of their pixels array

s/re-order/reorder/

I'd drop shuffle, I think "reorder or skip" is enough, hopefully the
sensors won't shuffle pixels randomly :-)

> > > +        matrix when transmitting data to the receiver.

Should we add an example ?

"For instance, a sensor may merge the top and bottom optical black
rectangles into a single rectangle, transmitted at the beginning of the
frame."

> > > +
> > > +        The pixel array contains several areas with different purposes,
> > > +        interleaved by lines and columns which are said not to be valid for
> > > +        capturing purposes. Invalid lines and columns are defined as invalid as
> > > +        they could be positioned too close to the chip margins or to the optical
> > > +        blank shielding placed on top of optical black pixels.

s/blank/black/

> > > +
> > > +                                PixelArraySize[0]
> > > +               /----------------------------------------------/
> > > +                  x1                                       x2
> > > +               +--o---------------------------------------o---+ /
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> > > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > +               ...          ...           ...     ...       ...
> > > +               ...          ...           ...     ...       ...
> > > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > > +               +----------------------------------------------+ /
> > > +
> > > +        The readable pixel array matrix is composed by
> > > +        2 invalid lines (I)
> > > +        4 lines of valid optically black pixels (O)

s/optically/optical/

> > > +        2 invalid lines (I)
> > > +        n lines of valid pixel data (P)
> > > +        2 invalid lines (I)
> > > +
> > > +        And the position of the optical black pixel rectangles is defined by
> > > +
> > > +            PixelArrayOpticalBlackRectangles = {
> > > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
> >
> > s/\+ \K}/1 }/

:perldo s/\+ \K}/1 }/

;-)

> > > +               { x1, y3, 2, y4 - y3 + 1 },
> > > +               { x2, y3, 2, y4 - y3 + 1 },
> > > +            };
> > > +
> > > +        If the sensor, when required to output the full pixel array matrix,
> > > +        automatically skip the invalid lines and columns, producing the

s/skip/skips/

> > > +        following data buffer, when captured to memory

For the same reason as above, this could become

        If the camera, when capturing the full pixel array matrix,
        automatically skips the invalid lines and columns, producing the
        following data buffer, when captured to memory

> > > +
> > > +                                    PixelArraySize[0]
> > > +               /----------------------------------------------/
> > > +                                                           x1
> > > +               +--------------------------------------------o-+ /
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> > > +               ...       ...          ...       ...         ... |
> > > +               ...       ...          ...       ...         ... |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > > +               +----------------------------------------------+ /
> > > +
> > > +        The invalid lines and columns should not be reported as part of the

s/The/then the/

> > > +        PixelArraySize property in first place.
> > > +
> > > +        In this case, the position of the black pixel rectangles will be
> > > +
> > > +            PixelArrayOpticalBlackRectangles = {
> > > +               { 0, 0, y1 + 1, PixelArraySize[0] },
> > > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> > > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> > > +            };

I think part of this should be moved to the PixelArraySize property, as
it defines what the PixelArraySize property should report. Let's merge
this patch first though, and improvements can go on top (unless you
would prefer addressing this first :-)).
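For what it's worth, the intended use of the rectangles can be sketched
as follows (plain Python, not a libcamera API; a hypothetical black
level estimate over a synthetic raw frame):

```python
def black_level(raw, stride, rects):
    """Average the optical black samples of a raw frame stored row-major
    with `stride` samples per line. rects is a list of (x, y, width,
    height) tuples, as reported by PixelArrayOpticalBlackRectangles."""
    total = count = 0
    for x, y, w, h in rects:
        for line in range(y, y + h):
            start = line * stride + x
            total += sum(raw[start:start + w])
            count += w
    return total / count

# Synthetic 6x4 frame: the first line holds optical black pixels at
# level 64, the remaining three lines hold image data at level 200.
frame = [64] * 6 + [200] * 18
print(black_level(frame, 6, [(0, 0, 6, 1)]))  # → 64.0
```

The key point the documentation makes is that the rectangles index into
the raw buffer as stored in memory, so the lookup above needs no
knowledge of the physical chip layout.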

> > > +
> > > +        \todo Rename this property to Size once we will have property
> > > +              categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> > > +
> > > +  - PixelArrayActiveAreas:
> > > +      type: Rectangle
> > > +      size: [1 x n]

[n] here too.

> > > +      description: |
> > > +        The PixelArrayActiveAreas property defines the (possibly multiple and
> > > +        overlapping) portions of the camera sensor readable pixel matrix
> > > +        which are considered valid for image acquisition purposes.
> > > +
> > > +        Each rectangle is defined relatively to the PixelArraySize rectangle,
> > > +        with its top-left corner defined by its horizontal and vertical
> > > +        distances from the PixelArraySize rectangle top-left corner, placed in
> > > +        position (0, 0).

I think you can drop the last three lines, they're a bit confusing.

> > > +
> > > +        This property describes an arbitrary number of overlapping rectangles,
> > > +        with each rectangle representing the maximum image size that the camera
> > > +        sensor can produce for a particular aspect ratio.

Maybe even moving the previous paragraph here as "They are defined
relatively to the PixelArraySize rectangle." ?

> > > +
> > > +        When multiple rectangles are reported, they shall be ordered from the
> > > +        tallest to the shortest.
> > > +
> > > +        Example 1
> > > +        A camera sensor which only produces images in the 4:3 image resolution
> > > +        will report a single PixelArrayActiveAreas rectangle, from which all
> > > +        other image formats are obtained by either cropping the field-of-view
> > > +        and/or applying pixel sub-sampling techniques such as pixel skipping or
> > > +        binning.
> > > +
> > > +                     PixelArraySize[0]
> > > +                    /----------------/
> > > +                      x1          x2
> > > +            (0,0)-> +-o------------o-+  /
> > > +                 y1 o +------------+ |  |
> > > +                    | |////////////| |  |
> > > +                    | |////////////| |  | PixelArraySize[1]
> > > +                    | |////////////| |  |
> > > +                 y2 o +------------+ |  |
> > > +                    +----------------+  /
> > > +
> > > +        The property reports a single rectangle
> > > +
> > > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > > +
> > > +        Example 2
> > > +        A camera sensor which can produce images in different native
> > > +        resolutions will report several overlapping rectangles, one for each
> > > +        natively supported resolution.
> > > +
> > > +                     PixelArraySize[0]
> > > +                    /------------------/
> > > +                      x1  x2    x3  x4
> > > +            (0,0)-> +o---o------o---o+  /
> > > +                 y1 o    +------+    |  |
> > > +                    |    |//////|    |  |
> > > +                 y2 o+---+------+---+|  |
> > > +                    ||///|//////|///||  | PixelArraySize[1]
> > > +                 y3 o+---+------+---+|  |
> > > +                    |    |//////|    |  |
> > > +                 y4 o    +------+    |  |
> > > +                    +----+------+----+  /
> > > +
> > > +        The property reports two rectangles
> > > +
> > > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> > > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > > +
> > > +        The first rectangle describes the maximum field-of-view of all image
> > > +        formats in the 4:3 resolutions, while the second one describes the
> > > +        maximum field of view for all image formats in the 16:9 resolutions.

Should we add one paragraph here as a clarification:

Multiple rectangles shall only be reported when the sensor can't capture
the pixels in the corner regions. If all the pixels in the (x1,y1) -
(x4,y4) area can be captured, the PixelArrayActiveAreas property shall
contain the single rectangle (x1,y1) - (x4,y4).

Or do you think it's overkill ?
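On the application side, picking the right rectangle would then be
straightforward; a sketch (plain Python, hypothetical helper and
values, not a libcamera API):

```python
def active_area_for(areas, target_ratio):
    """Pick the PixelArrayActiveAreas rectangle whose aspect ratio is
    closest to target_ratio. Each area is (x, y, width, height)."""
    return min(areas, key=lambda r: abs(r[2] / r[3] - target_ratio))

# Hypothetical sensor reporting a native 4:3 and a native 16:9 area,
# ordered from tallest to shortest as the property mandates.
areas = [(328, 0, 2624, 1968), (0, 246, 3280, 1845)]
print(active_area_for(areas, 16 / 9))  # → (0, 246, 3280, 1845)
print(active_area_for(areas, 4 / 3))   # → (328, 0, 2624, 1968)
```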

Reviewed-by: Laurent Pinchart <laurent.pinchart at ideasonboard.com>

> > > +
> > > +        \todo Rename this property to ActiveAreas once we will have property
> > > +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > > +
> > >  ...

-- 
Regards,

Laurent Pinchart

