[libcamera-devel] [PATCH v6] libcamera: properties: Define pixel array properties

Jacopo Mondi jacopo at jmondi.org
Mon Jun 8 12:55:09 CEST 2020


Hi Sakari,

On Mon, Jun 08, 2020 at 01:26:04PM +0300, Sakari Ailus wrote:
> Hi Jacopo,
>
> Thank you for the patch.

Thanks a lot for taking the time to comment on this.

>
> On Thu, Jun 04, 2020 at 05:31:22PM +0200, Jacopo Mondi wrote:
> > Add definition of pixel array related properties.
> >
> > Signed-off-by: Jacopo Mondi <jacopo at jmondi.org>
> > ---
> >  src/libcamera/property_ids.yaml | 263 ++++++++++++++++++++++++++++++++
> >  1 file changed, 263 insertions(+)
> >
> > diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> > index ce627fa042ba..762d60881568 100644
> > --- a/src/libcamera/property_ids.yaml
> > +++ b/src/libcamera/property_ids.yaml
> > @@ -386,4 +386,267 @@ controls:
> >                                |                    |
> >                                |                    |
> >                                +--------------------+
> > +
> > +  - UnitCellSize:
> > +      type: Size
> > +      description: |
> > +        The pixel unit cell physical size, in nanometers.
> > +
> > +        The UnitCellSize property defines the horizontal and vertical sizes
> > +        of a single pixel unit, including its active and non-active parts.
> > +
> > +        The property can be used to calculate the physical size of the sensor's
> > +        pixel array area and for calibration purposes.
>
> Do we need this? Could it not be calculated from PixelArrayPhysicalSize and
> PixelArraySize?
>

Not really, as PixelArrayPhysicalSize reports the physical dimensions
of the full pixel array (readable and non-readable pixels) while
PixelArraySize reports the size in pixels of the largest readable
image. To sum it up: PixelArrayPhysicalSize reports the chip area,
which covers more space than the readable PixelArraySize.
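
To make the difference concrete, here is a stand-alone sketch (made-up
numbers for a hypothetical sensor, plain C++ rather than libcamera API)
showing that dividing the chip area by the readable pixel count
over-estimates the unit cell size, which is why UnitCellSize has to be
reported separately:

    /*
     * Minimal sketch with hypothetical values: why UnitCellSize cannot be
     * derived from PixelArrayPhysicalSize and PixelArraySize alone.
     */
    #include <cstdint>
    #include <iostream>

    int main()
    {
            /* Hypothetical sensor data. */
            const uint64_t unitCellW = 1120, unitCellH = 1120;        /* nm */
            const uint64_t pixelsW = 4224, pixelsH = 3136;            /* readable pixels */
            const uint64_t physicalW = 4900000, physicalH = 3700000;  /* nm, full chip */

            /* Physical size of the readable area only. */
            const uint64_t readableW = unitCellW * pixelsW;
            const uint64_t readableH = unitCellH * pixelsH;

            std::cout << "readable area:  " << readableW << " x " << readableH << " nm\n"
                      << "full chip area: " << physicalW << " x " << physicalH << " nm\n"
                      /* The naive estimate comes out too large, as the chip
                       * also contains non-readable pixels. */
                      << "naive cell estimate: " << physicalW / pixelsW << " x "
                      << physicalH / pixelsH << " nm\n";

            return 0;
    }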

> > +
> > +  - PixelArrayPhysicalSize:
> > +      type: Size
> > +      description: |
> > +        The camera sensor full pixel array size, in nanometers.
> > +
> > +        The PixelArrayPhysicalSize property reports the physical dimensions
> > +        (width x height) of the full pixel array matrix, including readable
> > +        and non-readable pixels.
> > +
> > +        \todo Rename this property to PhysicalSize once we have property
> > +              categories (i.e. Properties::PixelArray::PhysicalSize)
> > +
> > +  - PixelArraySize:
> > +      type: Size
> > +      description: |
> > +        The camera sensor pixel array readable area vertical and horizontal
> > +        sizes, in pixels.
> > +
> > +        The PixelArraySize property defines the size in pixel units of the
> > +        readable part of the full pixel array matrix, including optically black
> > +        pixels used for calibration, pixels which are not considered valid for
> > +        capture, and active pixels valid for image capture.
> > +
> > +        The property describes a rectangle whose top-left corner is placed
> > +        in position (0, 0) and whose vertical and horizontal sizes are defined
> > +        by the Size element transported by the property.
> > +
> > +        The property describes the maximum size of the raw data produced by
> > +        the sensor, which might not correspond to the physical size of the
> > +        sensor pixel array matrix, as some portions of the physical pixel
> > +        array matrix are not accessible and cannot be transmitted out.
> > +
> > +        For example, a pixel array matrix assembled as follows
> > +
> > +             +--------------------------------------------------+
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             ...          ...           ...      ...          ...
> > +
> > +             ...          ...           ...      ...          ...
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDxx|
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
> > +             +--------------------------------------------------+
> > +
> > +        composed of two lines of non-readable pixels (x) followed by N lines of
> > +        readable data (D) surrounded by two columns of non-readable pixels on
> > +        each side, only the readable portion is transmitted to the receiving
> > +        side, which defines the size of the largest possible buffer of raw data
> > +        that can be presented to applications.
> > +
> > +                               PixelArraySize[0]
> > +               /----------------------------------------------/
> > +               +----------------------------------------------+ /
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| | PixelArraySize[1]
> > +               ...        ...           ...      ...        ...
> > +               ...        ...           ...      ...        ...
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               |DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD| |
> > +               +----------------------------------------------+ /
> > +
> > +        All other rectangles that describe portions of the pixel array, such as
> > +        the optical black pixel rectangles and active pixel areas, are defined
> > +        relative to the rectangle described by this property.
> > +
> > +        \todo Rename this property to Size once we have property
> > +              categories (i.e. Properties::PixelArray::Size)
> > +
> > +  - PixelArrayOpticalBlackRectangles:
> > +      type: Rectangle
> > +      size: [1 x n]
> > +      description: |
> > +        The raw data buffer regions which contain optical black pixels
> > +        considered valid for calibration purposes.
> > +
>
> How does this interact with the rotation property?
>
> If the sensor is rotated 180°, V4L2 sub-device sensor drivers generally
> invert the flipping controls. I presume the same would apply to e.g. dark
> pixels in case they are read out, but that should be something for a driver
> to handle.

Right. I think this also depends on how black pixels are read out. I
assumed here that the sensor is capable of reading out the whole
PixelArraySize area, and that the receiver is able to capture its whole
content in a single buffer, where applications can then go and pick the
areas of interest using the information conveyed by this property. If
this model holds and sensor flipping is enabled, then the information
reported here is indeed misleading, unless the chip architecture is
perfectly symmetric in the vertical and horizontal dimensions (which
seems unlikely).
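
For illustration only, and assuming the whole PixelArraySize area is
still read out (just mirrored), remapping a reported rectangle under
flips would look roughly like this; plain structs, not libcamera types:

    /*
     * Rough sketch, not libcamera API: mirror a reported rectangle inside
     * an arrayW x arrayH readable area when H/V flips are enabled.
     */
    struct Rect {
            int x, y, width, height;
    };

    Rect remapForFlips(const Rect &r, int arrayW, int arrayH,
                       bool hflip, bool vflip)
    {
            Rect out = r;

            /* An H-flip maps column c to (arrayW - 1 - c), so the rectangle
             * keeps its size and only its top-left corner moves. */
            if (hflip)
                    out.x = arrayW - r.x - r.width;
            if (vflip)
                    out.y = arrayH - r.y - r.height;

            return out;
    }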

>
> But if the frame layout isn't conveyed to the user space by the driver,
> then we need another way to convey the sensor is actually rotated. Oh well.

Not sure how to interpret this part. Do you mean "convey that a
rotation is applied by, i.e., setting the canonical V/HFLIP controls,
which cause the sensor pixel array to be transmitted out with its
vertical/horizontal read-out directions inverted?"

We currently have a read-only property that reports the mounting
rotation (like the dt-property you have just reviewed :). I assume we
will have a rotation control that instead reports whether a V/HFLIP is
applied, so that applications know how to compensate for it.
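
To give an idea, and purely hypothetically (the flip-reporting control
does not exist yet, and all names below are made up), the application
side could combine the two pieces of information along these lines:

    /*
     * Hypothetical sketch: combine the read-only mounting rotation with a
     * (not yet existing) flip status to decide whether the application
     * still has to rotate the image itself.
     */
    bool needsSoftwareRotation(int mountingRotation, bool driverApplies180Flip)
    {
            /*
             * A 180 degree mounting rotation that the driver already
             * compensates with H+V flips leaves nothing to do in software.
             */
            int effective = driverApplies180Flip ?
                            (mountingRotation + 180) % 360 : mountingRotation;

            return effective != 0;
    }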

>
> Not every sensor that is mounted upside down has flipping controls so the
> user space will still somehow need to manage the rotation in that case.
>

I think it should, and I think we'll provide all the information
needed to be able to do so.

Thanks
  j

> > +        The PixelArrayOpticalBlackRectangles property describes (possibly
> > +        multiple) rectangular areas of the raw data buffer where optical
> > +        black pixels are located and could be accessed for calibration and
> > +        black level correction.
> > +
> > +        This property describes the position and size of optically black pixel
> > +        rectangles relative to their position in the raw data buffer as stored
> > +        in memory, which might differ from their actual physical location in the
> > +        pixel array matrix.
> > +
> > +        It is important to note, in fact, that camera sensors might
> > +        automatically re-order, shuffle or skip portions of their pixel array
> > +        matrix when transmitting data to the receiver.
> > +
> > +        The pixel array contains several areas with different purposes,
> > +        interleaved with lines and columns which are not valid for capture
> > +        purposes. Lines and columns are defined as invalid because they could
> > +        be positioned too close to the chip margins or to the optical black
> > +        shielding placed on top of optical black pixels.
> > +
> > +                                PixelArraySize[0]
> > +               /----------------------------------------------/
> > +                  x1                                       x2
> > +               +--o---------------------------------------o---+ /
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +            y1 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +               |IIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +            y2 oIIOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +            y3 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| | PixelArraySize[1]
> > +               |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > +               ...          ...           ...     ...       ...
> > +               ...          ...           ...     ...       ...
> > +            y4 |IIOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOOII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               |IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII| |
> > +               +----------------------------------------------+ /
> > +
> > +        The readable pixel array matrix is composed of
> > +        2 invalid lines (I)
> > +        4 lines of valid optically black pixels (O)
> > +        2 invalid lines (I)
> > +        n lines of valid pixel data (P)
> > +        2 invalid lines (I)
> > +
> > +        And the position of the optical black pixel rectangles is defined by
> > +
> > +            PixelArrayOpticalBlackRectangles = {
> > +               { x1, y1, x2 - x1 + 1, y2 - y1 + },
>
> s/\+ \K}/1 }/
>
> > +               { x1, y3, 2, y4 - y3 + 1 },
> > +               { x2, y3, 2, y4 - y3 + 1 },
> > +            };
> > +
> > +        If the sensor, when required to output the full pixel array matrix,
> > +        automatically skips the invalid lines and columns, producing the
> > +        following data buffer when captured to memory
> > +
> > +                                    PixelArraySize[0]
> > +               /----------------------------------------------/
> > +                                                           x1
> > +               +--------------------------------------------o-+ /
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +               |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO| |
> > +            y1 oOOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| | PixelArraySize[1]
> > +               ...       ...          ...       ...         ... |
> > +               ...       ...          ...       ...         ... |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               |OOPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPOO| |
> > +               +----------------------------------------------+ /
> > +
> > +        The invalid lines and columns should not be reported as part of the
> > +        PixelArraySize property in the first place.
> > +
> > +        In this case, the position of the black pixel rectangles will be
> > +
> > +            PixelArrayOpticalBlackRectangles = {
> > +               { 0, 0, y1 + 1, PixelArraySize[0] },
> > +               { 0, y1, 2, PixelArraySize[1] - y1 + 1 },
> > +               { x1, y1, 2, PixelArraySize[1] - y1 + 1 },
> > +            };
> > +
> > +        \todo Rename this property to OpticalBlackRectangles once we have
> > +              property categories (i.e. Properties::PixelArray::OpticalBlackRectangles)
> > +
> > +  - PixelArrayActiveAreas:
> > +      type: Rectangle
> > +      size: [1 x n]
> > +      description: |
> > +        The PixelArrayActiveAreas property defines the (possibly multiple and
> > +        overlapping) portions of the camera sensor readable pixel matrix
> > +        which are considered valid for image acquisition purposes.
> > +
> > +        Each rectangle is defined relative to the PixelArraySize rectangle,
> > +        with its top-left corner defined by its horizontal and vertical
> > +        distances from the PixelArraySize rectangle top-left corner, placed in
> > +        position (0, 0).
> > +
> > +        This property describes an arbitrary number of overlapping rectangles,
> > +        with each rectangle representing the maximum image size that the camera
> > +        sensor can produce for a particular aspect ratio.
> > +
> > +        When multiple rectangles are reported, they shall be ordered from the
> > +        tallest to the shortest.
> > +
> > +        Example 1
> > +        A camera sensor which only produces images in the 4:3 aspect ratio
> > +        will report a single PixelArrayActiveAreas rectangle, from which all
> > +        other image formats are obtained by cropping the field-of-view and/or
> > +        applying pixel sub-sampling techniques such as pixel skipping or
> > +        binning.
> > +
> > +                     PixelArraySize[0]
> > +                    /----------------/
> > +                      x1          x2
> > +            (0,0)-> +-o------------o-+  /
> > +                 y1 o +------------+ |  |
> > +                    | |////////////| |  |
> > +                    | |////////////| |  | PixelArraySize[1]
> > +                    | |////////////| |  |
> > +                 y2 o +------------+ |  |
> > +                    +----------------+  /
> > +
> > +        The property reports a single rectangle
> > +
> > +                 PixelArrayActiveAreas = (x1, y1, x2 - x1 + 1, y2 - y1 + 1)
> > +
> > +        Example 2
> > +        A camera sensor which can produce images in different native
> > +        resolutions will report several overlapping rectangles, one for each
> > +        natively supported resolution.
> > +
> > +                     PixelArraySize[0]
> > +                    /------------------/
> > +                      x1  x2    x3  x4
> > +            (0,0)-> +o---o------o---o+  /
> > +                 y1 o    +------+    |  |
> > +                    |    |//////|    |  |
> > +                 y2 o+---+------+---+|  |
> > +                    ||///|//////|///||  | PixelArraySize[1]
> > +                 y3 o+---+------+---+|  |
> > +                    |    |//////|    |  |
> > +                 y4 o    +------+    |  |
> > +                    +----+------+----+  /
> > +
> > +        The property reports two rectangles
> > +
> > +                PixelArrayActiveAreas = ((x2, y1, x3 - x2 + 1, y4 - y1 + 1),
> > +                                         (x1, y2, x4 - x1 + 1, y3 - y2 + 1))
> > +
> > +        The first rectangle describes the maximum field-of-view of all image
> > +        formats in the 4:3 resolutions, while the second one describes the
> > +        maximum field-of-view for all image formats in the 16:9 resolutions.
> > +
> > +        \todo Rename this property to ActiveAreas once we have property
> > +              categories (i.e. Properties::PixelArray::ActiveAreas)
> > +
> >  ...
>
> --
> Kind regards,
>
> Sakari Ailus

