[libcamera-devel] [PATCH v3 1/6] libcamera: properties: Define pixel array properties

Kieran Bingham kieran.bingham at ideasonboard.com
Tue Mar 24 14:59:43 CET 2020


Hi Jacopo,

On 09/03/2020 18:04, Jacopo Mondi wrote:
> Add definition of pixel array related properties.
> 
> Signed-off-by: Jacopo Mondi <jacopo at jmondi.org>
> 
> ---
> v2 -> v3:
> - Pluralize PixelAreaSizes
> - Use the new 'size' property in place of 'compound'.
>   I tried to set meaningful values but it's not easy to get them right..
> ---
>  src/libcamera/property_ids.yaml | 177 ++++++++++++++++++++++++++++++++
>  1 file changed, 177 insertions(+)
> 
> diff --git a/src/libcamera/property_ids.yaml b/src/libcamera/property_ids.yaml
> index ce627fa042ba..4cecb5ad9ac3 100644
> --- a/src/libcamera/property_ids.yaml
> +++ b/src/libcamera/property_ids.yaml
> @@ -386,4 +386,181 @@ controls:
>                                |                    |
>                                |                    |
>                                +--------------------+
> +
> +  - PixelArraySize:
> +      type: float

Do we really need a float to store millimetres?

Hrm ... actually yes - a quick google for "IMX219 sensor size", for
instance, returns 4.60 mm (diagonal) ... :-)



> +      size: [2]
> +      description: |
> +        The physical sizes of the pixel array (width and height), in
> +        millimeters.
> +

Looking at the IMX219 product brief/datasheet:

Device Structure
* CMOS image sensor
* Image size : Diagonal 4.60 mm (Type 1/4.0)
* Total number of pixels : 3296 (H) × 2512 (V) approx. 8.28 M pixels
* Number of effective pixels : 3296 (H) × 2480 (V) approx. 8.17 M pixels
* Number of active pixels : 3280 (H) × 2464 (V) approx. 8.08 M pixels
* Chip size : 5.095 mm (H) × 4.930 mm (V) (w/ Scribe)
* Unit cell size : 1.12 μm (H) × 1.12 μm (V)
* Substrate material : Silicon

This states an Image Size (as a diagonal ...). Presumably someone will
have to convert that to a width/height appropriately?
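
As a sanity check, the width/height can be recovered from the unit cell
size and the active pixel counts (assuming the quoted diagonal refers
to the active pixel area - my assumption, it's not stated in the brief):

    3280 x 1.12 um = 3673.6 um  (width)
    2464 x 1.12 um = 2759.7 um  (height)
    sqrt(3673.6^2 + 2759.7^2) = ~4594.7 um = ~4.60 mm  (diagonal)

So presumably PixelArraySize would be filled from the cell pitch and
the pixel counts rather than from the quoted diagonal.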

I assume the chip-size is not relevant here.

For instance, the OV5647 specifies:

* image area: 3673.6 um x 2738.4 um,

as well as die-dimensions.





> +  - PixelArrayBounds:
> +      type: int32_t
> +      size: [2]
> +      description: |
> +        The camera sensor pixel array bounding rectangle vertical and
> +        horizontal sizes.

That reads like a string of unassociated words.

Perhaps some hyphenation would help?

The camera-sensor pixel-array bounding-rectangle vertical and horizontal
sizes.


Or:

The vertical and horizontal sizes for the maximum bounding rectangle of
the pixel array on the camera sensor.

> +
> +        For image sensors with a rectangular pixel array the sizes described by

/array/array,/

> +        this property are the same as the PixelAreaSize property size.
> +
> +        For image sensors with more complex pixel array displacements (such as
> +        cross-shaped pixel arrays, non symmetrical pixel arrays etc) this
> +        property represents the bounding rectangle in which all pixel array
> +        dimensions are inscribed into.
> +
> +        In example, the bounding rectangle sizes for image sensor with a

/In/For/

> +        cross-shaped pixel array is described as
> +
> +
> +                     PixelArrayBound(0) = width
> +                    /-----------------/
> +
> +            (0,0)-> +----+------+----+  /
> +                    |    |//////|    |  |
> +                    +----+//////+----+  |
> +                    |////////////////|  | PixelArrayBound(1) = height
> +                    +----+//////+----+  |
> +                    |    |//////|    |  |
> +                    +----+------+----+  /
> +                            |
> +                             -> Cross-shaped pixel area
> +
> +  - PixelArrays:
> +      type: int32_t
> +      size: [8]
> +      description: |
> +        The sensor pixel array rectangles, relative to the rectangle described
> +        by the PixelArrayBounds property.
> +
> +        This property describes an arbitrary number of (likely overlapping)
> +        rectangles, representing the pixel array areas the sensor is composed
> +        of.
> +
> +        Each rectangle is defined by its displacement from pixel (0, 0) of
> +        the bounding rectangle described by the PixelArrayBound property.
> +
> +        For image sensors with a rectangular pixel array, a single rectangle
> +        is required. For sensors with more complex pixel array displacements
> +        multiple rectangles shall be specified, ordered from the tallest to the

Should this state 'multiple overlapping rectangles' ?

> +        shorter one.
> +
> +        For each rectangle, this property reports the full pixel array size,
> +        including non-active pixels, black level calibration pixels etc.
> +
> +        In example, a simple sensor with a rectangular pixel array is described
> +        as
> +
> +                     PixelArrayBound(0) = width
> +                    /-----------------/
> +                      x1          x2
> +            (0,0)-> +-o------------o-+  /
> +                 y1 o +------------+ |  |
> +                    | |////////////| |  |
> +                    | |////////////| |  | PixelArrayBound(1) = height
> +                    | |////////////| |  |
> +                 y2 o +------------+ |  |
> +                    +----------------+  /
> +
> +                 PixelArray = (x1, y1, (x2 - x1), (y2 - y1))

So for a single 'described rect', this is:

struct rect {
	int32_t x;
	int32_t y;

	int32_t width;
	int32_t height;
};

What are the remaining 4 values? 0?

(Is there a means to recognise when or when not to parse the second 'rect'?)
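
To make the question concrete, this is roughly how I'd expect a
consumer to have to unpack the property - a hypothetical sketch,
assuming unused trailing entries are zero-filled, which is exactly the
ambiguity I'm asking about:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Rect {
	int32_t x, y, width, height;
};

/* Hypothetical helper: unpack the flat PixelArrays payload (size: [8])
 * into rectangles. Assumes a zero-sized entry marks an unused slot. */
static std::vector<Rect> parsePixelArrays(const int32_t *values, size_t count)
{
	std::vector<Rect> rects;

	for (size_t i = 0; i + 3 < count; i += 4) {
		Rect r{ values[i], values[i + 1], values[i + 2], values[i + 3] };

		if (r.width == 0 || r.height == 0)
			break;

		rects.push_back(r);
	}

	return rects;
}

If zero-filling (or something equivalent) is the intent, it would be
worth spelling it out in the description.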


> +
> +        A more complex sensor, with a cross shaped pixel array displacement
> +        is described with 2 rectangles, with the vertical rectangle
> +        described first
> +
> +                     PixelArrayBound(0) = width
> +                    /-----------------/
> +                    x1  x2     x3  x4 W
> +            (0,0)-> +o---o------o---o+  /
> +                    |    |//////|    |  |
> +                 y1 o+---+------+---+|  |
> +                    ||///|//////|///||  | PixelArrayBound(1) = height
> +                 y2 o+---+------+---+|  |
> +                    |    |//////|    |  |
> +                 H  +----+------+----+  /
> +
> +
> +                PixelArray = ( (x2, 0, (x3 - x2), H),
> +                               (x1, y1, (x4 - x1), (y2 - y1))

Wouldn't it make sense to show y1, y2, y3, y4 as well, and use y4 - y1
in place of H? Or are you trying to show that W and H are not required
parameters, and that the rectangles do not have to utilise the full
width or height of the sensor?


But why would the width of the sensor be larger than the width of the
largest horizontal rectangle ...


So we are really expressing:

> +                     PixelArrayBound(0) = width
> +                    /-----------------/
> +                    x1  x2     x3  x4 W
> +            (0,0)-> +o---o------o---o+  /
> +                    |    |//////|    |  |
> +                 y1 o    +------+    |  |
> +                    |    |//////|    |  | PixelArrayBound(1) = height
> +                 y2 o    +------+    |  |
> +                    |    |//////|    |  |
> +                 H  +----+------+----+  /

followed by:

> +                     PixelArrayBound(0) = width
> +                    /-----------------/
> +                    x1  x2     x3  x4 W
> +            (0,0)-> +o---o------o---o+  /
> +                    |                |  |
> +                 y1 o+---+------+---+|  |
> +                    ||///|//////|///||  | PixelArrayBound(1) = height
> +                 y2 o+---+------+---+|  |
> +                    |                |  |
> +                 H  +----+------+----+  /


So really, PixelArrays describes up to two rectangles...

What happens if the shape is more complex? Why do we support a cross
shaped pixel array, but not a hexagonal pixel array?

(perhaps cross-shaped pixel arrays are a standard thing, I don't know,
or maybe it's just because we only support a maximum of two rects...,
and a polygon would be more bytes to encode...)


Querying this, it seems it's for advanced sensors whose layouts support
different aspect ratios. It would be good to express that here
somewhere, but I'm not sure how yet.


> +
> +  - ActiveAreaSizes:
> +      type: int32_t
> +      size: [8]
> +      description: |
> +        The sensor active pixel area sizes, represented as rectangles

/sensor/sensor's/ (the active pixel area belongs to the sensor)


> +        inscribed in the ones described by the PixelArrays property.
> +
> +        One ActiveAreaSize rectangle per each rectangle described in the

/per/for/

> +        PixelArrays property is required. As a consequence, the two properties
> +        shall transport the same number of elements.
> +
> +        The ActiveAreaSize rectangles represent the maximum image sizes the
> +        sensor can produce.
> +
> +  - BayerFilterArrangement:
> +      type: int32_t
> +      description: |
> +        The pixel array color filter displacement.
> +
> +        This property describes the arrangement and readout sequence of the
> +        three RGB color components of the sensor's Bayer Color Filter Array
> +        (CFA).
> +
> +        Color filters are usually displaced in line-alternating fashion on the
> +        sensor pixel array. In example, one line might be composed of Red-Green
> +        while the successive is composed of Blue-Green color information.
> +
> +        The value of this property represent the arrangement of color filters

/represent/represents/

> +        in the top-left 2x2 pixel square.
> +
> +        In example, for a sensor with the following color filter displacement

/In example,/For example,/

> +
> +                 (0, 0)               (max-col)
> +           +---+    +--------------...---+
> +           |B|G|<---|B|G|B|G|B|G|B|...B|G|
> +           |G|R|<---|G|R|G|R|G|R|G|...G|R|
> +           +---+    |B|G|B|G|B|G|B|...B|G|
> +                    ...                  ..
> +                    ...                  ..
> +                    |G|R|G|R|G|R|G|...G|R|
> +                    |B|G|B|G|B|G|B|...B|G|   (max-lines)
> +                    +--------------...---+
> +
> +        The filer arrangement is represented by the BGGR value, which correspond

/filer/filter/
/correspond/corresponds/


> +        to the pixel readout sequence in line interleaved mode.
> +
> +      enum:
> +        - name: BayerFilterRGGB

Do we need to prefix these enum values (BayerFilerXXXX) for namespacing?
Or are they already namespaced by being specific to BayerFilterArrangement?

I.e. can we arrange that these become:

BayerFilterArrangement::RGGB ?

Rather than:

BayerFilterArrangement::BayerFilterRGGB ?
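
For illustration only - assuming the generator could emit a scoped enum
rather than prefixed constants, something like:

/* Hypothetical generated code, scoping the values to the property: */
enum class BayerFilterArrangement : int32_t {
	RGGB = 0,
	GRBG = 1,
	GBRG = 2,
	BGGR = 3,
	NonStandard = 4,
};

which would avoid repeating the BayerFilter prefix at every call site.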

> +          value: 0
> +          description: |
> +            Color filter array displacement is Red-Green/Green-Blue
> +
> +        - name: BayerFilterGRBG
> +          value: 1
> +          description: |
> +            Color filter array displacement is Green-Red/Blue-Green
> +
> +        - name: BayerFilterGBRG
> +          value: 2
> +          description: |
> +            Color filter array displacement is Green-Blue/Red-Green
> +
> +        - name: BayerFilterBGGR
> +          value: 3
> +          description: |
> +            Color filter array displacement is Blue-Green/Green-Red
> +
> +        - name: BayerFilterNonStandard
> +          value: 4
> +          description: |
> +            The pixel array color filter does not use the standard Bayer RGB
> +            color model
> +
> +  - ISOSensitivityRange:
> +      type: int32_t
> +      size: [2]
> +      description: |
> +        The range of supported ISO sensitivities, as documented by the
> +        ISO 12232:2006 standard

ISO 12232:2006 appears to have been superseded by ISO 12232:2019. Do we
need to specify versions? Or should we state 'or later' .... or ... ?

 (I see that "Speed ratings greater than 10000 have finally been defined
in ISO 12232:2019"...)


This is 2 int32_t entries; presumably one is the minimum and the other
the maximum. I can guess which order they are in - but does the order
need to be specified explicitly? (maybe not, but I ask as devil's
advocate at least)
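
For what it's worth, this is how I'd read it - purely an assumption on
my part about the ordering:

	/* Hypothetical consumer code, assuming the two entries are
	 * ordered [min, max]. */
	int32_t isoMin = isoSensitivityRange[0];
	int32_t isoMax = isoSensitivityRange[1];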


> +
>  ...
> 

-- 
Regards
--
Kieran

