[libcamera-devel] [RFC PATCH 0/2] Sensor mode hints

Jacopo Mondi jacopo at jmondi.org
Tue Sep 21 15:48:38 CEST 2021


Hi David,

On Thu, Sep 16, 2021 at 02:20:13PM +0100, David Plowman wrote:
> Hi everyone
>
> Here's a first attempt at functionality that allows applications to
> provide "hints" as to what kind of camera mode they want.
>
> 1. Bit Depths and Sizes
>
> So far I'm allowing hints about bit depth and the image sizes that can
> be read out of a sensor, and I've gathered these together into
> something I've called a SensorMode.
>
> I've added a SensorMode field to the CameraConfiguration so that
> applications can optionally put in there what they think they want,
> and also a CameraSensor::getSensorModes function that returns a list
> of the supported modes (raw formats only...).

This is more about the theory, but do you think it's a good idea
to allow applications to be written based on sensor-specific
parameters ? I mean, I understand it's totally plausible to write a
specialized application for, say, RPi + imx477. This assumes the
developer knows the sensor modes, the sensor configuration used to
produce them and the low-level details of the sensor. Most of that
information is assumed to be known by the developer also because
there's currently no way for V4L2 to expose how a mode has been
realized in the sensor, whether through binning, skipping or cropping.
It can probably be deduced by looking at the analogue crop rectangle
and the full pixel array size, but it's a guessing exercise.

Now, if we assume that a very specialized application knows the sensor
it deals with, what is the use case for writing a generic application
that instead inspects the (limited) information about the modes the
sensor produces and selects its favourite one generically enough ?
Wouldn't it be better to just assume the application knows precisely
what RAW format it wants and adds a StreamConfiguration for it ?
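
To make my point concrete, something along these lines (a rough sketch
only, untested; the pixel format and size are just examples for an
imx477-like sensor, the actual Bayer order and packing depend on the
sensor):

#include <errno.h>

#include <libcamera/libcamera.h>

using namespace libcamera;

/* Sketch only: ask for an explicit 10-bit RAW stream instead of hinting
 * at a sensor mode. 'camera' is assumed to be already acquired. */
int configureWithRaw10(Camera *camera)
{
	std::unique_ptr<CameraConfiguration> config =
		camera->generateConfiguration({ StreamRole::Viewfinder,
						StreamRole::Raw });
	if (!config)
		return -EINVAL;

	/* The raw stream is the second entry, matching the roles above. */
	StreamConfiguration &rawCfg = config->at(1);
	rawCfg.pixelFormat = formats::SRGGB10_CSI2P; /* 10-bit packed Bayer */
	rawCfg.size = Size(2028, 1520);              /* example size only */

	if (config->validate() == CameraConfiguration::Invalid)
		return -EINVAL;

	return camera->configure(config.get());
}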

>
> There are various ways an application could use this:
>
> * It might not care and would ignore the new field altogether. The
>   pipeline handler will stick to its current behaviour.
>
> * It might have some notion of what it wants, perhaps a larger bit
>   depth, and/or a range of sizes. It can fill some or all of those
>   into the SensorMode and the pipeline handler should respect it.
>
> * Or it could query the CameraSensor for its list of SensorModes and
>   then sift through them looking for the one that it likes best.

This is the part I fail to fully grasp. Could you summarize which
parameters would drive the mode selection policy in the
application ?

>
> 2. Field of View and Framerates
>
> The SensorMode should probably include FoV and framerate information

FoV is problematic, I understand. We have a property that describes
the pixel array size, and I understand comparing the RAW output size
is not enough, as the same resolution can theoretically be obtained
by cropping or by subsampling. I would not be opposed to reporting it
somehow, as it might represent an important selection criterion.

Duration, on the other hand, can't it be read from the limits of
controls::FrameDurationLimits ? I do expect that control to be
populated with the sensor's durations (see
https://git.linuxtv.org/libcamera.git/tree/src/ipa/ipu3/ipu3.cpp#n256)

Sure, applications would have to try all the supported RAW modes,
configure the camera with each of them, and read back the control value.
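
Something along these lines (again just a sketch, untested; it assumes
'config' was generated with the Raw role only, that 'camera' is already
acquired, and that the usual libcamera headers and namespace are in
scope):

	/* Enumerate the supported RAW frame sizes, configure the camera
	 * with each of them and read back the frame duration limits. */
	StreamConfiguration &rawCfg = config->at(0);
	const PixelFormat rawFormat = rawCfg.pixelFormat;

	for (const Size &size : rawCfg.formats().sizes(rawFormat)) {
		rawCfg.size = size;
		if (config->validate() == CameraConfiguration::Invalid)
			continue;

		if (camera->configure(config.get()))
			continue;

		const ControlInfoMap &ctrls = camera->controls();
		auto it = ctrls.find(&controls::FrameDurationLimits);
		if (it == ctrls.end())
			continue;

		/* Durations are expressed in microseconds. */
		int64_t minDuration = it->second.min().get<int64_t>();
		int64_t maxDuration = it->second.max().get<int64_t>();

		/* ... compare minDuration/maxDuration and pick a mode ... */
	}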

> so that applications can make intelligent choices automatically.
> However, this is a bit trickier for various reasons so I've left it
> out. There could be a later phase of work that adds these.
>
> Even without this, however, the implementation gets us out of our
> rather critical hole where we simply can't get 10-bit modes. It also

Help me out here: why can't you select a RAW format with 10bpp ?

> provides a better alternative to the current nasty practice of
> requesting a raw stream specifically to bodge the camera mode
> selection, even when the raw stream is not actually wanted!

I see it the other way around actually :) Tying applications to sensor
modes goes in the opposite direction of 'abstracting camera
implementation details from applications' (tm)

It also goes in the opposite direction of the long-term dream of having
sensor drivers not tied to a few fixed modes just because that's what
vendors gave us to start with, but I get this is a bit far-fetched.

Of course the line between abstraction and control is, as usual, drawn
in the sand, and I might be too concerned about exposing sensor details
in our API, even if in an opt-in way.

Thanks
   j

>
> 3. There are 2 commits here
>
> The first adds the SensorMode class, puts it into the
> CameraConfiguration, and allows the supported modes to be listed from
> the CameraSensor. (All the non-Pi stuff.)
>
> The second commit updates our mode selection code to select according
> to the hinted SensorMode (figuring out defaults if it was empty). But
> it essentially works just the same, if in a slightly more generic way.
>
> The code here is fully functional and seems to work fine. Would other
> pipeline handlers be able to adapt to the idea of a "hinted
> SensorMode" as easily?
>
>
> As always, I'm looking forward to people's thoughts!
>
> Thanks
> David
>
> David Plowman (2):
>   libcamera: Add SensorMode class
>   libcamera: pipeline_handler: raspberrypi: Handle the new SensorMode
>     hint
>
>  include/libcamera/camera.h                    |   3 +
>  include/libcamera/internal/camera_sensor.h    |   4 +
>  include/libcamera/meson.build                 |   1 +
>  include/libcamera/sensor_mode.h               |  50 +++++++++
>  src/libcamera/camera_sensor.cpp               |  15 +++
>  src/libcamera/meson.build                     |   1 +
>  .../pipeline/raspberrypi/raspberrypi.cpp      | 105 +++++++++++++-----
>  src/libcamera/sensor_mode.cpp                 |  60 ++++++++++
>  8 files changed, 212 insertions(+), 27 deletions(-)
>  create mode 100644 include/libcamera/sensor_mode.h
>  create mode 100644 src/libcamera/sensor_mode.cpp
>
> --
> 2.20.1
>