[libcamera-devel] Constraints on the sensor subdevice

Naushir Patuck naush at raspberrypi.com
Fri Feb 21 12:02:55 CET 2020


Hi Jacopo,

On Fri, 21 Feb 2020 at 09:37, Jacopo Mondi <jacopo at jmondi.org> wrote:
>
> Hi Naush,
>    sorry for late reply
>
> On Tue, Feb 18, 2020 at 05:18:19PM +0000, Naushir Patuck wrote:
> > Hi Jacopo,
>
> [snip]
>
> > > > > >
> > > > > > 3) Sensor subdevices must advertise the same frame sizes for all MBUS codes.
> > > > > > There are sensors that we currently use (e.g. ov5647) that advertise a
> > > > > > number of 10-bit modes (preferred) as well as a single 8-bit mode
> > > > > > (legacy use).  Other sensors may require lower bit depth formats for
> > > > > > fast frame rate modes as another example.  With these configurations,
> > > > > > this constraint seems too restrictive.
> > > > > >
> > > > > > From the code in camera_sensor.cpp, I can see why this is needed
> > > > > > (mbusCodes_ and sizes_ are associated std::vectors).  Switching to
> > > > > > using something like class ImageFormats (a std::map with the MBUS
> > > > > > code as the key) would seem to solve this problem.  What do you think?
> > > > > >
> > >
> > > I think you're right: returning a plain ImageFormats where each
> > > mbus code is associated with a set of available resolutions would
> > > expose the whole set of possible combinations to the CameraSensor class.
> > >
> > > We decided not to do so as the idea was to provide a better API for
> > > pipeline handlers to retrieve the right mbus/size combination,
> > > abstracting away from what the v4l2 subdevice provides. This is
> > > implemented in CameraSensor::getFormat() as you can see.
> > > Right now getFormat() is limited to working on a single set of
> > > resolutions (CameraSensor::sizes_) which, as you have noticed, is
> > > assumed at CameraSensor::init() time to be the same for all mbus codes.
> > >
> > > My personal feeling is that we should refrain from exposing the whole
> > > ImageFormats provided by the v4l2 subdevice to pipeline handlers, and
> > > still enforce the usage of getFormat() as the 'right' way to obtain the
> > > desired mbus/resolution. That method should be modified to work on
> > > multiple sets of sizes, one set per reported mbus code, and that would
> > > require changing the way resolutions are stored at init() time and
> > > inspected at getFormat() time.
> > >
> > > Do you think expanding getFormat() in the way described above would
> > > work for your use case?
> > >
> >
> > I understand the idea of abstracting getFormat() away from the
> > pipeline handlers.  However, what happens when a vendor has
> > constraints that conflict with the format chosen via getFormat()?  For
> > example, if some hardware cannot do upscaling, then the pipeline
> > handler must set up a sensor mode that is always larger than or equal
> > to the application's configured output size.  Not all constraints can
> > easily be captured in V4L2Subdevice::getFormat().  Could we do
> > something in CameraConfiguration::validate() to correct this?  From
> > what I can tell, this only validates the user/application configuration.
>
> The logic to select which format/resolution has to be requested from
> the camera sensor should not live in CameraSensor::getFormat() imho,
> because, as you rightfully point out here, the selection criteria
> depend mostly on the platform's capabilities and constraints (can the
> ISP do upscaling? Which image format should be input to the ISP to
> produce images for the still capture role? etc etc).
>
> All these constraints/requirements should indeed be handled by the
> pipeline handler, by matching the application requests (size, format and
> stream role) with the platform characteristics.
>

Agreed, this is our thinking as well.  We do exactly this in our
pipeline handler.  But does that mean that CameraSensor::getFormat()
is somewhat redundant?
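
To illustrate, the selection we do at validate() time boils down to
something like the following (a simplified sketch with made-up names,
not our actual code; it assumes the size list is sorted by area):

  #include <vector>

  #include <libcamera/geometry.h>

  using libcamera::Size;

  /*
   * Pick the smallest sensor size that still covers the requested
   * output, since the ISP cannot upscale.  Fall back to the largest
   * advertised size if nothing is big enough.
   */
  Size selectSensorSize(const std::vector<Size> &sensorSizes,
                        const Size &requested)
  {
          Size best(0, 0);

          for (const Size &size : sensorSizes) {
                  if (size.width < requested.width ||
                      size.height < requested.height)
                          continue;

                  if (best.width == 0 ||
                      size.width * size.height <
                      best.width * best.height)
                          best = size;
          }

          if (best.width == 0 && !sensorSizes.empty())
                  best = sensorSizes.back();

          return best;
  }

Anything that had to change as a result is then flagged back to the
application through the Adjusted status returned by validate().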

> The right place to do so is indeed the validation phase. I think the
> IPU3 pipeline provides a good example in that regard. validate() is
> not only intended to validate the application requested configuration,
> but also serves to adjust it to a valid version and to prepare the
> platform to support the requested use case by, as an example,
> selecting which format/size to request from the camera sensor to produce
> the requested stream configuration.
>

> >
> > Another thing to keep in mind is that for our pipeline, we do a
> > setFormat() on the CSI-2 image node as a V4L2DeviceFormat and never on
> > the subdevice directly.  The mode gets propagated by the CSI-2 kernel
> > driver to the subdevice to set the appropriate sensor mode.  This
> > means that our pipeline handler never actually deals with MBUS codes,
> > only with V4L2 4CCs, and the V4L2Subdevice::getFormat() call does not
> > get used by us.  Now I'm not sure whether this is the right or wrong
> > thing to do :)  I seem to recall talking with @Dave Stevenson about
> > this, and we concluded that this is probably ok?
>
> If I got you right here, on your platform the image format selected on
> the CSI-2 receiver gets propagated to the sensor subdevice by
> performing the 4cc->mbus translation logic in kernel space. This
> implies the sensor subdevice and the CSI-2 receiver drivers have to be
> tightly coupled, unless complex mbus selection heuristics are
> implemented in the CSI-2 receiver driver to support a wide range of
> sensor drivers.

Yes, this is correct.  The CSI-2 Rx and sensor subdevice are
inherently coupled on our platform.

There is a distinction to make here though.  A CSI-2 Rx could either
be directly connected to an ISP pipeline, or dump out image data into
memory to be read back by an ISP.  For the latter case, it might also
be possible for the CSI-2 block to do some simple format conversion,
or even rescaling.  On our platform, we only allow our CSI-2 Rx to
optionally do a data unpack operation (i.e. a format conversion).  So,
during probe, the kernel driver enumerates the subdevice formats and
generates a list of V4L2 formats to advertise - and these formats may
be duplicated to represent packed and unpacked options.
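
In concrete terms, our configure step reduces to something like the
sketch below (simplified, error handling trimmed; the exact field and
header names are from memory and only meant to illustrate the internal
V4L2VideoDevice/V4L2DeviceFormat API):

  #include <linux/videodev2.h>

  #include "v4l2_videodevice.h"

  using namespace libcamera;

  int configureUnicam(V4L2VideoDevice *unicam, const Size &size)
  {
          /*
           * Only a V4L2 pixel format and size are set on the CSI-2 Rx
           * video node; the kernel driver translates that into an mbus
           * code/resolution and applies it to the sensor subdevice.
           */
          V4L2DeviceFormat format = {};
          format.fourcc = V4L2_PIX_FMT_SBGGR10; /* unpacked 10-bit Bayer */
          format.size = size;

          int ret = unicam->setFormat(&format);
          if (ret < 0)
                  return ret;

          /*
           * setFormat() may have adjusted fourcc/size to whatever the
           * driver accepted, so the caller should compare the returned
           * format against what was requested.
           */
          return 0;
  }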

> This does not seem like standard behaviour to me, but I can't tell if upstream
> would be fine with it or not. From a libcamera perspective, you won't
> be able to use CameraSensor::getFormat() but you will have to select
> which format to set on the CSI-2 receiver taking into account the mbus
> selection logic performed by the same driver.
>
> Can I ask if you have any requirement that prevents you from decoupling that
> logic from kernel space and leaving the mbus code selection on the sensor
> to userspace?  It seems like a lot of complication in the driver that
> should instead live in your pipeline handler...
>

With the approach described above, there is no need to apply an MBUS
format on the sensor subdevice as well as a V4L2 format on the CSI-2 Rx
device, as they are inherently the same operation, if that makes sense
:)

For more complicated topologies (where the CSI-2 Rx could rescale for
example), I understand that we may want to decouple this, but for now,
MBUS codes are essentially hidden from our pipeline handler.
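
And to close the loop on the per-mbus-code sizes point from the top of
the thread, what I had in mind for CameraSensor was roughly the
following (again only a sketch, not a patch; names are illustrative):

  #include <map>
  #include <vector>

  #include <libcamera/geometry.h>

  /*
   * Instead of the associated mbusCodes_/sizes_ vectors, keep one size
   * list per media bus code, in the spirit of class ImageFormats.
   */
  std::map<unsigned int, std::vector<libcamera::Size>> formats_;

getFormat() would then iterate over the mbus codes the caller can
accept and search each code's own size list for the best match, rather
than assuming that every code shares a single sizes_ vector.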

> Thanks
>    j

Regards,
Naush

