[libcamera-devel] Constraints on the sensor subdevice

Jacopo Mondi jacopo at jmondi.org
Fri Feb 21 10:39:58 CET 2020


Hi Naush,
   sorry for the late reply

On Tue, Feb 18, 2020 at 05:18:19PM +0000, Naushir Patuck wrote:
> Hi Jacopo,

[snip]

> > > > >
> > > > > 3) Sensor subdevices must advertise the same frame sizes for all MBUS codes.
> > > > > There are sensors that we currently use (e.g. ov5647) that advertise a
> > > > > number of 10-bit modes (preferred) as well as a single 8-bit mode
> > > > > (legacy use).  As another example, other sensors may require lower
> > > > > bit-depth formats for fast frame rate modes.  With these
> > > > > configurations, this constraint seems too restrictive.
> > > > >
> > > > > From the code in camera_sensor.cpp, I can see why this is needed
> > > > > (mbusCodes_ and sizes_ are associated std::vectors).  Switching to
> > > > > something like class ImageFormats (a std::map with the MBUS code as
> > > > > key) would seem to solve this problem.  What do you think?
> > > > >
> >
> > I think you're right: returning a plain ImageFormats where each
> > mbus code is associated with a set of available resolutions would
> > expose the whole set of possible combinations to the CameraSensor
> > class.
> >
> > We decided not to do so, as the idea was to provide a better API for
> > pipeline handlers to retrieve the right mbus/size combination,
> > abstracting away from what the v4l2 subdevice provides. This is
> > implemented in CameraSensor::getFormat(), as you can see.
> > Right now getFormat() is limited to working on a single set of
> > resolutions (CameraSensor::sizes_) which, as you have noticed, is
> > assumed at CameraSensor::init() time to be the same for all mbus
> > codes.
> >
> > My personal feeling is that we should refrain from exposing the
> > whole ImageFormats provided by the v4l2 subdevice to pipeline
> > handlers, and still enforce the usage of getFormat() as the 'right'
> > way to obtain the desired mbus/resolution. That method should be
> > modified to work on multiple sets of sizes, one per reported mbus
> > code, which would require changing the way resolutions are stored
> > at init() time and inspected at getFormat() time.
> >
> > Do you think expanding getFormat() in the way described above would
> > work for your use case?
> >
>
> I understand the idea of abstracting getFormat() away from the
> pipeline handlers.  However, what happens when a vendor has
> constraints that conflict with the format chosen via getFormat()?
> For example, if some hardware cannot do upscaling, then the pipeline
> handler must set up a sensor mode that is always larger than or equal
> to the application-configured output size.  Not all constraints may
> be easily captured in CameraSensor::getFormat().  Could we do
> something in CameraConfiguration::validate() to correct this?  From
> what I can tell, this only validates the user/application
> configuration.

The logic to select which format/resolution to request from the camera
sensor should not live in CameraSensor::getFormat() imho, because, as
you rightfully point out, the selection criteria depend mostly on the
platform's capabilities and constraints (can the ISP do upscaling?
Which image format should feed the ISP to produce images for the still
capture role? etc.).

All these constraints/requirements should indeed be handled by the
pipeline handler, by matching the application's requests (size, format
and stream role) against the platform's characteristics.
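
To make the no-upscaling example concrete, something along these lines
could live in the pipeline handler. This is only a sketch built on
simplified stand-in types (Size, SensorFormat and selectSensorFormat()
are made up for illustration, not the actual libcamera API):

  #include <cstdint>
  #include <map>
  #include <optional>
  #include <vector>

  /* Simplified stand-in types, for illustration only. */
  struct Size {
          unsigned int width;
          unsigned int height;
  };

  struct SensorFormat {
          uint32_t mbusCode;
          Size size;
  };

  /*
   * Pick, among the sizes advertised for each mbus code, the smallest
   * one that is at least as large as the requested output size, so
   * that a platform unable to upscale never has to enlarge the image.
   */
  std::optional<SensorFormat>
  selectSensorFormat(const std::map<uint32_t, std::vector<Size>> &formats,
                     const Size &request)
  {
          std::optional<SensorFormat> best;

          for (const auto &entry : formats) {
                  for (const Size &size : entry.second) {
                          if (size.width < request.width ||
                              size.height < request.height)
                                  continue;

                          if (!best || size.width * size.height <
                                       best->size.width * best->size.height)
                                  best = SensorFormat{ entry.first, size };
                  }
          }

          return best;
  }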

The right place for this matching is indeed the validation phase. I
think the IPU3 pipeline handler provides a good example in that
regard. validate() is not only intended to validate the
application-requested configuration, but also serves to adjust it to a
valid version and to prepare the platform for the requested use case,
for example by selecting which format/size to request from the camera
sensor to produce the requested stream configuration.
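
In that spirit, the shape of such a validate() could roughly be as
follows, loosely modelled on what the IPU3 pipeline handler does. It
reuses the hypothetical selectSensorFormat() from the sketch above
(plus <algorithm> for std::min()/std::max()); Status, StreamConfig and
validateConfig() are again illustrative stand-ins, not the real code:

  #include <algorithm>

  /* Illustrative stand-ins for the pipeline-specific configuration. */
  enum class Status { Valid, Adjusted, Invalid };

  struct StreamConfig {
          Size size;
  };

  /*
   * Validate and, where needed, adjust the requested streams, then
   * record the sensor mode required to produce them.  The actual
   * setFormat() on the devices would be issued later, at configure()
   * time.
   */
  Status validateConfig(std::vector<StreamConfig> &config,
                        const std::map<uint32_t, std::vector<Size>> &formats,
                        const Size &maxOutput,
                        std::optional<SensorFormat> &sensorFormat)
  {
          if (config.empty())
                  return Status::Invalid;

          Status status = Status::Valid;
          Size largest{ 0, 0 };

          for (StreamConfig &cfg : config) {
                  Size requested = cfg.size;

                  /* Clamp each stream to what the platform can output. */
                  cfg.size.width = std::min(cfg.size.width, maxOutput.width);
                  cfg.size.height = std::min(cfg.size.height, maxOutput.height);
                  if (cfg.size.width != requested.width ||
                      cfg.size.height != requested.height)
                          status = Status::Adjusted;

                  largest.width = std::max(largest.width, cfg.size.width);
                  largest.height = std::max(largest.height, cfg.size.height);
          }

          /* Without upscaling, one sensor mode must cover every stream. */
          sensorFormat = selectSensorFormat(formats, largest);
          if (!sensorFormat)
                  return Status::Invalid;

          return status;
  }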

>
> Another thing to keep in mind is that for our pipeline, we do a
> setFormat() on the CSI-2 image node with a V4L2DeviceFormat and never
> on the subdevice directly.  The mode gets propagated by the CSI-2
> kernel driver to the subdevice to set the appropriate sensor mode.
> This means that our pipeline handler never actually deals with MBUS
> codes, only with V4L2 4CCs, and the V4L2Subdevice::getFormat() call
> does not get used by us.  Now, I'm not sure whether this is the right
> or wrong thing to do :)  I seem to recall talking with Dave Stevenson
> about this, and we concluded that it is probably ok?

If I got you right, on your platform the image format selected on the
CSI-2 receiver gets propagated to the sensor subdevice, with the
4cc->mbus translation logic performed in kernel space. This implies
that the sensor and CSI-2 receiver drivers have to be tightly coupled,
unless complex mbus selection heuristics are implemented in the CSI-2
receiver driver to support a wide range of sensor drivers.

This doesn't seem like standard behaviour to me, but I can't tell
whether upstream would be fine with it or not. From a libcamera
perspective, you won't be able to use CameraSensor::getFormat();
instead, you will have to select which format to set on the CSI-2
receiver while taking into account the mbus selection logic performed
by its driver.

Can I ask if you have any requirement that prevents you from removing
that coupling in kernel space and leaving the mbus code selection on
the sensor to userspace?  It seems like a lot of complication in the
driver that should instead live in your pipeline handler; see the
sketch below for the kind of flow I have in mind...
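
With the coupling removed, the pipeline handler would, at configure()
time, program the sensor subdevice first and the CSI-2 receiver's
video node second. Schematically, reusing the stand-in types from the
earlier sketches, with the two helpers below as hypothetical
placeholders for the real V4L2Subdevice::setFormat() and
V4L2VideoDevice::setFormat() calls:

  /*
   * Hypothetical stand-ins for V4L2Subdevice::setFormat() and
   * V4L2VideoDevice::setFormat(); the real implementations end up
   * issuing VIDIOC_SUBDEV_S_FMT and VIDIOC_S_FMT respectively.
   */
  int setSensorMbusFormat(uint32_t mbusCode, const Size &size)
  {
          return 0; /* stub */
  }

  int setReceiverPixelFormat(uint32_t fourcc, const Size &size)
  {
          return 0; /* stub */
  }

  int configureSensorPath(const SensorFormat &sensor, uint32_t fourcc)
  {
          /* 1. Select the mbus code/size on the sensor subdevice. */
          int ret = setSensorMbusFormat(sensor.mbusCode, sensor.size);
          if (ret < 0)
                  return ret;

          /*
           * 2. Set the matching pixel format on the CSI-2 receiver's
           *    video node; the receiver driver then needs no 4cc->mbus
           *    heuristics of its own.
           */
          return setReceiverPixelFormat(fourcc, sensor.size);
  }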

Thanks
   j