[libcamera-devel] Constraints on the sensor subdevice

Naushir Patuck naush at raspberrypi.com
Fri Feb 21 14:58:27 CET 2020


Hi Jacopo,

On Fri, 21 Feb 2020 at 11:55, Jacopo Mondi <jacopo at jmondi.org> wrote:
>
> Hi Naush,

 <Snip>

> > Agreed, this is our thinking as well.  We do exactly this in our
> > pipeline handler.  But does that mean that CameraSensor::getFormat()
> > is somewhat redundant?
> >
>
> Well, not really, as there is no guarantee on what the sensor can
> actually produce. CameraSensor::getFormat() is designed to provide the
> closest available approximation to the format/resolution the pipeline
> handler requested, and to avoid each pipeline handler implementing
> that logic itself in a slightly different way.
>
> The alternative is for pipeline handlers to deal with that by
> themselves and inspect all the possible format/size combinations, but
> we really would like to provide a better API than what the raw V4L2
> API would give.
>
> Do you think you would need to implement that logic by yourselves in
> the pipeline handler ?

I still struggle to see how getFormat() could abstract out all the
constraints of the hardware platform.  It seems to me the pipeline
handler might be better suited to make this decision based on a list
of all available sensor modes and formats, particularly if the CSI-2
Rx is capable of things like format conversion and rescaling.
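To make that concrete, here is a rough sketch of the kind of selection logic I mean the pipeline handler could do itself, given the full mode list.  The names (SensorMode, pickClosestMode) are made up for illustration, not libcamera API, and scoring by pixel-area difference is just one possible heuristic:

```cpp
#include <cstdint>
#include <vector>

/*
 * Hypothetical sketch: pick, from the full list of sensor modes, the
 * (mbus code, size) pair whose resolution is closest to the request.
 */
struct SensorMode {
	uint32_t mbusCode;	/* media bus code of the mode */
	unsigned int width;
	unsigned int height;
};

static const SensorMode *pickClosestMode(const std::vector<SensorMode> &modes,
					 unsigned int width, unsigned int height)
{
	const SensorMode *best = nullptr;
	uint64_t bestScore = UINT64_MAX;

	for (const SensorMode &m : modes) {
		/* Score by absolute difference in pixel area. */
		uint64_t area = static_cast<uint64_t>(m.width) * m.height;
		uint64_t want = static_cast<uint64_t>(width) * height;
		uint64_t score = area > want ? area - want : want - area;
		if (score < bestScore) {
			bestScore = score;
			best = &m;
		}
	}

	return best;
}
```

The point being that once the handler has the full list, a platform-specific scoring function can fold in whatever the CSI-2 Rx can additionally do (unpacking, rescaling), which a generic getFormat() cannot know about.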

> I would really try to make CameraSensor::getFormat()
> aware of the fact that different mbus codes could be associated with
> different resolutions (which was the issue this thread actually
> started from :D).

Yes, that was what originally began this discussion :)  We hit a hard
failure when this is the case.

>
> Anyway, scratch that, seems like you have no camera sensor at all :)
>
>
> > > The right place to do so is indeed the validation phase. I think the
> > > IPU3 pipeline provides a good example in that regard. validate() is
> > > not only intended to validate the application requested configuration,
> > > but also serves to adjust it to a valid version and to prepare the
> > > platform to support the requested use case by, as an example,
> > > selecting which format/size to request to the camera sensor to produce
> > > the requested streams configuration.
> > >
> >
> > > >
> > > > Another thing to keep in mind is that for our pipeline, we do a
> > > > setFormat() on the CSI2 image node as a V4L2DeviceFormat and never on
> > > > the subdevice directly.  The mode gets propagated by the CSI2 kernel
> > > > driver to the subdevice to set the appropriate sensor mode.  This
> > > > means that our pipeline handler never actually deals with MBUS codes,
> > > > rather only V4L2 4CCs and the V4L2Subdevice::getFormat() call does not
> > > > get used by us.  Now I'm not sure that this is the right or wrong
> > > > thing to do :)  I seem to recall talking with @Dave Stevenson about
> > > > this, and we concluded that this is probably ok?
> > >
> > > If I got you right here, on your platform the image format selected on
> > > the CSI-2 receiver gets propagated to the sensor subdevice by
> > > performing the 4cc->mbus translation logic in kernel space. This
> > > implies the sensor subdevice and the CSI-2 receiver drivers have to be
> > > tightly coupled, unless complex mbus selection heuristics are
> > > implemented in the CSI-2 receiver driver to support a wide range of
> > > sensor drivers.
> >
> > Yes, this is correct.  The CSI-2 Rx and sensor subdevice are
> > inherently coupled on our platform.
> >
>
> Oh :(
>
> > There is a distinction to make here though.  A CSI-2 Rx could either
> > be directly connected to an ISP pipeline, or dump out image data into
> > memory to be read back by an ISP.  For the latter case, it might also
> > be possible for the CSI-2 block to do some simple format conversion,
> > or even rescaling.  In our platform, we only allow our CSI-2 RX to
> > optionally do a data unpack operation (so a format conversion).  So,
>
> CSI-2 receivers could as well perform format conversion/scaling when
> used with an inline ISP (CSI-2 rx feeding data directly to the
> ISP without going through memory). To me this sounds like a separate
> topic as where the ISP is fed from is not directly dependent on the
> ability of the CSI-2 receiver to perform image manipulation.
>
> > during probe, the kernel driver enumerates the subdevice formats, and
> > generates a list of V4L formats to advertise - and these formats may
> > be duplicated to represent packed and unpacked options.
> >
>
> This still sounds like userspace duty to me. I understand the list of
> 4cc formats available on the receiver might be constructed by
> inspecting the list of available subdevice mbus codes, and that's imo
> a fair thing to do in kernel space. But format selection should be
> delegated to userspace, and eventually fail at pipeline start time if the
> mbus/4cc combination programmed on the (sensor source) -> (csi2 sink)
> link is not supported.
>

The logic to select the format still lives in userland (in code very
similar to CameraSensor::getFormat()).  The only thing the CSI-2 kernel
driver does is effectively translate the MBUS code to a V4L2 4CC.
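Purely as an illustration of that translation (the real mapping lives in the kernel driver; the mbus code and 4CCs below are example values, showing the packed/unpacked duplication mentioned earlier):

```cpp
#include <cstdint>
#include <map>

/* Build a V4L2 4CC from four characters, as v4l2_fourcc() does. */
constexpr uint32_t fourcc(char a, char b, char c, char d)
{
	return static_cast<uint32_t>(a) | (static_cast<uint32_t>(b) << 8) |
	       (static_cast<uint32_t>(c) << 16) | (static_cast<uint32_t>(d) << 24);
}

/*
 * Example only: one sensor mbus code (here MEDIA_BUS_FMT_SBGGR10_1X10,
 * 0x3007) advertised as two 4CCs on the CSI-2 Rx video node, a packed
 * and an unpacked variant.
 */
static const std::multimap<uint32_t, uint32_t> mbusToFourcc = {
	{ 0x3007, fourcc('p', 'B', 'A', 'A') },	/* V4L2_PIX_FMT_SBGGR10P */
	{ 0x3007, fourcc('B', 'G', '1', '0') },	/* V4L2_PIX_FMT_SBGGR10 */
};
```

So from the pipeline handler's point of view the set of 4CCs on the video node is a (possibly duplicated) image of the subdevice mbus list.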

> Anyway, not something to discuss with me only as I can only provide my
> personal interpretation of the issue :) I'm sure on your way to upstream
> you'll have better feedback than this.
>

Yes, of course we do want to follow the community's guidelines, so if
there is pushback, we will have to re-evaluate our approach here.

> > > This seems not standard behaviour to me, but I can't tell if upstream
> > > would be fine with it or not. From a libcamera perspective, you won't
> > > be able to use CameraSensor::getFormat() but you will have to select
> > > which format to set on the CSI-2 receiver taking into account the mbus
> > > selection logic performed by the same driver.
> > >
> > > Can I ask if you have any requirement that prevents you from decoupling that
> > > logic in kernel space, and leave the mbus code selection on the sensor
> > > to userspace ? It seems a lot of complication in the driver that
> > > should instead live in your pipeline handler...
> > >
> >
> > With our approach described above, there is no need to apply a MBUS
> > format on the sensor subdevice as well as a V4L format on the CSI-2 Rx
> > device as they are inherently the same operation, if that makes sense
> > :)
>
> It does, if I got your driver architecture right.
>
> Back to our original topics:
>
> 1) Complex topologies handling: I initially stated this should be
> delegated to a specialized CameraSensor subclass that handles all the
> required subdevices exposed by the sensor driver. As the interesting
> part for you here is the additional embedded data subdevice, what
> operations do you envision happening on it ? Does format configuration
> happen independently from the format configuration on the video data
> subdevice, or do you have any kernel space logic to handle that too ?
> I'm thinking that you would probably be better off interfacing with
> V4L2Subdevice directly, even if that would require you to duplicate a lot
> of what we do in the CameraSensor class for properties handling in your
> pipeline handler, and I would really like that not to happen :)
>

The CSI-2 Rx driver advertises two nodes, image and embedded data.
Format configuration for each of them happens as an independent
operation from userland through the standard V4L2 API.
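In other words, something like the following minimal model (illustrative names only, not the real driver interface): two video nodes, each taking its own format, as userspace would do with VIDIOC_S_FMT on two separate /dev/videoN nodes.

```cpp
#include <cstdint>
#include <string>

struct NodeFormat {
	uint32_t pixelformat;	/* V4L2 4CC */
	unsigned int width;
	unsigned int height;
};

class VideoNode {
public:
	explicit VideoNode(const std::string &name) : name_(name) {}

	/* Stand-in for VIDIOC_S_FMT on this node. */
	void setFormat(const NodeFormat &fmt) { format_ = fmt; }
	const NodeFormat &format() const { return format_; }
	const std::string &name() const { return name_; }

private:
	std::string name_;
	NodeFormat format_ = {};
};
```

The image node gets a raw Bayer format at the sensor resolution, while the embedded data node gets a metadata format sized as a few lines of bytes; there is no coupling between the two calls.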

> 2) For image format selection: for you it would really be enough to
> access the raw ImageFormats, even if, at this point, I'm not sure what
> you would use them for if all the mbus code selection logic is
> deferred to kernel space. I assume it is just a matter of retrieving the
> available resolutions per mbus code to correctly program
> scaling/cropping on the CSI-2 Rx video device.
>

Our pipeline handler does access the raw ImageFormats to choose the
mode.  The CSI-2 Rx driver does not do any selection of modes; it
merely advertises the available modes on the subdevice (but with MBUS
codes translated to V4L2 4CCs).  For our use cases, doing this mode
selection on V4L2 4CCs vs doing it on MBUS codes does not make much of
a difference really.  Given that ImageFormats presents formats as 4CCs +
resolutions, that is what I went with.
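For reference, the structure we work from is essentially a per-format size list, something like the sketch below (the type and the entries are illustrative; libcamera's ImageFormats groups sizes per format in a similar spirit).  The key point is that different formats can carry different resolution lists, so selection has to consider the (format, size) pair, not each independently:

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using Dim = std::pair<unsigned int, unsigned int>;	/* width, height */
using FormatMap = std::map<uint32_t, std::vector<Dim>>;

/* Check whether a given 4CC supports a given size in this map. */
static bool supports(const FormatMap &formats, uint32_t fcc, Dim size)
{
	auto it = formats.find(fcc);
	if (it == formats.end())
		return false;
	for (const Dim &d : it->second)
		if (d == size)
			return true;
	return false;
}
```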

> For libcamera this last point is indeed something that should be
> solved in the long run, in a way that supports other platforms making use of
> CameraSensor::getFormat(), by expanding that operation to take into
> account per-mbus code resolution lists, as suggested in some previous email.
> Giving access to the whole list of ImageFormats should be possible too,
> even if that would make it tempting for pipeline handler implementers
> to shortcut the CameraSensor class format selection logic and
> re-implement the same in their own custom way (and that would just be
> more duplicated code to review/maintain), which is exactly the reason
> why we refrained from doing that in the first place :)
>
> Now that I hope we have nailed down a little better what the
> requirements are, I would really like to know what others think as
> well :)

Yes, it would be useful to get other perspectives on this :)

> Thanks
>    j
>
> >
> > For more complicated topologies (where the CSI-2 Rx could rescale for
> > example), I understand that we may want to decouple this, but for now,
> > MBUS codes are essentially hidden from our pipeline handler.
> >
> > > Thanks
> > >    j
> >
> > Regards,
> > Naush

Cheers,
Naush
