[libcamera-devel] Constraints on the sensor subdevice

Naushir Patuck naush at raspberrypi.com
Tue Feb 18 18:18:19 CET 2020


Hi Jacopo,


On Tue, 18 Feb 2020 at 11:46, Jacopo Mondi <jacopo at jmondi.org> wrote:
>
> Hello Naush,
>
> On Fri, Feb 07, 2020 at 04:01:10PM +0000, Naushir Patuck wrote:
> > HI Jacopo,
> >
> >
> > On Thu, 6 Feb 2020 at 19:03, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > >
> > > Hello Naush,
> > >
> > > On Thu, Feb 06, 2020 at 10:21:06AM +0000, Naushir Patuck wrote:
> > > > Hi all,
> > > >
> > > > I just wanted to query a couple of constraints on the sensor subdevice
> > > > that we have come across (in camera_sensor.cpp -
> > > > CameraSensor::init()):
> > > >
> > >
> > > Thanks, both your points are very appropriate. The limitations were known
> > > https://git.linuxtv.org/libcamera.git/tree/src/libcamera/camera_sensor.cpp#n37
> > > but I think it's now the right time to address them \o/
> > >
> > > I will reply to only the first point here, but won't forget the
> > > second, I pinky-promise that :)
> >
> > I will remember this :)
> >
> > >
> > > > 1) Sensor subdevice must have only one pad.
> > > > As you know, right now we are overloading the subdevice pad to
> > > > advertise that sensor embedded data is available.  Eventually, we
> > > > intend to add a "stream" property to the subdevice that can be used
> > > > instead, but it may be some time before that's ready.  Would it be
> > > > possible to remove this constraint?
> > >
> > > yes indeed. I just submitted a series that introduces a factory of
> > > camera sensor handlers; the idea here is that management of more
> > > complex devices (hopefully all of them :) should be delegated to a
> > > CameraSensor sub-class.
> > >
> > > Sensor-specific handlers are meant to interface precisely with
> > > sensors and their associated drivers, including handling the more
> > > complex topologies exposed to userspace, such as the case you have
> > > described here.
> > >
> > > The question I now have is which one, among the entities
> > > exposed by a sensor driver, should be used to 'identify' the sensor
> > > when asking the factory to create a handler (have a look at
> > > CameraSensorFactory::createSensor(MediaEntity *entity) in my series).
> > >
> > > I presume it is fair to consider the sensor subdevice closest to
> > > the ISP to be the one that identifies the sensor. In most cases, if
> > > not all, it will be the CSI-2 transmitter entity. Once the specialized
> > > sensor handler class receives this at creation time, it can walk the
> > > links directed to its sink pads backwards to discover all the other
> > > subdevices incrementally.
> > >
> > > Would something like that work for you ?
> > >
> >
> > I need to have a look at the patch-set to see what is available, but
> > it sounds like this could replace the code that I was talking about
> > earlier.  I'll look at this again early next week and get back to
> > you then.
>
> I have now sent to the ML v3 of said patch series in case you want to
> have a look and comment there.
>
> >
> > Regards,
> > Naush
> >
> >
> > > Thanks
> > >    j
> > >
> > > >
> > > > 2) Sensor subdevices must advertise the same frame sizes for all MBUS codes.
> > > > There are sensors that we currently use (e.g. ov5647) that advertise a
> > > > number of 10-bit modes (preferred) as well as a single 8-bit mode
> > > > (for legacy use).  Other sensors may require lower bit depth formats for
> > > > fast frame rate modes as another example.  With these configurations,
> > > > this constraint seems too restrictive.
> > > >
> > > > From the code in camera_sensor.cpp, I can see why this is needed
> > > > (mbusCodes_ and sizes_ are associated std::vectors).  Switching to
> > > > using something like class ImageFormats (a std::map with the MBUS
> > > > code as key) would seem to solve this problem.  What do you think?
> > > >
>
> I think you're right: returning a plain ImageFormats where each
> mbus code is associated with a set of available resolutions would
> expose the whole set of possible combinations to the CameraSensor class.
>
> We decided not to do so as the idea was to provide a better API for
> pipeline handlers to retrieve the right mbus/size combination,
> abstracting away from what the v4l2 subdevice provides. This is
> implemented in CameraSensor::getFormat() as you can see.
> Right now getFormat() is limited to working on a single set of
> resolutions (CameraSensor::sizes_), which is assumed, at
> CameraSensor::init() time, to be the same for all mbus codes, as you
> have noticed.
>
> My personal feeling is that we should refrain from exposing the whole
> ImageFormat provided by the v4l2 subdevice to pipeline handlers, and
> still enforce the usage of getFormat() as the 'right' way to obtain the
> desired mbus/resolution. That method should be modified to work on
> multiple sets of sizes, one per reported mbus code, which would
> require changing the way resolutions are stored at init() time and
> inspected at getFormat() time.
>
> Do you think expanding getFormat() in the way described above would
> work for your use case?
>

I understand the idea of abstracting getFormat() away from the
pipeline handlers.  However, what happens when a vendor has
constraints that conflict with the format chosen via getFormat()?  For
example, if some hardware cannot do upscaling, then the pipeline
handler must set up a sensor mode that is always larger than or equal
to the application's configured output size.  Not all constraints may
be easily captured in V4L2Subdevice::getFormat().  Could we do
something in CameraConfiguration::validate() to correct this?  From
what I can tell, it currently only validates the user/application
configuration.

Another thing to keep in mind is that for our pipeline, we do a
setFormat() on the CSI2 image node as a V4L2DeviceFormat and never on
the subdevice directly.  The mode gets propagated by the CSI2 kernel
driver to the subdevice to set the appropriate sensor mode.  This
means that our pipeline handler never actually deals with MBUS codes,
only V4L2 4CCs, and the V4L2Subdevice::getFormat() call does not get
used by us.  I'm not sure whether this is the right or wrong thing to
do :)  I seem to recall talking with @Dave Stevenson about this, and
we concluded that it is probably ok?


> Thanks
>    j
>
>
> > > > Regards,
> > > > Naush
> > > > _______________________________________________
> > > > libcamera-devel mailing list
> > > > libcamera-devel at lists.libcamera.org
> > > > https://lists.libcamera.org/listinfo/libcamera-devel

Regards,
Naush