[libcamera-devel] Constraints on the sensor subdevice

Laurent Pinchart laurent.pinchart at ideasonboard.com
Fri Feb 21 15:57:34 CET 2020


Hi Dave,

On Fri, Feb 21, 2020 at 02:48:34PM +0000, Dave Stevenson wrote:
> Hi Jacopo
> 
> I'll preface this with please don't take it as a personal attack!
> It's not meant as one, and most uses of the word "you" are more the
> V4L2 spec/behaviour than any individual.

I've mostly ignored this mail thread so far due to lack of time (and
also because seeing lots of mails being exchanged gave me the excuse of
thinking things were progressing nicely without a need for me to get
involved :-)). Jacopo asked me to take a look as it seems there was some
disagreement. I will do so as quickly as possible, most probably
tonight, otherwise tomorrow at the latest.

Differences in approach and opinion are normal, and expected given our
different backgrounds. With everybody being passionate about their
work, heated debates are to be expected; let's all remember there's no
personal attack involved. We're all here to create the most amazing
camera stack ever, and that's very much a collective effort :-)

> On Fri, 21 Feb 2020 at 12:04, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > On Fri, Feb 21, 2020 at 11:25:05AM +0000, Dave Stevenson wrote:
> > > Hi Jacopo
> >
> > [snip]
> >
> > > > > If I got you right here, on your platform the image format selected on
> > > > > the CSI-2 receiver gets propagated to the sensor subdevice by
> > > > > performing the 4cc->mbus translation logic in kernel space. This
> > > > > implies the sensor subdevice and the CSI-2 receiver drivers have to be
> > > > > tightly coupled, unless complex mbus selection heuristics are
> > > > > implemented in the CSI-2 receiver driver to support a wide range of
> > > > > sensor drivers.
> > > >
> > > > Yes, this is correct.  The CSI-2 Rx and sensor subdevice are
> > > > inherently coupled on our platform.
> > > >
> > > > There is a distinction to make here though.  A CSI-2 Rx could either
> > > > be directly connected to an ISP pipeline, or dump out image data into
> > > > memory to be read back by an ISP.  For the latter case, it might also
> > > > be possible for the CSI-2 block to do some simple format conversion,
> > > > or even rescaling.  On our platform, we only allow our CSI-2 Rx to
> > > > optionally do a data unpack operation (so a format conversion).  So,
> > > > during probe, the kernel driver enumerates the subdevice formats and
> > > > generates a list of V4L2 formats to advertise - and these formats
> > > > may be duplicated to represent packed and unpacked options.
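> > > >
> > > > Roughly, the probe path does something like this (a sketch only,
> > > > with made-up struct and helper names, not the actual driver code):
> > > >
> > > > static int build_pixfmt_list(struct csi2_rx_dev *dev,
> > > >                              struct v4l2_subdev *sensor)
> > > > {
> > > >         struct v4l2_subdev_mbus_code_enum code = {
> > > >                 .which = V4L2_SUBDEV_FORMAT_ACTIVE,
> > > >         };
> > > >
> > > >         /* Walk every mbus code the sensor subdevice can produce. */
> > > >         for (code.index = 0;
> > > >              !v4l2_subdev_call(sensor, pad, enum_mbus_code, NULL, &code);
> > > >              code.index++) {
> > > >                 /*
> > > >                  * mbus_to_pixfmts() is a made-up lookup returning the
> > > >                  * V4L2 pixel formats we can produce from this code:
> > > >                  * the packed on-the-wire layout plus, where the
> > > >                  * unpack block supports it, the unpacked variant.
> > > >                  */
> > > >                 add_pixfmts(dev, mbus_to_pixfmts(code.code));
> > > >         }
> > > >
> > > >         return 0;
> > > > }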
> > > >
> > > > > This does not seem like standard behaviour to me, but I can't tell if
> > > > > upstream would be fine with it or not. From a libcamera perspective,
> > > > > you won't be able to use CameraSensor::getFormat(); you will instead
> > > > > have to select which format to set on the CSI-2 receiver while taking
> > > > > into account the mbus selection logic performed by that same driver.
> > > > >
> > > > > Can I ask if you have any requirement that prevents you from decoupling
> > > > > that logic from kernel space and leaving the mbus code selection on the
> > > > > sensor to userspace? It seems like a lot of complication in the driver
> > > > > that should instead live in your pipeline handler...
> > > > >
> > > >
> > > > With our approach described above, there is no need to apply an MBUS
> > > > format on the sensor subdevice as well as a V4L2 format on the CSI-2
> > > > Rx device, as they are inherently the same operation, if that makes
> > > > sense :)
> > > >
> > > > For more complicated topologies (where the CSI-2 Rx could rescale for
> > > > example), I understand that we may want to decouple this, but for now,
> > > > MBUS codes are essentially hidden from our pipeline handler.
> > >
> > > I originally wrote this driver based on the TI VPFE [1]. That's a
> > > non-MC device, and suited our requirements nicely.
> > > From an application perspective we want the simple use cases, so you
> > > set the format and that is what you get. Beyond repacking from e.g.
> > > V4L2_PIX_FMT_SBGGR10P to V4L2_PIX_FMT_SBGGR10 there is no image
> > > processing going on.
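> > > (That repacking is purely a layout change: SBGGR10P carries four
> > > pixels in five bytes, with the low two bits of each pixel gathered
> > > in the fifth byte. A sketch of what the unpack amounts to, for
> > > illustration only:)
> > >
> > > static void unpack_sbggr10p(const u8 src[5], u16 dst[4])
> > > {
> > >         unsigned int i;
> > >
> > >         /* src[0..3] hold the high 8 bits of pixels 0-3; src[4]
> > >          * holds the low 2 bits of each, pixel 0 in bits 1:0. */
> > >         for (i = 0; i < 4; i++)
> > >                 dst[i] = (src[i] << 2) | ((src[4] >> (2 * i)) & 0x3);
> > > }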
> > >
> > > The change to use media controller was actually down to your libcamera
> > > requirement for enumerating devices. We have no need for it, so
> > > would sooner rip it back out.
> > > Complicating userspace apps for such a simple hardware block is nuts.
> >
> > I'm sorry, I really don't agree here, but I don't think that comes as
> > a surprise :)
> >
> > Implementing heuristics for 4cc->mbus code translation in kernel space
> > requires the CSI-2 receiver and the sensor subdevice drivers to be
> > somehow coupled,
> 
> Please define what you mean by coupling.
> They are coupled in that they are given handles to each other via
> device tree. They have to be coupled to start/stop streaming, if
> nothing else. V4L2 subdev API calls are made between the two. The V4L2
> subdev API also defines pad operations for kernel drivers to set and
> get formats.
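> For instance, forwarding the chosen format from the bridge driver to
> the sensor is a single standard pad operation - something like this
> (a sketch, not the actual unicam code):
>
> struct v4l2_subdev_format sd_fmt = {
>         .which = V4L2_SUBDEV_FORMAT_ACTIVE,
>         .pad = 0,
> };
>
> /* mbus_code was looked up from the V4L2 pixel format the user set. */
> sd_fmt.format.code = mbus_code;
> sd_fmt.format.width = width;
> sd_fmt.format.height = height;
>
> ret = v4l2_subdev_call(sensor_sd, pad, set_fmt, NULL, &sd_fmt);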
> 
> > or rather, it's very hard to guarantee that your CSI-2 driver's format
> > selection logic works with all sensor drivers out there.
> > Am I wrong?
> 
> Looking in the mainline tree I can see:
> - TI am437x
> - Atmel ISC/ISI
> - Exynos4-is
> - possibly via-camera
> - (TI) cal
> all of which have a table of V4L2_PIX_FMT_ to MEDIA_BUS_FMT_ format conversions.
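> They are all variations on the same shape - illustration only, not a
> complete table:
>
> static const struct fmt_map {
>         u32 fourcc;     /* V4L2_PIX_FMT_* advertised to userspace */
>         u32 code;       /* MEDIA_BUS_FMT_* applied to the subdev  */
> } fmt_maps[] = {
>         { V4L2_PIX_FMT_UYVY,     MEDIA_BUS_FMT_UYVY8_1X16   },
>         { V4L2_PIX_FMT_SBGGR10P, MEDIA_BUS_FMT_SBGGR10_1X10 },
>         { V4L2_PIX_FMT_SBGGR10,  MEDIA_BUS_FMT_SBGGR10_1X10 },
>         /* ... */
> };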
> 
> Yes, there is the potential for missing one, but we've already worked
> around the slight craziness of there being both
> MEDIA_BUS_FMT_YUYV8_2X8 and MEDIA_BUS_FMT_YUYV8_1X16 and similar (CSI
> is really 16X1 as well, seeing as it's serial!).
> Where is the *obligation* to update libcamera format selection logic
> when one adds a new MBUS format to the kernel?
> 
> > Delegating that to userspace would allow you to decouple the two
> > drivers completely, but I understand libcamera was not there when you
> > first designed this. Now that you have a place in userspace in which
> > to implement those heuristics (your pipeline handler), I really would
> > not push them into kernel space.
> 
> Where's the obligation to use libcamera in all situations?!
> 
> Forget the complex camera use case: we have a trivially simple
> hardware pipeline. CSI2 data is sent from a source and is written to
> memory. Why should userspace be forced to worry about media controller
> and MBUS formats at all?
> 
> We have existing pipelines working with bcm2835-unicam using the
> Toshiba TC358743 HDMI to CSI-2 chip and the Analog Devices ADV728xM
> analogue video to CSI-2 chip; those pipelines should only be broken
> with very good reason. In both cases you have to set the
> timings/standard first, but then it streams data on request.
> GStreamer's v4l2src works very nicely with it, and nothing worries
> about media controller - and why should it? GStreamer can switch
> merrily between the UYVY or BGR3 output formats of TC358743 via
> V4L2_PIX_FMT_xxx as they get forwarded to the TC358743 driver (as
> MEDIA_BUS_FMT_xxx) to action.
> 
> With the media controller behaviour you're now proposing, GStreamer,
> FFmpeg, and ALL other clients will each have to talk to media
> controller to set up that format change. I don't see code supporting
> that in v4l2src, so I have an issue that has to be resolved.
> 
> Paragraph 8 of [1] does state
> "Formats are not propagated across links, as that would involve
> propagating them from one sub-device file handle to another.
> Applications must then take care to configure both ends of every link
> explicitly with compatible formats. Identical formats on the two ends
> of a link are guaranteed to be compatible. Drivers are free to accept
> different formats matching device requirements as being compatible."
> which to my mind is barking mad, and puts a significant burden on all
> apps even when the pipeline is trivially simple.
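> To spell out what that means even for a two-entity pipeline, every
> app ends up doing something like this (a sketch; error handling
> omitted, fds assumed already open):
>
> struct v4l2_subdev_format fmt = {
>         .which = V4L2_SUBDEV_FORMAT_ACTIVE,
>         .format = {
>                 .width = 1920,
>                 .height = 1080,
>                 .code = MEDIA_BUS_FMT_UYVY8_1X16,
>         },
> };
>
> /* The source pad of the sensor/bridge subdevice... */
> fmt.pad = 0;
> ioctl(sensor_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
>
> /* ...the same format again on the receiver's sink pad... */
> ioctl(csi2_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
>
> /* ...and finally the pixel format on the video node itself. */
> struct v4l2_format vfmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
> vfmt.fmt.pix.width = 1920;
> vfmt.fmt.pix.height = 1080;
> vfmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
> ioctl(video_fd, VIDIOC_S_FMT, &vfmt);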
> 
> How does one support the simple applications if the underlying device
> insists on using media controller?
> 
> [1] https://linuxtv.org/downloads/v4l-dvb-apis-new/uapi/v4l/dev-subdev.html#format-negotiation
> 
> 
> A question in return. Other than enumeration, what does media
> controller gain us in this situation?
> We don't need the request API.
> We don't need to configure/validate any links or set/get pad formats.
> 
> > Again, my 2c only; the linux-media community will provide much more
> > interesting feedback than mine :)
> 
> At the moment I'm seeing us pushing the driver without media
> controller, and we'll have to come up with an alternative mechanism
> for enumeration.
> Media controller support can come later, as it is going to create
> regressions as things stand. Possibly make media controller usage
> optional? Switch based on device tree, or perhaps based on whether
> /dev/mediaX is opened or not. (But how do you tell whether that is the
> same app that is asking for /dev/videoX?!)
> Perhaps we need to look at uvcvideo, as that supports media controller
> and yet allows setting the resolution and seemingly propagating it to
> the sensor device. Does it only get away with this because the
> sub-entities are not V4L2 subdevices? It seems to be a very similar
> situation.

-- 
Regards,

Laurent Pinchart

