[libcamera-devel] [PATCH 1/1] libcamera: controls: Add StartupFrame control

David Plowman david.plowman at raspberrypi.com
Wed Aug 9 12:53:30 CEST 2023


Hi Laurent

Thanks for the comments!

On Tue, 8 Aug 2023 at 09:14, Laurent Pinchart
<laurent.pinchart at ideasonboard.com> wrote:
>
> Hello,
>
> On Mon, Jul 31, 2023 at 03:35:36PM +0100, David Plowman via libcamera-devel wrote:
> > On Tue, 11 Jul 2023 at 15:59, Naushir Patuck wrote:
> > > On Tue, 11 Jul 2023 at 11:31, Kieran Bingham wrote:
> > > > Quoting Naushir Patuck via libcamera-devel (2023-07-11 11:23:32)
> > > > > Hi all,
> > > > >
> > > > > On a semi-related topic, we talked offline about improving the drop frame
> > > > > support by queuing a request buffer multiple times to avoid the need for
> > > > > allocating internal buffers.  I've tried this out and here are my findings.
> > > > >
> > > > > Firstly, to handle queuing a single buffer multiple times, I need to increase
> > > > > the number of cache slots in V4L2BufferCache().  Perhaps
> > > > > V4L2VideoDevice::importBuffers()
> > > > > should be updated to not take in a count parameter and we just allocate slots
> > > > > for the maximum buffer count possible in V4L2 (32 I think)?  There has been a
> > > > > long-standing \todo in the RPi code to choose an appropriate value, and the
> > > > > maximum number is really the only appropriate value I can think of.
> > > >
> > > > I still think allocating the maximum here in the v4l2 components is
> > > > appropriate as they are 'cheap' ...
> > >
> > > Agree, I think 32 is the limit according to V4L2.
>
> There's one "small" drawback though. Allocating buffers is indeed cheap
> (for DMABUF), but once a V4L2 buffer has been queued with dmabuf
> objects, those objects will stay referenced until the V4L2 buffer is
> freed. On systems where the user keeps allocating and freeing buffers,
> this means that we will hold on to 32 buffers until the Camera is
> reconfigured or released.
>
> This is an issue with V4L2, and we're already affected by it. Increasing
> the number of buffers will make it worse in some use cases, but doesn't
> significantly change the nature of the problem. The proposed new V4L2
> DELETE_BUFS ioctl may help solve this. In the meantime, I think we can
> increase the number of buffers despite the issue, but I would limit the
> increase to a smaller value.
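As a sketch of the sizing question above: V4L2 caps the number of buffers per queue at VIDEO_MAX_FRAME (32 in videodev2.h), so whatever count importBuffers() ends up using should be clamped to that. The function name below is illustrative, not the actual libcamera API:

```cpp
#include <algorithm>

/* V4L2's per-queue buffer limit, VIDEO_MAX_FRAME in linux/videodev2.h. */
constexpr unsigned int kV4L2MaxBuffers = 32;

/*
 * Hypothetical helper: clamp a requested buffer cache size to the V4L2
 * maximum, so the cache never holds more slots than the device can address.
 */
unsigned int bufferCacheSlots(unsigned int requested)
{
	return std::min(requested, kV4L2MaxBuffers);
}
```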
>
> > > > > Once I got this working, unfortunately I realised this method would never
> > > > > actually work correctly in the common scenario where the application configures
>
> Should I quote Henri from An American Tail? :-)
>
> > > > > and uses a RAW stream.  In such cases we would queue the RAW buffer into Unicam
> > > > > N times for N dropped frames.  However this buffer is also imported into the ISP
> > > > > for processing and stats generation, all while it is also being filled by Unicam
> > > > > for the next sensor frame.  This makes the stats entirely unusable.
> > > >
> > > > Aha that's a shame. I thought the restrictions were going to be in the
> > > > kernel side, so at least it's interesting to know that we /can/ queue
> > > > the same buffer multiple times (with a distinct v4l2_buffer id) and get
> > > > somewhat of the expected behaviour....
> > > >
> > > > > So in the end we either have to allocate additional buffers for drop frames
> > > > > (like we do right now), or we implement something like this series where the
> > > > > application is responsible for dropping/ignoring these frames.
>
> There may be something that escapes me, but I think you're trying to
> bury the idea too fast.
>
> Looking at the problem from an API point of view, ignoring the internal
> implementation in the pipeline handler for a moment, your proposal is to
> add a mechanism to tell applications that they should ignore the
> contents of a request and resubmit it. If this is possible to do for
> applications without any negative side effects such as the one you've
> described above, then I don't see why it would be impossible for
> the pipeline handler to do the same before the request reaches the
> application.
>
> Addressing the exact issue you're facing, it seems that the problem is
> caused by using the raw buffer from the first request only. Under normal
> operation conditions, the pipeline handler will need at least two raw
> buffers, otherwise frames will be dropped. It's the application's
> responsibility to queue enough requests for proper operation if it wants
> to avoid frame drops. I think you can thus use raw buffers supplied in
> the first two requests.
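The "first two raw buffers" suggestion amounts to ping-ponging between them while startup frames are dropped, so the buffer the CSI-2 receiver fills next is never the one the ISP is reading stats from. A purely illustrative sketch (buffer indices stand in for real FrameBuffer objects; this is not the Raspberry Pi pipeline handler code):

```cpp
/*
 * Illustrative only: alternate between the raw buffers taken from the
 * first two queued requests while the pipeline is still dropping
 * startup frames.
 */
class StartupRawCycler
{
public:
	/* Index (0 or 1) of the raw buffer to queue for the next frame. */
	unsigned int next()
	{
		unsigned int index = next_;
		next_ ^= 1; /* ping-pong between the two buffers */
		return index;
	}

private:
	unsigned int next_ = 0;
};
```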
>
> > > > Of course if the expected behaviour doesn't suit the use case ... then
> > > > ...
> > > >
> > > > This may all be different for pipelines with an inline ISP though ...
> > > > so the research is still useful.
> > >
> > > So the question is... should we continue with this series as a possible
> > > improvement if the pipeline handler wants to support this control?
> >
> > So yes, I would indeed like to revisit this question. I still think
> > it's useful for the original reasons, and the new use-case I'd like to
> > bring into the mix now is HDR.
> >
> > One HDR method combines a number of different exposures to create the
> > output. Now this isn't so relevant for the Pi seeing as we have no
> > hardware for combining images, but you could imagine it being
> > important to platforms more widely. The problem comes when this HDR
> > mode is engaged... what do we do while we wait for all those different
> > exposures to come through? Because until we have one of each, we can't
> > really produce the image that the application is expecting.
> >
> > The idea would be that requests cycle round as normal, but come with
> > metadata saying "I'm not ready yet" while HDR is "starting up". Note
> > that this could of course happen at any time now, not just when the
> > camera starts (or mode switches).
> >
> > I still like the idea of generic "I'm not ready" metadata because
> > applications won't want to understand all the different ways in which
> > things might not be ready. Though supplementary information that
> > details what we're still waiting for might be helpful. Thoughts...?
>
> For the HDR case, would this supplementary information be conveyed in
> the form of an HDR-specific control in the request metadata?
>
> If an application decides to enable HDR in the middle of a capture
> session, is it unreasonable to expect the application to understand this
> particular reason why frames are not "ready"?

Just to answer this last question, what I really want is to be
nice to our application developers. For instance:

1. There will be more features like this in future. Frame
stacking, night modes, various things. Do we want to force
applications to have to check all this stuff explicitly?
(A bit like "validating" a configuration - you get a
handy "something got adjusted" indication.)

2. As new features get added, it would be good to minimise the
amount of pain we cause to existing applications. A simple "not
ready" means application developers will have fewer code paths
that need to be found and edited. Vendors might be able to add
new features to their processing without breaking existing
applications.

A couple of notes to this:

* I guess we could handle this by "swallowing" frames and not
  letting them out into the world. This obviously refers back to
  the discussion further up which needs to be resolved.

* I certainly think it's good if further information is available
  for those who are interested; this is mostly about making
  libcamera easier to use for folks who aren't.
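To make the "be nice to application developers" point concrete, the application side of the proposal would reduce to one generic check on request completion. A hedged sketch, with a plain map standing in for libcamera's ControlList and the real controls::StartupFrame lookup:

```cpp
#include <map>
#include <string>

/*
 * Sketch of the proposed application behaviour: on request completion,
 * consult a single generic "startup frame" metadata flag and, if set,
 * requeue the request without consuming the images. Real code would use
 * Request::metadata() and controls::StartupFrame; a std::map stands in
 * for the ControlList here.
 */
enum class Action { Consume, Requeue };

Action onRequestComplete(const std::map<std::string, bool> &metadata)
{
	auto it = metadata.find("StartupFrame");
	if (it != metadata.end() && it->second)
		return Action::Requeue; /* frame not ready: ignore and resubmit */
	return Action::Consume;         /* normal frame: hand images to the app */
}
```

The point of the single flag is exactly what the email argues: new "not ready" reasons (HDR startup, frame stacking, night modes) would all funnel through this one code path.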

David

>
> > > > > On Wed, 31 May 2023 at 13:50, David Plowman via libcamera-devel wrote:
> > > > > >
> > > > > > This control is passed back in a frame as metadata to indicate whether
> > > > > > the camera system is still in a startup phase, and the application is
> > > > > > advised to avoid using the frame.
> > > > > >
> > > > > > Signed-off-by: David Plowman <david.plowman at raspberrypi.com>
> > > > > > ---
> > > > > >  src/libcamera/control_ids.yaml | 15 +++++++++++++++
> > > > > >  1 file changed, 15 insertions(+)
> > > > > >
> > > > > > diff --git a/src/libcamera/control_ids.yaml b/src/libcamera/control_ids.yaml
> > > > > > index adea5f90..4742d907 100644
> > > > > > --- a/src/libcamera/control_ids.yaml
> > > > > > +++ b/src/libcamera/control_ids.yaml
> > > > > > @@ -694,6 +694,21 @@ controls:
> > > > > >              Continuous AF is paused. No further state changes or lens movements
> > > > > >              will occur until the AfPauseResume control is sent.
> > > > > >
> > > > > > +  - StartupFrame:
> > > > > > +      type: bool
> > > > > > +      description: |
> > > > > > +        The value true indicates that the camera system is still in a startup
> > > > > > +        phase where the output images may not be reliable, or that certain of
> > > > > > +        the control algorithms (such as AEC/AGC, AWB, and possibly others) may
> > > > > > +        still be changing quite rapidly.
> > > > > > +
> > > > > > +        Applications are advised to avoid using these frames. Mostly, they will
> > > > > > +        occur when the camera system starts for the first time, although,
> > > > > > +        depending on the sensor and the implementation, they could occur at
> > > > > > +        other times.
> > > > > > +
> > > > > > +        The value false indicates that this is a normal frame.
> > > > > > +
> > > > > >    # ----------------------------------------------------------------------------
> > > > > >    # Draft controls section
> > > > > >
>
> --
> Regards,
>
> Laurent Pinchart
