[libcamera-devel] Initial settings for camera exposure and analogue gain

Naushir Patuck naush at raspberrypi.com
Mon Jun 29 12:54:12 CEST 2020


Hi Laurent,

Thanks for the feedback.

On Sun, 28 Jun 2020 at 13:32, Laurent Pinchart
<laurent.pinchart at ideasonboard.com> wrote:
>
> Hi Naush,
>
> On Mon, Jun 22, 2020 at 11:18:13AM +0100, Naushir Patuck wrote:
> > On Mon, 22 Jun 2020 at 05:27, Laurent Pinchart wrote:
> > > On Thu, Jun 11, 2020 at 10:08:59AM +0100, David Plowman wrote:
> > > > On Thu, 11 Jun 2020 at 08:59, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > > > > On Thu, Jun 11, 2020 at 08:41:27AM +0100, Naushir Patuck wrote:
> > > > > > On Wed, 10 Jun 2020 at 17:06, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > > > > > > On Wed, Jun 10, 2020 at 02:22:59PM +0100, David Plowman wrote:
> > > > > > > > On Wed, 10 Jun 2020 at 13:23, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > > > > > > > > On Tue, Jun 09, 2020 at 03:00:49PM +0100, David Plowman wrote:
> > > > > > > > > > Hi
> > > > > > > > > >
> > > > > > > > > > I'd like to discuss the question of being able to set a sensor's initial
> > > > > > > > > > exposure time and analogue gain before the sensor is started by calling the
> > > > > > > > > > Camera::start method. Currently I believe this is not possible; they can
> > > > > > > > > > only be adjusted in the ControlList passed with a Request.
> > > > > > > > >
> > > > > > > > > For application-supplied controls I think you're right. IPAs can
> > > > > > > > > pre-configure sensors I guess. If I'm not mistaken your IPA protocol
> > > > > > > > > has an RPI_IPA_ACTION_SET_SENSOR_CONFIG event, which is used to
> > > > > > > > > pre-configure sensor with known delay values, but that works at
> > > > > > > > > start-up time only...
> > > > > > > > >
> > > > > > > > > > This would be a helpful feature, for example when switching from preview to
> > > > > > > > > > capture you might want to use different exposure/gain combinations. Or for
> > > > > > > > > > capturing multiple exposures - you could straight away get the frame you
> > > > > > > > > > want without waiting a couple of frames for the camera to start streaming
> > > > > > > > > > with the wrong values and then change.
> > >
> > > I fully agree it would be a useful feature. It also addresses the issue
> > > of setting expensive parameters that require, internally, a stop-restart
> > > sequence. With an API to set those parameters prior to starting the
> > > stream there would be no restart sequence cost (setting them in requests
> > > would still be expensive, but that's not something we can address).
> >
> > I've been working on allowing a ControlList to be passed into
> > pipeline_handler::start(), but it is proving to be somewhat useless.
> > Take an example of a user application wanting to set a fixed exposure
> > time on the sensor before starting. The control list with the
> > requested exposure time wants to then be passed into the IPA from
> > pipeline_handler::start(), where we convert from exposure times to
> > exposure lines (for example).  The IPA then needs to pass the exposure
> > lines back to the pipeline_handler to program into the sensor driver.
> > However, because of the threading model, pipeline_handler::start()
> > will then go on and start streaming the device drivers before it has a
> > chance to send the relevant v4l2 ctrls to the device drivers.
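The time-to-lines conversion mentioned above could look roughly like this; the function name, rounding behaviour, and parameters are illustrative only, not the actual Raspberry Pi IPA code:

```cpp
#include <cstdint>

// Sketch of the conversion an IPA performs: an exposure time requested
// in microseconds becomes a number of sensor exposure lines, based on
// the sensor's line length (readout time per line) for the current mode.
uint32_t exposureTimeToLines(uint32_t exposureUs, uint32_t lineLengthUs)
{
	if (lineLengthUs == 0)
		return 0;

	// Round to the nearest whole line; the sensor can only be
	// programmed with an integer line count.
	return (exposureUs + lineLengthUs / 2) / lineLengthUs;
}
```

The resulting line count is what the pipeline handler would then write to the sensor driver as a V4L2 control, which is why it must flow back from the IPA before stream-on.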
> >
> > This could be fixed in two ways:
> >
> > 1) If ipa::process_event() could be called as a blocking method that
> > allows the IPA to return values back to the caller, the v4l2 ctrls
> > needed before device stream-on would be available to the pipeline
> > handler in time.
> >
> > 2) Move the ControlList into pipeline_handler::configure().  This is
> > more inefficient, but it does get around the threading problem.
> > However, we still need a way of allowing the ipa::configure() method
> > to return arbitrary parameters back to the caller.  This is something
> > we talked about before - we could use this for things like the
> > pipeline delays for staggered_write as well.
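To make option 2 concrete, the data returned from ipa::configure() might take a shape like the following; all names here are hypothetical stand-ins, not existing libcamera types:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical result returned synchronously from ipa::configure() to
// the pipeline handler: sensor-specific values the pipeline handler
// cannot know by itself, such as staggered-write frame delays and the
// initial V4L2 controls to program before stream-on.
struct IPAConfigResult {
	unsigned int exposureDelay; /* frames before a new exposure takes effect */
	unsigned int gainDelay;     /* frames before a new gain takes effect */
	std::map<std::string, int32_t> sensorControls; /* initial v4l2 ctrls */
};

// A pipeline handler would call this synchronously during configure()
// and use the returned delays to set up its staggered write logic.
IPAConfigResult ipaConfigure(uint32_t requestedExposureUs, uint32_t lineLengthUs)
{
	IPAConfigResult result;
	result.exposureDelay = 2; /* typical sensor pipeline delay, assumed */
	result.gainDelay = 1;
	result.sensorControls["V4L2_CID_EXPOSURE"] =
		static_cast<int32_t>(requestedExposureUs / lineLengthUs);
	return result;
}
```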
>
> I think we need this anyway, for staggered write delays as you
> mentioned, and I'm sure there will be other data to pass through
> configure() in the future in both directions. I would rather start with
> this, and possibly address additional performance issues on top, by
> passing parameters to start() (or with any other solution we will agree
> on). I'll have a look at this.

Thank you, that would clean up a few things in the pipeline handler.

>
> > > > > > > > > > As regards an API, we were thinking in terms of adding a "ControlList const
> > > > > > > > > > &controls" parameter to the Camera::start method, which would forward the
> > > > > > > > > > controls to the pipeline handler prior to the camera actually being
> > > > > > > > > > started. (We're happy to work on the feature once there's sufficient
> > > > > > > > > > consensus.)
> > > > > > > > >
> > > > > > > > > We started planning support for the same feature, with the idea to
> > > > > > > > > support Android's session parameters
> > > > > > > > > https://source.android.com/devices/camera/session-parameters
> > > > > > > > >
> > > > > > > > > When we (briefly) discussed this, we were considering providing a
> > > > > > > > > ControlList at Camera::configure() time, probably adding it to the
> > > > > > > > > CameraConfiguration parameter. Thinking out loud, this would allow,
> > > > > > > > > for example, to return a set of pre-configured control sets from
> > > > > > > > > Camera::generateConfiguration() depending on the requested stream role
> > > > > > > > > and yet-to-be-designed capture profile.
> > > > > > > > >
> > > > > > > > > Why would you prefer to do it at start() time ? That's certainly possible
> > > > > > > > > if there's good reasons to do so...
> > > > > > > >
> > > > > > > > As you say, having a ControlList in the configure() method would allow
> > > > > > > > control of these parameters whenever the camera mode has to be
> > > > > > > > changed, but not when it is started. I think multi-exposure/image-fusion is
> > > > > > > > a use-case where this would be helpful.
> > > > > > > >
> > > > > > > > For example, suppose we've been running preview and have decided
> > > > > > > > that we're going to use 2 exposures to capture an HDR image. The
> > > > > > > > sequence of events in an application might then be:
> > > > > > > >
> > > > > > > > 1. Stop the preview and teardown the preview configuration
> > > > > > > > 2. Configure for still capture mode.
> > > > > > > > 3. Write the short exposure and start the camera.
> > > > > > > > 4. Grab the first frame that comes back and immediately stop the camera.
> > > > > > > > 5. Write the long exposure and start the camera.
> > > > > > > > 6. Grab the first frame again, and immediately stop the camera.
> > > > > > > > 7. Now images can be fused and HDR processing performed...
> > > > > > > >
> > > > > > > > This would require a ControlList in the start() method (or equivalent).
> > > > > > > > I appreciate there are other ways to do this, but it seems a particularly
> > > > > > > > straightforward template that applications might wish to adopt.
> > > > > > >
> > > > > > > Surely not my strongest expertise, but if you asked me I would have
> > > > > > >
> > > > > > > 1. Stop the preview and teardown the preview configuration
> > > > > > > 2. Configure the camera for still capture with an initial short exposure time
> > > > > > > 3. Start the camera
> > > > > > > 4. Queue a request with the new long exposure value
> > > > > > > 5. Grab the first frame that comes back which will have the initially
> > > > > > > programmed short exposure time
> > > > > > > 6. Continue capturing frames until the desired long exposure time has
> > > > > > > been applied
> > > > > > > 7. Use the two images for HDR/fusion whatever.
> > > > > > >
> > > > > > > I know I'm probably missing something, so consider this exercise
> > > > > > > useful for my own education :)
> > > > > >
> > > > > > The above steps could indeed be used for HDR captures.  It's a toss-up
> > > > > > as to which is more efficient - starting/stopping the camera over
> > > > > > multiple exposures or letting it run on but having to wait (generally
> > > > > > 2 frames) before the sensor will consume the next exposure values.
> > >
> > > I think I agree with Jacopo, for this specific example, I believe
> > > waiting would be more efficient, as stop and especially start operations
> > > are very often expensive (either because of the hardware requirements,
> > > or at least because of suboptimal implementations in the kernel).
> > >
> > > I'd even go one step further, if you start the camera and queue two
> > > requests with two different exposure values, you should get the frames
> > > you need with a delay, but in consecutive requests. We've thought for
> > > quite some time about the ability to queue requests before starting the
> > > camera, and that could allow pipeline handlers to configure parameters
> > > at start time by looking at the first queued request (and possibly even
> > > subsequent ones).
> > >
> > > In general, do you think it would be useful to be able to prequeue
> > > requests as a way to provide initial parameters ? We may not even need
> > > to support this explicitly, pipeline handlers could possibly delay the
> > > real start internally until the first request is queued. I think we need
> > > to take all these ideas into account and decide what would be the best
> > > API.
> >
> > Allowing us to defer stream-on until a Request has been passed in was
> > actually a workaround I considered for the problems described above.
> > However, I thought it would not be acceptable :)
> >
> > If we agreed that this is acceptable, I think it will be a good way of
> > passing in parameters before the sensor starts streaming (without
> > extending the API).  Pre-queueing more than one Request does not
> > provide any further advantages, but does no harm.
> >
> > Let me know if you are ok with this, and I can work on an
> > implementation for review.
>
> Before starting working on an implementation, do you think we should
> delay the real start internally until the first request is queued, or
> should we allow Camera::queueRequest() to be called before
> Camera::start() ? The latter would have the advantage of being
> controllable by the application, as well as possibly centralizing part
> of the logic in the libcamera core (depending on the implementation),
> making this feature available in a uniform way across all pipeline
> handlers.

Agreed, allowing Camera::queueRequest() to be called before
Camera::start() would allow the application to set the controls needed
before streaming on the sensor.  Does libcamera already allow this?
If so, presumably it's only a change in qcam that would be needed.

Regards,
Naush
