[libcamera-devel] Initial settings for camera exposure and analogue gain

Naushir Patuck naush at raspberrypi.com
Mon Jun 22 12:18:13 CEST 2020


Hi Laurent,

On Mon, 22 Jun 2020 at 05:27, Laurent Pinchart
<laurent.pinchart at ideasonboard.com> wrote:
>
> Hello everybody,
>
> Sorry for the late reply.
>
> On Thu, Jun 11, 2020 at 10:08:59AM +0100, David Plowman wrote:
> > On Thu, 11 Jun 2020 at 08:59, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > > On Thu, Jun 11, 2020 at 08:41:27AM +0100, Naushir Patuck wrote:
> > > > On Wed, 10 Jun 2020 at 17:06, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > > > > On Wed, Jun 10, 2020 at 02:22:59PM +0100, David Plowman wrote:
> > > > > > On Wed, 10 Jun 2020 at 13:23, Jacopo Mondi <jacopo at jmondi.org> wrote:
> > > > > > > On Tue, Jun 09, 2020 at 03:00:49PM +0100, David Plowman wrote:
> > > > > > > > Hi
> > > > > > > >
> > > > > > > > I'd like to discuss the question of being able to set a sensor's initial
> > > > > > > > exposure time and analogue gain before the sensor is started by calling the
> > > > > > > > Camera::start method. Currently I believe this is not possible; they can
> > > > > > > > only be adjusted in the ControlList passed with a Request.
> > > > > > >
> > > > > > > For application-supplied controls I think you're right. IPAs can
> > > > > > > pre-configure sensors I guess. If I'm not mistaken your IPA protocol
> > > > > > > has a RPI_IPA_ACTION_SET_SENSOR_CONFIG event, which is used to
> > > > > > > pre-configure the sensor with known delay values, but that works at
> > > > > > > start-up time only...
> > > > > > >
> > > > > > > > This would be a helpful feature, for example when switching from preview to
> > > > > > > > capture you might want to use different exposure/gain combinations. Or for
> > > > > > > > capturing multiple exposures - you could straight away get the frame you
> > > > > > > > want without waiting a couple of frames for the camera to start streaming
> > > > > > > > with the wrong values and then change.
>
> I fully agree it would be a useful feature. It also addresses the issue
> of setting expensive parameters that require, internally, a stop-restart
> sequence. With an API to set those parameters prior to starting the
> stream there would be no restart sequence cost (setting them in requests
> would still be expensive, but that's not something we can address).

I've been working on allowing a ControlList to be passed into
pipeline_handler::start(), but on its own it is proving to be of
limited use.  Take the example of a user application wanting to set a
fixed exposure time on the sensor before starting.  The ControlList
with the requested exposure time needs to be passed from
pipeline_handler::start() into the IPA, where we convert from exposure
time to exposure lines (for example).  The IPA then needs to pass the
exposure lines back to the pipeline handler to program into the sensor
driver.  However, because of the threading model,
pipeline_handler::start() will go on and start streaming the device
drivers before the IPA has had a chance to return the relevant V4L2
controls to apply to them.
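
(As an illustration of the IPA step I mean, the time-to-lines
conversion is roughly the sketch below; lineDurationUs and the line
limits are stand-ins for whatever the selected sensor mode reports,
not existing helpers.)

/*
 * Rough sketch only: convert an exposure time from the application's
 * ControlList into sensor exposure lines, which is what the sensor
 * driver's V4L2 exposure control expects.
 */
#include <algorithm>
#include <cstdint>

uint32_t exposureTimeToLines(double exposureUs, double lineDurationUs,
                             uint32_t minLines, uint32_t maxLines)
{
        uint32_t lines = static_cast<uint32_t>(exposureUs / lineDurationUs + 0.5);
        return std::clamp(lines, minLines, maxLines);
}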

This could be fixed in two ways:

1) Allow ipa::process_event() to be called as a blocking method, so
that the IPA can return values back to the caller.  This would ensure
the V4L2 controls needed before stream-on are available to the
pipeline handler in time.

2) Move the ControlList into pipeline_handler::configure().  This is
less efficient, but it does get around the threading problem.
However, we still need a way of allowing the ipa::configure() method
to return arbitrary parameters back to the caller (see the sketch
below).  This is something we talked about before - we could use this
for things like the pipeline delays for staggered_write as well.
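
The sketch I have in mind for (2) is something like the below.  The
result structure and its fields are hypothetical, only meant to show
the shape of the data the pipeline handler would need back from
ipa::configure():

#include <map>

#include <libcamera/controls.h>

/*
 * Hypothetical sketch only, not an existing interface: ipa::configure()
 * would fill in a result structure that the pipeline handler can read
 * synchronously, e.g. the initial sensor controls (already converted to
 * lines/codes) and the per-control delays needed for the staggered write.
 */
struct IPAConfigResult {
        /* V4L2 exposure/gain to apply to the sensor before stream-on. */
        libcamera::ControlList sensorControls;
        /* V4L2 control id -> number of frames of delay before it applies. */
        std::map<unsigned int, unsigned int> sensorDelays;
};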

>
> > > > > > > > As regards an API, we were thinking in terms of adding a "ControlList const
> > > > > > > > &controls" parameter to the Camera::start method, which would forward the
> > > > > > > > controls to the pipeline handler prior to the camera actually being
> > > > > > > > started. (We're happy to work on the feature once there's sufficient
> > > > > > > > consensus.)
> > > > > > >
> > > > > > > We started planning support for the same feature, with the idea to
> > > > > > > support Android's session parameters
> > > > > > > https://source.android.com/devices/camera/session-parameters
> > > > > > >
> > > > > > > When we (briefly) discussed this, we were considering providing a
> > > > > > > ControlList at Camera::configure() time, probably adding it to the
> > > > > > > CameraConfiguration parameter. Thinking out loud, this would allow,
> > > > > > > for example, returning a set of pre-configured control sets from
> > > > > > > Camera::generateConfiguration() depending on the requested stream role
> > > > > > > and yet-to-be-designed capture profile.
> > > > > > >
> > > > > > > Why would you prefer to do it at start() time ? That's certainly possible
> > > > > > > if there's good reasons to do so...
> > > > > >
> > > > > > As you say, having a ControlList in the configure() method would allow
> > > > > > control of these parameters whenever the camera mode has to be
> > > > > > changed, but not when it is started. I think multi-exposure/image-fusion is
> > > > > > a use-case where this would be helpful.
> > > > > >
> > > > > > For example, suppose we've been running preview and have decided
> > > > > > that we're going to use 2 exposures to capture an HDR image. The
> > > > > > sequence of events in an application might then be:
> > > > > >
> > > > > > 1. Stop the preview and teardown the preview configuration
> > > > > > 2. Configure for still capture mode.
> > > > > > 3. Write the short exposure and start the camera.
> > > > > > 4. Grab the first frame that comes back and immediately stop the camera.
> > > > > > 5. Write the long exposure and start the camera.
> > > > > > 6. Grab the first frame again, and immediately stop the camera.
> > > > > > 7. Now images can be fused and HDR processing performed...
> > > > > >
> > > > > > This would require a ControlList in the start() method (or equivalent).
> > > > > > I appreciate there are other ways to do this, but it seems a particularly
> > > > > > straightforward template that applications might wish to adopt.
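
(To make the proposed API concrete, the application side of steps 3-6
could look roughly like the sketch below.  Camera::start() taking a
ControlList is of course the extension under discussion here, not an
existing signature, and the exposure values are made up.)

#include <memory>

#include <libcamera/camera.h>
#include <libcamera/control_ids.h>
#include <libcamera/controls.h>

using namespace libcamera;

/* Rough sketch of steps 3-6 with the proposed start(ControlList) overload. */
void captureTwoExposures(std::shared_ptr<Camera> camera)
{
        ControlList startControls(camera->controls());

        startControls.set(controls::ExposureTime, 1000);     /* short, in us */
        startControls.set(controls::AnalogueGain, 1.0f);
        camera->start(&startControls);                        /* proposed overload */
        /* ... queue a request and grab the first frame ... */
        camera->stop();

        startControls.set(controls::ExposureTime, 30000);    /* long, in us */
        camera->start(&startControls);
        /* ... grab the first frame again, stop, then fuse the two images ... */
        camera->stop();
}
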
> > > > >
> > > > > Surely not my strongest expertise, but if you asked me I would have
> > > > >
> > > > > 1. Stop the preview and teardown the preview configuration
> > > > > 2. Configure the camera for still capture with an initial short exposure time
> > > > > 3. Start the camera
> > > > > 4. Queue a request with the new long exposure value
> > > > > 5. Grab the first frame that comes back which will have the initially
> > > > > programmed short exposure time
> > > > > 6. Continue capturing frames until the desired long exposure time
> > > > > has taken effect
> > > > > 7. Use the two images for HDR/fusion whatever.
> > > > >
> > > > > I know I'm probably missing something, so consider this exercise
> > > > > useful for my own education :)
> > > >
> > > > The above steps could indeed be used for HDR captures.  It's a toss-up
> > > > as to which is more efficient - starting/stopping the camera over
> > > > multiple exposures or letting it run on but having to wait (generally
> > > > 2 frames) before the sensor will consume the next exposure values.
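
(For comparison, the keep-streaming variant looks roughly like the
sketch below; which frame actually carries each exposure depends on
the sensor's control delays, typically the couple of frames mentioned
above, and the values are made up.)

#include <libcamera/camera.h>
#include <libcamera/control_ids.h>
#include <libcamera/request.h>

using namespace libcamera;

/*
 * Rough sketch: stay streaming and ask for the two exposures back to
 * back.  Buffers are assumed to be attached to both requests already.
 */
void queueHdrExposures(Camera *camera, Request *shortReq, Request *longReq)
{
        shortReq->controls().set(controls::ExposureTime, 1000);    /* us */
        longReq->controls().set(controls::ExposureTime, 30000);    /* us */

        camera->queueRequest(shortReq);
        camera->queueRequest(longReq);

        /*
         * In the completion handler, check each request's metadata to see
         * which frames really used the requested values before fusing them.
         */
}
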
>
> I think I agree with Jacopo, for this specific example, I believe
> waiting would be more efficient, as stop and especially start operations
> are very often expensive (either because of the hardware requirements,
> or at least because of suboptimal implementations in the kernel).
>
> I'd even go one step further, if you start the camera and queue two
> requests with two different exposure values, you should get the frames
> you need with a delay, but in consecutive requests. We've thought for
> quite some time about the ability to queue requests before starting the
> camera, and that could allow pipeline handlers to configure parameters
> at start time by looking at the first queued request (and possibly even
> subsequent ones).
>
> In general, do you think it would be useful to be able to prequeue
> requests as a way to provide initial parameters ? We may not even need
> to support this explicitly, pipeline handlers could possibly delay the
> real start internally until the first request is queued. I think we need
> to take all these ideas into account and decide what would be the best
> API.

Allowing us to defer stream-on until a Request has been passed in was
actually a workaround I considered for the problems described above.
However, I thought it would not be acceptable :)

If we agree that this is acceptable, I think it would be a good way
of passing in parameters before the sensor starts streaming (without
extending the API).  Pre-queueing more than one Request does not
provide any further advantages, but it does no harm either.
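
To be clear about what I had in mind, the deferred start inside a
pipeline handler could look very roughly like this (all names are
hypothetical, just to show the ordering):

#include <libcamera/controls.h>
#include <libcamera/request.h>

/*
 * Very rough sketch: start() only records that streaming was requested,
 * and the real stream-on happens when the first request arrives, so its
 * controls can go through the IPA (and the resulting V4L2 controls be
 * applied to the sensor) before streaming begins.
 */
class PipelineHandlerSketch
{
public:
        int start()
        {
                pendingStart_ = true;   /* defer the real stream-on */
                return 0;
        }

        int queueRequestDevice(libcamera::Request *request)
        {
                if (pendingStart_) {
                        /* Hand the request's controls to the IPA and apply
                         * the returned V4L2 controls to the sensor. */
                        applyInitialControls(request->controls());
                        streamOn();     /* the real stream-on */
                        pendingStart_ = false;
                }

                return queueToDevices(request);
        }

private:
        void applyInitialControls(const libcamera::ControlList &controls);
        void streamOn();
        int queueToDevices(libcamera::Request *request);

        bool pendingStart_ = false;
};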

Let me know if you are ok with this, and I can work on an
implementation for review.

Regards,
Naush


>
> I believe there are other use cases for setting parameters prior to
> starting the camera (or when starting the camera), so overall this
> discussion is still relevant.
>
> > > > There may be other reasons (I cannot think of anything specific apart
> > > > from HDR right now) where an application would want to sequence
> > > > start(), stop(), start() and have the last start() take in some new
> > > > ControlList parameters.  If the ControlList was passed into
> > > > configure(), the sequence would have to run like start(), stop(),
> > > > configure(), start(), which might be less efficient.  Hence having the
> > > > ControlList passed into start() would cover all use cases.
> > >
> > > Indeed not having to go through configure() is more efficient, and my
> > > reasoning and above example is not actually accurate, as if you're
> > > running a (viewfinder+still capture), at configure() time you would
> > > have instructed the camera to run with an exposure time ideal for
> > > viewfinder, not the 'short exposure' I mentioned in step 2.
> > >
> > > I would not rule out the option to pass parameters to configure(), but
> > > I see your point and to me it's something worth exploring. What do
> > > others think ?
> >
> > Yes, I certainly agree one could do it either way. I think that in the end
> > there's some benefit in giving applications a choice.
> >
> > Submitting new exposure/gain values with requests is fine, but you have
> > to pay attention to when those requests will be submitted, whether the
> > new exposure/gain values will have taken effect when that request
> > completes (or later? I'm not entirely sure...). And you need to consider
> > how many requests you want to queue up initially (now some might need
> > extra control lists), and you also have to handle the case where you
> > have more exposure/gain combinations than you have buffers that you
> > can fill so some have to be handled as part of your "event loop".
> >
> > So all in all, I think this is OK, but I can see application writers
> > thinking "hmm, why can't I just stop the camera, change the exposure,
> > and start it again..."
>
> Because it's expensive ? :-)
>
> On a more serious note, stopping, reconfiguring and restarting is the
> simplest option for application writers, but also the most expensive
> one. Even with an API to pass parameters to start() (or configure()),
> the stop/restart sequence will have a cost. It could be partly optimized
> by using runtime PM and autosuspend on the kernel side, to avoid cutting
> the power off to the sensor when stopping and restarting in a quick
> sequence, but even in that case, starting the stream is usually an
> expensive operation, and there's not always an easy option to avoid lots
> of I2C register writes at that point, which is also expensive. I would
> thus try to avoid giving false hopes to application writers with an API
> to optimizes the use case a bit, but makes the operation still
> expensive. I think it would be best to provide example code of how
> common use cases should be implemented, and to also provide higher-level
> helpers for common operations.
>
> --
> Regards,
>
> Laurent Pinchart

