[libcamera-devel] [PATCH v3 1/3] ipa: raspberrypi: Make CamHelper exposure methods virtual

David Plowman david.plowman at raspberrypi.com
Tue May 4 11:15:26 CEST 2021


Hi again

Thanks for all the various opinions. If I may wade in again... :)

Like Naush, I have some more requirements for what the Exposure
functions need to do. The sensor I'm currently dealing with has the
constraint - in some readout modes only - that the number of exposure
lines has to be a multiple of 2. It's one of those "quad Bayer" or
"tetracell" sensors, so you can imagine where this is coming from. You
might speculate that so-called "nonacell" sensors might want multiples
of 3, though I've never had my hands on one of those!
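
To make that concrete, the fix-up I have in mind is nothing more than rounding the line count down to the step the mode requires. A minimal sketch (the function name and step values are made up for illustration, not real CamHelper code):

```cpp
#include <cassert>
#include <cstdint>

// Round a line count down to the step a readout mode requires (2 for
// the quad-Bayer case, 3 for a hypothetical "nonacell"). Rounding down
// keeps the programmed exposure no longer than what was requested.
static uint32_t roundToStep(uint32_t lines, uint32_t step)
{
	return lines - lines % step;
}
```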

I suppose we could argue that the ControlInfos need to record and
advertise the "step" that the V4L2 parameters expose - we clearly need
to fix up the exposure correctly before we pass it into the delayed
control. Or at least, it must have the true value when we read it out.

Perhaps the problem with making this function virtual is that we've
never been totally clear what it's for. Maybe it really is supposed to
do nothing but a division, though at that point, shouldn't we remove
it and just do the division? Or perhaps its purpose is to hide away
peculiarities of particular sensors (and modes), on the grounds that
they're going to be difficult to predict in advance for all future
devices. That probably sums up how I've always viewed it.
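
Roughly what I mean is something like this - a minimal sketch with
made-up class names, not the real CamHelper interface:

```cpp
#include <cassert>
#include <cstdint>

// The base helper does nothing but the division; a per-sensor helper
// overrides it to hide a quirk of one of its readout modes.
class HelperBase {
public:
	explicit HelperBase(double lineLengthUs) : lineLengthUs_(lineLengthUs) {}
	virtual ~HelperBase() = default;

	// Plain conversion: exposure time in microseconds to whole lines.
	virtual uint32_t exposureLines(double exposureUs) const
	{
		return static_cast<uint32_t>(exposureUs / lineLengthUs_ + 0.5);
	}

protected:
	double lineLengthUs_;
};

// A quad-Bayer sensor whose binned readout only accepts an even
// number of exposure lines.
class HelperQuad : public HelperBase {
public:
	using HelperBase::HelperBase;

	uint32_t exposureLines(double exposureUs) const override
	{
		return HelperBase::exposureLines(exposureUs) & ~1u;
	}
};
```

The caller never needs to know which sensor rounded what - which is
exactly the property I'd like to keep.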

I already treat the GainCode function rather like this. For example,
where different modes have different minimum gains, or even different
sensitivities, I soak that up in the virtual GainCode method, so it's
not "give me the code for this gain", it's "give me the code I need to
program for this gain, even if it's not really this exact gain". Here
I just want to give developers the *option* to get minimum-gain
images if they're prepared to alter the exposure deliberately. I won't
argue that a solution involving per-mode sensitivities isn't better,
but we'd need to decide how that interacts with the AEC/AGC, with
applications and so on. I feel like that might be another one of those
rather long discussions...
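
For what it's worth, here's the shape of that GainCode trick as a
sketch - the encoding formula (256 - 256 / gain, IMX219-style) and the
per-mode minimum are illustrative values only, not any particular
sensor's real numbers:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// "Give me the code I need to program for this gain, even if it's not
// really this exact gain": clamp to the mode's minimum gain before
// encoding, so the caller never programs an out-of-range value.
static uint32_t gainCodeForMode(double gain, double modeMinGain)
{
	gain = std::max(gain, modeMinGain);
	return static_cast<uint32_t>(256.0 - 256.0 / gain + 0.5);
}
```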

Thanks!
David

On Wed, 28 Apr 2021 at 15:12, Naushir Patuck <naush at raspberrypi.com> wrote:
>
>
>
> On Wed, 28 Apr 2021 at 15:05, Laurent Pinchart <laurent.pinchart at ideasonboard.com> wrote:
>>
>> Hi Naush,
>>
>> On Wed, Apr 28, 2021 at 02:40:26PM +0100, Naushir Patuck wrote:
>> > On Wed, 28 Apr 2021 at 14:33, Laurent Pinchart wrote:
>> > > On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
>> > > > On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
>> > > > > On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
>> > > > > > On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
>> > > > > > > On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
>> > > > > > > > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
>> > > > > > > > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
>> > > > > > > > > > This allows derived classes to override them if they have any special
>> > > > > > > > > > behaviours to implement. For instance if a particular camera mode
>> > > > > > > > > > produces a different signal level to other modes, you might choose to
>> > > > > > > > > > address that in the gain or exposure methods.
>> > > > > > > > >
>> > > > > > > > > For my information, would you be able to give an example of how this
>> > > > > > > > > would be used ? What kind of particular camera mode would produce a
>> > > > > > > > > different signal level, is this related to binning, or something else ?
>> > > > > > > >
>> > > > > > > > Yes, so the sensor I'm looking at produces twice the signal level in
>> > > > > > > > the binning modes that it does in the full resolution mode. I think
>> > > > > > > > this difference would be awkward for applications which means we have
>> > > > > > > > to hide it somewhere.
>> > > > > > > >
>> > > > > > > > One of my earlier emails talked a little bit about hiding this in the
>> > > > > > > > AGC. You'd have to know a "base gain" for every mode (so 1 for full
>> > > > > > > > res, 2 for binned), and then have some process where we have public
>> > > > > > > > exposure/gain numbers (which are the same for all modes) and there's a
>> > > > > > > > process of conversion into and out of the AGC and IPA which deal with
>> > > > > > > > real exposure/gain values. But I'm really not sure I want to go there,
>> > > > > > > > at least right now!
>> > > > > > > >
>> > > > > > > > I think this is a pragmatic alternative. You can already hide the
>> > > > > > > > difference by changing the CamHelper's Gain and GainCode functions, to
>> > > > > > > > apply the magic extra factor of 2 when necessary. This change here
>> > > > > > > > makes it possible to apply that factor in the exposure instead - you
>> > > > > > > > could imagine someone wanting the lowest noise possible and accepting
>> > > > > > > > that the real exposure will be double (I wouldn't want to double
>> > > > > > > > exposures without people explicitly making that choice). And finally,
>> > > > > > > > this change doesn't make it any more difficult to do the compensation
>> > > > > > > > in the AGC at a later date, should we wish.
>> > > > > > > >
>> > > > > > > > In the CamHelper for my new sensor, I even make it easy to mix the
>> > > > > > > > factor of 2 across both gain and exposure, so you could, for example,
>> > > > > > > > have sqrt(2) in each.
>> > > > > > > >
>> > > > > > > > That got rather long... I hope it was understandable!
>> > > > > > >
>> > > > > > > Thanks for the information. There may be something that I don't get
>> > > > > > > though, as I don't see why the exposure needs to be affected. The
>> > > > > > > exposure time has a big influence on motion blur, so it seems to me that
>> > > > > > > we shouldn't cheat here. The gain seems to be a better place to hide
>> > > > > > > this sensor feature. I wonder if we should consider moving from gain to
>> > > > > > > sensor sensitivity.
>> > > > > >
>> > > > > > You're right, exposure doesn't *need* to be affected, and indeed my
>> > > > > > default position is to handle the difference using gain instead. The
>> > > > > > intention is that people are *able* to keep the gain to a minimum if
>> > > > > > they want to, at the cost of changing the exposure - but this has to
>> > > > > > be a deliberate choice.
>> > > > > >
>> > > > > > I agree the "sensitivity" is a good idea, and indeed a much better
>> > > > > > term than I used above, which was "base gain".
>> > > > > >
>> > > > > > Note that sensitivity is a per-mode thing here. I think it's a good
>> > > > > > solution but has some consequences:
>> > > > > >
>> > > > > > - If an application wants to change camera mode and set the exposure
>> > > > > > they're going to have to deal with per-mode sensitivities and adapt
>> > > > > > their numbers accordingly. Do we like this? I'm always very nervous of
>> > > > > > complicating an application's life!
>> > > > > >
>> > > > > > - You could try to handle per-mode sensitivities either outside the
>> > > > > > AGC (effectively that's what this allows) or within the AGC.
>> > > > > > Whenever the camera mode changes you'd have to adjust
>> > > > > > the state variables according to the ratio of old_sensitivity /
>> > > > > > new_sensitivity. You'd probably have to use a different exposure
>> > > > > > profile; possibly it needs to be set by the application - or could we
>> > > > > > do something more automatic? The implementation would also depend on
>> > > > > > whether we want applications to have to pay attention to the "per-mode
>> > > > > > sensitivities" or not.
>> > > > > >
>> > > > > > So overall, I'm in several minds about going this "per-mode
>> > > > > > sensitivity" route and the work it requires on the AGC, as well as the
>> > > > > > implications for applications. In the meantime, making the exposure
>> > > > > > methods virtual allows these sensors to work and doesn't make a later
>> > > > > > step towards per-mode sensitivities any more difficult. You would
>> > > > > > simply do that work and then delete your virtual exposure methods.
>> > > > >
>> > > > > I'm sorry, but I still don't see why it requires making the below
>> > > > > functions virtual. They convert between exposure expressed in lines and
>> > > > > exposure expressed as a time, why would that need to be
>> > > > > sensor-specific? I guess I need to see an actual use case first.
>> > > >
>> > > > This is entirely tangential to David's usage, but I also need this method to
>> > > > be virtual so I can override this for imx477 ultra long exposure modes. In
>> > > > this mode, we have some additional scaling to apply to the standard
>> > > > equation.
>> > >
>> > > I assume that in that case the exposure_lines argument, and the return
>> > > value of ExposureLines(), won't be expressed as a number of lines,
>> > > right?
>> >
>> > I have not worked through the details yet, but yes, this is possibly going
>> > to be the case with the expected implementation.
>> >
>> > > Given that sensors also often have a fine exposure time expressed in
>> > > pixels, in addition to the coarse exposure time expressed in lines,
>> > > could we create an Exposure structure that could model exposure settings
>> > > (coarse in lines, fine in pixels, plus other parameters for long
>> > > exposures) in a way that wouldn't be sensor-specific ?
>> >
>> > Perhaps that might be a way to go that could be generalised for all sensors.
>>
>> Would you be able to check if this could be achieved for the different
>> sensors that you need to support ?
>
>
> Yes, I'll have a look and see what we can come up with.
>
>>
>>
>> > I do still wonder about global shutter sensors though.  I've not worked with
>> > many of them, but the ones I have used express exposure in terms entirely
>> > unrelated to HTS/VTS - but maybe that's something to worry about when we
>> > come across it :-)
>>
>> Agreed.
>>
>> > > > > > Possibly I've repeated myself rather a lot there - but maybe I've
>> > > > > > explained it better this time? :)
>> > > > > >
>> > > > > > > > > > Signed-off-by: David Plowman <david.plowman at raspberrypi.com>
>> > > > > > > > > > Reviewed-by: Naushir Patuck <naush at raspberrypi.com>
>> > > > > > > > >
>> > > > > > > > > Reviewed-by: Laurent Pinchart <laurent.pinchart at ideasonboard.com>
>> > > > > > > > >
>> > > > > > > > > > ---
>> > > > > > > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
>> > > > > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
>> > > > > > > > > >
>> > > > > > > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
>> > > > > > > > > > index 1b2d6eec..4053a870 100644
>> > > > > > > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
>> > > > > > > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
>> > > > > > > > > > @@ -66,8 +66,8 @@ public:
>> > > > > > > > > >       virtual ~CamHelper();
>> > > > > > > > > >       void SetCameraMode(const CameraMode &mode);
>> > > > > > > > > >       MdParser &Parser() const { return *parser_; }
>> > > > > > > > > > -     uint32_t ExposureLines(double exposure_us) const;
>> > > > > > > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
>> > > > > > > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
>> > > > > > > > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
>> > > > > > > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
>> > > > > > > > > >                                     double maxFrameDuration) const;
>> > > > > > > > > >       virtual uint32_t GainCode(double gain) const = 0;
>>
>> --
>> Regards,
>>
>> Laurent Pinchart

