[libcamera-devel] [PATCH v3 1/3] ipa: raspberrypi: Make CamHelper exposure methods virtual

Laurent Pinchart laurent.pinchart at ideasonboard.com
Wed Apr 28 15:32:59 CEST 2021


Hi Naush,

On Wed, Apr 28, 2021 at 01:35:08PM +0100, Naushir Patuck wrote:
> On Wed, 28 Apr 2021 at 13:16, Laurent Pinchart wrote:
> > On Tue, Apr 27, 2021 at 09:22:00AM +0100, David Plowman wrote:
> > > On Tue, 27 Apr 2021 at 07:20, Laurent Pinchart wrote:
> > > > On Sat, Apr 17, 2021 at 09:19:51AM +0100, David Plowman wrote:
> > > > > On Fri, 16 Apr 2021 at 22:56, Laurent Pinchart wrote:
> > > > > > On Wed, Apr 14, 2021 at 11:29:53AM +0100, David Plowman wrote:
> > > > > > > This allows derived classes to override them if they have any special
> > > > > > > behaviours to implement. For instance if a particular camera mode
> > > > > > > produces a different signal level from other modes, you might choose to
> > > > > > > address that in the gain or exposure methods.
> > > > > >
> > > > > > For my information, would you be able to give an example of how this
> > > > > > would be used ? What kind of particular camera mode would produce a
> > > > > > different signal level, is this related to binning, or something else ?
> > > > >
> > > > > Yes, so the sensor I'm looking at produces twice the signal level in
> > > > > the binning modes compared to the full resolution mode. I think
> > > > > this difference would be awkward for applications, which means we
> > > > > have to hide it somewhere.
> > > > >
> > > > > One of my earlier emails talked a little bit about hiding this in the
> > > > > AGC. You'd have to know a "base gain" for every mode (so 1 for full
> > > > > res, 2 for binned), and then keep public exposure/gain numbers (which
> > > > > are the same for all modes) with a process of conversion into and out
> > > > > of the AGC and IPA, which deal in real exposure/gain values. But I'm
> > > > > really not sure I want to go there, at least right now!
> > > > >
> > > > > I think this is a pragmatic alternative. You can already hide the
> > > > > difference by changing the CamHelper's Gain and GainCode functions, to
> > > > > apply the magic extra factor of 2 when necessary. This change here
> > > > > makes it possible to apply that factor in the exposure instead - you
> > > > > could imagine someone wanting the lowest noise possible and accepting
> > > > > that the real exposure will be double (I wouldn't want to double
> > > > > exposures without people explicitly making that choice). And finally,
> > > > > this change doesn't make it any more difficult to do the compensation
> > > > > in the AGC at a later date, should we wish.
> > > > >
> > > > > In the CamHelper for my new sensor, I even make it easy to mix the
> > > > > factor of 2 across both gain and exposure, so you could, for example,
> > > > > have sqrt(2) in each.
> > > > >
> > > > > That got rather long... I hope it was understandable!
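[Editor's note: the per-mode factor David describes could be folded into a derived helper's gain conversions along these lines. This is a minimal, self-contained sketch; the class shape, the 2x binning factor, and the linear gain-code law are all invented for illustration and are not the real CamHelper API.]

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-in for the CamHelper gain interface; the real class
// lives in src/ipa/raspberrypi/cam_helper.hpp.
struct CamHelper {
	virtual ~CamHelper() = default;
	virtual double Gain(uint32_t gainCode) const = 0;
	virtual uint32_t GainCode(double gain) const = 0;
};

// Hypothetical sensor whose binned modes deliver twice the signal of the
// full-resolution mode. The helper hides that by applying a per-mode
// factor in the gain conversions, so the rest of the IPA sees the same
// effective gain numbers in every mode.
class CamHelperExample : public CamHelper {
public:
	explicit CamHelperExample(bool binned)
		: sensitivity_(binned ? 2.0 : 1.0) {}

	// Made-up linear gain law for illustration: code = gain * 16.
	double Gain(uint32_t gainCode) const override
	{
		return gainCode / 16.0 * sensitivity_;
	}

	uint32_t GainCode(double gain) const override
	{
		return static_cast<uint32_t>(gain / sensitivity_ * 16.0);
	}

private:
	double sensitivity_; /* extra signal factor of this mode */
};
```

An effective gain of 4.0 then maps to gain code 64 in the full-resolution mode but only 32 in a binned mode, where the sensor itself contributes the other factor of 2.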
> > > >
> > > > Thanks for the information. There may be something that I don't get
> > > > though, as I don't see why the exposure needs to be affected. The
> > > > exposure time has a big influence on motion blur, so it seems to me that
> > > > we shouldn't cheat here. The gain seems to be a better place to hide
> > > > this sensor feature. I wonder if we should consider moving from gain to
> > > > sensor sensitivity.
> > >
> > > You're right, exposure doesn't *need* to be affected, and indeed my
> > > default position is to handle the difference using gain instead. The
> > > intention is that people are *able* to keep the gain to a minimum if
> > > they want to, at the cost of changing the exposure - but this has to
> > > be a deliberate choice.
> > >
> > > I agree the "sensitivity" is a good idea, and indeed a much better
> > > term than I used above, which was "base gain".
> > >
> > > Note that sensitivity is a per-mode thing here. I think it's a good
> > > solution but has some consequences:
> > >
> > > - If an application wants to change camera mode and set the exposure
> > > they're going to have to deal with per-mode sensitivities and adapt
> > > their numbers accordingly. Do we like this? I'm always very nervous of
> > > complicating an application's life!
> > >
> > > - You could try to handle per-mode sensitivities either outside the
> > > AGC (effectively that's what this change allows) or within the AGC.
> > > Whenever the camera mode changes you'd have to adjust the state
> > > variables according to the ratio of old_sensitivity /
> > > new_sensitivity. You'd probably have to use a different exposure
> > > profile, possibly set by the application - or could we do something
> > > more automatic? The implementation would also depend on whether we
> > > want applications to pay attention to the "per-mode sensitivities"
> > > or not.
> > >
> > > So overall, I'm in several minds about going this "per-mode
> > > sensitivity" route and the work it requires on the AGC, as well as the
> > > implications for applications. In the meantime, making the exposure
> > > methods virtual allows these sensors to work and doesn't make a later
> > > step towards per-mode sensitivities any more difficult. You would
> > > simply do that work and then delete your virtual exposure methods.
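[Editor's note: the in-AGC alternative David sketches could look roughly like this. The names and the structure are hypothetical; nothing of this shape exists in the current AGC code.]

```cpp
#include <cassert>

// Hypothetical AGC state: on a camera mode switch, rescale the gain the
// AGC has settled on by the ratio old_sensitivity / new_sensitivity, so
// that the total signal (and hence image brightness) carries straight
// over into the new mode.
struct AgcState {
	double exposureUs;   /* requested exposure time */
	double analogueGain; /* requested sensor gain */
	double sensitivity;  /* per-mode sensitivity of the current mode */

	void SwitchMode(double newSensitivity)
	{
		analogueGain *= sensitivity / newSensitivity;
		sensitivity = newSensitivity;
	}
};
```

Moving from a full-resolution mode (sensitivity 1.0) to a binned mode (sensitivity 2.0) would then halve the requested gain, leaving the exposure time, and the motion blur it implies, untouched.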
> >
> > I'm sorry, but I still don't see why it requires making the below
> > functions virtual. They convert between exposure expressed in lines and
> > exposure expressed as a time, why would that need to be sensor-specific?
> > I guess I need to see an actual use case first.
> 
> This is entirely tangential to David's usage, but I also need this method to
> be virtual so I can override it for the imx477 ultra long exposure modes. In
> these modes, we have some additional scaling to apply to the standard
> equation.

I assume that in that case the exposure_lines argument, and the return
value of ExposureLines(), won't be expressed as a number of lines, right?
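[Editor's note: for the long-exposure case, the override might look something like this. It is a sketch only: the base-class arithmetic is a simplification of the real CamHelper conversion, the line time is made up, and the scale factor of 16 is invented rather than the imx477's actual value.]

```cpp
#include <cassert>
#include <cstdint>

// Simplified line <-> time conversion; the real implementation is in
// src/ipa/raspberrypi/cam_helper.cpp and uses the mode's line length.
class CamHelper {
public:
	virtual ~CamHelper() = default;
	virtual uint32_t ExposureLines(double exposureUs) const
	{
		return static_cast<uint32_t>(exposureUs / lineLengthUs_);
	}
	virtual double Exposure(uint32_t exposureLines) const /* in us */
	{
		return exposureLines * lineLengthUs_;
	}

protected:
	double lineLengthUs_ = 30.0; /* made-up line time for illustration */
};

// Hypothetical long-exposure helper: the sensor stretches each exposure
// line by a fixed multiplier, so the standard equation gains an extra
// scale factor in both directions.
class CamHelperLongExposure : public CamHelper {
public:
	uint32_t ExposureLines(double exposureUs) const override
	{
		return static_cast<uint32_t>(exposureUs / (lineLengthUs_ * kScale));
	}
	double Exposure(uint32_t exposureLines) const override
	{
		return exposureLines * lineLengthUs_ * kScale;
	}

private:
	static constexpr double kScale = 16.0; /* invented multiplier */
};
```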

Given that sensors also often have a fine exposure time expressed in
pixels, in addition to the coarse exposure time expressed in lines,
could we create an Exposure structure that could model exposure settings
(coarse in lines, fine in pixels, plus other parameters for long
exposures) in a way that wouldn't be sensor-specific?
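[Editor's note: one possible shape for the structure Laurent suggests. The field names and the conversion below are assumptions for the sake of discussion, not a proposed libcamera API.]

```cpp
#include <cassert>
#include <cstdint>

// Sensor-agnostic exposure settings: coarse exposure in lines, fine
// exposure in pixels, plus a multiplier for long-exposure modes.
struct Exposure {
	uint32_t coarseLines = 0;       /* coarse exposure, in lines */
	uint32_t finePixels = 0;        /* fine exposure, in pixels */
	uint32_t longExposureScale = 1; /* extra factor in long-exposure modes */

	// Convert to a time in microseconds, given the mode's line timing.
	double Duration(double lineLengthUs, uint32_t lineLengthPixels) const
	{
		double lines = coarseLines +
			       static_cast<double>(finePixels) / lineLengthPixels;
		return lines * lineLengthUs * longExposureScale;
	}
};
```

Sensor-specific helpers would then only translate this structure to and from register values, while the AGC reasons about it in a uniform way.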

> > > Possibly I've repeated myself rather a lot there - but maybe I've
> > > explained it better this time? :)
> > >
> > > > > > > Signed-off-by: David Plowman <david.plowman at raspberrypi.com>
> > > > > > > Reviewed-by: Naushir Patuck <naush at raspberrypi.com>
> > > > > >
> > > > > > Reviewed-by: Laurent Pinchart <laurent.pinchart at ideasonboard.com>
> > > > > >
> > > > > > > ---
> > > > > > >  src/ipa/raspberrypi/cam_helper.hpp | 4 ++--
> > > > > > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/src/ipa/raspberrypi/cam_helper.hpp b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > index 1b2d6eec..4053a870 100644
> > > > > > > --- a/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > +++ b/src/ipa/raspberrypi/cam_helper.hpp
> > > > > > > @@ -66,8 +66,8 @@ public:
> > > > > > >       virtual ~CamHelper();
> > > > > > >       void SetCameraMode(const CameraMode &mode);
> > > > > > >       MdParser &Parser() const { return *parser_; }
> > > > > > > -     uint32_t ExposureLines(double exposure_us) const;
> > > > > > > -     double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > > +     virtual uint32_t ExposureLines(double exposure_us) const;
> > > > > > > +     virtual double Exposure(uint32_t exposure_lines) const; // in us
> > > > > > >       virtual uint32_t GetVBlanking(double &exposure_us, double minFrameDuration,
> > > > > > >                                     double maxFrameDuration) const;
> > > > > > >       virtual uint32_t GainCode(double gain) const = 0;

-- 
Regards,

Laurent Pinchart

