[libcamera-devel] Sharpness control proposal

David Plowman david.plowman at raspberrypi.com
Wed Jun 17 18:46:55 CEST 2020


Hi Laurent (oops, forgot to "Reply All" previously...)

I wonder if we're starting to talk ourselves in circles! Maybe we accept that
there are min/default/max values, and allow them to be floats. I think
that covers everyone.

We can recommend a "roughly linear" scale (which probably means the
minimum value should be zero) as this helps application developers by
giving them an intuition as to what the numbers mean, but in the end we
can't enforce that. If a particular device or platform has done something
different, well, folks will just have to cope with it.
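
For illustration, a pipeline handler might advertise such a control
through libcamera's ControlInfo, along these lines (the 16.0 maximum is
just a placeholder for this sketch, not part of the proposal):

    #include <libcamera/controls.h>

    /* Sketch only: 0.0 = no sharpening, 1.0 = the platform's default,
     * and the maximum is whatever the device supports. */
    libcamera::ControlInfo sharpnessInfo(0.0f, 16.0f, 1.0f);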

Do we think everyone can go with that?

Best regards
David

On Wed, 17 Jun 2020 at 02:32, Laurent Pinchart
<laurent.pinchart at ideasonboard.com> wrote:
>
> Hi David,
>
> On Tue, Jun 16, 2020 at 11:53:56AM +0100, David Plowman wrote:
> > On Tue, 16 Jun 2020 at 04:13, Laurent Pinchart wrote:
> > > On Mon, Jun 15, 2020 at 12:20:15PM +0100, David Plowman wrote:
> > > > Hi
> > > >
> > > > We're busy trying to produce libcamera-based alternatives to existing
> > > > camera applications. One feature they quite often have is the ability to let
> > > > the user change the image sharpness.
> > > >
> > > > I note that V4L2 has a sharpness control, wherein the "minimum value"
> > > > means no sharpening, and "larger values increase the amount of
> > > > sharpening". Slightly in line with this, I'd like to suggest:
> > > >
> > > > A sharpening control taking a float value, where:
> > > > * A value of 0.0 means no sharpening.
> > > > * A value of 1.0 applies a "reasonable" default level of sharpening.
> > > > * We can stipulate some fairly arbitrary maximum.
> > >
> > > Do you think we need floating point values for this, or would an integer
> > > give enough precision ?
> >
> > I would agree that a float offers more precision than we need. On the
> > other hand, it saves us from defining something that might have less
> > precision than some future pipeline handler wants, and it also has a
> > natural interpretation - you don't have to think about what the scale
> > really is.
> >
> > > > Beyond this, we could recommend that the value here is loosely
> > > > "proportional" to the amount of sharpening applied. (For the purposes
> > > > of this definition, we might refer to the amount of sharpening as
> > > > being the difference between a sharpened and unsharpened image.)
> > > >
> > > > What do people think? I feel there is some benefit in trying to be a
> > > > bit more prescriptive about what the control means; otherwise it
> > > > becomes more difficult for a cross-platform application to know how to
> > > > use it.
> > >
> > > Overall I think your proposal is good. I also would prefer being more
> > > prescriptive if possible. As libcamera has support for reporting default
> > > values, we may want to specify a fixed maximum value here, and let the
> > > default be device-specific. Or do you think we could specify both a
> > > fixed default (1.0) and a fixed maximum ?
> >
> > So I imagine two places where the sharpness value gets interpreted.
> >
> > Firstly, there's the application. This may present a GUI with a slider
> > returning (for example) values from 0 to 100, where 0 = no sharpening,
> > 100 = some very large amount and maybe 50 = "default".
> >
> > The application would have to transform this into the libcamera
> > parameter. It will have to make a decision on how the 0 to 50 range
> > maps onto 0.0 to 1.0 (or whatever), and how 50 to 100 maps onto
> > 1.0 to "maximum".
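
As a sketch of that mapping, assuming the application has already
retrieved the control's maximum (all names here are illustrative):

    /* Piecewise-linear: slider 0..50 covers 0.0..1.0, and 50..100
     * covers 1.0..max. */
    float sliderToSharpness(int slider, float max)
    {
            if (slider <= 50)
                    return slider / 50.0f;

            return 1.0f + (slider - 50) / 50.0f * (max - 1.0f);
    }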
>
> In that use case, from a user point of view, would it make sense for the
> slider to be linear (whatever linear means for the perceived
> sharpness...) ? If so, and if we document, as you proposed, that "the
> value [should be] loosely "proportional" to the amount of sharpening
> applied" (which I think is a good idea too), that would imply that the
> maximum value would need to be 2.0 :-) I wonder how to reconcile
> linearity for both the slider and the control. Maybe we shouldn't ?
>
> > Secondly the pipeline handler or IPA (or someone!) will turn this into
> > something the hardware understands, which is probably beyond our
> > concern.
> >
> > So my feeling is that it wouldn't hurt to allow pipeline handlers to pick
> > their own default values and scale for the sharpness parameter, but
> > I don't really see that it's necessary - why push complexity (OK, only a
> > little complexity!) to the application if the pipeline handler can deal
> > with it anyway?
>
> Agreed, the more we can standardize minimum, maximum and default values,
> the better.
>
> > But certainly, even if an application has to look up default values before
> > generating the sharpness parameter that libcamera wants, it's hardly a
> > big deal, so I don't feel terribly strongly.
> >
> > As regards a maximum value, we could certainly impose one. For
> > example, once you've reached, say, 4x the default, you've got crazy
> > amounts of sharpening, though there's no theoretical limit to how
> > high it could go.
>
> Please however note that it may be an issue for UVC devices.  The UVC
> protocol reports min, max and default values, but doesn't specify if the
> scale is linear. If we require the minimum to be 0.0 and the default to
> be 1.0, then the maximum would vary between different UVC devices, if we
> map the libcamera sharpness control linearly to the UVC sharpness
> parameter. This is an issue only if we want to specify a fixed maximum
> in our control definition, which we may not want to do. One option would
> be to recommend a maximum value, but not make it mandatory. Applications
> would always have to retrieve the maximum dynamically at runtime.
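
As an illustration of that linear mapping, with all names hypothetical
and both maxima retrieved at runtime:

    #include <cmath>
    #include <cstdint>

    /* Pin 0.0 to the UVC minimum and 1.0 to the UVC default; values
     * above 1.0 scale linearly up to the UVC maximum. lcMax is the
     * maximum the libcamera control reports for this device. */
    int32_t sharpnessToUvc(float value, float lcMax,
                           int32_t uvcMin, int32_t uvcDef, int32_t uvcMax)
    {
            if (value <= 1.0f)
                    return uvcMin + std::lround(value * (uvcDef - uvcMin));

            float t = (value - 1.0f) / (lcMax - 1.0f);
            return uvcDef + std::lround(t * (uvcMax - uvcDef));
    }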
>
> > It would be interesting if other pipeline handler / ISP developers had
> > any thoughts on the matter.
>
> I can't agree more. I wish other vendors were as open as Raspberry Pi
> here :-) I however fear we need to pioneer this on our own for some time
> until other vendors decide to jump on the bandwagon.
>
> > > How is this implemented in the Raspberry Pi case, do you use an
> > > unsharp filter, or something different ?
> >
> > Not too sure what I can say, but clearly there will be high-pass
> > filters involved. The filter responses get converted into a certain
> > amount of sharpening and added to the image. But you're right, an
> > unsharp mask is one way of making a high-pass filter, so it can be
> > thought of in these terms.
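
For readers unfamiliar with the technique, a generic unsharp mask (not
the Raspberry Pi implementation) boils down to:

    /* The high-pass component is the image minus a low-pass (blurred)
     * copy; the sharpness value scales how much of it is added back. */
    float sharpenPixel(float orig, float blurred, float sharpness)
    {
            return orig + sharpness * (orig - blurred);
    }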
>
> I understand you can't share all hardware details, no worries.
>
> --
> Regards,
>
> Laurent Pinchart

