[libcamera-devel] libcamera transform control

Naushir Patuck naush at raspberrypi.com
Mon Jul 13 10:16:19 CEST 2020


Hi Laurent,

David is away this week, so I'll reply with some thoughts and comments
in his absence.

On Sun, 12 Jul 2020 at 00:28, Laurent Pinchart
<laurent.pinchart at ideasonboard.com> wrote:
>
> Hi David,
>
> On Thu, Jul 09, 2020 at 09:13:56AM +0100, David Plowman wrote:
> > Replying to my own post to add comments from Naush and Jacopo!
> >
> > On Mon, 6 Jul 2020 at 14:32, David Plowman wrote:
> > >
> > > Hi everyone
> > >
> > > This email takes the previous discussion of image transformations and
> > > moves it into a thread of its own.
> > >
> > > The problem at hand is how to specify the 2D planar transformation to
> > > be applied to a camera image. These are well known to us as flips,
> > > rotations and so on (the usual eight members of dihedral group D4).
> > >
> > > Laurent's suggestion was to apply them as standard libcamera controls.
> > > Some platforms may be able to apply them on a per-frame basis; others
> > > will not (they will require the transform to be set before the camera
> > > is started - once this mechanism exists). (Is there any way to signal a
> > > platform's capabilities in this regard?)
>
> Not at the moment, but I think we should add that information to the
> ControlInfo class. It should be fairly straightforward, we can add a
> flags field, with a single field defined for now to tell whether the
> control can be modified per frame or has to be set before starting the
> camera.

That would work.  However, I struggle to see what would require an
application to change the orientation on a per-frame basis.  In
particular, if the change in orientation requires a transpose
operation, the buffer size might also change due to alignment
constraints, which would almost certainly force a set of buffer
re-allocations unless you pre-allocate for the largest possible size.
Of course, there may well be other controls that can only be set at
startup (I cannot think of any specific ones right now).
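
Something like the sketch below would probably do it - the names are
purely illustrative, not a proposal for the final API:

#include <cstdint>

/*
 * Sketch only: a flags field on ControlInfo. PerFrame set means the
 * control may be updated through a Request on every frame; unset means
 * it must be set before the camera is started.
 */
enum ControlInfoFlag : uint32_t {
        ControlPerFrame = (1 << 0),
};

struct ControlInfoSketch {
        uint32_t flags = 0;

        bool perFrame() const
        {
                return (flags & ControlPerFrame) != 0;
        }
};

The pipeline handler would set the flag when building its control info
map, and request validation could then reject per-frame changes to any
control without it.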

>
> > > For the time being we are ignoring the possibility that some platforms
> > > might be able to apply different transformations to different streams
> > > (I believe there is in any case no mechanism for per-stream controls).
>
> Indeed. That's something we may add in the future, but let's try to
> avoid it if possible :-)
>
> > > Note also that raw streams will always have the orientation with which
> > > they come out of the camera, though again I don't believe we have a
> > > convenient way for a platform to indicate whether this includes the
> > > transform or not  (perhaps a stream could indicate its transform
> > > relative to the requested transform?).
> > >
> > > We propose to represent the transforms as "int" controls, in fact
> > > being members of an enum with eight entries. Further, we suggest that
> > > the first four entries are "identity", "hflip", "vflip", "h+vflip",
> > > meaning the control's maximum value can indicate whether transforms
> > > that involve a transposition are excluded.
>
> An enum sounds good. How would you name the next four entries ? :-)
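
Purely as an illustration of the first half - the transposing entries
would follow, and their naming is indeed still open:

/*
 * Illustrative only: the transform control as an enum-valued int.
 * The first four entries involve no transposition, so a platform that
 * cannot transpose would advertise a ControlInfo maximum of
 * TransformHVFlip.
 */
enum Transform {
        TransformIdentity = 0,
        TransformHFlip = 1,
        TransformVFlip = 2,
        TransformHVFlip = 3,    /* equivalent to a 180 degree rotation */
        /*
         * The four transpose-involving members of D4 (transpose, the
         * 90 and 270 degree rotations, anti-transpose) would go here.
         */
};
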
>
> > > Naush:
> > >
> > > ... the pipeline handlers need to maintain a list of
> > > controls supported, like in include/libcamera/ipa/raspberrypi.h
> > > (RPiControls).  This then gets exported to the CameraData base class
> > > member controlInfo_.  A Request will not allow setting a
> > > libcamera::control that is not listed in CameraData::controlInfo_
> >
> > Jacopo:
> >
> > That's correct. It seems to me from this discussion that a ControlInfo
> > could be augmented with information about a control being supported
> > per-frame or just at configuration time (or else...)
>
> Seems we agree :-)
>
> Please also note that include/libcamera/ipa/raspberrypi.h lists controls
> that are supported by the IPA. If there are controls that are fully
> implemented on the pipeline handler side without involving the IPA (I'm
> not suggesting this is or is not the case here, it's a general remark),
> they can be added to the control info map by the pipeline handler.
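
Understood.  A rough sketch of what that might look like when a
pipeline handler builds its control info map (the controls and ranges
below are just examples, and the exact map construction may differ
from the real API):

#include <libcamera/control_ids.h>
#include <libcamera/controls.h>

using namespace libcamera;

/*
 * Sketch only: a pipeline handler assembling the controls it exposes.
 * Controls handled by the IPA and controls handled purely by the
 * pipeline handler both end up in the same map, which is then assigned
 * to CameraData::controlInfo_ (assignment not shown).
 */
static ControlInfoMap::Map buildControls()
{
        ControlInfoMap::Map ctrls;

        /* A control forwarded to the IPA. */
        ctrls.emplace(&controls::Brightness, ControlInfo(-1.0f, 1.0f, 0.0f));

        /* A control implemented entirely in the pipeline handler. */
        ctrls.emplace(&controls::Contrast, ControlInfo(0.0f, 2.0f, 1.0f));

        return ctrls;
}
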
>
> > > Notes on the Raspberry Pi implementation:
> > > Only transpose-less transforms will be allowed, and only before the
>
> What do you mean by transpose-less transforms ?

Anything that involves only a combination of horizontal and vertical
flips would be a transpose-less transform, e.g. rot180, or rot180 +
mirror.  Any 90 or 270 degree rotation would require a transpose
operation, and we cannot do that with the hardware block.

>
> > > camera is started. We will support them by setting the corresponding
> > > control bits in the camera driver (so raw streams will include the
> > > transform).
>
> Does this mean configuring h/v flip at the camera sensor level ? I
> assume that will be the most usual case. I wonder if offline ISPs
> typically support h/v flip.

The RPi ISP does support it.  However, we find it much easier overall
if the flips occur at the source.
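
In practice that just means the standard V4L2 hflip/vflip controls on
the sensor subdevice - a minimal sketch in plain V4L2 (the device node
path below is illustrative only):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <linux/videodev2.h>

/*
 * Sketch only: apply h/v flip at the source via the sensor's V4L2
 * flip controls, so that raw frames already include the transform.
 */
static int setSensorFlips(const char *subdev, bool hflip, bool vflip)
{
        int fd = open(subdev, O_RDWR);
        if (fd < 0)
                return -1;

        struct v4l2_control ctrl = {};

        ctrl.id = V4L2_CID_HFLIP;
        ctrl.value = hflip ? 1 : 0;
        ioctl(fd, VIDIOC_S_CTRL, &ctrl);

        ctrl.id = V4L2_CID_VFLIP;
        ctrl.value = vflip ? 1 : 0;
        ioctl(fd, VIDIOC_S_CTRL, &ctrl);

        close(fd);
        return 0;
}

e.g. setSensorFlips("/dev/v4l-subdev0", true, true) for a 180 degree
rotation.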

>
> > > This means the ALSC (Auto Lens Shading) algorithm will
> > > have to know whether camera images are oriented differently from the
> > > calibration images.
>
> How do you think we should convey the orientation of the calibration
> data ?

We could have a field in our tuning file specifying the orientation of
the calibration table - or always enforce that calibration must happen
at rot 0.  Then any orientation change can be passed into the IPA (via
controls?) so that the calibration table can be flipped appropriately.
Personally, I prefer to have all calibration data fixed at rot 0, but
David may have other opinions :)  However, all this is up to vendors
to decide for themselves, I suppose.
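
If we do fix the calibration at rot 0, the table manipulation in the
IPA is simple enough.  A sketch, assuming the table is a row-major
width x height grid of gains:

#include <algorithm>
#include <vector>

/*
 * Sketch only: re-orient a lens shading table calibrated at rot 0 to
 * match the h/v flips applied to the camera image.
 */
static void reorientTable(std::vector<double> &table, unsigned int width,
                          unsigned int height, bool hflip, bool vflip)
{
        if (hflip) {
                /* Reverse each row of the table. */
                for (unsigned int y = 0; y < height; y++)
                        std::reverse(table.begin() + y * width,
                                     table.begin() + (y + 1) * width);
        }

        if (vflip) {
                /* Swap each row in the top half with its mirror row. */
                for (unsigned int y = 0; y < height / 2; y++)
                        std::swap_ranges(table.begin() + y * width,
                                         table.begin() + (y + 1) * width,
                                         table.begin() + (height - 1 - y) * width);
        }
}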

>
> > > Thoughts?
>
> --
> Regards,
>
> Laurent Pinchart

Regards,
Naush

