[libcamera-devel] Software gathering of image statistics for libcamera on Librem 5
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Sun Dec 11 00:37:18 CET 2022
Hello,
On Fri, Dec 09, 2022 at 07:23:49PM +0100, Pavel Machek via libcamera-devel wrote:
> Hi!
>
> > > Anyway, I'm a bit worried about mapping the frames once in
> > > libcamera and a second time in the application. Syscalls are
> > > probably fast enough, but it is kind of ugly.
> >
> > You certainly pay a price for the double mapping. I'm not sure how it
> > compares to the time it takes for the actual computations though.
>
> It really depends on how good the statistics I'll want to gather need
> to be.
There will always be a balance between quality and efficiency. An AWB
algorithm that uses only a few pixels will be faster but will misbehave
more often. That's not an issue as such, although I'd like to see an
implementation where the quality vs. CPU time trade-off could be set
through configuration parameters.
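
As a rough illustration of what I mean (the names below are purely
hypothetical, not an existing libcamera API), the statistics pass could
expose a sampling stride, trading accuracy for CPU time:

    #include <cstdint>

    /* Hypothetical tuning knobs: sample every Nth pixel in each
     * dimension. Larger strides reduce CPU time at the cost of less
     * accurate estimates. */
    struct StatsConfig {
            unsigned int xStride = 8;
            unsigned int yStride = 8;
    };

    struct Stats {
            uint64_t sum = 0;
            uint64_t count = 0;
    };

    void gatherStats(const uint8_t *image, unsigned int width,
                     unsigned int height, unsigned int stride,
                     const StatsConfig &cfg, Stats &stats)
    {
            /* Accumulate a subsampled brightness sum; an AWB pass
             * would do the same per colour channel. */
            for (unsigned int y = 0; y < height; y += cfg.yStride)
                    for (unsigned int x = 0; x < width; x += cfg.xStride) {
                            stats.sum += image[y * stride + x];
                            stats.count++;
                    }
    }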
> > > > All of this is for AEGC, which produces values to be applied on
> > > > the -sensor-. What about AWB? Are you doing software AWB too?
> > > > How are you implementing it, by changing which parameters?
> > >
> > > Yes, millipixels is doing software AWB. I wouldn't call it important
> > > (and I don't know the details), but it is "fast enough". The code is
> > > here:
> > >
> > > https://source.puri.sm/Librem5/millipixels/-/blob/master/quickpreview.c
> >
> > I do see the colourspace conversion and a software debayering
> > implementation there. I guess one could plumb AWB in by adjusting the
> > colour space conversion matrix coefficients, or by piping a colour
> > gain adjustment routine after debayering.
>
> Yes, and we do a kind of AWB there; the other half is in
> process_pipeline.c -- compute_statistics().
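
To make the second option above concrete, a gain stage piped after
debayering could look roughly like the sketch below. This is purely
illustrative; the function is hypothetical, and the gains are assumed
to come from the AWB statistics (e.g. a grey-world estimate):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    /* Scale the R and B channels of interleaved RGB data relative to
     * G. rGain and bGain would come from the statistics pass. */
    void applyColourGains(uint8_t *rgb, size_t pixels, float rGain,
                          float bGain)
    {
            for (size_t i = 0; i < pixels; i++) {
                    rgb[3 * i + 0] = std::min(rgb[3 * i + 0] * rGain, 255.0f);
                    rgb[3 * i + 2] = std::min(rgb[3 * i + 2] * bGain, 255.0f);
            }
    }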
>
> > > > I actually suggest considering the GPU, but seeing a pure
> > > > software implementation running would be nice for comparison.
> > > > However, I would be hesitant to have anything like that upstream
> > > > for the above-mentioned reasons.
> > >
> > > Well, there's no other good place to put camera support :-(. We
> > > shouldn't really implement AE/AF support in each application, and
> > > it would not be acceptable in the kernel.
> >
> > That's a good point.
> >
> > I'm not sure yet how we could model that if not with a module that
> > acts as a soft IPA, but that means double mapping, in the library and
> > in the application at display time (unless your application is smart
> > and does zero-copy rendering).
>
> I guess we can live with double mapping for a while.
>
> And -- I forgot to say this earlier -- debayering is a task that could
> use GPU help, especially if we do it in the context of movie recording.
> But for movie recording, specialized hardware to do the mp4/avi/...
> encoding will be more or less required at resolutions above 1 Mpix,
> anyway.
Not just debayering; the GPU is also very well suited to applying colour
gains, and to performing RGB to YUV conversion.
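
For reference, the per-pixel arithmetic is embarrassingly parallel: with
full-range BT.601 coefficients, each pixel needs only

    Y  =  0.299 * R + 0.587 * G + 0.114 * B
    Cb = -0.169 * R - 0.331 * G + 0.500 * B + 128
    Cr =  0.500 * R - 0.419 * G - 0.081 * B + 128

which maps to a single 3x3 matrix multiply per pixel in a fragment
shader, and per-channel colour gains fold into the same matrix for free.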
While I think the GPU should be used in pretty much any real use case
where an ISP is not available (especially for the Librem 5), I would also
welcome a reference CPU-based ISP implementation in libcamera. This
would be helpful for testing and development of algorithms, as well as
possibly for platform bringup. The way I envision this, we should have
an abstract soft ISP interface, with multiple implementations (CPU and
GPU). It should be designed as a component that pipeline handlers can
easily use.
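
To make that more concrete, here is a minimal sketch of what such an
interface could look like. All the soft ISP names are hypothetical
(nothing like this exists in libcamera today); only StreamConfiguration,
FrameBuffer and Signal are existing libcamera types:

    #include <libcamera/base/signal.h>
    #include <libcamera/framebuffer.h>
    #include <libcamera/stream.h>

    using namespace libcamera;

    /* SoftIspStats is a placeholder for whatever statistics
     * structure we settle on. */
    struct SoftIspStats;

    class SoftIsp
    {
    public:
            virtual ~SoftIsp() = default;

            /* Negotiate the input (raw Bayer) and output (RGB/YUV)
             * formats. */
            virtual int configure(const StreamConfiguration &inputCfg,
                                  const StreamConfiguration &outputCfg) = 0;

            /* Process one frame. Implementations emit statistics
             * through statsReady when they become available. */
            virtual int process(FrameBuffer *input, FrameBuffer *output) = 0;

            Signal<const SoftIspStats &> statsReady;
    };

A CPU implementation would run debayering and statistics in a worker
thread, a GPU implementation would dispatch shaders instead, and
pipeline handlers would only ever see this interface.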
The soft ISP interface should also offer a way to replace some of the
soft processing steps with hardware-based processing. For instance, on
the Librem 5, the hardware can compute statistics. It should be possible
to disable software statistics computation in that case.
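
In the sketch above, that could be as simple as letting the pipeline
handler switch individual steps off (again, hypothetical names):

    /* Hypothetical: the processing steps a soft ISP implementation
     * performs, each of which a pipeline handler may disable when
     * the hardware covers it. */
    enum class SoftIspStep {
            Debayering,
            Statistics,
            ColourGains,
    };

    /* On the Librem 5, where the hardware computes statistics, the
     * pipeline handler would skip the software pass: */
    softIsp->disable(SoftIspStep::Statistics);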
Regarding frame buffer mapping, one point that needs to be taken into
account is that we can't always rely on mmap()ing the dmabuf fds
producing the correct results. Especially for GPU buffers, tiled layouts
are common, and to access the buffer from the CPU we would need to map
it through a tiler unit that offers a linear view. This requires
involving the buffer supplier. We have a partial implementation of this
in the CameraBuffer class (src/android/camera_buffer.h); see for
instance src/android/mm/cros_camera_buffer.cpp, which shows how a custom
mapping API can be used. This mechanism is internal to the HAL
implementation; we would likely need something more generic.
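
For the simple case where the layout is known to be linear, the mapping
itself is just an mmap() of the dmabuf fd, along the lines of the sketch
below (real code would also need to bracket CPU accesses with
DMA_BUF_IOCTL_SYNC for cache coherency):

    #include <sys/mman.h>

    /* Map one plane of a dmabuf for CPU access. Only valid for
     * linear layouts; tiled GPU buffers must go through the
     * supplier's own mapping API instead. */
    void *mapPlane(int dmabufFd, size_t length)
    {
            void *addr = mmap(nullptr, length, PROT_READ | PROT_WRITE,
                              MAP_SHARED, dmabufFd, 0);
            return addr == MAP_FAILED ? nullptr : addr;
    }

Everything beyond that (tiled layouts, cache maintenance) is exactly why
the mapping has to go through an interface that the buffer supplier
implements, as CameraBuffer does for the Android HAL.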
> What I'm saying is that statistics gathering can be done on the CPU in
> a good-enough way.
--
Regards,
Laurent Pinchart