[libcamera-devel] Regression in commit 86a47fdcd97350bd979a4d6b00124330a3b02441

Laurent Pinchart laurent.pinchart at ideasonboard.com
Fri Sep 3 17:48:36 CEST 2021


Hi David,

On Fri, Sep 03, 2021 at 02:21:06PM +0100, David Plowman wrote:
> Hi everyone
> 
> We're having a bit of a problem with this commit ("libcamera:
> framebuffer: Add offset to FrameBuffer::Plane").

I was hoping nobody would notice before I could get the fixes merged,
but... :-)

> We use only single plane buffers in all our code and previously
> FrameBuffers would advertise having only one plane
> (frameBuffer.planes().size() would be 1), and the length would be the
> full length of the plane (all of the Y and U and V pixels). We'd use
> this information to mmap the buffer and give us a pointer. And in
> fact, we'd quite often check that there's only one plane because (for
> example) our codec driver doesn't support mplane formats.
> 
> Since this commit, everything is now showing up as 3 planes, and each
> has a length field covering just part of the buffer, so the first
> "plane" would just be the Y, and so on.
> 
> Now, I can loop through all the planes with the same fd and add up all
> the lengths, and then mmap my buffer as before, but I guess I'm
> wondering why this has changed. After all, it's *not* a multi-plane
> buffer and we want to know, only now it seems harder to tell the
> difference.
> 
> Any thoughts?

Sure.

This all stems from the mess of handling multi-planar formats in V4L2.
We have *colour formats* that are planar (e.g. NV12), and depending on
the device we may have requirements that planes are stored contiguously
in memory. V4L2 evolved with a multi-planar *API* in order to support
discontiguous planes, which introduced multi-planar *V4L2 formats* such
as V4L2_PIX_FMT_NV12M to denote whether the format stores all colour
planes in a single buffer area (a.k.a. buffer plane) or multiple
discontiguous buffer areas. For backward compatibility reasons the new
API is able to support not only single-planar colour formats, but also
multi-planar colour formats stored in a single V4L2 buffer plane (e.g.
V4L2_PIX_FMT_NV12).
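
To illustrate the difference on the V4L2 side (a rough sketch, not
libcamera code, with error handling omitted), querying the negotiated
format through the multi-planar API reports a single buffer plane for
V4L2_PIX_FMT_NV12 and two discontiguous planes for V4L2_PIX_FMT_NV12M:

#include <linux/videodev2.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Print how many buffer planes the driver uses for the current format. */
static void print_planes(int fd)
{
        struct v4l2_format fmt = {};

        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
                return;

        /* 1 plane for V4L2_PIX_FMT_NV12, 2 for V4L2_PIX_FMT_NV12M. */
        printf("%u buffer plane(s)\n", (unsigned)fmt.fmt.pix_mp.num_planes);

        for (unsigned int i = 0; i < fmt.fmt.pix_mp.num_planes; i++)
                printf("  plane %u: %u bytes\n", i,
                       fmt.fmt.pix_mp.plane_fmt[i].sizeimage);
}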

So far we had more or less replicated this mess in libcamera, even
though the FrameBuffer class has supported multiple planes from the
beginning. We have recently merged a patch series that adds offsets to
FrameBuffer planes, to tell at which offset in a dmabuf the data for the
plane is stored. This is required to describe multi-planar
memory-contiguous buffers in a more generic way (and incidentally this is
how DRM operates). As part of that effort, we moved to always exposing
multi-planar formats with multiple planes in FrameBuffer, regardless of
how V4L2 handles them in the backend. I believe this results in a
cleaner API.
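
As a concrete example of what an application now sees (a hedged
sketch; the member names below match the current FrameBuffer::Plane
definition, but double-check against your libcamera version), a YUV420
buffer backed by a single dmabuf exposes three planes that share the
same fd and carry their own offset and length:

#include <iostream>

#include <libcamera/framebuffer.h>

/* Dump the dmabuf fd, offset and length of every plane of a buffer. */
static void dumpPlanes(const libcamera::FrameBuffer *buffer)
{
        for (const libcamera::FrameBuffer::Plane &plane : buffer->planes())
                std::cout << "fd " << plane.fd.fd()
                          << " offset " << plane.offset
                          << " length " << plane.length << std::endl;
}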

Unfortunately, during review I overlooked breakages introduced in
areas where I thought we were safe. I started
investigating a regression in qcam, and it became clear that not only
was qcam not correctly updated, but the implementation in the libcamera
core was not handling conversions between FrameBuffer and V4L2 buffers
correctly. I've posted an RFC series for the latter, with a v2 coming
soon. I'll then proceed to fix qcam as well as other potential
regressions in other components (or maybe I'll do that before posting
v2, to make sure that the proposed changes in the libcamera core work as
expected).

The end result is that applications need to be updated. When it comes to
mapping a frame buffer, we have a class named MappedFrameBuffer that
handles this for internal use cases (it's not part of the public API).
I've been reluctant to expose it to applications, as it's not really the
job of libcamera to handle mapping of buffers. On the other hand, it's
also not really the job of libcamera to allocate buffers, but we have a
FrameBufferAllocator for convenience. Another convenience helper to map
buffers would make sense.
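
In the meantime, something along these lines should do the job for the
single-dmabuf case you describe (a sketch only, assuming all planes
reference the same dmabuf and are laid out contiguously within it,
with the same caveat on member names as above):

#include <algorithm>

#include <sys/mman.h>

#include <libcamera/framebuffer.h>

/*
 * Map a FrameBuffer whose planes all live in a single dmabuf. The
 * mapping spans the whole buffer, up to the end of the last plane.
 * Returns nullptr on failure.
 */
static void *mapFrameBuffer(const libcamera::FrameBuffer *buffer,
                            size_t *length)
{
        const auto &planes = buffer->planes();

        *length = 0;
        for (const libcamera::FrameBuffer::Plane &plane : planes)
                *length = std::max<size_t>(*length,
                                           plane.offset + plane.length);

        void *memory = mmap(nullptr, *length, PROT_READ | PROT_WRITE,
                            MAP_SHARED, planes[0].fd.fd(), 0);
        return memory != MAP_FAILED ? memory : nullptr;
}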

This being said, on some platforms camera buffers may be stored in a
tiled format, in which case mapping them for CPU access would require
platform-specific work. mmap() won't work in that case; we may need to
go through a tiler device, and that's the job of a system memory
allocator. We may in the end only support the frame buffer mapping
helper for buffers allocated through FrameBufferAllocator, and leave
it to applications to map their own buffers if they import them from
somewhere else.

Any thoughts ? :-)

-- 
Regards,

Laurent Pinchart

