[libcamera-devel] use of v4l2-compat.so

Laurent Pinchart laurent.pinchart at ideasonboard.com
Thu Jun 25 04:11:49 CEST 2020


Hi George,

On Tue, Jun 23, 2020 at 12:50:14PM +0300, George Kiagiadakis wrote:
> On 23/06/2020 00:55, Nicolas Dufresne wrote:
> > On Monday, 22 June 2020 at 15:31 +0300, George Kiagiadakis wrote:
> > <cut>
> > 
> >> For libcamera, the way I see it, it would make sense to go with a
> >> design similar to the ALSA one. libcamera could have backends for its
> >> frontend API, either through plugins or hardcoded in the library. In one
> >> backend it would directly access the device, while in the other one it
> >> would go through PipeWire. This would avoid duplicating the API.
> >>
> >> Please don't hesitate to ask me if you have further questions related to
> >> PipeWire.
> > 
> > As you may know, with libcamera, cameras are no longer single-stream, and
> > frames are requested. But to get the streams configured, the application
> > needs to do some handshake with the "CameraManager" (trying to simplify).
> > So the question is: what does PipeWire offer (if any) to help carry this
> > handshake to the server-side CameraManager?
> 
> There are side-channels available for negotiation, which are used for
> format negotiation, for example.
> 
> Having multiple streams is not an issue. Audio is also typically
> transported using one stream per audio channel. Something similar can be
> done with the camera too. Or alternatively, if this does not work in
> this case, a single stream with multi-planar buffers could be used. Each
> buffer in PipeWire can have multiple planes, where each plane is a
> different fd.

In libcamera's case, each stream can produce buffers made of multiple
planes, so we would need something similar to what is done for audio.

> > The pull-based scheduling is very interesting, since it could probably map
> > well to requests, though is there a mechanism to attach some data to the
> > request in pull mode?
> 
> It could be done, yes. In PipeWire there are no requests, though; it
> works like this:
> 
> When a link is negotiated, both the source and sink are set up to use
> some special memory areas, called IO areas. These are shared memory
> areas between the two processes that implement the source and the sink
> respectively. One of those areas is the "buffers" area, where there is a
> status flag.
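
For concreteness, that "buffers" area boils down to a tiny struct in
the shared memory (roughly, from spa/node/io.h; comments mine):

    struct spa_io_buffers {
            int32_t status;      /* e.g. SPA_STATUS_NEED_DATA,
                                  * SPA_STATUS_HAVE_DATA */
            uint32_t buffer_id;  /* id of the buffer being exchanged */
    };

Both processes map the same memory, so a status write on one side is
visible to the other.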
> 
> The sink will set this flag to NEED_DATA when it is able to consume more
> data; it then wakes up the PipeWire daemon and goes to sleep.
> 
> The PipeWire daemon will then, in the next graph scheduling loop, look
> at this flag and trigger the source to write more data. The source will
> then set the next buffer id in this area (there is normally a fixed
> number of buffers, also in shared memory, referred to by id) and change
> the flag to HAVE_DATA.
> 
> The daemon will then wake up the sink again, which will process the
> data, and so forth...
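
A minimal sketch of the two halves of that cycle, to make it concrete
(struct spa_io_buffers and the SPA_STATUS_* flags are real, from
spa/node/io.h; the helper functions are hypothetical stand-ins for the
actual eventfd signalling and buffer handling):

    #include <stdint.h>
    #include <spa/node/io.h>  /* struct spa_io_buffers, SPA_STATUS_* */

    /* Hypothetical helpers, not PipeWire API. */
    extern void wake_daemon(void);
    extern void wait_for_wakeup(void);
    extern void consume_buffer(uint32_t buffer_id);
    extern uint32_t produce_next_buffer(void);

    /* Sink side: ask for data, sleep, consume what arrived. */
    static void sink_cycle(struct spa_io_buffers *io)
    {
            io->status = SPA_STATUS_NEED_DATA;
            wake_daemon();
            wait_for_wakeup();
            if (io->status == SPA_STATUS_HAVE_DATA)
                    consume_buffer(io->buffer_id);
    }

    /* Source side: run from the daemon's scheduling loop once the
     * sink has flagged NEED_DATA. */
    static void source_cycle(struct spa_io_buffers *io)
    {
            if (io->status == SPA_STATUS_NEED_DATA) {
                    io->buffer_id = produce_next_buffer();
                    io->status = SPA_STATUS_HAVE_DATA;
            }
    }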
> 
> These IO areas are fully extensible, so we could have a special area for
> libcamera request metadata, for example. The sink would then need to
> both write the request metadata and set the NEED_DATA flag in the
> buffers IO area. And the source would just read this data from the
> shared memory.

Transporting requests over a shared memory protocol seems fine to me.
I'm curious to know how PipeWire handles concurrency to avoid race
conditions, but that's a detail for now.
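
To make the idea of an extension area concrete, a request area could
look something like the sketch below. This is purely hypothetical, not
an existing PipeWire or libcamera type; controls are simplified to
integer values:

    #include <stdint.h>

    #define IO_REQUEST_MAX_CONTROLS 16

    /* Hypothetical per-request control setting. */
    struct io_libcamera_control {
            uint32_t id;     /* numeric id of a libcamera control */
            int64_t value;   /* simplified: single integer value */
    };

    /* Hypothetical "request" IO area, living in shared memory next
     * to the buffers area. The sink fills it in before setting
     * NEED_DATA; the source reads it when producing the frame. */
    struct io_libcamera_request {
            uint64_t cookie;      /* correlates request and buffer */
            uint32_t n_controls;  /* valid entries in controls[] */
            struct io_libcamera_control controls[IO_REQUEST_MAX_CONTROLS];
    };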

-- 
Regards,

Laurent Pinchart


More information about the libcamera-devel mailing list