[libcamera-devel] Raw streams and video
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Thu Sep 24 15:27:50 CEST 2020
Hello,
On Wed, Sep 23, 2020 at 01:07:40PM +0200, Niklas Söderlund wrote:
> Hi David,
>
> Thanks for bringing this up. I have wrestled with similar questions
> myself for this API, but my thoughts are not yet mature. I will just
> fill in some extra information to hopefully help the discussion.
>
> On 2020-09-23 09:45:43 +0100, David Plowman wrote:
> > Hi everyone
> >
> > I wanted to raise a topic that I've run into recently. One of the
> > features I want to implement is the ability to capture raw video
> > (meaning the raw Bayer frames from the sensor). In this use case we
> > might typically be capturing binned frames from the sensor.
> >
> > Originally I'd thought that all I need is Naush's set of raw stream
> > patches, however, now that they've appeared I'm not so sure.
>
> I think you can do what you wish with the current API, can you not?
> Create a stream with a RAW format and the desired resolution, and
> configure() would have all the information it could possibly have from
> a stream created with generateConfiguration() with any current or
> future role. As you outline below, after generateConfiguration()
> returns a configuration the role no longer exists; roles are only used
> as hints to generate a configuration containing a set of streams with
> default formats and resolutions.
>
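A minimal sketch of that flow with the current API, for reference. The
role is still spelled StillCaptureRaw until [1] lands, and the size and
buffer count below are purely illustrative values, not something taken
from this thread:

#include <memory>

#include <libcamera/camera.h>
#include <libcamera/camera_manager.h>
#include <libcamera/stream.h>

using namespace libcamera;

int main()
{
        CameraManager cm;
        cm.start();

        if (cm.cameras().empty())
                return 1;

        std::shared_ptr<Camera> camera = cm.cameras()[0];
        camera->acquire();

        /* Generate a default configuration for a single raw stream. */
        std::unique_ptr<CameraConfiguration> config =
                camera->generateConfiguration({ StreamRole::StillCaptureRaw });

        /* Tweak it for raw video: a binned mode and more buffers. */
        StreamConfiguration &rawCfg = config->at(0);
        rawCfg.size = { 1640, 1232 };
        rawCfg.bufferCount = 8;

        if (config->validate() != CameraConfiguration::Invalid)
                camera->configure(config.get());

        camera->release();
        cm.stop();

        return 0;
}

validate() may of course adjust the requested values, which is the
trial-and-error loop discussed further down.
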
> > 1. The current StillCaptureRaw role seems inappropriate. It forces the
> > use of the full sensor resolution (I realise this is something I
> > wanted at the time!) and requests a small number of buffers.
> >
> > 2. I wonder whether it would be appropriate to introduce a "VideoRaw"
> > role. This could default to a larger number of buffers and not force
> > the maximum resolution. In fact, we would need to treat its resolution
> > quite differently - it would have no effect on selecting a resolution
> > (other streams would determine that), but would have its details
> > filled in after configure() has chosen the camera mode.
>
> We are still working on defining which roles we should have and, maybe
> more importantly, what applications can expect if they request a role.
> For raw, our current direction is to rename StillCaptureRaw to Raw. The
> idea is that it should be used by generateConfiguration() to create a
> stream in the configuration for the RAW buffers produced by the sensor,
> taking the other requested roles into account for the resolution. A
> patch to this effect [1] has been posted.
>
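With the renamed role, an application wanting raw video alongside a
processed stream could then do something along these lines (a sketch
continuing from the snippet above; the exact spelling of the role
depends on [1]):

/*
 * One StreamConfiguration is generated per requested role, in order.
 * The intent is that the raw stream then reflects the sensor mode
 * selected for the other roles instead of forcing full resolution.
 */
std::unique_ptr<CameraConfiguration> config =
        camera->generateConfiguration({ StreamRole::Raw,
                                        StreamRole::VideoRecording });

/* Both entries can be inspected and tweaked before validate(). */
StreamConfiguration &rawCfg = config->at(0);
StreamConfiguration &videoCfg = config->at(1);
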
> > One minor point is that the stream configurations don't "know" what
> > role they have, is that right? That would make it difficult for
> > configure() to distinguish "StillCaptureRaw" streams from "VideoRaw"
> > streams, so maybe there's something to think about there.
>
> Here is where my concerns and thoughts have been focused. I think our
> current configuration API is a bit blunt and hard for pipelines to
> implement, while still not being able to truly express the capabilities
> and limitations of a combination of streams and the current
> configuration.
>
> I like the idea that we create a CameraConfiguration object that is
> disconnected from the camera and owned by the application, which it can
> tweak and validate, and when it wishes to configure the camera it hands
> over a config object it knows the camera will accept.
>
> I don't like that it's hard for applications to enumerate the
> CameraConfiguration. After generating the configuration object the
> application IMHO should be able to reconfigure it in a more
> discoverable way. Say it changes the format on stream1 to A, which
> makes it impossible for stream2 to also be configured with format A;
> as stream1 has higher priority than stream2, the application should
> then no longer see format A when enumerating formats for stream2.
I'd love that too, but I think this is an unsolvable problem in the
general case. It really would boil down to implementing a generic
constraint resolution mechanism, with the caveat that constraints can be
pipeline-handler specific, and would thus be very hard to express in a
generic and simple way in the API. The same problem exists in V4L2 when
configuring devices (the API uses a trial-and-error approach), and more
recently came up during the design discussions for a generic memory
allocator that needs to satisfy memory allocation constraints of
different hardware devices to enable buffer sharing.
Some level of constraints discoverability is possible, and useful in my
opinion, but I don't think we can reach 100% coverage, especially while
keeping the API towards applications usable, and implementable on the
pipeline handler side. I'm working on a proposal that will expose
capabilities of streams at the stream level, before a configuration is
even generated, which should hopefully help address the issue pointed
out in this mail thread. I'm not sure if it will be enough though. I'm
certainly interested in hearing any ideas on how we could address the
issue better, keeping in mind that the whole configuration API behaviour
has to be consistent, well documented (and thus not too difficult to
document), and easy enough to use and implement.
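For completeness, this is roughly what the trial-and-error flow looks
like today: the application proposes values, validate() lets the
pipeline handler adjust them, and the application inspects the result.
Taking the two-stream configuration from the sketch above, and assuming
the formats:: constants from <libcamera/formats.h> (the format and size
are again only examples):

StreamConfiguration &cfg = config->at(1);
cfg.pixelFormat = formats::NV16;
cfg.size = { 800, 600 };

switch (config->validate()) {
case CameraConfiguration::Valid:
        /* Accepted exactly as requested. */
        break;
case CameraConfiguration::Adjusted:
        /*
         * The pipeline handler changed conflicting parameters. The
         * application inspects cfg.pixelFormat and cfg.size to see
         * what was actually granted and decides whether to accept it.
         */
        break;
case CameraConfiguration::Invalid:
        /* No adjustment could make this combination work. */
        break;
}

This works, but it only tells the application what it can't have after
the fact, which is exactly the discoverability gap described above.
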
> I'm envisioning a console test application that could interactively ask
> its user to configure its camera, in a very simplistic form, for some
> theoretical pipeline which can only support a single RAW stream and an
> ISP that supports 2 outputs but can only downscale:
>
> How many streams do you want (1-3):
> > 3
>
> For stream 0 which of the supported pixel formats do you want:
> 1: RAW
> 2: NV16
> 3: NV12
> > 1
>
> For stream 0 with format RAW which resolution do you want:
> 1: 640x480
> 2: 800x600
> 3: 1024x768
> > 2
>
> For stream 1 which of the supported pixel formats do you want:
> 1: NV16
> 2: NV12
> > 1
>
> For stream 1 with format NV16 which resolution do you want:
> 1: 640x480
> 2: 800x600
> > 2
>
> For stream 2 which of the supported pixel formats do you want:
> 1: NV16
> 2: NV12
> > 2
>
> For stream 2 with format NV12 which resolution do you want:
> 1: 640x480
> 2: 800x600
> > 1
>
> Stream 0 RAW 800x600
> Stream 1 NV16 800x600
> Stream 2 NV12 640x480
>
> I know Laurent is working on a new configuration API, but I'm
> unfamiliar with the specifics. For this reason I have put the
> shortcomings I see in the current interface low on my agenda, as I'm
> sure the API will change with his work.
>
> On a related topic, I'm also not fond of exposing Camera::streams() to
> applications ;-)
>
> > Anyway, apologies if I've started another meta-discussion. Thoughts
> > welcome, as always!
>
> [1] [PATCH] libcamera: stream: Rename StillCaptureRaw to Raw
--
Regards,
Laurent Pinchart