[libcamera-devel] Debayering

paul.elder at ideasonboard.com
Tue Nov 16 10:18:19 CET 2021


Hello,

On Mon, Nov 15, 2021 at 12:16:39PM +0200, Laurent Pinchart wrote:
> Hi Dorota,
> 
> On Sat, Nov 13, 2021 at 11:56:04AM +0100, Dorota Czaplejewicz wrote:
> > On Fri, 12 Nov 2021 18:25:55 +0900 paul.elder at ideasonboard.com wrote:
> > > On Fri, Nov 12, 2021 at 02:09:43AM +0200, Laurent Pinchart wrote:
> > > > On Thu, Nov 11, 2021 at 11:56:07AM +0000, Kieran Bingham wrote:  
> > > > > Quoting Jacopo Mondi (2021-11-10 20:32:29)  
> > > > > > On Wed, Nov 10, 2021 at 02:01:23PM +0100, Dorota Czaplejewicz wrote:  
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'm making sure that Librem 5 cameras can be easily used via
> > > > > > > libcamera. One of the steps that are needed for the "easily" part is
> > > > > > > having a builtin debayering mechanism.
> > > > > > >
> > > > > > > Before I jump head-in, I wanted to ask for some guidance about where
> > > > > > > to place this support, and how to expose it to the consumer.
> > > > > > >
> > > > > > > For a simplest implementation, supported cameras could add the RGB
> > > > > > > format to the list of supported formats, and transparently convert
> > > > > > > the bayer data to RGB if configured for RGB. That opens up a
> > > > > > > question about the opt-in mechanism, but is otherwise simple enough.  
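
A transparent conversion along those lines can start very small. The sketch
below is hypothetical code, not libcamera API: debayerRGGB is an invented
helper that replicates each 2x2 RGGB cell into four RGB pixels. A real
implementation would at least interpolate bilinearly and support all four
CFA orders.

```cpp
#include <cstdint>
#include <vector>

/*
 * Minimal nearest-neighbour debayering sketch for an 8-bit RGGB
 * pattern. Hypothetical free function, for illustration only.
 */
std::vector<uint8_t> debayerRGGB(const uint8_t *raw, unsigned int width,
				 unsigned int height)
{
	std::vector<uint8_t> rgb(width * height * 3);

	for (unsigned int y = 0; y < height; y += 2) {
		for (unsigned int x = 0; x < width; x += 2) {
			/* One 2x2 CFA cell: R G / G B */
			uint8_t r = raw[y * width + x];
			uint8_t g = (raw[y * width + x + 1] +
				     raw[(y + 1) * width + x]) / 2;
			uint8_t b = raw[(y + 1) * width + x + 1];

			/* Replicate the cell's colour into all four pixels. */
			for (unsigned int dy = 0; dy < 2; dy++) {
				for (unsigned int dx = 0; dx < 2; dx++) {
					uint8_t *out = &rgb[((y + dy) * width + x + dx) * 3];
					out[0] = r;
					out[1] = g;
					out[2] = b;
				}
			}
		}
	}

	return rgb;
}
```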
> > > > > 
> > > > > I think this should be 'opt in' only by the platform (not the 'camera'
> > > > > or 'sensor'). To me that means it's a pipeline handler decision as to
> >
> > What is the definition of "platform", "camera" and "sensor" used here?
> > Is platform=pipeline?
> 
> "Platform" more or less means the SoC type. It's roughly equivalent to
> the pipeline handler, except for the simple pipeline handler that
> supports multiple platforms. As some of them don't have a GPU, we can't
> assume that all platforms supported by the simple pipeline handler would
> use the GPU-based ISP.
> 
> "Sensor" is the imaging sensor (S5K3L6XX or YACG4D0C9SHC in your case).
> 
> "Camera" is the combination of an imaging sensor and all other related
> devices (ISP, GPU-based ISP, ...) that together implement the ability to
> capture and process frames.
> 
> > > > > whether it exposes a software/GPU ISP. (a CPU ISP should only be
> > > > > considered a shortcut to GPU acceleration later, I believe).
> > > > > 
> > > > > > > I intend to start by adding a CPU-based debayering implementation
> > > > > > > for the sake of building it up quickly. Would a simple
> > > > > > > implementation like the one I outlined above be suitable for upstreaming?  
> > > > > 
> > > > > That would depend on how it's architected. I don't think we'd like
> > > > > 'quick hacks' in place internally to be merged.
> >
> > "Quick hacks" is not what I intend to merge. Rather, I found the
> > libcamera codebase difficult to learn, and I don't see the point in
> 
> Is there anything in particular that would have made it easier ?
> 
> > trying to architect any significant part of the solution from the
> > outset. I'd rather have a quick first step to confirm the sanity of
> > the approach, and then iterate on that.
> 
> No disagreement there, it's best to work in an iterative manner instead
> of spending a large amount of time designing something that won't fit in
> the end.
> 
> > > > > Although it has an 'optional' SimpleConverter so it's also this:
> >
> > I'll take a look at this.
> >
> > > The current interface design that Siyuan has has statistics and image
> > > processing as separate components, so this should be supported
> > > out-of-the-box. We'll have to solidify the requirements of the
> > > interface, though, to mandate that soft ISP implementations make sure
> > > that those two are separate and don't have dependencies under-the-hood.
> > 
> > Is that design available anywhere?
> 
> The latest version I know of is available at
> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/025580.html.
> It's work in progress, and nothing is set in stone, so you can comment
> on anything that doesn't feel right, or propose alternatives. Paul is
> following the work more closely than I do.

Yeah, that's the latest design that Siyuan has sent to the public list.
There's a newish (?) version with some comments on how the functions are
meant to be used, and with the parameters split out from the statistics.

The gist is that there are separate functions for doing image processing
and for doing statistics calculations, and there are corresponding
Signals for callbacks on completion. There are also corresponding
functions for allocating frame buffers (like what hardware ISPs have via
V4L2), and for allocating statistics buffers. We're also moving in the
direction of having a wrapper class around the statistics and parameters
structs, so that different software ISPs can implement them in an
optimal way.
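
As a rough illustration of that split (hypothetical names throughout, with
plain std::function callbacks standing in for libcamera's Signal class):

```cpp
#include <functional>
#include <vector>

/*
 * Hypothetical sketch of a soft ISP interface in which image processing
 * and statistics calculation are separate operations, each reporting
 * completion through its own callback. Illustrative only; the real
 * proposal uses libcamera's Signal class and V4L2-style buffers.
 */
struct StatsBuffer {
	/* e.g. per-channel means for AWB/AE; illustrative only */
	std::vector<float> channelMeans;
};

class SoftIsp
{
public:
	/* Separate completion callbacks, mirroring the Signal-based design. */
	std::function<void(int frame)> processComplete;
	std::function<void(int frame, const StatsBuffer &stats)> statsComplete;

	void process(int frame)
	{
		/* ... debayer / process the frame here ... */
		if (processComplete)
			processComplete(frame);
	}

	void calculateStats(int frame)
	{
		/* ... compute statistics over the frame here ... */
		StatsBuffer stats{ { 0.5f, 0.5f, 0.5f } };
		if (statsComplete)
			statsComplete(frame, stats);
	}
};
```

The point of keeping the two operations (and their callbacks) independent is
that a pipeline handler can run them on different buffers, or skip one
entirely, without hidden ordering dependencies between them.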


Paul

> 
> > > > > > Can I ask how you have envisioned your capture pipeline ? I
> > > > > > assume you have a raw sensor connected to a pipeline handler that has
> > > > > > no ISP. What component would be driving the adjustment of the sensor
> > > > > > parameters in such an architecture ?  
> > 
> > I don't have enough familiarity with libcamera for any concrete ideas,
> > so I only made vague guesses before asking the list for guidance.
> >
> > Now that I received the guidance, I will start with something like the
> > SimpleConverter (assuming that it does what I expect it to do), and
> > not concern myself with parameter adjustment.
> > 
> > Once I dig in and have more questions, I'll send updates.
> 
> Looking forward to that :-)
> 
> -- 
> Regards,
> 
> Laurent Pinchart
