[libcamera-devel] Custom automatic image processing

Laurent Pinchart laurent.pinchart at ideasonboard.com
Tue May 12 21:39:30 CEST 2020


Hi Marco,

On Mon, May 04, 2020 at 07:55:56AM +0200, Marco Felsch wrote:
> On 20-04-30 16:15, Laurent Pinchart wrote:
> > On Thu, Apr 30, 2020 at 11:07:53AM +0200, Marco Felsch wrote:
> > > On 20-04-29 19:33, Laurent Pinchart wrote:
> > > > On Wed, Apr 29, 2020 at 04:49:51PM +0200, Marco Felsch wrote:
> > > > > Hi all,
> > > > > 
> > > > > first of all, I'm very new to this project. I've been aware of it
> > > > > since ELCE 2018 but never got in touch with it until now. A customer
> > > > > of ours wants to implement a custom automatic image processing unit
> > > > > within the GStreamer pipeline. That isn't the right place for it, so
> > > > > we proposed libcamera, but I don't know how much work we'd need to
> > > > > implement it. Here are the facts:
> > > > >   - Platform: imx6
> > > > 
> > > > Which i.MX6 is that ? There are important variations in the camera
> > > > hardware between different SoCs in the i.MX6 family.
> > > 
> > > It's an i.MX6 Dual.
> > 
> > IPUv3 based then, OK. Not supported in libcamera yet, but that would be
> > a nice addition :-)
> 
> Yep, I noticed that only Intel, Qualcomm and Rockchip are currently
> supported. Adding the support shouldn't be a big deal, should it?

We now also support Raspberry Pi and i.MX7 :-) Qualcomm platforms aren't
supported yet (but work is in progress to support raw capture).

Adding support for the i.MX6 shouldn't be a big deal, no. Ideally it
should be added to the simple pipeline handler
(src/libcamera/pipeline/simple), but as the hardware pipeline of the
i.MX6 is a bit more complex, I'm not sure it would be easy. That would
certainly be the first target in any case.
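
For context, pipeline handlers match devices by their media controller
driver name, so the i.MX6 work would largely consist of teaching a
handler about the i.MX6 media driver. A minimal standalone sketch of
that matching step in plain C ("/dev/media0" and the "imx-media" driver
name are assumptions, not confirmed values):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_device_info info;
	int fd = open("/dev/media0", O_RDONLY);

	if (fd < 0)
		return 1;

	/* Query the media device and compare the driver name, the same
	 * criterion a pipeline handler uses to claim a device. */
	if (ioctl(fd, MEDIA_IOC_DEVICE_INFO, &info) < 0)
		return 1;

	printf("media driver: %s\n", info.driver);

	return strcmp(info.driver, "imx-media") ? 1 : 0;
}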

> > > > >   - Camera: AR0237 (I will send a driver in the coming weeks)
> > > > 
> > > > Please make sure to support at least the following features in that
> > > > driver:
> > > > 
> > > > - .get_selection() with the V4L2_SEL_TGT_CROP_BOUNDS,
> > > >   V4L2_SEL_TGT_NATIVE_SIZE and V4L2_SEL_TGT_CROP targets
> > > >   (V4L2_SEL_TGT_NATIVE_SIZE isn't required today but we're considering
> > > >   switching from V4L2_SEL_TGT_CROP_BOUNDS to V4L2_SEL_TGT_NATIVE_SIZE)
> > > > - V4L2_CID_PIXEL_RATE in read-only mode
> > > > - V4L2_CID_HBLANK in read-only mode (read-write is fine too)
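
For illustration, a minimal sketch of those requirements in a sensor
driver follows. This is not the actual AR0237 code: struct ar0237,
to_ar0237(), ar0237_ctrl_ops and the AR0237_* constants are
placeholders, and the 1928x1088 pixel array size is an assumption.

static int ar0237_get_selection(struct v4l2_subdev *sd,
				struct v4l2_subdev_pad_config *cfg,
				struct v4l2_subdev_selection *sel)
{
	struct ar0237 *sensor = to_ar0237(sd);

	switch (sel->target) {
	case V4L2_SEL_TGT_CROP_BOUNDS:
	case V4L2_SEL_TGT_NATIVE_SIZE:
		/* Report the full pixel array for both targets. */
		sel->r.left = 0;
		sel->r.top = 0;
		sel->r.width = 1928;
		sel->r.height = 1088;
		return 0;
	case V4L2_SEL_TGT_CROP:
		/* Report the currently configured analogue crop. */
		sel->r = sensor->crop;
		return 0;
	default:
		return -EINVAL;
	}
}

static void ar0237_init_controls(struct ar0237 *sensor)
{
	struct v4l2_ctrl *ctrl;

	/* PIXEL_RATE and HBLANK are registered read-only. */
	ctrl = v4l2_ctrl_new_std(&sensor->ctrls, &ar0237_ctrl_ops,
				 V4L2_CID_PIXEL_RATE, AR0237_PIXEL_RATE,
				 AR0237_PIXEL_RATE, 1, AR0237_PIXEL_RATE);
	if (ctrl)
		ctrl->flags |= V4L2_CTRL_FLAG_READ_ONLY;

	ctrl = v4l2_ctrl_new_std(&sensor->ctrls, &ar0237_ctrl_ops,
				 V4L2_CID_HBLANK, AR0237_HBLANK,
				 AR0237_HBLANK, 1, AR0237_HBLANK);
	if (ctrl)
		ctrl->flags |= V4L2_CTRL_FLAG_READ_ONLY;
}
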
> > > 
> > > Thanks for this list :)
> > > 
> > > > The driver should also support the following features, even if they're
> > > > not mandatory at the moment:
> > > > 
> > > > - Direct control of binning and cropping through the .set_selection()
> > > >   and .set_fmt() operations (no hardcoded list of modes with register
> > > >   lists)
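
A hedged sketch of what direct crop control through .set_selection()
can look like; the alignment and minimum size values are illustrative,
not taken from the AR0237 datasheet:

static int ar0237_set_selection(struct v4l2_subdev *sd,
				struct v4l2_subdev_pad_config *cfg,
				struct v4l2_subdev_selection *sel)
{
	struct ar0237 *sensor = to_ar0237(sd);
	struct v4l2_rect *crop = &sensor->crop;

	if (sel->target != V4L2_SEL_TGT_CROP)
		return -EINVAL;

	/* Clamp and align the requested rectangle to the pixel array
	 * instead of snapping it to a hardcoded mode. */
	crop->width = clamp_t(u32, ALIGN(sel->r.width, 4), 64, 1928);
	crop->height = clamp_t(u32, ALIGN(sel->r.height, 2), 64, 1088);
	crop->left = clamp_t(u32, ALIGN(sel->r.left, 2), 0,
			     1928 - crop->width);
	crop->top = clamp_t(u32, ALIGN(sel->r.top, 2), 0,
			    1088 - crop->height);

	sel->r = *crop;

	/* Registers are then computed from the rectangle at stream-on
	 * time; binning would be derived in .set_fmt() from the ratio
	 * between the crop rectangle and the requested format. */
	return 0;
}
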
> > > 
> > > Luckily I went that way.
> > 
> > Nice :-)
> > 
> > > > - V4L2_CID_VBLANK in read-write mode
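
Similarly, a sketch of how a read-write VBLANK control ties into the
exposure limits (the register name, margin and helpers are again
placeholders):

static int ar0237_s_ctrl(struct v4l2_ctrl *ctrl)
{
	struct ar0237 *sensor = ctrl_to_ar0237(ctrl);
	u32 vts;

	switch (ctrl->id) {
	case V4L2_CID_VBLANK:
		/* The frame length follows the output height plus the
		 * vertical blanking requested by userspace. */
		vts = sensor->fmt.height + ctrl->val;

		/* Keep the exposure range consistent with the new
		 * frame length. */
		__v4l2_ctrl_modify_range(sensor->exposure,
					 AR0237_EXPOSURE_MIN,
					 vts - AR0237_EXPOSURE_MARGIN, 1,
					 sensor->exposure->default_value);

		return ar0237_write(sensor, AR0237_REG_FRAME_LENGTH, vts);
	default:
		return -EINVAL;
	}
}
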
> > > > 
> > > > >   - Gstreamer as media framework
> > > > 
> > > > Thanks to Nicolas we have GStreamer support for libcamera :-)
> > > 
> > > And that is working fine? I just saw a rather long to-do list, which
> > > is why I'm asking :)
> > 
> > It has been tested and is working, but as you noted there's a to-do
> > list, so not everything is perfect (yet). It largely depends on what
> > your particular needs are on the GStreamer side, but I don't see any
> > particular problem that couldn't be solved by some more work on the
> > libcamera GStreamer element.
> 
> Currently we only want to open the device, feed the incoming Bayer
> images to the "gldebayer-unit" and stream them via UDP. So no special
> handling is needed for the source.
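
For reference, that path could be driven from C roughly as below. This
is only a sketch: "gldebayer" stands in for your custom element, the
caps assume libcamerasrc can negotiate the sensor's raw Bayer format,
and the host and port are made up.

#include <gst/gst.h>

int main(int argc, char **argv)
{
	GstElement *pipeline;
	GMainLoop *loop;

	gst_init(&argc, &argv);

	/* libcamerasrc -> GL de-bayer -> raw RTP over UDP */
	pipeline = gst_parse_launch(
		"libcamerasrc ! video/x-bayer,width=1920,height=1080 ! "
		"glupload ! gldebayer ! gldownload ! videoconvert ! "
		"rtpvrawpay ! udpsink host=192.168.0.2 port=5004",
		NULL);
	if (!pipeline)
		return 1;

	gst_element_set_state(pipeline, GST_STATE_PLAYING);

	loop = g_main_loop_new(NULL, FALSE);
	g_main_loop_run(loop);

	return 0;
}
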
> 
> > > > >   - Custom embedded OS (no Android)
> > > > > 
> > > > > The sensor can embed statistics into the frame at the beginning and
> > > > > the end. We need to extract that data, calculate all the necessary
> > > > > values (like gain, exposure, ...) and adjust the sensor using V4L2
> > > > > controls if necessary.
> > > > > 
> > > > > Please, can someone give me an idea of what we can expect from the
> > > > > current master state? What is working? How much work would you
> > > > > expect for the above use case?
> > > > 
> > > > A few questions before providing you with answers.
> > > > 
> > > > - Unless I'm mistaken, the i.MX6 doesn't support colour interpolation
> > > >   (de-bayering) in hardware, and the AR0237 can only produce raw Bayer
> > > >   data, right ?
> > > 
> > > You're right, the i.MX6 has no de-bayer unit. We therefore want to do
> > > the de-bayering on the GPU using the GStreamer GL plugins.
> > > 
> > > > - Is the i.MX6 you're using capable of capturing the embedded data and
> > > >   embedded statistics to a different buffer than the image data
> > > >   (splitting the incoming frame based on line numbers to different DMA
> > > >   engines) ?
> > > 
> > > As far as I know, no. The i.MX6 IPU is not capable of splitting the
> > > incoming stream, so we need to extract the embedded data manually.
> > 
> > OK. As you de-bayer with the GPU, I assume the GPU will skip the
> > embedded data lines.
> 
> Yep.
> 
> > I don't see any particular obstacle. We would need to implement an IPA
> > module for the algorithms (which ones are you interested in, by the
> > way ? Will you configure sensor parameters other than exposure time
> > and gain ?).
> 
> Currently I don't know, since we are at a very early stage. As you have
> seen, this sensor uses a custom Bayer array (with interleaved IR
> pixels). I think our customer doesn't want to care about the time of
> day; they just want to open the device and get pictures of the best
> possible quality.

Let's revisit this when you know a bit more about the sensor
parameters you need to control. Your use case is very interesting but
departs a bit from traditional hardware architectures for camera
support, so we'll have to design the software architecture carefully.

> > The IPA would be given the image buffer, and would extract the
> > embedded data from it. This is already what happens today when
> > embedded data is captured to a separate buffer; having a large image
> > sitting between the embedded data and the embedded statistics in the
> > buffer isn't a big deal.
> 
> Okay.
> 
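
To make that buffer layout concrete, here is a small sketch of how an
IPA module could locate the three regions; the line counts and stride
are illustrative and would come from the sensor configuration:

#include <stdint.h>

struct embedded_layout {
	unsigned int top_lines;		/* embedded data before the image */
	unsigned int bottom_lines;	/* statistics after the image */
	unsigned int image_lines;	/* visible image lines */
	unsigned int stride;		/* bytes per line */
};

/* Split one captured buffer into embedded data, image and statistics
 * pointers, without copying anything. */
static void split_buffer(const uint8_t *buf,
			 const struct embedded_layout *layout,
			 const uint8_t **data, const uint8_t **image,
			 const uint8_t **stats)
{
	*data = buf;
	*image = buf + layout->top_lines * layout->stride;
	*stats = *image + layout->image_lines * layout->stride;
}
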
> > The only point that could require an API extension in libcamera is
> > reporting the amount of embedded data lines at the beginning and end of
> > the image to applications, for the GPU to properly skip them.
> 
> Don't you think we could handle this with a new V4L2 control?

Unless the amount of embedded data is fixed for that sensor (in which
case we could hardcode that information in userspace), a V4L2 extension
will be needed. Embedded data support will be introduced in V4L2 for
CSI-2 first (I'll start working on this soon), and will likely not use
controls but a larger API extension. We will need to decide how to
expose it to userspace for parallel sensors.

> > I would
> > however like to explore the possibility of implementing GPU de-bayering
> > directly inside libcamera instead of using a GStreamer GL plugin, as
> > that's a component that could be reused across multiple devices.
> 
> I see. Unfortunately we didn't start with libcamera, and development of
> a GStreamer GL-based de-bayer mechanism has already started. Maybe
> we'll come back to libcamera if we run into too much trouble.

I would expect most of the work to go into development of the shaders
and interfacing with the GPU. Hopefully that could easily be ported to
libcamera :-) Do you plan to release that code under an open-source
license ?
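
For a sense of scale, the per-pixel logic a de-bayering shader
implements is small. Here is a CPU reference for nearest-neighbour
demosaicing of a plain RGGB pattern (the AR0237's RGB-IR layout would
need an extra IR channel on top of this; 8-bit samples assumed for
simplicity):

#include <stdint.h>

static void debayer_rggb_nn(const uint8_t *raw, unsigned int stride,
			    unsigned int width, unsigned int height,
			    uint8_t *rgb)
{
	for (unsigned int y = 0; y < height; y += 2) {
		for (unsigned int x = 0; x < width; x += 2) {
			const uint8_t *p = raw + y * stride + x;

			/* RGGB quad: R at (0,0), G at (0,1) and (1,0),
			 * B at (1,1). Average the two greens. */
			uint8_t r = p[0];
			uint8_t g = (p[1] + p[stride]) / 2;
			uint8_t b = p[stride + 1];

			/* Replicate the quad into four RGB pixels. */
			for (unsigned int dy = 0; dy < 2; dy++) {
				for (unsigned int dx = 0; dx < 2; dx++) {
					uint8_t *out = rgb +
						((y + dy) * width +
						 (x + dx)) * 3;
					out[0] = r;
					out[1] = g;
					out[2] = b;
				}
			}
		}
	}
}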

-- 
Regards,

Laurent Pinchart

