[PATCH 8/8] libcamera: Add new atomisp pipeline handler

Laurent Pinchart laurent.pinchart at ideasonboard.com
Wed Nov 6 14:40:06 CET 2024


Hi Hans,

On Wed, Nov 06, 2024 at 02:25:31PM +0100, Hans de Goede wrote:
> On 5-Nov-24 12:53 AM, Laurent Pinchart wrote:
> > Hi Hans,
> > 
> > (CC'ing Sakari)
> > 
> > Thank you for the patch.
> > 
> > A few high-level questions first.
> > 
> > On Sun, Nov 03, 2024 at 04:22:05PM +0100, Hans de Goede wrote:
> >> Add a basic atomisp pipeline handler which supports configuring
> >> the pipeline, capturing frames and selecting front/back sensor.
> >>
> >> The atomisp ISP needs some extra lines/columns when debayering and also
> >> has some max resolution limitations; this causes the available output
> >> resolutions to differ from the sensor resolutions.
> >>
> >> The atomisp driver's Android heritage means that it mostly works as a
> >> non-media-controller-centric V4L2 device, primarily controlled through
> >> its /dev/video# node.
> > 
> > Could that be fixed on the kernel side (assuming someone would be able
> > to do the work of course) ?
> 
> Yes, note that the current kernel driver already uses the media-controller
> and has separate subdevs for the ISP, CSI receivers, sensors and VCM,
> see e.g. the 2 attached pngs for 2 different setups (generated by dot).
> 
> And the atomisp pipeline handler e.g. already configures mc-links to
> select which sensor to use.
> 
> So we are already part way there.

Ah nice :-)
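For context, selecting between the front and back sensors through mc-links
boils down to toggling the enabled flag on the sensor-to-CSI-receiver links.
A minimal sketch with purely illustrative names (a real implementation would
issue MEDIA_IOC_SETUP_LINK, or use libcamera's MediaLink helpers, rather than
mutate a local vector):

```cpp
/*
 * Hedged sketch: choosing the active sensor by toggling media-controller
 * link flags, as the atomisp pipeline handler does via mc-links. All
 * names here are illustrative; a real implementation would call
 * MEDIA_IOC_SETUP_LINK for each link instead of mutating local state.
 */
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

constexpr uint32_t kLinkEnabled = 1u << 0; /* mirrors MEDIA_LNK_FL_ENABLED */

struct SensorLink {
	std::string sensor; /* e.g. "front" (hypothetical name) */
	uint32_t flags = 0;
};

/* Enable exactly one sensor->CSI-receiver link and disable the rest. */
void selectSensor(std::vector<SensorLink> &links, const std::string &active)
{
	for (SensorLink &link : links) {
		if (link.sensor == active)
			link.flags |= kLinkEnabled;
		else
			link.flags &= ~kLinkEnabled;
	}
}
```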

> The part that is currently not very mc-centric is that a single
> set_fmt call is made on /dev/video# after setting the mc-links, and
> that then configures the fmts on all the subdevs, taking the special
> resolution-padding requirements of the ISP into account.
> 
> Currently the atomisp kernel code already allocates and initializes
> a bunch of ISP contexts at this set_fmt call time (rather than
> at request-buffers time) and, more importantly, it selects which
> pipeline program (since the ISP is not fixed function) to run on
> the ISP at this time. Changing that is very much non-trivial.

I see there's quite a bit of untangling that would need to be done
indeed.
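To make the padded propagation concrete: conceptually the kernel translates
the size requested on /dev/video# into a larger sensor size that accounts for
the lines/columns the debayering consumes. The padding constants below are
hypothetical placeholders, not the real ISP2 requirements:

```cpp
/*
 * Conceptual sketch of the fmt propagation the kernel does internally:
 * a single set_fmt on /dev/video# yields a padded sensor format. The
 * padding constants are hypothetical placeholders, not the actual
 * atomisp ISP2 values.
 */
#include <cassert>

struct Size {
	unsigned int width;
	unsigned int height;
};

constexpr unsigned int kPadColumns = 16; /* hypothetical debayer padding */
constexpr unsigned int kPadLines = 8;    /* hypothetical */

Size sensorSizeForOutput(const Size &output)
{
	return { output.width + kPadColumns, output.height + kPadLines };
}
```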

Speaking of this, how do you plan to handle side-by-side development in
libcamera and in the driver ? I don't see how we could ensure backward
compatibility in any clean way on either side. Would it be fine to tell
users they will always have to use the latest version on both sides ?

> I guess we could keep allocating those at that time and have
> a flag (ioctl / v4l2-ctrl?) to skip propagating the fmts
> to the subdevs, instead having the pipeline handler set
> the subdev fmts itself, but I do not see much added value in that
> atm.

By itself it doesn't add a lot of value indeed, but it would still
prepare for the future.

Another thing that would need to be looked at is replacing the ISP
parameters ioctl API with a parameters buffer. That will be useful to
set the white balance gains.
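As a purely illustrative example of what such a parameters buffer could carry
(the actual layout would have to be defined by the driver, and nothing here is
the real atomisp format), white balance gains might be packed along these
lines:

```cpp
/*
 * Illustrative-only sketch of a parameters buffer entry carrying white
 * balance gains. The struct layout, the Q8.8 fixed-point choice and the
 * field names are all hypothetical; the real atomisp parameters format
 * is not defined in this thread.
 */
#include <cassert>
#include <cstdint>

struct AwbGains {
	/* Hypothetical Q8.8 fixed-point per-channel gains. */
	uint16_t red;
	uint16_t greenR;
	uint16_t greenB;
	uint16_t blue;
};

/* Convert a floating-point gain to the hypothetical Q8.8 encoding. */
constexpr uint16_t floatToQ88(float gain)
{
	return static_cast<uint16_t>(gain * 256.0f);
}
```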

> >> The driver takes care of setting up the pipeline itself,
> >> propagating try / set fmt calls down from its single /dev/video# node
> >> to the selected sensor, taking the necessary padding, etc. into account.
> >>
> >> Therefore things like getting the list of supported formats / sizes and
> >> setFmt() calls are all done on the /dev/video# node instead of on subdevs;
> >> this avoids having to duplicate the padding, etc. logic in the pipeline
> >> handler.
> >>
> >> Since the statistics buffers which we get from the ISP2 are not documented
> > 
> > Could the stats format be reverse-engineered ? Or alternatively, could
> > Intel provide documentation (waving at Sakari) ?
> 
> I have asked Sakari about this already, but with these kind of things
> it is going to take a while to get an official yes / no answer.
> 
> >> this uses the swstats_cpu and simple-IPA from the swisp. At the moment only
> >> aec/agc is supported.
> >>
> >> awb support will be added in a follow-up patch.
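For readers unfamiliar with the simple IPA: conceptually the aec/agc loop
nudges exposure so that the mean luma computed from the statistics approaches
a mid-grey target. A minimal sketch with illustrative constants (not the
actual swstats_cpu / simple-IPA code):

```cpp
/*
 * Conceptual sketch of mean-based AGC of the kind the software ISP's
 * "simple" IPA performs: scale exposure toward a mid-grey target and
 * clamp the per-frame change to avoid oscillation. The constants are
 * illustrative, not taken from the real IPA.
 */
#include <algorithm>
#include <cassert>

double agcStep(double meanLuma, double exposure)
{
	constexpr double kTarget = 0.5; /* mid-grey target, illustrative */

	if (meanLuma <= 0.0)
		return exposure; /* all-black stats: leave exposure alone */

	/* Clamp the per-frame correction factor to [0.5, 2.0]. */
	double factor = std::clamp(kTarget / meanLuma, 0.5, 2.0);
	return exposure * factor;
}
```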

-- 
Regards,

Laurent Pinchart

