[PATCH 8/8] libcamera: Add new atomisp pipeline handler
Hans de Goede
hdegoede at redhat.com
Wed Nov 6 15:17:19 CET 2024
Hi Laurent,
On 6-Nov-24 2:40 PM, Laurent Pinchart wrote:
> Hi Hans,
>
> On Wed, Nov 06, 2024 at 02:25:31PM +0100, Hans de Goede wrote:
>> On 5-Nov-24 12:53 AM, Laurent Pinchart wrote:
>>> Hi Hans,
>>>
>>> (CC'ing Sakari)
>>>
>>> Thank you for the patch.
>>>
>>> A few high-level questions first.
>>>
>>> On Sun, Nov 03, 2024 at 04:22:05PM +0100, Hans de Goede wrote:
>>>> Add a basic atomisp pipeline handler which supports configuring
>>>> the pipeline, capturing frames and selecting front/back sensor.
>>>>
>>>> The atomisp ISP needs some extra lines/columns when debayering and also
>>>> has some max resolution limitations, this causes the available output
>>>> resolutions to differ from the sensor resolutions.
>>>>
>>>> The atomisp driver's Android heritage means that it mostly works as a non
>>>> media-controller centric v4l2 device, primarily controlled through its
>>>> /dev/video# node.
>>>
>>> Could that be fixed on the kernel side (assuming someone would be able
>>> to do the work of course) ?
>>
>> Yes, note that the current kernel driver already uses the media-controller
>> and has separate subdevs for the ISP, CSI receivers, sensors and VCM,
>> see e.g. the 2 attached pngs for 2 different setups (generated by dot).
>>
>> And the atomisp pipeline handler e.g. already configures mc-links to
>> select which sensor to use.
>>
>> So we are already part way there.
>
> Ah nice :-)
>
>> The thing which currently is not
>> very mc-centric is that a single set_fmt call is made on /dev/video#
>> after setting the mc-links and then that configures the fmts
>> on all the subdevs taking the special resolution-padding requirements
>> of the ISP into account.
>>
>> Currently atomisp kernel code already allocates and initializes
>> a bunch of ISP contexts at this set_fmt call time (rather than
>> at request-buffers time) and, more importantly, it selects which
>> pipeline program (since the ISP is not fixed function) to run on
>> the ISP at this time. Changing that is very much not trivial.
>
> I see there's quite a bit of untangling that would need to be done
> indeed.
>
> Speaking of this, how do you plan to handle side-by-side development in
> libcamera and in the driver ?
That is a good question; my answer is: "carefully".
I hope to be able to make time to address any review comments on
the atomisp pipeline handler and post new versions regularly,
so that this can get merged in a reasonable time frame.
Then I would like to enable this for Fedora 42 (code complete
deadline 18 Feb 2025, final freeze April 1st 2025) and once enabled
there some form of compatibility will need to be kept.
As mentioned in the other thread, IMHO it is important to start
shipping this to end users in a somewhat usable form, to hopefully
build a community around this and get more contributors.
So let's say that we start with swstats and then manage to switch
to ISP 3A stats; then I will likely keep the swstats support around
behind some flag for testing / comparison, at least for a while.
And I would e.g. make the pipeline handler detect an older kernel
driver and auto-switch to swstats in that case for, say, approx.
6 months (so keep swstats support around at least that long).
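A rough sketch of that fallback decision, in C++; "hasIspStatsNode"
stands for the result of probing the media graph for a kernel-provided
stats video node, and "forceSwStats" for the testing/comparison flag.
Both names are made up for this example, not the real pipeline handler
API:

```cpp
/*
 * Hedged sketch: decide between software statistics (swstats) and
 * native ISP 3A statistics. All names here are hypothetical.
 */
enum class StatsSource { SwStats, IspStats };

static StatsSource chooseStatsSource(bool hasIspStatsNode, bool forceSwStats)
{
	/* Older kernel drivers without a stats node get swstats. */
	if (!hasIspStatsNode)
		return StatsSource::SwStats;
	/* Keep swstats selectable behind a flag for comparison. */
	if (forceSwStats)
		return StatsSource::SwStats;
	return StatsSource::IspStats;
}
```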
So basically my idea would be to not pin ourselves to providing
a stable ABI / compatibility in either direction (kernel > libcamera,
libcamera > kernel) forever. But I also don't want things to break
if the 2 are not upgraded at exactly the same time.
Likewise, if the kernel gets a new /dev/video node for stats buffers
and that is not used by userspace, then it should behave as before
and allocate stats buffers itself, just cycling through those as it
does now, since I don't think the ISP can work without them.
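As a toy model of that cycling behaviour (the ring size and all names
are assumptions for illustration, not the driver's actual allocation
scheme):

```cpp
#include <cstddef>

/*
 * Toy model: when userspace queues no stats buffers, the driver keeps
 * writing ISP statistics into an internal ring of its own allocations,
 * reusing them round-robin. The ring size of 3 is an arbitrary
 * assumption.
 */
class InternalStatsRing {
public:
	/* Index of the internal buffer the ISP writes next. */
	std::size_t nextBuffer()
	{
		std::size_t idx = next_;
		next_ = (next_ + 1) % kRingSize;
		return idx;
	}

private:
	static constexpr std::size_t kRingSize = 3;
	std::size_t next_ = 0;
};
```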
> I don't see how we could ensure backward
> compatibility in any clean way on either side, would it be fine to tell
> users they will always have to use the latest version on both sides ?
See above. My compromise would be no long-term ABI guarantees
(because staging driver), but add in some leeway by not breaking
things if the 2 get a bit out of sync.
>> I guess we could keep allocating those at that time and have
>> a flag (ioctl / v4l2-ctrl?) to skip the propagating of the fmts
>> to the subdevs and instead having the pipeline handler set
>> the subdev fmts itself, but I do not see much added value in that
>> atm.
>
> By itself it doesn't add a lot of value indeed, but it would still
> prepare for the future.
>
> Another thing that would need to be looked at is replacing the ISP
> parameters ioctl API with a parameters buffer. That will be useful to
> set the white balance gains.
Interesting. I have not had time to look into this yet. But I think
that currently there is some custom ioctl which passes params
including the white-balance gains.
I definitely do not plan to use any of the still existing custom
atomisp IOCTLs (a lot have been removed already). I was actually
thinking about having v4l2-controls on the ISP subdev for the
white-balance gains. But if other ISPs are using parameter buffers
for this then that sounds good.
Do they go through another /dev/video# node, or ... ? Are there any
docs / example code for this?
It would be good to replace the custom ioctl used for the atomisp
params which contain the gains with a parameter buffer mechanism.
Note that AFAICT the atomisp has multiple parameter buffer types
(currently separate ioctls), e.g. separate buffers to pass parameters
related to special optional features like digital image stabilization.
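For illustration, one way several block types could share a single
params buffer is the tagged-block layout used by other ISP drivers
(e.g. rkisp1's extensible parameters): each block carries a type and
size so new types can be added without breaking the layout. All names
and type IDs below are made up, not the atomisp ABI:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

/* Hypothetical per-block header; the type IDs are invented. */
struct ParamsBlockHeader {
	uint32_t type;	/* e.g. 1 = AWB gains, 2 = stabilization */
	uint32_t size;	/* payload size in bytes, header excluded */
};

/* Hypothetical AWB gains payload. */
struct AwbGains {
	uint16_t red, greenRed, greenBlue, blue;
};

/* Append one typed block (header + payload) to a flat params buffer. */
static void appendBlock(std::vector<uint8_t> &buf, uint32_t type,
			const void *payload, uint32_t size)
{
	ParamsBlockHeader hdr{ type, size };
	const uint8_t *h = reinterpret_cast<const uint8_t *>(&hdr);
	buf.insert(buf.end(), h, h + sizeof(hdr));
	const uint8_t *p = static_cast<const uint8_t *>(payload);
	buf.insert(buf.end(), p, p + size);
}
```

The appeal of this over per-feature ioctls is that one buffer queued
per frame can carry any subset of the parameter types.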
Regards,
Hans