[libcamera-devel] apply methods in IPARPi

Naushir Patuck naush at raspberrypi.com
Thu May 6 15:45:44 CEST 2021


On Thu, 6 May 2021 at 14:32, Laurent Pinchart <laurent.pinchart at ideasonboard.com> wrote:

> Hello,
>
> On Thu, May 06, 2021 at 09:45:38AM +0100, David Plowman wrote:
> > On Thu, 6 May 2021 at 09:04, Naushir Patuck <naush at raspberrypi.com> wrote:
> > > On Wed, 5 May 2021 at 19:08, Sebastian Fricke <sebastian.fricke at posteo.net> wrote:
> > >>
> > >> Hello Naushir and David,
> > >>
> > >> I am currently performing a lot of investigation and preparatory work
> > >> for a few generalizations in the form of an IPAFramework class that
> > >> reuses some of the great concepts you introduced to libcamera. Along
> > >> the way I noticed that you have implemented the apply methods
> > >> (applyAGC, applyAWB, etc.) as part of the IPARPi class.
> > >>
> > >> I was wondering if it would be possible to move those to the
> > >> controller and the specific algorithm implementations, as they mostly
> > >> get the status from the algorithm, perform a few calculations with it
> > >> and insert the results of those calculations into the ISP controls.
> > >>
> > >> The upside of doing so, from my point of view, would be that we could
> > >> call `apply` similarly to the way we call `initialize`, `process` and
> > >> `prepare`. That way, I would be able to generalize the whole step of
> > >> applying the algorithm status to a set of controls, which can then be
> > >> used for setting the ISP controls.
> > >> Now, the reason why I ask you is that I want to be sure that I didn't
> > >> miss a key concept.
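
For illustration, such a hook could simply mirror the existing lifecycle
methods. A minimal sketch, loosely based on the existing Raspberry Pi
Algorithm class; the Apply() method and its signature are hypothetical:

```
class Algorithm
{
public:
	virtual void Initialise() = 0;
	virtual void Prepare(Metadata *image_metadata) = 0;
	virtual void Process(StatisticsPtr &stats, Metadata *image_metadata) = 0;
	/* Hypothetical addition: translate this algorithm's status into
	 * a set of controls, without touching the device. */
	virtual void Apply(Metadata *image_metadata,
			   libcamera::ControlList &ctrls) = 0;
};
```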
> > >
> > >
> > > This was indeed something I did consider doing some time back, but
> > > never got to it.  There are a couple of considerations with this
> > > proposal that we need to address:
> > >
> > > 1) My original intention with keeping apply* out of the Controllers
> > > was to *somewhat* decouple the BCM2835 ISP parameters from the
> > > Controller algorithms.  I say somewhat because some algorithms do
> > > have to know the statistics structure defined in the bcm2835-isp.h
> > > header, so there is already a hardware dependency there.  Secondly,
> > > I wanted to try not to include any libcamera dependency in the
> > > Controller code.  We do now have libcamera logging in the controller
> > > source files, so this is also not strictly true.
> > >
> > > 2) applyAGC() is a special one in that it runs after
> > > Controller::Process(), whereas all the others run after
> > > Controller::Prepare().  applyAGC() also runs on IPA::start(), so
> > > this must be treated differently somehow.
> > >
> > > In a way, having a Controller::Apply() might be quite nice, if we
> > > are able to somehow add a HW abstraction layer (through templated
> > > functions perhaps?).  Or does it even matter? David, what are your
> > > thoughts on this?
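
To sketch the templated idea (every name below is invented, purely to
show the shape an HW abstraction layer could take):

```
/*
 * The algorithm publishes a device-independent status; a device-specific
 * traits type maps it onto that device's ISP parameters. None of this is
 * existing code.
 */
template<typename IspTraits>
void applyAwb(const AwbStatus &awbStatus,
	      typename IspTraits::Params &params)
{
	IspTraits::setRedGain(params, awbStatus.gain_r);
	IspTraits::setBlueGain(params, awbStatus.gain_b);
}
```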
> >
> > So I remain quite strongly in favour of keeping hardware-specific code
> > away from the algorithms. As Naush says, they do get a bit mixed
> > together because of the statistics, but for the moment we have to live
> > with this. We could imagine a formal API which takes the statistics
> > from the hardware (whatever that is) and produces the numbers the
> > algorithms want, though sometimes tweaking the algorithm slightly
> > might be better than jumping through lots of hoops with the
> > statistics, so that's probably one to think about.
>
> Keeping the implementation of algorithms hardware-agnostic would be
> great, but it's a hard problem. We could imagine translating all
> statistics to a standard format before passing them to algorithms, but
> that will have a performance impact, and could also result in
> information loss, as a format that is a superset of all possible
> hardware features is likely impossible. We need to figure out some
> middle ground, perhaps in the form of device-agnostic algorithm "pieces"
> that would be assembled by device-specific algorithm "implementations".
> There's room for research and clever designs.
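
One way to picture those "pieces" and "implementations", as a purely
illustrative sketch in which every type and name is invented:

```
#include <cstdint>
#include <iterator>
#include <vector>

/* Device-agnostic statistics shape understood by the "piece". */
struct GenericHistogram {
	std::vector<uint32_t> bins;
};

/* Device-agnostic piece: derive a gain pushing the mean bin to target. */
double computeAgcGain(const GenericHistogram &h, double target)
{
	uint64_t weighted = 0, pixels = 0;
	for (size_t i = 0; i < h.bins.size(); i++) {
		weighted += static_cast<uint64_t>(h.bins[i]) * i;
		pixels += h.bins[i];
	}
	double mean = pixels ? static_cast<double>(weighted) / pixels : 0.0;
	return mean > 0.0 ? target / mean : 1.0;
}

/* Stand-in for the real hardware statistics (e.g. bcm2835_isp_stats). */
struct HwStats {
	uint32_t greenHist[128];
};

/* Device-specific implementation: owns the (possibly lossy) translation
 * from raw hardware statistics to the generic form. */
GenericHistogram translate(const HwStats &stats)
{
	return { std::vector<uint32_t>(std::begin(stats.greenHist),
				       std::end(stats.greenHist)) };
}
```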
>
> > Nonetheless, I really don't want more hardware knowledge getting into
> > the algorithms, nor would I want to see algorithms making direct calls
> > that will set hardware.
>
> Fully agreed on the second part, the IPA design explicitly forbids
> accessing the device from algorithms.
>

I think the idea here is to allow Controller::Apply() to only populate the
ControlList items, and the IPA to send that ControlList to the pipeline
handler, which applies the parameters as V4L2 controls, as is done today.
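
Roughly along these lines; Controller::Apply() and IPARPi::applyControls()
are invented names, while setIspControls is the existing signal to the
pipeline handler:

```
/* Hypothetical Controller::Apply(): algorithms only fill in a
 * ControlList, with no device access whatsoever. */
void Controller::Apply(Metadata *image_metadata, libcamera::ControlList &ctrls)
{
	for (auto &algo : algorithms_)
		algo->Apply(image_metadata, ctrls);
}

/* The IPA hands the populated list to the pipeline handler, which
 * applies it as V4L2 controls, exactly as happens today. */
void IPARPi::applyControls()
{
	libcamera::ControlList ctrls(ispCtrls_);
	controller_.Apply(&rpiMetadata_, ctrls);
	setIspControls.emit(ctrls);
}
```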


>
> > Similarly, I really don't want one algorithm
> > to make direct calls that affect another algorithm - this always
> > starts quite benignly but can ultimately tend towards spaghetti. The
> > rule that we have is that this is all communicated through metadata.
>
> While exceptions may sometimes be needed, I really want to follow that
> rule.
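
This is roughly how the controller algorithms already talk to each other
today, e.g. (a paraphrase, not the verbatim code):

```
void Ccm::Prepare(Metadata *image_metadata)
{
	struct AwbStatus awb = {};
	/* Results from other algorithms arrive as metadata, never via
	 * direct calls into the other algorithm. */
	if (image_metadata->Get("awb.status", awb) == 0) {
		/* ... use the AWB gains to pick the colour matrix ... */
	}
}
```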
>
> > But I think that still leaves us with options. Controller::Apply()
> > sounds like a reasonable thing to have, so long as it doesn't touch
> > the algorithms and only uses the metadata that they produced. Perhaps
> > we need a "PreApply" and a "PostApply", to allow for them to operate
> > either before or after the ISP.
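
Those hooks might look something like this (hypothetical names, and both
would only consume the metadata the algorithms produced):

```
class Controller
{
public:
	/* Runs after Prepare(), before the frame goes through the ISP:
	 * where today's applyAWB() and friends would land. */
	void PreApply(Metadata *image_metadata, libcamera::ControlList &ctrls);
	/* Runs after Process(), once statistics are available: where
	 * today's applyAGC() would land. */
	void PostApply(Metadata *image_metadata, libcamera::ControlList &ctrls);
};
```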
> >
> > One more point to consider is what we'd do with imaging pipelines that
> > have a different architecture. One obvious case would be the "straight
> > through" ISP where the image buffer goes straight from the CSI-2
> > receiver into the ISP and only reaches memory after that. Another
> > variant on that might be a hybrid architecture, where some processing
> > happens in this "straight through" manner, but then there's a "main"
> > ISP which runs memory-to-memory.
>
> The former is what the rkisp1 implements, the latter is also needed to
> support new platforms. We usually call the "straight through" part the
> "inline" ISP, and the memory-to-memory part the "offline" ISP. There
> will usually be a single inline ISP, but we could have multiple offline
> components (hardware ISPs, general-purpose DSPs, GPUs, ...) in the
> pipeline.
>
> > How would that look? It's probably
> > worth giving these some consideration now, rather than inventing
> > something for our current hardware platforms and finding ourselves in
> > a pickle later!
> >
> > >> --
> > >>
> > >> Additionally, something off-topic from the discussion above. While
> > >> investigating the RPiStream abstraction, I noticed the following
> > >> comment within `rpi_stream.cpp`:116
> > >> ```
> > >>   * A nullptr buffer implies an external stream, but no external
> > >>   * buffer has been supplied in the Request. So, pick one from the
> > >>   * availableBuffers_ queue.
> > >>   ```
> > >>
> > >>   And within `rpi_stream.h`:139
> > >>   ```
> > >>   * A nullptr indicates any internal buffer can be used (from availableBuffers_),
> > >>   * whereas a valid pointer indicates an external buffer to be queued.
> > >>   ```
> > >>
> > >>   This confused me a little bit; could you enlighten me on this matter?
> > >>   Does a nullptr buffer imply an external stream and therefore an
> > >>   external buffer? Or does a nullptr buffer imply an internal buffer and
> > >>   therefore an internal stream?
> > >
> > >
> > > I can see why this caused some confusion :-)
> > >
> > > Firstly, my definitions of external and internal streams may need
> > > clarification.  The former is a stream that *may* be exported to the
> > > application, and the latter is a stream that can never be exported
> > > to the application.  For external streams, the buffers used may be
> > > provided by the application, and these buffers will be returned back
> > > to the application once they have been processed.  However, we can
> > > also use internally allocated buffers for external streams that will
> > > never go back to the application.  This is needed when:
> > >
> > > 1) The application has not provided an external buffer to use for
> > > that stream in a Request.  An example of this is when we have a RAW
> > > stream set to external, but the application only requests a RAW
> > > image once every N frames.
> > > 2) If we need to drop frames as instructed by the IPA, we cannot use
> > > external buffers, as they would be dropped and never returned to the
> > > application.  In this case, we queue an internal buffer that can be
> > > dropped safely.  The alternative would be to requeue external buffers
> > > once dropped, but that would add a ton of latency to the capture
> > > cycle.
> > >
> > > Hope that helps clarify things a bit.
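
In code terms, the selection boils down to something like this (a
simplified paraphrase of the queueing logic in rpi_stream.cpp, not the
verbatim code):

```
int Stream::queueBuffer(FrameBuffer *buffer)
{
	if (!buffer) {
		/*
		 * External stream, but the Request carried no buffer for
		 * it: fall back to an internally allocated one, which is
		 * never handed back to the application.
		 */
		if (availableBuffers_.empty())
			return -EINVAL; /* the real code can defer instead */
		buffer = availableBuffers_.front();
		availableBuffers_.pop();
	}

	/* A valid pointer is an external buffer, queued as given. */
	return queueToDevice(buffer);
}
```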
>
> --
> Regards,
>
> Laurent Pinchart
>