[Discussion] Multiple IPAs support
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Mon Jul 29 09:49:29 CEST 2024
Hi Harvey,
On Mon, Jul 29, 2024 at 02:44:05PM +0800, Cheng-Hao Yang wrote:
> On Sat, Jul 27, 2024 at 8:36 AM Laurent Pinchart wrote:
> > On Wed, Jul 24, 2024 at 05:55:51PM +0800, Cheng-Hao Yang wrote:
> > > On Wed, Jul 24, 2024 at 3:42 PM Kieran Bingham wrote:
> > > > Quoting Cheng-Hao Yang (2024-07-24 07:36:06)
> > > > > Hi folks,
> > > > >
> > > > > I want to discuss the support of multiple IPAs within a pipeline handler,
> > > > > before I upload patches, as I think there are different approaches. It's
> > > > > needed because some ChromeOS models (like Intel IPU7) need both CPU & GPU
> > > > > IPAs, and they might have different mojo interfaces.
> > > >
> > > > That sounds like multiple distinct processing blocks within a pipeline
> > > > handler.
> > > >
> > > > But do you mean the IPA will run 'on the GPU'? Or that it will 'control'
> > > > processing on the GPU?
> > >
> > > In CrOS, IPA runs in a sandbox. GPU IPA has access to GPU [1]. I believe it
> > > can send some requests to GPU hardware.
> > >
> > > [1]:
> > > https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/camera/common/init/cros-camera-gpu-algo.conf;l=29
> >
> > In libcamera, IPA modules are not allowed to access hardware. Only
> > pipeline handlers can.
> >
> > > > I would suspect (but probably need some block diagrams) that they should
> > > > be broken up into a more generic set of pipelines and the 'main
> > > > pipeline handler' would then construct and connect everything.
> > > >
> > > > I can foresee a similar thing with the soft ISP development ... it could
> > > > be added as an extra processing pipeline to existing platforms. For
> > > > instance it could be added to the IPU3 to support the IR sensor...
> > > >
> > > > So some sort of generic solution or design is likely desirable here.
> > > >
> > > > > Currently, the `meson.build` files restrict the IPA name to be the same
> > > > > as the pipeline handler name [1] [2].
> > > > >
> > > > > Here are some approaches that I can think of:
> > > > >
> > > > > 1. Besides the IPA that has the same name as the pipeline handler, build
> > > > > one with a `_gpu` suffix or similar. For example, build `ipu7` &
> > > > > `ipu7_gpu` for the `ipu7` pipeline handler.
> > > > > This approach limits the number of IPAs to two though, and we might need
> > > > > to change the design again if there are other requirements in the future.
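For reference, approach 1 would amount to something like the following in
src/ipa/meson.build. This is an illustrative sketch only; the `pipelines`
option, the fs module usage and the hard-coded `_gpu` suffix are assumptions
made for the example, not the current tree:

    # Sketch of approach 1: build the IPA matching each pipeline handler
    # name, plus an optional hard-coded "<name>_gpu" companion module if
    # its directory exists.
    fs = import('fs')
    pipelines = get_option('pipelines')

    foreach pipeline : pipelines
        if fs.is_dir(pipeline)
            subdir(pipeline)
        endif

        gpu_ipa = pipeline + '_gpu'
        if fs.is_dir(gpu_ipa)
            subdir(gpu_ipa)
        endif
    endforeach

As noted below, this hard-codes both the suffix and the number of extra
modules, which is what makes it feel like a shortcut.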
> > > >
> > > > Indeed, this just sounds like a hack/shortcut to me.
> > > >
> > > > > 2. Build exactly the IPAs specified in the meson option: `ipas` (or we add
> > > > > another meson option). IIUC, currently the default value of the meson
> > > > > option `ipas` contains all the possible values, and that's likely the case
> > > > > for most build configs in current use.
> > > > > This approach might require developers to update their build configs
> > > > > accordingly.
> > > >
> > > > If you're building distinct IPA modules - I think this is fine. Don't
> > > > worry about developers updating their build configs. To me, that sounds
> > > > like worrying that developers would have to apply patches if they want to
> > > > change the code ...
> > > >
> > > > It should all be covered by the build system. And -Dipas=all/auto should
> > > > 'do the right thing' in what ever is designed.
> > >
> > > Just to make sure we're aligned: this approach requires updating the logic
> > > to use the meson option `ipas`. And yes, the previous logic is confusing to
> > > me: it filters the options provided by developers by pipeline names.
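To make approach 2 concrete, src/ipa/meson.build could be driven directly by
the option value, along these lines. This is a rough sketch under the
assumption that every value of -Dipas= maps to a src/ipa/<name>/ directory; it
is not the current logic:

    # Sketch of approach 2: build exactly the IPA modules listed in the
    # -Dipas= option, without filtering them through pipeline handler
    # names.
    ipa_modules = get_option('ipas')

    foreach ipa_module : ipa_modules
        # Each src/ipa/<module>/ directory carries its own meson.build
        # declaring the shared library (and mojom interface) to build.
        subdir(ipa_module)
    endforeach

With something like -Dipas=ipu7,ipu7_gpu, both modules would then be built
regardless of how they relate to the pipeline handler names.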
> > >
> > > > > 3. Build more than one shared library in one `src/ipa/${IPA_NAME}`
> > > > > directory. We can register more than one mojom interface for a pipeline
> > > > > handler name [3], so that the build can generate multiple IPA
> > > > > proxies/workers.
> > > > > This approach doesn't need to change the meson build files or logic. I
> > > > > haven't tried it myself though. Maybe there are some missing parts.
> > > >
> > > > I haven't looked into the details / implementation here - but this
> > > > sounds more aligned with the approach I would take - where for complex
> > > > processing pipelines I think there should still be a top-level pipeline
> > > > handler in charge, but one that can construct multiple internal
> > > > processing pipes to manage all the streams.
> >
> > I think you're putting the cart before the horse, Kieran. Before even
> > discussing technical solutions, I want to clearly understand why
> > multiple IPA modules would even be needed. Harvey, what are you trying
> > to achieve here ? Combining multiple processing units in a pipeline
> > handler (by the look of it, that would be a hardware ISP and GPU-based
> > processing for the IPU7) is absolutely fine, but why would that require
> > more than one IPA module ? Let's discuss the high-level design first,
> > with a clear description of the IPU7 pipeline and its requirements.
>
> Got it. I understand that libcamera IPA modules are not supposed to
> access any hardware now. However, that's not the reason the CrOS camera
> service separates IPAs into different processes/sandboxes. Instead, it's
> because vendors provide proprietary libraries that we cannot inspect, so we
> wrap them in separate sandboxes to limit their permissions.
Can't you wrap both libraries in a single IPA module ?
> Currently, Intel IPU6 & Qualcomm camx [1] ship proprietary algorithms
> that use the GPU to accelerate processing.
>
> It's true that we could probably have the proprietary GPU algorithms compute
> the instructions for accessing the GPU (render nodes, for instance; I'm not a
> GPU expert), but we didn't find strong reasons to do so in the beginning.
> Letting vendors' proprietary GPU algorithms access the GPU directly is simpler
> for us.
> In the short term, I don't think we're changing this design.
I don't want to open the door to hardware access from IPA modules as
that's a slippery slope. We should at least explore alternatives first.
> We could probably merge the CPU IPA into the GPU IPA, as it seems
> that the GPU IPA covers all the permissions that the CPU IPA has. I think we
> split them only because we want vendors to separate the functionalities more
> cleanly. Intel also mentioned that some of the IPU6 Chromebook modules
> don't support the GPU processing flow, so such an isolation might help
> there.
IPA modules are not a free pass to integrate all kinds of closed-source
code. They are meant to permit shipping of closed-source ISP control
algorithms by isolating them from the rest of the pipeline.
An ISP, or part of it, can be implemented as GPU shaders (or
equivalents, depending on what API is used to interface with the GPU).
More generally, a GPU can perform processing as part of the whole
imaging pipeline. Control algorithms for the GPU processing can be
conceptually as sensitive as ISP control algorithms for vendors, so I
think it makes sense to apply the same rules and allow closed-source
implementations in an IPA module, alongside other ISP control
algorithms. Control of the pipeline, on the other hand, should be in the
pipeline handler.
There's one important difference between hardware ISPs and GPU-based
processing though. With GPU-based processing, the equivalent of the ISP
hardware and firmware takes the form of GPU shader code (or equivalent).
We don't force ISP hardware or firmware to be open-source, so there's a
case for allowing the GPU shaders to remain closed. However, we require
hardware ISPs to be usable by 100% open-source userspace code. This
means that
- the input and output parameters of the GPU processing blocks need to
be documented; and
- running processing on the GPU should be possible without closed-source CPU
code besides the control algorithms in the IPA module.
We may run into technical challenges (as well as non-technical ones,
*sigh*) while implementing this. I would be happy to discuss them and
help find solutions.
> [1]: https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/camera/common/BUILD.gn;l=31
>
> BR,
> Harvey
>
> > > Actually, I just tried it myself with a POC patch:
> > > https://chromium-review.googlesource.com/c/chromiumos/third_party/libcamera/+/5737834/1/include/libcamera/ipa/meson.build#75
> > >
> > > Seems to be working. Just need to make the map into a list instead.
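For readers following along, the map-to-list change would presumably look
something like this in include/libcamera/ipa/meson.build. The sketch is
illustrative and the 'ipu7_gpu.mojom' entry is hypothetical:

    # Sketch: let one pipeline handler register several mojom interfaces,
    # so that the build generates one IPA proxy/worker per interface.
    pipeline_ipa_mojom_mapping = {
        'ipu3': ['ipu3.mojom'],
        'ipu7': ['ipu7.mojom', 'ipu7_gpu.mojom'],
    }

    foreach pipeline, mojoms : pipeline_ipa_mojom_mapping
        foreach mojom : mojoms
            # The proxy/worker generation rules would go here; omitted in
            # this sketch.
            message('pipeline ' + pipeline + ' uses IPA interface ' + mojom)
        endforeach
    endforeach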
> > >
> > > > I would envisage that sometimes those internal pipes might have their
> > > > own IPA interface:
> > > >
> > > >
> > > > ┌────────────────────────────────────────────────┐
> > > > │ IPU3 Pipeline Handler │
> > > > │ ┌────────────────────────────────────────────┐ │
> > > > │ │ Pipe1 (Bayer) │ │
> > > > │ │ ┌────────┐ ┌────────┐ ┌────────┐ │ │ ┌─────────┐
> > > > │ │ │ Bayer │ │ │ │ │ │ │ │ │
> > > > │ │ │ Sensor ├───►│ CIO2 ├───►│ ImgU ├─────┼─┼─► Camera1 │
> > > > │ │ │ │ │ │ │ │ │ │ │ │
> > > > │ │ └────────┘ └────────┘ └──┬──┬──┘ │ │ └─────────┘
> > > > │ │ │ │ │ │
> > > > │ │ ┌──▼──┴──┐ │ │
> > > > │ │ │ │ │ │
> > > > │ │ │IPU3-IPA│ │ │
> > > > │ │ │ │ │ │
> > > > │ │ └────────┘ │ │
> > > > │ │ │ │
> > > > │ └────────────────────────────────────────────┘ │
> > > > │ │
> > > > │ ┌────────────────────────────────────────────┐ │
> > > > │ │ Pipe2 (IR) │ │
> > > > │ │ ┌────────┐ ┌────────┐ ┌────────┐ │ │ ┌─────────┐
> > > > │ │ │ IR │ │ │ │ │ │ │ │ │
> > > > │ │ │ Sensor ├───►│ CIO2 ├───►│Soft-ISP├─────┼─┼─► Camera2 │
> > > > │ │ │ │ │ │ │ │ │ │ │ │
> > > > │ │ └────────┘ └────────┘ └──┬──▲──┘ │ │ └─────────┘
> > > > │ │ │ │ │ │
> > > > │ │ ┌──▼──┴──┐ │ │
> > > > │ │ │ │ │ │
> > > > │ │ │Soft-IPA│ │ │
> > > > │ │ │ │ │ │
> > > > │ │ └────────┘ │ │
> > > > │ │ │ │
> > > > │ └────────────────────────────────────────────┘ │
> > > > │ │
> > > > └────────────────────────────────────────────────┘
> > > >
> > > > I've used the IPU3 IR cam as an example above because that's hardware I
> > > > have and know, and I don't currently have visibility into the design you
> > > > are facing. If it sort of matches your issue then great; if not, then
> > > > it would help to know more about the underlying pipelines you are trying
> > > > to construct.
> > >
> > > Do you mean that we should try to use only one IPA in each pipeline handler
> > > (in your case, "pipe1" and "pipe2"), and have another "main pipeline handler"
> > > (which itself doesn't own an IPA) calling "pipe1" and "pipe2", depending on
> > > the requests?
> > >
> > > Yating, from Intel, told me that the GPU IPA supports an optional image
> > > processing feature, which will be used in a stream that doesn't need to call
> > > the CPU IPA. However, I believe the source is the same sensor. Please correct
> > > me if I'm wrong :)
> > >
> > > > > Let me know your opinions. Thanks!
> > > > >
> > > > > [1]: https://git.libcamera.org/libcamera/libcamera.git/tree/include/libcamera/ipa/meson.build#n88
> > > > > [2]: https://git.libcamera.org/libcamera/libcamera.git/tree/src/ipa/meson.build#n47
> > > > > [3]: https://git.libcamera.org/libcamera/libcamera.git/tree/include/libcamera/ipa/meson.build#n64
--
Regards,
Laurent Pinchart