<div dir="ltr"><div dir="ltr">Hi Laurent,<div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jul 27, 2024 at 8:36 AM Laurent Pinchart <<a href="mailto:laurent.pinchart@ideasonboard.com">laurent.pinchart@ideasonboard.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello,<br>
<br>
On Wed, Jul 24, 2024 at 05:55:51PM +0800, Cheng-Hao Yang wrote:<br>
> On Wed, Jul 24, 2024 at 3:42 PM Kieran Bingham wrote:<br>
> > Quoting Cheng-Hao Yang (2024-07-24 07:36:06)<br>
> > > Hi folks,<br>
> > ><br>
> > > I want to discuss the support of multiple IPAs within a pipeline handler,<br>
> > > before I upload patches, as I think there are different approaches. It's<br>
> > > needed because some ChromeOS models (like Intel IPU7) need both CPU & GPU<br>
> > > IPAs, and they might have different mojom interfaces.<br>
> ><br>
> > That sounds like multiple distinct processing blocks within a pipeline<br>
> > handler.<br>
> ><br>
> > But do you mean the IPA will run 'on the GPU'? Or that it will 'control'<br>
> > processing on the GPU?<br>
><br>
> In CrOS, the IPA runs in a sandbox. The GPU IPA has access to the GPU [1]. I<br>
> believe it can send some requests to the GPU hardware.<br>
> <br>
> [1]: <a href="https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/camera/common/init/cros-camera-gpu-algo.conf;l=29" rel="noreferrer" target="_blank">https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/camera/common/init/cros-camera-gpu-algo.conf;l=29</a><br>
<br>
In libcamera IPA modules are not allowed to access hardware. Only<br>
pipeline handlers can.<br></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> > I would suspect (but probably need some block diagrams) that they should<br>
> > be broken up into a more generic set of pipelines and the 'main<br>
> > pipeline handler' would then construct and connect everything.<br>
> ><br>
> > I can foresee a similar thing with the soft ISP development ... it could<br>
> > be added as an extra processing pipeline to existing platforms. For<br>
> > instance it could be added to the IPU3 to support the IR sensor...<br>
> ><br>
> > So some sort of generic solution or design is likely desirable here.<br>
> ><br>
> > > Currently, the `meson.build` files restrict the IPA name to match the<br>
> > > pipeline handler name [1] [2].<br>
> > ><br>
> > > Here are some approaches that I can think of:<br>
> > ><br>
> > > 1. Besides the IPA that has the same name as the pipeline handler, build<br>
> > > another one with a suffix such as `_gpu`. For example, build `ipu7` &<br>
> > > `ipu7_gpu` for the pipeline handler `ipu7`.<br>
> > > This approach limits the number of IPAs to two, though, and we might need<br>
> > > to change the design again if there are other requirements in the future.<br>
> ><br>
> > Indeed, this just sounds like a hack/shortcut to me.<br>
> ><br>
> > > 2. Build exactly the IPAs specified in the meson option `ipas` (or add<br>
> > > another meson option). IIUC, the default value of the meson option `ipas`<br>
> > > currently contains all the possible values, and that is likely also the<br>
> > > case for most build configs in current use.<br>
> > > This approach might require developers to update their build configs<br>
> > > accordingly.<br>
> ><br>
> > If you're building distinct IPA modules - I think this is fine. Don't<br>
> > worry about developers updating their build configs. To me, that sounds<br>
> > like worrying that developers would have to apply patches if they want<br>
> > to change the code ...<br>
> ><br>
> > It should all be covered by the build system. And -Dipas=all/auto should<br>
> > 'do the right thing' in whatever is designed.<br>
><br>
> Just to make sure we're aligned: this approach needs the logic updated to<br>
> actually use the meson option `ipas`. And yes, the previous logic is<br>
> confusing to me: it effectively filters the options provided by developers<br>
> by pipeline name.<br>
> <br>
> > > 3. Build more than one shared library in one `src/ipa/${IPA_NAME}`<br>
> > > directory. We can register more than one mojom interface under a pipeline<br>
> > > handler name [3], so that the build can generate multiple IPA<br>
> > > proxies/workers.<br>
> > > This approach doesn't need changes to the meson build files or logic. I<br>
> > > haven't tried it myself though. Maybe there are some missing parts.<br>
> ><br>
> > I haven't looked into the details / implementation here - but this<br>
> > sounds more aligned with the approach I would take - where for complex<br>
> > processing pipelines I think there should still be a top-level pipeline<br>
> > handler in charge, but one that can construct multiple internal<br>
> > processing pipes to manage all the streams.<br>
<br>
I think you're putting the cart before the horse, Kieran. Before even<br>
discussing technical solutions, I want to clearly understand why<br>
multiple IPA modules would even be needed. Harvey, what are you trying<br>
to achieve here ? Combining multiple processing units in a pipeline<br>
handler (by the look of it, that would be a hardware ISP and GPU-based<br>
processing for the IPU7) is absolutely fine, but why would that require<br>
more than one IPA module ? Let's discuss the high-level design first,<br>
with a clear description of the IPU7 pipeline and its requirements.<br><br></blockquote><div><br></div><div><div>Got it. I understand that libcamera IPA modules are not allowed to access any hardware today. However, that is not the reason the CrOS camera service separates IPAs into different processes/sandboxes. Instead, it's because vendors provide proprietary libraries that we cannot inspect, so we wrap them in separate sandboxes to limit their permissions.</div><div><br></div><div>Currently, Intel IPU6 & Qualcomm CamX [1] ship proprietary algorithms that use the GPU to accelerate processing.</div><div><br></div><div>It's true that we could probably have the proprietary GPU algorithms only compute the commands needed to drive the GPU (render nodes, for instance; I'm not a GPU expert), but we didn't find strong reasons to do that at the beginning. Letting the vendors' proprietary GPU algorithms access the GPU directly is simpler for us. In the short term, I don't think we're changing this design.</div><div><br></div><div>We could probably merge the CPU IPA into the GPU IPA, as the GPU IPA seems to cover all the permissions the CPU IPA has. I think the split was only meant to make vendors separate the functionality more cleanly. Intel also mentioned that some of the IPU6 Chromebook modules don't support the GPU processing flow, so such isolation might help with that.</div><div><br></div><div>[1]: <a href="https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/camera/common/BUILD.gn;l=31">https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/camera/common/BUILD.gn;l=31</a></div><div> </div></div><div>BR,</div><div>Harvey</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> Actually I just tried it myself with a POC patch:<br>
> <a href="https://chromium-review.googlesource.com/c/chromiumos/third_party/libcamera/+/5737834/1/include/libcamera/ipa/meson.build#75" rel="noreferrer" target="_blank">https://chromium-review.googlesource.com/c/chromiumos/third_party/libcamera/+/5737834/1/include/libcamera/ipa/meson.build#75</a><br>
> <br>
> It seems to be working. I just need to make the map into a list instead.<br>
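<br>
For illustration of approach 3 only: a minimal sketch, assuming hypothetical ipu7 proxy class names, a hypothetical loadIPAs() helper and hypothetical generated header paths, of what the pipeline handler side could look like once the build generates two proxies from two registered mojom interfaces. Only IPAManager::createIPA() is an existing libcamera helper; everything else below is an assumption.<br>
<br>
#include &lt;memory&gt;<br>
#include &lt;libcamera/internal/ipa_manager.h&gt;<br>
/* Hypothetical generated proxy headers, one per registered mojom interface. */<br>
#include &lt;libcamera/ipa/ipu7_ipa_proxy.h&gt;<br>
#include &lt;libcamera/ipa/ipu7_gpu_ipa_proxy.h&gt;<br>
<br>
/* Members of the (hypothetical) IPU7 camera data class: */<br>
std::unique_ptr&lt;ipa::ipu7::IPAProxyIPU7&gt; cpuIpa_;     /* proxy generated from ipu7.mojom */<br>
std::unique_ptr&lt;ipa::ipu7::IPAProxyIPU7Gpu&gt; gpuIpa_;  /* proxy generated from ipu7_gpu.mojom */<br>
<br>
int IPU7CameraData::loadIPAs()<br>
{<br>
        /* Each proxy is created from its own mojom interface registered for 'ipu7'. */<br>
        cpuIpa_ = IPAManager::createIPA&lt;ipa::ipu7::IPAProxyIPU7&gt;(pipe(), 1, 1);<br>
        gpuIpa_ = IPAManager::createIPA&lt;ipa::ipu7::IPAProxyIPU7Gpu&gt;(pipe(), 1, 1);<br>
        if (!cpuIpa_ || !gpuIpa_)<br>
                return -ENOENT;<br>
<br>
        return 0;<br>
}<br>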
> <br>
> > I would envisage that sometimes those internal pipes might have their<br>
> > own IPA interface:<br>
> ><br>
> ><br>
> > ┌────────────────────────────────────────────────┐<br>
> > │ IPU3 Pipeline Handler │<br>
> > │ ┌────────────────────────────────────────────┐ │<br>
> > │ │ Pipe1 (Bayer) │ │<br>
> > │ │ ┌────────┐ ┌────────┐ ┌────────┐ │ │ ┌─────────┐<br>
> > │ │ │ Bayer │ │ │ │ │ │ │ │ │<br>
> > │ │ │ Sensor ├───►│ CIO2 ├───►│ ImgU ├─────┼─┼─► Camera1 │<br>
> > │ │ │ │ │ │ │ │ │ │ │ │<br>
> > │ │ └────────┘ └────────┘ └──┬──┬──┘ │ │ └─────────┘<br>
> > │ │ │ │ │ │<br>
> > │ │ ┌──▼──┴──┐ │ │<br>
> > │ │ │ │ │ │<br>
> > │ │ │IPU3-IPA│ │ │<br>
> > │ │ │ │ │ │<br>
> > │ │ └────────┘ │ │<br>
> > │ │ │ │<br>
> > │ └────────────────────────────────────────────┘ │<br>
> > │ │<br>
> > │ ┌────────────────────────────────────────────┐ │<br>
> > │ │ Pipe2 (IR) │ │<br>
> > │ │ ┌────────┐ ┌────────┐ ┌────────┐ │ │ ┌─────────┐<br>
> > │ │ │ IR │ │ │ │ │ │ │ │ │<br>
> > │ │ │ Sensor ├───►│ CIO2 ├───►│Soft-ISP├─────┼─┼─► Camera2 │<br>
> > │ │ │ │ │ │ │ │ │ │ │ │<br>
> > │ │ └────────┘ └────────┘ └──┬──▲──┘ │ │ └─────────┘<br>
> > │ │ │ │ │ │<br>
> > │ │ ┌──▼──┴──┐ │ │<br>
> > │ │ │ │ │ │<br>
> > │ │ │Soft-IPA│ │ │<br>
> > │ │ │ │ │ │<br>
> > │ │ └────────┘ │ │<br>
> > │ │ │ │<br>
> > │ └────────────────────────────────────────────┘ │<br>
> > │ │<br>
> > └────────────────────────────────────────────────┘<br>
> ><br>
> > I've used the IPU3 IR cam as an example above because that's hardware I<br>
> > have and know, and I don't currently have visibility into the design you<br>
> > are facing. If it roughly matches your issue, then great; if not, it<br>
> > would help to know more about the underlying pipelines you are trying<br>
> > to construct.<br>
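<br>
Again purely illustrative, not actual IPU3 code: a rough sketch of how a single top-level pipeline handler could own the two internal pipes from the diagram, each with its own IPA proxy. The per-pipe struct layout and the soft IPA proxy name are assumptions.<br>
<br>
/* Per-pipe state owned by the one top-level pipeline handler. */<br>
struct BayerPipe {<br>
        CIO2Device cio2;                                 /* CSI-2 receiver */<br>
        ImgUDevice *imgu;                                /* hardware ISP */<br>
        std::unique_ptr&lt;ipa::ipu3::IPAProxyIPU3&gt; ipa;    /* controls the ImgU */<br>
};<br>
<br>
struct IrPipe {<br>
        CIO2Device cio2;                                 /* CSI-2 receiver */<br>
        std::unique_ptr&lt;SoftwareIsp&gt; softIsp;            /* CPU/GPU debayering path */<br>
        std::unique_ptr&lt;ipa::soft::IPAProxySoft&gt; ipa;    /* assumed soft IPA proxy name */<br>
};<br>
<br>
class PipelineHandlerIPU3 : public PipelineHandler<br>
{<br>
        /* ...existing match()/configure() logic... */<br>
private:<br>
        BayerPipe bayerPipe_;  /* exposed as Camera1 */<br>
        IrPipe irPipe_;        /* exposed as Camera2 */<br>
};<br>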
><br>
> Do you mean that we should try to use only one IPA in each pipeline handler<br>
> (in your case, "pipe1" and "pipe2"), and have another "main pipeline handler"<br>
> (which itself doesn't own an IPA) call "pipe1" and "pipe2", depending on<br>
> the requests?<br>
> <br>
> Yating, from Intel, told me that the GPU IPA supports an optional image<br>
> processing feature, which will be used for a stream that doesn't need to call<br>
> the CPU IPA. However, I believe the source is still the same sensor. Please<br>
> correct me if I'm wrong :)<br>
> <br>
> > > Let me know your opinions. Thanks!<br>
> > ><br>
> > > [1]: <a href="https://git.libcamera.org/libcamera/libcamera.git/tree/include/libcamera/ipa/meson.build#n88" rel="noreferrer" target="_blank">https://git.libcamera.org/libcamera/libcamera.git/tree/include/libcamera/ipa/meson.build#n88</a><br>
> > > [2]: <a href="https://git.libcamera.org/libcamera/libcamera.git/tree/src/ipa/meson.build#n47" rel="noreferrer" target="_blank">https://git.libcamera.org/libcamera/libcamera.git/tree/src/ipa/meson.build#n47</a><br>
> > > [3]: <a href="https://git.libcamera.org/libcamera/libcamera.git/tree/include/libcamera/ipa/meson.build#n64" rel="noreferrer" target="_blank">https://git.libcamera.org/libcamera/libcamera.git/tree/include/libcamera/ipa/meson.build#n64</a><br>
<br>
-- <br>
Regards,<br>
<br>
Laurent Pinchart<br>
</blockquote></div></div>