<div dir="ltr"><div dir="ltr">Hi Laurent,</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 10 Dec 2021 at 23:24, Laurent Pinchart <<a href="mailto:laurent.pinchart@ideasonboard.com">laurent.pinchart@ideasonboard.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Naush,<br>
<br>
On Fri, Dec 10, 2021 at 03:48:19PM +0000, Naushir Patuck wrote:<br>
> On Fri, 10 Dec 2021 at 00:24, Laurent Pinchart wrote:<br>
> > On Thu, Dec 09, 2021 at 10:29:21AM +0000, Naushir Patuck wrote:<br>
> > > On Thu, 9 Dec 2021 at 10:25, Kieran Bingham wrote:<br>
> > > > Quoting Naushir Patuck (2021-12-09 10:07:43)<br>
> > > > > On Thu, 9 Dec 2021 at 09:39, Kieran Bingham wrote:<br>
> > > > > > Quoting Naushir Patuck (2021-12-08 15:15:27)<br>
> > > > > > > This change will allow the pipeline handler to enumerate and control Video<br>
> > > > > > > Mux or Bridge devices that may be attached between sensors and a particular<br>
> > > > > > > Unicam instance. Cascaded mux or bridge devices are also handled.<br>
> > > > > > ><br>
> > > > > > > A new member function enumerateVideoDevices(), called from registerCamera(), is<br>
> > > > > > > used to identify and open all mux and bridge subdevices present in the<br>
> > > > > > > sensor -> Unicam link.<br>
> > > > > > ><br>
> > > > > > > Relevant links are enabled/disabled and pad formats correctly set in configure()<br>
> > > > > > > before the camera is started.<br>
> > > > > > ><br>
> > > > > > > Signed-off-by: Naushir Patuck <<a href="mailto:naush@raspberrypi.com" target="_blank">naush@raspberrypi.com</a>><br>
> > > > > > > ---<br>
> > > > > > > .../pipeline/raspberrypi/raspberrypi.cpp | 78 +++++++++++++++++++<br>
> > > > > > > 1 file changed, 78 insertions(+)<br>
> > > > > > ><br>
> > > > > > > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp<br>
> > > > > > > index 756878c70036..ca176ecb40ec 100644<br>
> > > > > > > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp<br>
> > > > > > > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp<br>
> > > > > > > @@ -12,6 +12,7 @@<br>
> > > > > > > #include <mutex><br>
> > > > > > > #include <queue><br>
> > > > > > > #include <unordered_set><br>
> > > > > > > +#include <utility><br>
> > > > > > ><br>
> > > > > > > #include <libcamera/base/shared_fd.h><br>
> > > > > > > #include <libcamera/base/utils.h><br>
> > > > > > > @@ -220,6 +221,11 @@ public:<br>
> > > > > > > std::vector<RPi::Stream *> streams_;<br>
> > > > > > > /* Stores the ids of the buffers mapped in the IPA. */<br>
> > > > > > > std::unordered_set<unsigned int> ipaBuffers_;<br>
> > > > > > > + /*<br>
> > > > > > > + * Stores a cascade of Video Mux or Bridge devices between the sensor and<br>
> > > > > > > + * Unicam, together with the media links between the entities.<br>
> > > > > > > + */<br>
> > > > > > > + std::vector<std::pair<std::unique_ptr<V4L2Subdevice>, MediaLink *>> bridgeDevices_;<br>
> > > > > > ><br>
> > > > > > > /* DMAHEAP allocation helper. */<br>
> > > > > > > RPi::DmaHeap dmaHeap_;<br>
> > > > > > > @@ -311,6 +317,7 @@ private:<br>
> > > > > > > }<br>
> > > > > > ><br>
> > > > > > > int registerCamera(MediaDevice *unicam, MediaDevice *isp, MediaEntity *sensorEntity);<br>
> > > > > > > + void enumerateVideoDevices(RPiCameraData *data, MediaLink *link);<br>
> > > > > > > int queueAllBuffers(Camera *camera);<br>
> > > > > > > int prepareBuffers(Camera *camera);<br>
> > > > > > > void freeBuffers(Camera *camera);<br>
> > > > > > > @@ -868,6 +875,25 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)<br>
> > > > > > > */<br>
> > > > > > > data->properties_.set(properties::ScalerCropMaximum, data->sensorInfo_.analogCrop);<br>
> > > > > > ><br>
> > > > > > > + /* Setup the Video Mux/Bridge entities. */<br>
> > > > > > > + for (auto &[device, link] : data->bridgeDevices_) {<br>
> > > > > > > + /* Start by disabling all the sink pad links on the devices in the cascade. */<br>
> > > > > > > + for (const MediaPad *p : device->entity()->pads()) {<br>
> > > > > > > + if (!(p->flags() & MEDIA_PAD_FL_SINK))<br>
> > > > > > > + continue;<br>
> > > > > > > +<br>
> > > > > > > + for (MediaLink *l : p->links())<br>
> > > > > > > + l->setEnabled(false);<br>
> ><br>
> > As an optimization, you could skip disabling the link that you will<br>
> > enable below.<br>
> <br>
> Ack.<br>
> <br>
> > > > > > > + }<br>
> > > > > > > +<br>
> > > > > > > + /* Finally enable the link, and setup the pad format. */<br>
> > > > > > > + link->setEnabled(true);<br>
> > > > > ><br>
> > > > > > Isn't this going to enable /all/ bridge links in the cascade<br>
> > > > > > incorrectly?<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > ┌──────────┐<br>
> > > > > > │ Unicam │<br>
> > > > > > └─────▲────┘<br>
> > > > > > │<br>
> > > > > > ┌───┴───┐<br>
> > > > > > │ Mux ◄───────┐<br>
> > > > > > └──▲────┘ │<br>
> > > > > > │ │<br>
> > > > > > ┌─────┴───┐ ┌───┴───┐<br>
> > > > > > │ Sensor1 │ │ Mux │◄──┐<br>
> > > > > > └─────────┘ └─▲─────┘ │<br>
> > > > > > │ │<br>
> > > > > > ┌───────┴─┐ ┌───┴─────┐<br>
> > > > > > │ Sensor2 │ │ Sensor3 │<br>
> > > > > > └─────────┘ └─────────┘<br>
> > > > > ><br>
> > > > > > In that 'use case' we're now iterating over all bridges and enabling<br>
> > > > > > their link, I think which means we've just enabled both muxes,<br>
> > > > > > immediately after disabling them - even for Sensor1?<br>
> > > > > ><br>
> > > > > > Maybe I've mis-understood it, but I thought there would be something<br>
> > > > > > that would start at the desired sensor, and walk up the links to the<br>
> > > > > > unicam enabling those links (and only those) as it went up?<br>
> > > > > ><br>
> > > > > > The walk only has to be done once too, so perhaps a per-camera(sensor)<br>
> > > > > > vector of links to iterate and enable?<br>
> > > > > ><br>
> > > > > > Or maybe I'm missing the separation between the bridgeDevices_ and the<br>
> > > > > > per-camera instances. But if that's already happening, I can't then see<br>
> > > > > > how each camera data clears all the bridges used by other cameras...<br>
> > > > ><br>
> > > > > Let me explain my intention, and you can then tell me if the code does<br>
> > > > > what I think it does :-)<br>
> > > > ><br>
> > > > > Each sensor (Sensor{1,2,3}) will register its own Camera object, and<br>
> > > > > in that object bridgeDevices_ will store the cascade of muxes between<br>
> > > > > the sensor and the Unicam port. So, for Sensor1, we store only 1 mux,<br>
> > > > > and Sensor{2,3} will store both muxes. Together with the mux device,<br>
> > > > > we also store the entity to entity links.<br>
> > > > ><br>
> > > > > The above code goes through those stored entities, first disabling *all*<br>
> > > > > links on each device in the chain, and then selectively enabling<br>
> > > > > the specific links that are stored in bridgeDevices_ to link sensor<br>
> > > > > to Unicam across all intermediate muxes. If Sensor1 is used, this<br>
> > > > > does mean that the Sensor{2,3} -> Mux links might not change<br>
> > > > > state but the Mux to Mux link will be disabled. Similarly, if we are driving<br>
> > > > > Sensor3, the Sensor{1,2} -> Mux link will be disabled.<br>
> > > ><br>
> > > > I think I see the obvious part I missed ;-)<br>
> > > ><br>
> > > > It doesn't matter what configuration Mux2 has as long as Mux1 only has<br>
> > > > Sensor1 link enabled!<br>
> > > ><br>
> > > > It might help to document that in the comments somehow somewhere?<br>
> > > > Particularly now that you've already written the explanation to me -<br>
> > > > perhaps it should be a block comment above enumerateVideoDevices.<br>
> > > ><br>
> > > > Feel free to take or add the ascii diagram, I think more pictures in<br>
> > > > our documentation are always helpful. It's really easy to make those<br>
> > > > block diagrams in ascii art at <a href="https://asciiflow.com" rel="noreferrer" target="_blank">https://asciiflow.com</a>.<br>
> > ><br>
> > > Sure, I'll add some comments through the code explaining it in more detail.<br>
> > ><br>
> > > > > > > + const MediaPad *srcPad = link->sink();<br>
> > > > > > > + ret = device->setFormat(srcPad->index(), &sensorFormat);<br>
> > > > > ><br>
> > > > > > This assignment of ret is in a loop, so earlier failures are going to<br>
> > > > > > get ignored, unchecked.<br>
> > > > ><br>
> > > > > This should not matter, as ret is only used in the loop to count successful<br>
> > > > > camera registrations. The return value from match() will be false only if<br>
> > > > > 0 cameras were registered.<br>
> > > ><br>
> > > > So should we not even assign it?<br>
> > > > Do we need to report a warning if it failed to set the format?<br>
> > > ><br>
> > > > But aren't we here in configure() setting the format of the device to<br>
> > > > propagate the format?<br>
> > > ><br>
> > > > If we fail to setFormat in that case, then I think we should be saying<br>
> > > > the configure() call failed?<br>
> > ><br>
> > > Oops, my mistake. I misread your original comment and what bit of<br>
> > > code it refers to! This ret value *is* used for the return path, and must<br>
> > > be accounted for correctly as it is set in a loop! I'll fix this in the next<br>
> > > revision.<br>
> > ><br>
> > > > > > > + LOG(RPI, Info) << "Configured media link on device " << device->entity()->name()<br>
> > > > > > > + << " at pad " << srcPad->index();<br>
> > > > > > > + }<br>
> > > > > > > +<br>
> > > > > > > return ret;<br>
> > > > > > > }<br>
> > > > > > ><br>
> > > > > > > @@ -1098,6 +1124,13 @@ int PipelineHandlerRPi::registerCamera(MediaDevice *unicam, MediaDevice *isp, Me<br>
> > > > > > > if (data->sensor_->init())<br>
> > > > > > > return -EINVAL;<br>
> > > > > > ><br>
> > > > > > > + /*<br>
> > > > > > > + * Enumerate all the Video Mux/Bridge devices across the sensor -> unicam<br>
> > > > > > > + * link. There may be a cascade of devices in this link!<br>
> > > > > > > + */<br>
> > > > > > > + MediaLink *link = sensorEntity->getPadByIndex(0)->links()[0];<br>
> > > > > > > + enumerateVideoDevices(data.get(), link);<br>
> > > > > > > +<br>
> > > > > > > data->sensorFormats_ = populateSensorFormats(data->sensor_);<br>
> > > > > > ><br>
> > > > > > > ipa::RPi::SensorConfig sensorConfig;<br>
> > > > > > > @@ -1224,6 +1257,51 @@ int PipelineHandlerRPi::registerCamera(MediaDevice *unicam, MediaDevice *isp, Me<br>
> > > > > > > return 0;<br>
> > > > > > > }<br>
> > > > > > ><br>
> > > > > > > +void PipelineHandlerRPi::enumerateVideoDevices(RPiCameraData *data, MediaLink *link)<br>
> ><br>
> > This function accesses members of data but doesn't seem to access any<br>
> > member of PipelineHandlerRPi. Could it be moved to RPiCameraData ?<br>
> <br>
> Yes, I can do that.<br>
> <br>
> > > > > > > +{<br>
> > > > > > > + const MediaPad *sinkPad = link->sink();<br>
> > > > > > > + const MediaEntity *entity = sinkPad->entity();<br>
> > > > > > > + bool unicamFound = false;<br>
> > > > > > > +<br>
> > > > > > > + /* We only deal with Video Mux and Bridge devices in cascade. */<br>
> > > > > > > + if (entity->function() != MEDIA_ENT_F_VID_MUX &&<br>
> > > > > > > + entity->function() != MEDIA_ENT_F_VID_IF_BRIDGE)<br>
> > > > > > > + return;<br>
> > > > > > > +<br>
> > > > > > > + LOG(RPI, Info) << "Found video mux device " << entity->name()<br>
> > > > > > > + << " linked to sink pad " << sinkPad->index();<br>
> > > > > > > +<br>
> > > > > > > + data->bridgeDevices_.emplace_back(std::make_unique<V4L2Subdevice>(entity),link);<br>
> > > > > > > + data->bridgeDevices_.back().first->open();<br>
> > > > > ><br>
> > > > > > Going through the above, I can't help but wonder if the bridgeDevices_<br>
> > > > > > should be stored as a single instance in the Pipeline handler (not each<br>
> > > > > > CameraData, we have one CameraData per camera right? if so, how does<br>
> > > > > > camera3 disable all the links? does it know about every path?)<br>
> > > > ><br>
> > > > > From my description earlier, bridgeDevices_ must be stored in the CameraData, as<br>
> > > > > it lists the devices in the path between the sensor and Unicam. And this is unique<br>
> > > > > per-sensor. Again, this does mean that if we are using Sensor1, links for Sensor{2,3}<br>
> > > > > -> Mux are not changed, but the Mux->Mux link will be disabled. Do you think<br>
> > > > > it may not be appropriate to leave them enabled?<br>
> ><br>
> > I can't help but seeing lots of similarities with the simple pipeline<br>
> > handler, which also performs dynamic discovery of pipelines (all the way<br>
> > to the video node in that case, while in your case it would be all the<br>
> > way to the Unicam device). I wonder if we could share code between the<br>
> > two ? This could benefit the Raspberry Pi pipeline handler by providing<br>
> > a more generic version of format propagation through the pipeline<br>
> > (although that part may be more difficult to share).<br>
> ><br>
> > An important difference between the two implementations is that the<br>
> > simple pipeline handler opens the video node and subdevs once only,<br>
> > storing them in the pipeline handler instance, and only stores pointers<br>
> > to those objects in the camera data.<br>
> <br>
> I've had a very brief look at the simple pipeline handler. I agree, there are<br>
> similarities between the two. I see that the enumeration/discovery in the<br>
> simple pipeline handler is a fair bit more sophisticated in finding all available<br>
> entities, whereas in the RPi case, I only care to look for a cascade of mux<br>
> or bridge devices.<br>
> <br>
> What were your thoughts on trying to share functionality between these two<br>
> (and indeed all) pipeline handlers? Perhaps one way would be to have some<br>
> common code to create the media graph structure for any given media device?<br>
> This could include entities, pads and links represented in a common way for<br>
> pipeline handlers to use? Then the RPi enumeration code would simply look<br>
> to find the devices it cares about. This is probably a bigger job than is required<br>
> for this change though.<br>
<br>
That's a very good question :-)<br>
<br>
I don't usually believe in designing helpers without at least a couple<br>
of use cases for them, as that more often than not leads to bad designs.<br>
Now we have two examples of code that needs to parse a media graph, and<br>
I expect the exact same feature could be implemented by all pipeline<br>
handlers.<br>
<br>
Following the design-from-use-cases philosophy, I think we should focus<br>
on the need at hand. I think it would be possible to share code that parses<br>
a media graph to locate sensors and paths from them all the way up to a<br>
given entity (Unicam in your case, any video node in the simple pipeline<br>
handler case), and store the paths in a shared data structure. Functions<br>
to then setup the links in the pipeline seem to also be good candidates<br>
for a common helper.<br>
<br>
Format configuration along the pipeline could also possibly be shared,<br>
but that seems more difficult, as the needs appear to be a bit<br>
different.<br>
<br>
I do agree this is a bit of yak shaving, as it's not strictly required<br>
to get this feature implemented, but we also have a growing technical<br>
debt. I was thus wondering if you think you could have time to<br>
experiment a little bit.<br></blockquote><div><br></div><div>Sure, I'll see what I can come up with to have some general-purpose</div><div>helpers dealing with media device enumeration/discovery. In the meantime,</div><div>I'll post a new patch with the requested changes soon.</div><div><br></div><div>Regards,</div><div>Naush</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> > > > I missed that, and now it's obvious. If the first mux is only pointing<br>
> > > > to Sensor1, then indeed it doesn't matter what mux2 is configured to<br>
> > > > link to...<br>
> > > ><br>
> > > > So I think that's ok.<br>
> > > ><br>
> > > > > Please shout if this doesn't make sense, and something simpler might<br>
> > > > > equally work :-)<br>
> > > ><br>
> > > > It's all fine until someone adds a crossbar, which might make it<br>
> > > > possible to do more crazy things ;-)<br>
> > > ><br>
> > > > But we don't need to worry about that now, I don't think.<br>
> > > ><br>
> > > > I think I'm still concerned about that ret during the loop in<br>
> > > > configure(). But with that resolved:<br>
> > > ><br>
> > > > Reviewed-by: Kieran Bingham <<a href="mailto:kieran.bingham@ideasonboard.com" target="_blank">kieran.bingham@ideasonboard.com</a>><br>
> > > ><br>
> > > > > > > +<br>
> > > > > > > + for (const MediaPad *pad : entity->pads()) {<br>
> > > > > > > + /*<br>
> > > > > > > + * Iterate through all the sink pads down the cascade to find any<br>
> > > > > > > + * other Video Mux and Bridge devices.<br>
> > > > > > > + */<br>
> > > > > > > + if (!(pad->flags() & MEDIA_PAD_FL_SOURCE))<br>
> > > > > > > + continue;<br>
> > > > > > > +<br>
> > > > > > > + for (MediaLink *l : pad->links()) {<br>
> > > > > > > + enumerateVideoDevices(data, l);<br>
> > > > > > > + if (l->sink()->entity()->name() == "unicam-image")<br>
> > > > > > > + unicamFound = true;<br>
> > > > > > > + }<br>
> > > > > > > + }<br>
> > > > > > > +<br>
> > > > > > > + /* This identifies the end of our entity enumeration recursion. */<br>
> > > > > > > + if (link->source()->entity()->function() == MEDIA_ENT_F_CAM_SENSOR) {<br>
> > > > > > > + /*<br>
> > > > > > > + * If Unicam is not at the end of this cascade, we cannot configure<br>
> > > > > > > + * this topology automatically, so remove all entity references.<br>
> > > > > > > + */<br>
> > > > > > > + if (!unicamFound) {<br>
> > > > > > > + LOG(RPI, Warning) << "Cannot automatically configure this MC topology!";<br>
> > > > > > > + data->bridgeDevices_.clear();<br>
> > > > > > > + }<br>
> > > > > > > + }<br>
> > > > > > > +}<br>
> > > > > > > +<br>
> > > > > > > int PipelineHandlerRPi::queueAllBuffers(Camera *camera)<br>
> > > > > > > {<br>
> > > > > > > RPiCameraData *data = cameraData(camera);<br>
<br>
-- <br>
Regards,<br>
<br>
Laurent Pinchart<br>
</blockquote></div></div>