[libcamera-devel] [PATCH v2 2/2] pipeline: raspberrypi: Add support for Video Mux and Bridge devices
Naushir Patuck
naush at raspberrypi.com
Tue Dec 14 09:29:39 CET 2021
Hi Laurent,
On Fri, 10 Dec 2021 at 23:24, Laurent Pinchart <laurent.pinchart at ideasonboard.com> wrote:
> Hi Naush,
>
> On Fri, Dec 10, 2021 at 03:48:19PM +0000, Naushir Patuck wrote:
> > On Fri, 10 Dec 2021 at 00:24, Laurent Pinchart wrote:
> > > On Thu, Dec 09, 2021 at 10:29:21AM +0000, Naushir Patuck wrote:
> > > > On Thu, 9 Dec 2021 at 10:25, Kieran Bingham wrote:
> > > > > Quoting Naushir Patuck (2021-12-09 10:07:43)
> > > > > > On Thu, 9 Dec 2021 at 09:39, Kieran Bingham wrote:
> > > > > > > Quoting Naushir Patuck (2021-12-08 15:15:27)
> > > > > > > > This change will allow the pipeline handler to enumerate and control Video
> > > > > > > > Mux or Bridge devices that may be attached between sensors and a particular
> > > > > > > > Unicam instance. Cascaded mux or bridge devices are also handled.
> > > > > > > >
> > > > > > > > A new member function enumerateVideoDevices(), called from registerCamera(), is
> > > > > > > > used to identify and open all mux and bridge subdevices present in the
> > > > > > > > sensor -> Unicam link.
> > > > > > > >
> > > > > > > > Relevant links are enabled/disabled and pad formats correctly set in configure()
> > > > > > > > before the camera is started.
> > > > > > > >
> > > > > > > > Signed-off-by: Naushir Patuck <naush at raspberrypi.com>
> > > > > > > > ---
> > > > > > > >  .../pipeline/raspberrypi/raspberrypi.cpp | 78 +++++++++++++++++++
> > > > > > > >  1 file changed, 78 insertions(+)
> > > > > > > >
> > > > > > > > diff --git
> a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > > > > > > index 756878c70036..ca176ecb40ec 100644
> > > > > > > > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > > > > > > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > > > > > > @@ -12,6 +12,7 @@
> > > > > > > > #include <mutex>
> > > > > > > > #include <queue>
> > > > > > > > #include <unordered_set>
> > > > > > > > +#include <utility>
> > > > > > > >
> > > > > > > > #include <libcamera/base/shared_fd.h>
> > > > > > > > #include <libcamera/base/utils.h>
> > > > > > > > @@ -220,6 +221,11 @@ public:
> > > > > > > > std::vector<RPi::Stream *> streams_;
> > > > > > > >  	/* Stores the ids of the buffers mapped in the IPA. */
> > > > > > > > std::unordered_set<unsigned int> ipaBuffers_;
> > > > > > > > +	/*
> > > > > > > > +	 * Stores a cascade of Video Mux or Bridge devices between the sensor and
> > > > > > > > +	 * Unicam together with the media links across the entities.
> > > > > > > > +	 */
> > > > > > > > +	std::vector<std::pair<std::unique_ptr<V4L2Subdevice>, MediaLink *>> bridgeDevices_;
> > > > > > > >
> > > > > > > > /* DMAHEAP allocation helper. */
> > > > > > > > RPi::DmaHeap dmaHeap_;
> > > > > > > > @@ -311,6 +317,7 @@ private:
> > > > > > > > }
> > > > > > > >
> > > > > > > >  	int registerCamera(MediaDevice *unicam, MediaDevice *isp, MediaEntity *sensorEntity);
> > > > > > > > +	void enumerateVideoDevices(RPiCameraData *data, MediaLink *link);
> > > > > > > > int queueAllBuffers(Camera *camera);
> > > > > > > > int prepareBuffers(Camera *camera);
> > > > > > > > void freeBuffers(Camera *camera);
> > > > > > > > @@ -868,6 +875,25 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > > > > > > >  	 */
> > > > > > > >  	data->properties_.set(properties::ScalerCropMaximum, data->sensorInfo_.analogCrop);
> > > > > > > >
> > > > > > > > +	/* Setup the Video Mux/Bridge entities. */
> > > > > > > > +	for (auto &[device, link] : data->bridgeDevices_) {
> > > > > > > > +		/* Start by disabling all the sink pad links on the devices in the cascade. */
> > > > > > > > +		for (const MediaPad *p : device->entity()->pads()) {
> > > > > > > > +			if (!(p->flags() & MEDIA_PAD_FL_SINK))
> > > > > > > > +				continue;
> > > > > > > > +
> > > > > > > > +			for (MediaLink *l : p->links())
> > > > > > > > +				l->setEnabled(false);
> > >
> > > As an optimization, you could skip disabling the link that you will
> > > enable below.
> >
> > Ack.
> >
> > > > > > > > +		}
> > > > > > > > +
> > > > > > > > +		/* Finally enable the link, and setup the pad format. */
> > > > > > > > +		link->setEnabled(true);
> > > > > > >
> > > > > > > Isn't this going to enable /all/ bridge links in the cascade
> > > > > > > incorrectly?
> > > > > > >
> > > > > > >
> > > > > > > ┌──────────┐
> > > > > > > │ Unicam │
> > > > > > > └─────▲────┘
> > > > > > > │
> > > > > > > ┌───┴───┐
> > > > > > > │ Mux ◄───────┐
> > > > > > > └──▲────┘ │
> > > > > > > │ │
> > > > > > > ┌─────┴───┐ ┌───┴───┐
> > > > > > > │ Sensor1 │ │ Mux │◄──┐
> > > > > > > └─────────┘ └─▲─────┘ │
> > > > > > > │ │
> > > > > > > ┌───────┴─┐ ┌───┴─────┐
> > > > > > > │ Sensor2 │ │ Sensor3 │
> > > > > > > └─────────┘ └─────────┘
> > > > > > >
> > > > > > > In that 'use case' we're now iterating over all bridges and enabling
> > > > > > > their link, which I think means we've just enabled both muxes,
> > > > > > > immediately after disabling them - even for Sensor1?
> > > > > > >
> > > > > > > Maybe I've misunderstood it, but I thought there would be something
> > > > > > > that would start at the desired sensor, and walk up the links to the
> > > > > > > Unicam, enabling those links (and only those) as it went up?
> > > > > > >
> > > > > > > The walk only has to be done once too, so perhaps a per-camera (sensor)
> > > > > > > vector of links to iterate and enable?
> > > > > > >
> > > > > > > Or maybe I'm missing the separation between the bridgeDevices_ and the
> > > > > > > per-camera instances. But if that's already happening, I can't then see
> > > > > > > how each camera data clears all the bridges used by other cameras...
> > > > > >
> > > > > > Let me explain my intention, and you can then tell me if the code does
> > > > > > what I think it does :-)
> > > > > >
> > > > > > Each sensor (Sensor{1,2,3}) will register its own Camera object, and
> > > > > > in that object bridgeDevices_ will store the cascade of muxes between
> > > > > > the sensor and the Unicam port. So, for Sensor1, we store only one mux,
> > > > > > and Sensor{2,3} will store both muxes. Together with the mux device,
> > > > > > we also store the entity-to-entity links.
> > > > > >
> > > > > > The above code goes through those stored entities, first disabling *all*
> > > > > > links on each device in the chain, and then selectively enabling the
> > > > > > specific links that are stored in bridgeDevices_ to link the sensor to
> > > > > > Unicam across all intermediate muxes. If Sensor1 is used, this does mean
> > > > > > that the Sensor{2,3} -> Mux links might not change state, but the Mux to
> > > > > > Mux link will be disabled. Similarly, if we are driving Sensor3, the
> > > > > > Sensor{1,2} -> Mux links will be disabled.
> > > > >
> > > > > I think I see the obvious part I missed ;-)
> > > > >
> > > > > It doesn't matter what configuration Mux2 has, as long as Mux1 only has
> > > > > the Sensor1 link enabled!
> > > > >
> > > > > It might help to document that in the comments somewhere?
> > > > > Particularly now that you've already written the explanation to me -
> > > > > perhaps it should be a block comment above enumerateVideoDevices.
> > > > >
> > > > > Feel free to take or add the ascii diagram, I think more pictures in
> > > > > our documentation are always helpful. It's really easy to make those
> > > > > block diagrams in ascii art at https://asciiflow.com.
> > > >
> > > > Sure, I'll add some comments through the code explaining it in more detail.
> > > >
> > > > > > > > + const MediaPad *srcPad = link->sink();
> > > > > > > > +		ret = device->setFormat(srcPad->index(), &sensorFormat);
> > > > > > >
> > > > > > > This assignment of ret is in a loop, so earlier failures are going to
> > > > > > > get ignored, unchecked.
> > > > > >
> > > > > > This should not matter, as ret is only used in the loop to count successful
> > > > > > camera registrations. The return value from match() will be false only if
> > > > > > 0 cameras were registered.
> > > > >
> > > > > So should we not even assign it?
> > > > > Do we need to report a warning if it failed to set the format?
> > > > >
> > > > > But aren't we here in configure() setting the format of the device to
> > > > > propagate the format?
> > > > >
> > > > > If we fail to setFormat() in that case, then I think we should be saying
> > > > > the configure() call failed?
> > > >
> > > > Oops, my mistake. I misread your original comment and which bit of
> > > > code it refers to! This ret value *is* used for the return path, and must
> > > > be accounted for correctly as it is set in a loop! I'll fix this in the next
> > > > revision.
> > > >
> > > > > > > > +		LOG(RPI, Info) << "Configured media link on device " << device->entity()->name()
> > > > > > > > +			       << " at pad " << srcPad->index();
> > > > > > > > + }
> > > > > > > > +
> > > > > > > > return ret;
> > > > > > > > }
> > > > > > > >
> > > > > > > > @@ -1098,6 +1124,13 @@ int PipelineHandlerRPi::registerCamera(MediaDevice *unicam, MediaDevice *isp, Me
> > > > > > > > if (data->sensor_->init())
> > > > > > > > return -EINVAL;
> > > > > > > >
> > > > > > > > +	/*
> > > > > > > > +	 * Enumerate all the Video Mux/Bridge devices across the sensor -> unicam
> > > > > > > > +	 * link. There may be a cascade of devices in this link!
> > > > > > > > +	 */
> > > > > > > > +	MediaLink *link = sensorEntity->getPadByIndex(0)->links()[0];
> > > > > > > > +	enumerateVideoDevices(data.get(), link);
> > > > > > > > +
> > > > > > > >  	data->sensorFormats_ = populateSensorFormats(data->sensor_);
> > > > > > > >
> > > > > > > > ipa::RPi::SensorConfig sensorConfig;
> > > > > > > > @@ -1224,6 +1257,51 @@ int PipelineHandlerRPi::registerCamera(MediaDevice *unicam, MediaDevice *isp, Me
> > > > > > > > return 0;
> > > > > > > > }
> > > > > > > >
> > > > > > > > +void PipelineHandlerRPi::enumerateVideoDevices(RPiCameraData *data, MediaLink *link)
> > >
> > > This function accesses members of data but doesn't seem to access any
> > > member of PipelineHandlerRPi. Could it be moved to RPiCameraData ?
> >
> > Yes, I can do that.
> >
> > > > > > > > +{
> > > > > > > > + const MediaPad *sinkPad = link->sink();
> > > > > > > > + const MediaEntity *entity = sinkPad->entity();
> > > > > > > > + bool unicamFound = false;
> > > > > > > > +
> > > > > > > > +	/* We only deal with Video Mux and Bridge devices in cascade. */
> > > > > > > > +	if (entity->function() != MEDIA_ENT_F_VID_MUX &&
> > > > > > > > +	    entity->function() != MEDIA_ENT_F_VID_IF_BRIDGE)
> > > > > > > > +		return;
> > > > > > > > +
> > > > > > > > +	LOG(RPI, Info) << "Found video mux device " << entity->name()
> > > > > > > > +		       << " linked to sink pad " << sinkPad->index();
> > > > > > > > +
> > > > > > > > +	data->bridgeDevices_.emplace_back(std::make_unique<V4L2Subdevice>(entity), link);
> > > > > > > > +	data->bridgeDevices_.back().first->open();
> > > > > > >
> > > > > > > Going through the above, I can't help but wonder if the bridgeDevices_
> > > > > > > should be stored as a single instance in the Pipeline handler (not each
> > > > > > > CameraData; we have one CameraData per camera, right? If so, how does
> > > > > > > camera3 disable all the links? Does it know about every path?)
> > > > > >
> > > > > > From my description earlier, bridgeDevices_ must be stored in the CameraData, as
> > > > > > it lists the devices in the path between the sensor and Unicam. And this is unique
> > > > > > per-sensor. Again, this does mean that if we are using Sensor1, links for
> > > > > > Sensor{2,3} -> Mux are not changed, but the Mux -> Mux link will be disabled.
> > > > > > Do you think it might be inappropriate to leave them enabled?
> > >
> > > I can't help but see lots of similarities with the simple pipeline
> > > handler, which also performs dynamic discovery of pipelines (all the way
> > > to the video node in that case, while in your case it would be all the
> > > way to the Unicam device). I wonder if we could share code between the
> > > two? This could benefit the Raspberry Pi pipeline handler by providing
> > > a more generic version of format propagation through the pipeline
> > > (although that part may be more difficult to share).
> > >
> > > An important difference between the two implementations is that the
> > > simple pipeline handler opens the video node and subdevs once only,
> > > storing them in the pipeline handler instance, and only stores pointers
> > > to those objects in the camera data.
> >
> > I've had a very brief look at the simple pipeline handler. I agree, there are
> > similarities between the two. I see that the enumeration/discovery in the
> > simple pipeline handler is a fair bit more sophisticated in finding all available
> > entities, whereas in the RPi case I only care to look for a cascade of mux
> > or bridge devices.
> >
> > What were your thoughts on trying to share functionality between these two
> > (and indeed all) pipeline handlers? Perhaps one way would be to have some
> > common code to create the media graph structure for any given media device?
> > This could include entities, pads and links represented in a common way for
> > pipeline handlers to use. Then the RPi enumeration code would simply look
> > to find the devices it cares about. This is probably a bigger job than is required
> > for this change though.
>
> That's a very good question :-)
>
> I don't usually believe in designing helpers without at least a couple
> of use cases for them; that more often than not leads to bad designs.
> Now we have two examples of code that needs to parse a media graph, and
> I expect the exact same feature could be implemented by all pipeline
> handlers.
>
> Following the design-from-use-cases philosophy, I think we should focus
> on the need at hand. I think it would be possible to share code that parses
> a media graph to locate sensors and paths from them all the way up to a
> given entity (Unicam in your case, any video node in the simple pipeline
> handler case), and store the paths in a shared data structure. Functions
> to then set up the links in the pipeline also seem to be good candidates
> for a common helper.
>
> Format configuration along the pipeline could also possibly be shared,
> but that seems more difficult, as the needs appear to be a bit
> different.
>
> I do agree this is a bit of yak shaving, as it's not strictly required
> to get this feature implemented, but we also have a growing technical
> debt. I was thus wondering if you think you could have time to
> experiment a little bit.
>
Sure, I'll see what I can come up with to have some general purpose
helpers dealing with media device enumeration/discovery. In the meantime,
I'll post a new patch with the requested changes soon.
Regards,
Naush
>
> > > > > I missed that, and now it's obvious. If the first mux is only pointing
> > > > > to Sensor1, then indeed it doesn't matter what mux2 is configured to
> > > > > link to...
> > > > >
> > > > > So I think that's ok.
> > > > >
> > > > > > Please shout if this doesn't make sense, and something simpler might
> > > > > > equally work :-)
> > > > >
> > > > > It's all fine until someone adds a crossbar, perhaps, that might make it
> > > > > possible to do more crazy things ;-)
> > > > >
> > > > > But we don't need to worry about that now, I don't think.
> > > > >
> > > > > I think I'm still concerned about that ret during the loop in
> > > > > configure(). But with that resolved:
> > > > >
> > > > > Reviewed-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
> > > > >
> > > > > > > > +
> > > > > > > > +	for (const MediaPad *pad : entity->pads()) {
> > > > > > > > +		/*
> > > > > > > > +		 * Iterate through all the sink pads down the cascade to find any
> > > > > > > > +		 * other Video Mux and Bridge devices.
> > > > > > > > +		 */
> > > > > > > > +		if (!(pad->flags() & MEDIA_PAD_FL_SOURCE))
> > > > > > > > +			continue;
> > > > > > > > +
> > > > > > > > +		for (MediaLink *l : pad->links()) {
> > > > > > > > +			enumerateVideoDevices(data, l);
> > > > > > > > +			if (l->sink()->entity()->name() == "unicam-image")
> > > > > > > > +				unicamFound = true;
> > > > > > > > +		}
> > > > > > > > +	}
> > > > > > > > +
> > > > > > > > +	/* This identifies the end of our entity enumeration recursion. */
> > > > > > > > +	if (link->source()->entity()->function() == MEDIA_ENT_F_CAM_SENSOR) {
> > > > > > > > +		/*
> > > > > > > > +		 * If Unicam is not at the end of this cascade, we cannot configure
> > > > > > > > +		 * this topology automatically, so remove all entity references.
> > > > > > > > +		 */
> > > > > > > > +		if (!unicamFound) {
> > > > > > > > +			LOG(RPI, Warning) << "Cannot automatically configure this MC topology!";
> > > > > > > > +			data->bridgeDevices_.clear();
> > > > > > > > +		}
> > > > > > > > +	}
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +
> > > > > > > > int PipelineHandlerRPi::queueAllBuffers(Camera *camera)
> > > > > > > > {
> > > > > > > > RPiCameraData *data = cameraData(camera);
>
> --
> Regards,
>
> Laurent Pinchart
>