[libcamera-devel] [PATCH v3 5/9] pipeline: raspberrypi: Convert the pipeline handler to use media controller

Laurent Pinchart laurent.pinchart at ideasonboard.com
Wed Oct 27 16:18:41 CEST 2021


On Wed, Oct 27, 2021 at 05:12:03PM +0300, Laurent Pinchart wrote:
> On Wed, Oct 27, 2021 at 04:51:33PM +0300, Laurent Pinchart wrote:
> > On Wed, Oct 27, 2021 at 01:13:50PM +0100, Kieran Bingham wrote:
> > > Quoting Naushir Patuck (2021-10-27 10:27:59)
> > > > Switch the pipeline handler to use the new Unicam media controller based driver.
> > > > With this change, we directly talk to the sensor device driver to set controls
> > > > and set/get formats in the pipeline handler.
> > > > 
> > > > This change requires the accompanying Raspberry Pi linux kernel change at
> > > > https://github.com/raspberrypi/linux/pull/4645. If this kernel change is not
> > > > present, the pipeline handler will fail to run with an error message informing
> > > > the user to update the kernel build.
> > > > 
> > > > Signed-off-by: Naushir Patuck <naush at raspberrypi.com>
> > > > Reviewed-by: David Plowman <david.plowman at raspberrypi.com>
> > > > ---
> > > >  .../pipeline/raspberrypi/raspberrypi.cpp      | 185 ++++++++++++------
> > > >  1 file changed, 127 insertions(+), 58 deletions(-)
> > > > 
> > > > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > > index 1634ca98f481..48f561d31a31 100644
> > > > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > > @@ -26,6 +26,7 @@
> > > >  #include <libcamera/base/utils.h>
> > > >  
> > > >  #include <linux/bcm2835-isp.h>
> > > > +#include <linux/media-bus-format.h>
> > > >  #include <linux/videodev2.h>
> > > >  
> > > >  #include "libcamera/internal/bayer_format.h"
> > > > @@ -48,6 +49,66 @@ LOG_DEFINE_CATEGORY(RPI)
> > > >  
> > > >  namespace {
> > > >  
> > > > +/* Map of mbus codes to supported sizes reported by the sensor. */
> > > > +using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> > > > +
> > > > +SensorFormats populateSensorFormats(std::unique_ptr<CameraSensor> &sensor)
> > > > +{
> > > > +       SensorFormats formats;
> > > > +
> > > > +       for (auto const mbusCode : sensor->mbusCodes())
> > > > +               formats.emplace(mbusCode, sensor->sizes(mbusCode));
> > > > +
> > > > +       return formats;
> > > > +}
> > > > +
> > > > +BayerFormat mbusCodeToBayerFormat(unsigned int mbusCode)
> > > > +{
> > > > +       static const std::unordered_map<unsigned int, BayerFormat> mbusCodeToBayer {
> > > > +               { MEDIA_BUS_FMT_SBGGR8_1X8, { BayerFormat::BGGR, 8, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGBRG8_1X8, { BayerFormat::GBRG, 8, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGRBG8_1X8, { BayerFormat::GRBG, 8, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SRGGB8_1X8, { BayerFormat::RGGB, 8, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SBGGR10_1X10, { BayerFormat::BGGR, 10, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGBRG10_1X10, { BayerFormat::GBRG, 10, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGRBG10_1X10, { BayerFormat::GRBG, 10, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SRGGB10_1X10, { BayerFormat::RGGB, 10, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SBGGR12_1X12, { BayerFormat::BGGR, 12, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGBRG12_1X12, { BayerFormat::GBRG, 12, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGRBG12_1X12, { BayerFormat::GRBG, 12, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SRGGB12_1X12, { BayerFormat::RGGB, 12, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SBGGR14_1X14, { BayerFormat::BGGR, 14, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGBRG14_1X14, { BayerFormat::GBRG, 14, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGRBG14_1X14, { BayerFormat::GRBG, 14, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SRGGB14_1X14, { BayerFormat::RGGB, 14, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SBGGR16_1X16, { BayerFormat::BGGR, 16, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGBRG16_1X16, { BayerFormat::GBRG, 16, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SGRBG16_1X16, { BayerFormat::GRBG, 16, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_SRGGB16_1X16, { BayerFormat::RGGB, 16, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_Y8_1X8, { BayerFormat::MONO, 8, BayerFormat::None } },
> > > > +               { MEDIA_BUS_FMT_Y10_1X10, { BayerFormat::MONO, 10, BayerFormat::None } },
> > > > +       };
> > > > +
> > > > +       const auto it = mbusCodeToBayer.find(mbusCode);
> > > > +       if (it != mbusCodeToBayer.end())
> > > > +               return it->second;
> > > > +
> > > > +       return BayerFormat{};
> > 
> > I'd go the other way around, returning early on errors.
> > 
> > > > +}
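
(For illustration, the early-return version of the lookup would be
something like:

	const auto it = mbusCodeToBayer.find(mbusCode);
	if (it == mbusCodeToBayer.end())
		return BayerFormat{};

	return it->second;

but see below, the whole helper may go away anyway.)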
> > 
> > I was expecting media bus code <-> (V4L2)PixelFormat conversion here, as
> > that's the part that is device specific. I understand why it's
> > convenient to use the BayerFormat class as an intermediate format in the
> > media bus code <-> BayerFormat <-> PixelFormat conversion. I'm a bit
> > worried that it will cause issues later, especially when adding support
> > for YUV sensors (as BayerFormat won't be able to represent non-raw
> > formats), but solving this can be deferred.
> > 
> > If you prefer going through BayerFormat, I think the map can be dropped,
> > as it essentially duplicates the one from the BayerFormat class. This
> > function should be kept though, given that a packing argument will be
> > added later.
> 
> Actually, the packing argument is added to toV4L2DeviceFormat(), not
> mbusCodeToBayerFormat(), so BayerFormat::fromMbusCode() can be used
> directly.
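
i.e. something along these lines in toV4L2DeviceFormat() (untested,
assuming fromMbusCode() returns an invalid BayerFormat for codes it
doesn't know about):

	BayerFormat bayer = BayerFormat::fromMbusCode(format.mbus_code);

	ASSERT(bayer.isValid());

	bayer.packing = BayerFormat::Packing::CSI2Packed;
	deviceFormat.fourcc = bayer.toV4L2PixelFormat();
	deviceFormat.size = format.size;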
> 
> > > > +
> > > > +V4L2DeviceFormat toV4L2DeviceFormat(V4L2SubdeviceFormat &format)
> > 
> > The argument isn't modified, it should be const.
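
i.e.

	V4L2DeviceFormat toV4L2DeviceFormat(const V4L2SubdeviceFormat &format)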
> > 
> > > > +{
> > > > +       V4L2DeviceFormat deviceFormat;
> > > > +       BayerFormat bayer = mbusCodeToBayerFormat(format.mbus_code);
> > > > +
> > > > +       ASSERT(bayer.isValid());
> > > > +
> > > > +       bayer.packing = BayerFormat::Packing::CSI2Packed;
> > > 
> > > I think this can be BayerFormat::CSI2Packed; it doesn't hurt to
> > > fully qualify it, I suspect, but the packing is used without
> > > ::Packing:: in the table above.
> > > 
> > > If we always set BayerFormat::CSI2Packed though, why not do that in the
> > > table above instead of initialising with BayerFormat::None?
> > 
> > BayerFormat::None isn't very explicit. I'd rather turn the enum into a
> > scoped enum. I've sent a patch to do so.
> > 
> > > (Maybe I'll discover later that we don't always do this ... ?)
> > > 
> > > > +       deviceFormat.fourcc = bayer.toV4L2PixelFormat();
> > > > +       deviceFormat.size = format.size;
> > > > +       return deviceFormat;
> > > > +}
> > > > +
> > > >  bool isRaw(PixelFormat &pixFmt)
> > > >  {
> > > >         /*
> > > > @@ -74,11 +135,10 @@ double scoreFormat(double desired, double actual)
> > > >         return score;
> > > >  }
> > > >  
> > > > -V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,
> > > > -                             const Size &req)
> > > > +V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req)
> > > >  {
> > > >         double bestScore = std::numeric_limits<double>::max(), score;
> > > > -       V4L2DeviceFormat bestMode;
> > > > +       V4L2SubdeviceFormat bestFormat;
> > > >  
> > > >  #define PENALTY_AR             1500.0
> > > >  #define PENALTY_8BIT           2000.0
> > > > @@ -88,19 +148,18 @@ V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,
> > > >  
> > > >         /* Calculate the closest/best mode from the user requested size. */
> > > >         for (const auto &iter : formatsMap) {
> > > > -               V4L2PixelFormat v4l2Format = iter.first;
> > > > -               const PixelFormatInfo &info = PixelFormatInfo::info(v4l2Format);
> > > > +               const unsigned int mbusCode = iter.first;
> > > > +               const PixelFormat format = mbusCodeToBayerFormat(mbusCode).toPixelFormat();
> > > 
> > > Aha, ok here it is ... so we can't initialise with Packed in the table.
> > > 
> > > > +               const PixelFormatInfo &info = PixelFormatInfo::info(format);
> > > >  
> > > > -               for (const SizeRange &sz : iter.second) {
> > > > -                       double modeWidth = sz.contains(req) ? req.width : sz.max.width;
> > > > -                       double modeHeight = sz.contains(req) ? req.height : sz.max.height;
> > > > +               for (const Size &size : iter.second) {
> > > >                         double reqAr = static_cast<double>(req.width) / req.height;
> > > > -                       double modeAr = modeWidth / modeHeight;
> > > > +                       double fmtAr = size.width / size.height;
> > > >  
> > > >                         /* Score the dimensions for closeness. */
> > > > -                       score = scoreFormat(req.width, modeWidth);
> > > > -                       score += scoreFormat(req.height, modeHeight);
> > > > -                       score += PENALTY_AR * scoreFormat(reqAr, modeAr);
> > > > +                       score = scoreFormat(req.width, size.width);
> > > > +                       score += scoreFormat(req.height, size.height);
> > > > +                       score += PENALTY_AR * scoreFormat(reqAr, fmtAr);
> > > >  
> > > >                         /* Add any penalties... this is not an exact science! */
> > > >                         if (!info.packed)
> > > > @@ -115,18 +174,18 @@ V4L2DeviceFormat findBestMode(V4L2VideoDevice::Formats &formatsMap,
> > > >  
> > > >                         if (score <= bestScore) {
> > > >                                 bestScore = score;
> > > > -                               bestMode.fourcc = v4l2Format;
> > > > -                               bestMode.size = Size(modeWidth, modeHeight);
> > > > +                               bestFormat.mbus_code = mbusCode;
> > > > +                               bestFormat.size = size;
> > > >                         }
> > > >  
> > > > -                       LOG(RPI, Info) << "Mode: " << modeWidth << "x" << modeHeight
> > > > -                                      << " fmt " << v4l2Format.toString()
> > > > +                       LOG(RPI, Info) << "Format: " << size.toString()
> > > > +                                      << " fmt " << format.toString()
> > > >                                        << " Score: " << score
> > > >                                        << " (best " << bestScore << ")";
> > > >                 }
> > > >         }
> > > >  
> > > > -       return bestMode;
> > > > +       return bestFormat;
> > > >  }
> > > >  
> > > >  enum class Unicam : unsigned int { Image, Embedded };
> > > > @@ -170,6 +229,7 @@ public:
> > > >         std::unique_ptr<ipa::RPi::IPAProxyRPi> ipa_;
> > > >  
> > > >         std::unique_ptr<CameraSensor> sensor_;
> > > > +       SensorFormats sensorFormats_;
> > > >         /* Array of Unicam and ISP device streams and associated buffers/streams. */
> > > >         RPi::Device<Unicam, 2> unicam_;
> > > >         RPi::Device<Isp, 4> isp_;
> > > > @@ -352,9 +412,9 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > >                          * Calculate the best sensor mode we can use based on
> > > >                          * the user request.
> > > >                          */
> > > > -                       V4L2VideoDevice::Formats fmts = data_->unicam_[Unicam::Image].dev()->formats();
> > > > -                       V4L2DeviceFormat sensorFormat = findBestMode(fmts, cfg.size);
> > > > -                       int ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&sensorFormat);
> > > > +                       V4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size);
> > > > +                       V4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat);
> > > > +                       int ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&unicamFormat);
> > > >                         if (ret)
> > > >                                 return Invalid;
> > > >  
> > > > @@ -366,7 +426,7 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > >                          * fetch the "native" (i.e. untransformed) Bayer order,
> > > >                          * because the sensor may currently be flipped!
> > > >                          */
> > > > -                       V4L2PixelFormat fourcc = sensorFormat.fourcc;
> > > > +                       V4L2PixelFormat fourcc = unicamFormat.fourcc;
> > > >                         if (data_->flipsAlterBayerOrder_) {
> > > >                                 BayerFormat bayer = BayerFormat::fromV4L2PixelFormat(fourcc);
> > > >                                 bayer.order = data_->nativeBayerOrder_;
> > > > @@ -375,15 +435,15 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > >                         }
> > > >  
> > > >                         PixelFormat sensorPixFormat = fourcc.toPixelFormat();
> > > > -                       if (cfg.size != sensorFormat.size ||
> > > > +                       if (cfg.size != unicamFormat.size ||
> > > >                             cfg.pixelFormat != sensorPixFormat) {
> > > > -                               cfg.size = sensorFormat.size;
> > > > +                               cfg.size = unicamFormat.size;
> > > >                                 cfg.pixelFormat = sensorPixFormat;
> > > >                                 status = Adjusted;
> > > >                         }
> > > >  
> > > > -                       cfg.stride = sensorFormat.planes[0].bpl;
> > > > -                       cfg.frameSize = sensorFormat.planes[0].size;
> > > > +                       cfg.stride = unicamFormat.planes[0].bpl;
> > > > +                       cfg.frameSize = unicamFormat.planes[0].size;
> > > >  
> > > >                         rawCount++;
> > > >                 } else {
> > > > @@ -472,7 +532,7 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,
> > > >  {
> > > >         RPiCameraData *data = cameraData(camera);
> > > >         CameraConfiguration *config = new RPiCameraConfiguration(data);
> > > > -       V4L2DeviceFormat sensorFormat;
> > > > +       V4L2SubdeviceFormat sensorFormat;
> > > >         unsigned int bufferCount;
> > > >         PixelFormat pixelFormat;
> > > >         V4L2VideoDevice::Formats fmts;
> > > > @@ -487,9 +547,8 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,
> > > >                 switch (role) {
> > > >                 case StreamRole::Raw:
> > > >                         size = data->sensor_->resolution();
> > > > -                       fmts = data->unicam_[Unicam::Image].dev()->formats();
> > > > -                       sensorFormat = findBestMode(fmts, size);
> > > > -                       pixelFormat = sensorFormat.fourcc.toPixelFormat();
> > > > +                       sensorFormat = findBestFormat(data->sensorFormats_, size);
> > > > +                       pixelFormat = mbusCodeToBayerFormat(sensorFormat.mbus_code).toPixelFormat();

This means that we'll default to unpacked raw formats in memory,
requiring additional memory usage and bandwidth. Is that really the
right option? Shouldn't we default to packed formats?
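
If packed is the better default, one option (untested sketch) would be
to reuse the toV4L2DeviceFormat() helper here, so the CSI2 packing is
applied consistently:

	case StreamRole::Raw:
		size = data->sensor_->resolution();
		sensorFormat = findBestFormat(data->sensorFormats_, size);
		pixelFormat = toV4L2DeviceFormat(sensorFormat).fourcc.toPixelFormat();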

> > > >                         ASSERT(pixelFormat.isValid());
> > > >                         bufferCount = 2;
> > > >                         rawCount++;
> > > > @@ -599,32 +658,29 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > > >         }
> > > >  
> > > >         /* First calculate the best sensor mode we can use based on the user request. */
> > > > -       V4L2VideoDevice::Formats fmts = data->unicam_[Unicam::Image].dev()->formats();
> > > > -       V4L2DeviceFormat sensorFormat = findBestMode(fmts, rawStream ? sensorSize : maxSize);
> > > > +       V4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize);
> > > > +       ret = data->sensor_->setFormat(&sensorFormat);
> > > > +       if (ret)
> > > > +               return ret;
> > > >  
> > > >         /*
> > > >          * Unicam image output format. The ISP input format gets set at start,
> > > >          * just in case we have swapped bayer orders due to flips.
> > > >          */
> > > > -       ret = data->unicam_[Unicam::Image].dev()->setFormat(&sensorFormat);
> > > > +       V4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat);
> > > > +       ret = data->unicam_[Unicam::Image].dev()->setFormat(&unicamFormat);
> > > >         if (ret)
> > > >                 return ret;
> > > >  
> > > > -       /*
> > > > -        * The control ranges associated with the sensor may need updating
> > > > -        * after a format change.
> > > > -        * \todo Use the CameraSensor::setFormat API instead.
> > > > -        */
> > > > -       data->sensor_->updateControlInfo();
> > > > -
> > > >         LOG(RPI, Info) << "Sensor: " << camera->id()
> > > > -                      << " - Selected mode: " << sensorFormat.toString();
> > > > +                      << " - Selected sensor format: " << sensorFormat.toString()
> > > > +                      << " - Selected unicam format: " << unicamFormat.toString();
> > > >  
> > > >         /*
> > > >          * This format may be reset on start() if the bayer order has changed
> > > >          * because of flips in the sensor.
> > > >          */
> > > > -       ret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat);
> > > > +       ret = data->isp_[Isp::Input].dev()->setFormat(&unicamFormat);
> > > >         if (ret)
> > > >                 return ret;
> > > >  
> > > > @@ -746,8 +802,8 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > > >         data->ispMinCropSize_ = testCrop.size();
> > > >  
> > > >         /* Adjust aspect ratio by providing crops on the input image. */
> > > > -       Size size = sensorFormat.size.boundedToAspectRatio(maxSize);
> > > > -       Rectangle crop = size.centeredTo(Rectangle(sensorFormat.size).center());
> > > > +       Size size = unicamFormat.size.boundedToAspectRatio(maxSize);
> > > > +       Rectangle crop = size.centeredTo(Rectangle(unicamFormat.size).center());
> > > >         data->ispCrop_ = crop;
> > > >  
> > > >         data->isp_[Isp::Input].dev()->setSelection(V4L2_SEL_TGT_CROP, &crop);
> > > > @@ -761,8 +817,11 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > > >          * supports it.
> > > >          */
> > > >         if (data->sensorMetadata_) {
> > > > -               format = {};
> > > > +               V4L2SubdeviceFormat embeddedFormat;
> > > > +
> > > > +               data->sensor_->device()->getFormat(1, &embeddedFormat);
> > > >                 format.fourcc = V4L2PixelFormat(V4L2_META_FMT_SENSOR_DATA);
> > > > +               format.planes[0].size = embeddedFormat.size.width * embeddedFormat.size.height;
> > > >  
> > > >                 LOG(RPI, Debug) << "Setting embedded data format.";
> > > >                 ret = data->unicam_[Unicam::Embedded].dev()->setFormat(&format);
> > > > @@ -847,9 +906,14 @@ int PipelineHandlerRPi::start(Camera *camera, const ControlList *controls)
> > > >          * IPA configure may have changed the sensor flips - hence the bayer
> > > >          * order. Get the sensor format and set the ISP input now.
> > > >          */
> > > > -       V4L2DeviceFormat sensorFormat;
> > > > -       data->unicam_[Unicam::Image].dev()->getFormat(&sensorFormat);
> > > > -       ret = data->isp_[Isp::Input].dev()->setFormat(&sensorFormat);
> > > > +       V4L2SubdeviceFormat sensorFormat;
> > > > +       data->sensor_->device()->getFormat(0, &sensorFormat);
> > > > +
> > > > +       V4L2DeviceFormat ispFormat;
> > > > +       ispFormat.fourcc = mbusCodeToBayerFormat(sensorFormat.mbus_code).toV4L2PixelFormat();

And unless I'm mistaken, this won't work if the user selects a packed
format.

Here I would use the V4L2PixelFormat that has been set on the Unicam
video node, without going through the media bus code.
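
Something like (untested):

	V4L2DeviceFormat unicamFormat;
	data->unicam_[Unicam::Image].dev()->getFormat(&unicamFormat);

	V4L2DeviceFormat ispFormat;
	ispFormat.fourcc = unicamFormat.fourcc;
	ispFormat.size = unicamFormat.size;

	ret = data->isp_[Isp::Input].dev()->setFormat(&ispFormat);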

> > > > +       ispFormat.size = sensorFormat.size;
> > > > +
> > > > +       ret = data->isp_[Isp::Input].dev()->setFormat(&ispFormat);
> > > >         if (ret) {
> > > >                 stop(camera);
> > > >                 return ret;
> > > > @@ -1004,6 +1068,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)
> > > >         if (data->sensor_->init())
> > > >                 return false;
> > > >  
> > > > +       data->sensorFormats_ = populateSensorFormats(data->sensor_);
> > > > +
> > > >         ipa::RPi::SensorConfig sensorConfig;
> > > >         if (data->loadIPA(&sensorConfig)) {
> > > >                 LOG(RPI, Error) << "Failed to load a suitable IPA library";
> > > > @@ -1030,6 +1096,11 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)
> > > >                         return false;
> > > >         }
> > > >  
> > > > +       if (!(data->unicam_[Unicam::Image].dev()->caps().device_caps() & V4L2_CAP_IO_MC)) {
> > > 
> > > I'd probably add this to v4l2_videodevice.h as hasMediaController() on
> > > the V4L2Capabilities.
> > > 
> > >   Done: libcamera: v4l2_videodevice: provide hasMediaController()
> > > 
> > > No requirement to change this here; unless you want to, or the above
> > > lands quickly, it can be updated later.
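
For reference, such a helper would presumably be a thin wrapper (exact
name and location per Kieran's patch, this is just a sketch):

	bool hasMediaController() const
	{
		return device_caps() & V4L2_CAP_IO_MC;
	}

with the check here becoming

	if (!data->unicam_[Unicam::Image].dev()->caps().hasMediaController()) {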
> > > 
> > > > +               LOG(RPI, Error) << "Unicam driver did not advertise V4L2_CAP_IO_MC, please update your kernel!";
> > > 
> > > I would probably have written this as 
> > > 	"Unicam driver does not use the MediaController, please update your kernel!";
> > > 
> > > but I'm pleased to see we can check this at run time anyway.
> > > 
> > > Nothing else stands out to me in this so:
> > > 
> > > Reviewed-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
> > > 
> > > > +               return false;
> > > > +       }
> > > > +
> > > >         /*
> > > >          * Setup our delayed control writer with the sensor default
> > > >          * gain and exposure delays. Mark VBLANK for priority write.
> > > > @@ -1039,7 +1110,7 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)
> > > >                 { V4L2_CID_EXPOSURE, { sensorConfig.exposureDelay, false } },
> > > >                 { V4L2_CID_VBLANK, { sensorConfig.vblankDelay, true } }
> > > >         };
> > > > -       data->delayedCtrls_ = std::make_unique<DelayedControls>(data->unicam_[Unicam::Image].dev(), params);
> > > > +       data->delayedCtrls_ = std::make_unique<DelayedControls>(data->sensor_->device(), params);
> > > >         data->sensorMetadata_ = sensorConfig.sensorMetadata;
> > > >  
> > > >         /* Register the controls that the Raspberry Pi IPA can handle. */
> > > > @@ -1066,15 +1137,14 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)
> > > >          * As part of answering the final question, we reset the camera to
> > > >          * no transform at all.
> > > >          */
> > > > -
> > > > -       V4L2VideoDevice *dev = data->unicam_[Unicam::Image].dev();
> > > > -       const struct v4l2_query_ext_ctrl *hflipCtrl = dev->controlInfo(V4L2_CID_HFLIP);
> > > > +       const V4L2Subdevice *sensor = data->sensor_->device();
> > > > +       const struct v4l2_query_ext_ctrl *hflipCtrl = sensor->controlInfo(V4L2_CID_HFLIP);
> > > >         if (hflipCtrl) {
> > > >                 /* We assume it will support vflips too... */
> > > >                 data->supportsFlips_ = true;
> > > >                 data->flipsAlterBayerOrder_ = hflipCtrl->flags & V4L2_CTRL_FLAG_MODIFY_LAYOUT;
> > > >  
> > > > -               ControlList ctrls(dev->controls());
> > > > +               ControlList ctrls(data->sensor_->controls());
> > > >                 ctrls.set(V4L2_CID_HFLIP, 0);
> > > >                 ctrls.set(V4L2_CID_VFLIP, 0);
> > > >                 data->setSensorControls(ctrls);
> > > > @@ -1082,9 +1152,8 @@ bool PipelineHandlerRPi::match(DeviceEnumerator *enumerator)
> > > >  
> > > >         /* Look for a valid Bayer format. */
> > > >         BayerFormat bayerFormat;
> > > > -       for (const auto &iter : dev->formats()) {
> > > > -               V4L2PixelFormat v4l2Format = iter.first;
> > > > -               bayerFormat = BayerFormat::fromV4L2PixelFormat(v4l2Format);
> > > > +       for (const auto &iter : data->sensorFormats_) {
> > > > +               bayerFormat = mbusCodeToBayerFormat(iter.first);
> > > >                 if (bayerFormat.isValid())
> > > >                         break;
> > > >         }
> > > > @@ -1271,7 +1340,7 @@ int RPiCameraData::configureIPA(const CameraConfiguration *config)
> > > >                 }
> > > >         }
> > > >  
> > > > -       entityControls.emplace(0, unicam_[Unicam::Image].dev()->controls());
> > > > +       entityControls.emplace(0, sensor_->controls());
> > > >         entityControls.emplace(1, isp_[Isp::Input].dev()->controls());
> > > >  
> > > >         /* Always send the user transform to the IPA. */
> > > > @@ -1406,10 +1475,10 @@ void RPiCameraData::setSensorControls(ControlList &controls)
> > > >                 ControlList vblank_ctrl;
> > > >  
> > > >                 vblank_ctrl.set(V4L2_CID_VBLANK, controls.get(V4L2_CID_VBLANK));
> > > > -               unicam_[Unicam::Image].dev()->setControls(&vblank_ctrl);
> > > > +               sensor_->setControls(&vblank_ctrl);
> > > >         }
> > > >  
> > > > -       unicam_[Unicam::Image].dev()->setControls(&controls);
> > > > +       sensor_->setControls(&controls);
> > > >  }
> > > >  
> > > >  void RPiCameraData::unicamBufferDequeue(FrameBuffer *buffer)
> 
> -- 
> Regards,
> 
> Laurent Pinchart

-- 
Regards,

Laurent Pinchart

