[libcamera-devel] [PATCH 4/4] libcamera: rpi: Handle SensorConfiguration
Naushir Patuck
naush at raspberrypi.com
Thu Jul 27 16:31:20 CEST 2023
Hi Jacopo,
On Thu, 27 Jul 2023 at 09:13, Jacopo Mondi
<jacopo.mondi at ideasonboard.com> wrote:
>
> Hi Naush
>
> On Wed, Jul 26, 2023 at 10:52:59AM +0100, Naushir Patuck wrote:
> > Hi Jacopo,
> >
> > Thank you for your work.
> >
> > On Mon, 24 Jul 2023 at 13:39, Jacopo Mondi via libcamera-devel
> > <libcamera-devel at lists.libcamera.org> wrote:
> > >
> > > Handle the SensorConfiguration provided by the application in the
> > > pipeline validate() and configure() call chains.
> > >
> > > During validation, first make sure SensorConfiguration is valid, then
> > > handle it to compute the sensor format.
> > >
> > > For the VC4 platform, where the RAW stream follows the sensor's
> > > configuration, adjust the RAW stream configuration to match the sensor
> > > configuration.
> > >
> > > Signed-off-by: Jacopo Mondi <jacopo.mondi at ideasonboard.com>
> > > ---
> > > .../pipeline/rpi/common/pipeline_base.cpp | 64 +++++++++++++++----
> > > .../pipeline/rpi/common/pipeline_base.h | 4 +-
> > > src/libcamera/pipeline/rpi/vc4/vc4.cpp | 30 ++++++++-
> > > 3 files changed, 83 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > index e0fbeec37fe9..574e003402df 100644
> > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.cpp
> > > @@ -180,6 +180,11 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > if (config_.empty())
> > > return Invalid;
> > >
> > > + if (!sensorConfig.valid()) {
> > > + LOG(RPI, Error) << "Invalid sensor configuration request";
> > > + return Invalid;
> > > + }
> > > +
> > > status = validateColorSpaces(ColorSpaceFlag::StreamsShareColorSpace);
> > >
> > > /*
> > > @@ -207,19 +212,43 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > > std::sort(outStreams.begin(), outStreams.end(),
> > > [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > >
> > > - /* Compute the sensor configuration. */
> > > - unsigned int bitDepth = defaultRawBitDepth;
> > > - if (!rawStreams.empty()) {
> > > + /* Compute the sensor's format then do any platform specific fixups. */
> > > + unsigned int bitDepth;
> > > + Size sensorSize;
> > > +
> > > + if (sensorConfig) {
> > > + /* Use the application provided sensor configuration. */
> > > + bitDepth = sensorConfig.bitDepth;
> > > + sensorSize = sensorConfig.outputSize;
> > > + } else if (!rawStreams.empty()) {
> > > + /* Use the RAW stream format and size. */
> > > BayerFormat bayerFormat = BayerFormat::fromPixelFormat(rawStreams[0].cfg->pixelFormat);
> > > bitDepth = bayerFormat.bitDepth;
> > > + sensorSize = rawStreams[0].cfg->size;
> > > + } else {
> > > + bitDepth = defaultRawBitDepth;
> > > + sensorSize = outStreams[0].cfg->size;
> > > }
> > >
> > > - sensorFormat_ = data_->findBestFormat(rawStreams.empty() ? outStreams[0].cfg->size
> > > - : rawStreams[0].cfg->size,
> > > - bitDepth);
> > > + sensorFormat_ = data_->findBestFormat(sensorSize, bitDepth);
> >
> > We don't necessarily need to do this when sensorConfig is present, right?
> > But I suppose it's a good way to validate the values in sensorConfig, so
> > there's no harm in keeping it.
> >
> > > +
> > > + /*
> > > + * If a sensor configuration has been requested, it should apply
> > > + * without modifications.
> > > + */
> > > + if (sensorConfig) {
> > > + BayerFormat bayer = BayerFormat::fromMbusCode(sensorFormat_.mbus_code);
> > > +
> > > + if (bayer.bitDepth != sensorConfig.bitDepth ||
> > > + sensorFormat_.size != sensorConfig.outputSize) {
> > >
> > > - /* Do any platform specific fixups. */
> > > - status = data_->platformValidate(rawStreams, outStreams);
> > > + LOG(RPI, Error) << "Invalid sensor configuration: "
> > > + << "bitDepth/size mismatch";
> > > + return Invalid;
> > > + }
> > > + }
> > > +
> > > + status = data_->platformValidate(this, rawStreams, outStreams);
> >
> > Minor nit: we lost the comment above this statement.
> >
> >
> > > if (status == Invalid)
> > > return Invalid;
> > >
> > > @@ -466,12 +495,25 @@ int PipelineHandlerBase::configure(Camera *camera, CameraConfiguration *config)
> > > std::sort(ispStreams.begin(), ispStreams.end(),
> > > [](auto &l, auto &r) { return l.cfg->size > r.cfg->size; });
> > >
> > > - /* Apply the format on the sensor with any cached transform. */
> > > + /*
> > > + * Apply the format on the sensor with any cached transform.
> > > + *
> > > + * If the application has provided a sensor configuration apply it
> > > + * instead of just applying a format.
> > > + */
> > > const RPiCameraConfiguration *rpiConfig =
> > > static_cast<const RPiCameraConfiguration *>(config);
> > > - V4L2SubdeviceFormat sensorFormat = rpiConfig->sensorFormat_;
> > > + V4L2SubdeviceFormat sensorFormat;
> > >
> > > - ret = data->sensor_->setFormat(&sensorFormat, rpiConfig->combinedTransform_);
> > > + if (rpiConfig->sensorConfig) {
> > > + ret = data->sensor_->applyConfiguration(rpiConfig->sensorConfig,
> > > + rpiConfig->combinedTransform_,
> > > + &sensorFormat);
> > > + } else {
> > > + sensorFormat = rpiConfig->sensorFormat_;
> > > + ret = data->sensor_->setFormat(&sensorFormat,
> > > + rpiConfig->combinedTransform_);
> > > + }
> > > if (ret)
> > > return ret;
> > >
> > > diff --git a/src/libcamera/pipeline/rpi/common/pipeline_base.h b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > index a139c98a5a2b..0a795f4d2689 100644
> > > --- a/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > +++ b/src/libcamera/pipeline/rpi/common/pipeline_base.h
> > > @@ -42,6 +42,7 @@ namespace RPi {
> > > /* Map of mbus codes to supported sizes reported by the sensor. */
> > > using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> > >
> > > +class RPiCameraConfiguration;
> > > class CameraData : public Camera::Private
> > > {
> > > public:
> > > @@ -72,7 +73,8 @@ public:
> > > V4L2VideoDevice *dev;
> > > };
> > >
> > > - virtual CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > + virtual CameraConfiguration::Status platformValidate(RPiCameraConfiguration *rpiConfig,
> > > + std::vector<StreamParams> &rawStreams,
> > > std::vector<StreamParams> &outStreams) const = 0;
> > > virtual int platformConfigure(const V4L2SubdeviceFormat &sensorFormat,
> > > std::optional<BayerFormat::Packing> packing,
> > > diff --git a/src/libcamera/pipeline/rpi/vc4/vc4.cpp b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > index 018cf4881d0e..bf864d4174b2 100644
> > > --- a/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > +++ b/src/libcamera/pipeline/rpi/vc4/vc4.cpp
> > > @@ -65,7 +65,8 @@ public:
> > > {
> > > }
> > >
> > > - CameraConfiguration::Status platformValidate(std::vector<StreamParams> &rawStreams,
> > > + CameraConfiguration::Status platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > + std::vector<StreamParams> &rawStreams,
> > > std::vector<StreamParams> &outStreams) const override;
> > >
> > > int platformPipelineConfigure(const std::unique_ptr<YamlObject> &root) override;
> > > @@ -394,7 +395,8 @@ int PipelineHandlerVc4::platformRegister(std::unique_ptr<RPi::CameraData> &camer
> > > return 0;
> > > }
> > >
> > > -CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamParams> &rawStreams,
> > > +CameraConfiguration::Status Vc4CameraData::platformValidate(RPi::RPiCameraConfiguration *rpiConfig,
> > > + std::vector<StreamParams> &rawStreams,
> > > std::vector<StreamParams> &outStreams) const
> > > {
> > > CameraConfiguration::Status status = CameraConfiguration::Status::Valid;
> > > @@ -405,9 +407,29 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > > return CameraConfiguration::Status::Invalid;
> > > }
> > >
> > > - if (!rawStreams.empty())
> > > + if (!rawStreams.empty()) {
> > > rawStreams[0].dev = unicam_[Unicam::Image].dev();
> > >
> > > + /* Adjust the RAW stream to match the requested sensor config. */
> > > + if (rpiConfig->sensorConfig) {
> > > + StreamConfiguration *rawStream = rawStreams[0].cfg;
> > > + BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
> > > +
> > > + /* Handle flips to make sure to match the RAW stream format. */
> > > + if (flipsAlterBayerOrder_)
> > > + rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
> > > + PixelFormat rawFormat = rawBayer.toPixelFormat();
> > > +
> > > + if (rawStream->pixelFormat != rawFormat ||
> > > + rawStream->size != rpiConfig->sensorConfig.outputSize) {
> > > + rawStream->pixelFormat = rawFormat;
> > > + rawStream->size = rpiConfig->sensorConfig.outputSize;
> > > +
> > > + status = CameraConfiguration::Adjusted;
> > > + }
> > > + }
> > > + }
> > > +
> >
> > I wonder if we can make some simplifications here? The block of code that
> > handles Bayer orders and transforms already exists in
> > RPiCameraConfiguration::validate() (done there to avoid the derived
> > pipeline handlers repeating the same construct). Could it simply be
> > removed from here without any loss of functionality?
>
> Are you sure we're actually doing the same thing?
>
> As I understand it, the logic implemented here makes sure that the RAW
> StreamConfiguration matches the requested SensorConfiguration without
> actually testing it against the CSI-2 receiver video device.
>
> In RPiCameraConfiguration::validate(), on the other hand, we take the RAW
> StreamConfiguration regardless of the SensorConfiguration, create a
> V4L2DeviceFormat from it (using the media bus code from the sensor
> format), try to apply it to the CSI-2 receiver video device, and adjust
> the RAW StreamConfiguration according to the result of applying the
> format to the video device.
Sorry, I should have been clearer - I was only referring to the Bayer order
handling below, not the whole block of code.
>
> When it comes to the Bayer order handling, are you referring to:
>
> /* Handle flips to make sure to match the RAW stream format. */
> if (flipsAlterBayerOrder_)
> rawBayer = rawBayer.transform(rpiConfig->combinedTransform_);
>
> Is that your actual concern?
Yes, this is the bit that I'm wondering if we can remove.
>
> If we don't do it here, do we risk marking a correct configuration from
> the user as Adjusted?
>
> Let's assume your sensor is natively SRGGB10 and you have H/VFLIP
> enabled, so that the actual output format is BGGR10.
>
> You receive a CameraConfiguration with a RAW stream with
>
> { PixelFormat::SBGGR10, 1080p }
>
> And a SensorConfiguration = { 10, 1080p }
>
> In RPiCameraConfiguration::validate() we compute the 'best' sensor
> format with CameraData::findBestFormat(). At this point (and here I'm
> going from memory so please confirm) flips are not activated, so you
> get back a SubdeviceFormat with the native SRGGB10 ordering.
With you so far.
>
> Then we enter platformValidate() with
>
> sensorFormat_ = { MEDIA_BUS_SRGGB10, 1080p }
>
> and we get to
> BayerFormat rawBayer = BayerFormat::fromMbusCode(rpiConfig->sensorFormat_.mbus_code);
>
> Let's assume we don't do the above if (flipsAlterBayerOrder_) check: then
> we'll end up with
>
> if (rawStream->pixelFormat != rawFormat)
>
> and then we Adjust the RAW stream to PixelFormat::SRGGB10.
>
> Then we're back in RPiCameraConfiguration::validate(). We enter the
> "Further fixup on RAW streams" loop. We apply SRGGB10 to the CSI-2
> receiver, but there we notice that flipsAlterBayerOrder_ is set. So we
> adjust the RAW stream configuration back to SBGGR10 and mark it as
> Adjusted again, even though that is exactly what the application asked
> for!
>
> Is my understanding correct here?
Yes, I agree with your sequencing and final outcome here.
I guess my thinking here was that in Vc4CameraData::platformValidate we would
never touch cfg->pixelFormat, so there would be no if (flipsAlterBayerOrder_)
clause and no if (rawStream->pixelFormat != rawFormat) test.
From your example above, in the "Further fixup on RAW streams" loop, rawFormat
would be BGGR10, and after the if (data_->flipsAlterBayerOrder_) check we would
remain at BGGR10, so this would not signal Adjusted. I think that's what would
happen, but my head hurts too much thinking about it!
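To make that concrete, the reduced hunk I had in mind would look roughly like
the following. This is untested and simply trims your change above (keeping
your field names), so treat it as a sketch rather than a tested alternative:

	if (!rawStreams.empty()) {
		rawStreams[0].dev = unicam_[Unicam::Image].dev();

		/*
		 * Only adjust the RAW stream size to the requested sensor
		 * configuration; leave pixelFormat alone and let the
		 * "Further fixup on RAW streams" loop in validate() settle
		 * the Bayer order.
		 */
		if (rpiConfig->sensorConfig &&
		    rawStreams[0].cfg->size != rpiConfig->sensorConfig.outputSize) {
			rawStreams[0].cfg->size = rpiConfig->sensorConfig.outputSize;
			status = CameraConfiguration::Adjusted;
		}
	}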
Maybe the simplification I'm thinking of will not work in all circumstances,
and your change is more complete and explicit. But then I wonder: should we
instead remove the if (data_->flipsAlterBayerOrder_) test in the "Further
fixup on RAW streams" loop?
Regards,
Naush
>
> >
> > If the answer is yes, can I suggest another change: instead of passing in
> > *rpiConfig to this function, could we pass in sensorFormat_? This ought to
> > be equivalent, I think. In fact, a completely unrelated change that I'm
> > working on passes in sensorFormat_ (or really just a V4L2SubdeviceFormat &),
> > so this may be ideal :)
> >
> > Regards,
> > Naush
> >
> > > /*
> > > * For the two ISP outputs, one stream must be equal or smaller than the
> > > * other in all dimensions.
> > > @@ -417,6 +439,8 @@ CameraConfiguration::Status Vc4CameraData::platformValidate(std::vector<StreamPa
> > > for (unsigned int i = 0; i < outStreams.size(); i++) {
> > > Size size;
> > >
> > > + /* \todo Warn if upscaling: reduces image quality. */
> > > +
> > > size.width = std::min(outStreams[i].cfg->size.width,
> > > outStreams[0].cfg->size.width);
> > > size.height = std::min(outStreams[i].cfg->size.height,
> > > --
> > > 2.40.1
> > >