[libcamera-devel] [PATCH 2/2] pipeline: raspberrypi: Choose bit depth and packing according to raw stream
David Plowman
david.plowman at raspberrypi.com
Tue Nov 30 13:32:29 CET 2021
Hi Kieran
Thanks for the review!
On Tue, 30 Nov 2021 at 12:14, Kieran Bingham
<kieran.bingham at ideasonboard.com> wrote:
>
> Hi David,
>
> Quoting David Plowman (2021-11-30 11:22:59)
> > When a raw stream is specified, the bit depth and packing requested
> > should influence our choice of camera mode to match (if possible).
> >
> > Signed-off-by: David Plowman <david.plowman at raspberrypi.com>
> > ---
> > .../pipeline/raspberrypi/raspberrypi.cpp | 33 ++++++++++---------
> > 1 file changed, 18 insertions(+), 15 deletions(-)
> >
> > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > index 045725dd..03012e89 100644
> > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > @@ -49,6 +49,8 @@ LOG_DEFINE_CATEGORY(RPI)
> >
> > namespace {
> >
> > +unsigned int DEFAULT_RAW_BIT_DEPTH = 12;
> > +
> > /* Map of mbus codes to supported sizes reported by the sensor. */
> > using SensorFormats = std::map<unsigned int, std::vector<Size>>;
> >
> > @@ -125,15 +127,13 @@ double scoreFormat(double desired, double actual)
> > return score;
> > }
> >
> > -V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req)
> > +V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &req, unsigned int bitDepth)
> > {
> > double bestScore = std::numeric_limits<double>::max(), score;
> > V4L2SubdeviceFormat bestFormat;
> >
> > #define PENALTY_AR 1500.0
> > -#define PENALTY_8BIT 2000.0
> > -#define PENALTY_10BIT 1000.0
> > -#define PENALTY_12BIT 0.0
> > +#define PENALTY_BIT_DEPTH 500.0
> >
> > /* Calculate the closest/best mode from the user requested size. */
> > for (const auto &iter : formatsMap) {
> > @@ -152,12 +152,7 @@ V4L2SubdeviceFormat findBestFormat(const SensorFormats &formatsMap, const Size &
> > score += PENALTY_AR * scoreFormat(reqAr, fmtAr);
> >
> > /* Add any penalties... this is not an exact science! */
> > - if (info.bitsPerPixel == 12)
> > - score += PENALTY_12BIT;
> > - else if (info.bitsPerPixel == 10)
> > - score += PENALTY_10BIT;
> > - else if (info.bitsPerPixel == 8)
> > - score += PENALTY_8BIT;
> > + score += abs(info.bitsPerPixel - bitDepth) * PENALTY_BIT_DEPTH;
> >
> > if (score <= bestScore) {
> > bestScore = score;
> > @@ -397,9 +392,14 @@ CameraConfiguration::Status RPiCameraConfiguration::validate()
> > * Calculate the best sensor mode we can use based on
> > * the user request.
> > */
> > - V4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size);
> > + const PixelFormatInfo &info = PixelFormatInfo::info(cfg.pixelFormat);
> > + unsigned int bitDepth = info.isValid() ? info.bitsPerPixel : DEFAULT_RAW_BIT_DEPTH;
> > + V4L2SubdeviceFormat sensorFormat = findBestFormat(data_->sensorFormats_, cfg.size, bitDepth);
> > + BayerFormat::Packing packing = BayerFormat::Packing::CSI2;
> > + if (info.isValid() && !info.packed)
> > + packing = BayerFormat::Packing::None;
> > V4L2DeviceFormat unicamFormat = toV4L2DeviceFormat(sensorFormat,
> > - BayerFormat::Packing::CSI2);
> > + packing);
> > int ret = data_->unicam_[Unicam::Image].dev()->tryFormat(&unicamFormat);
> > if (ret)
> > return Invalid;
> > @@ -533,7 +533,7 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,
> > switch (role) {
> > case StreamRole::Raw:
> > size = data->sensor_->resolution();
> > - sensorFormat = findBestFormat(data->sensorFormats_, size);
> > + sensorFormat = findBestFormat(data->sensorFormats_, size, DEFAULT_RAW_BIT_DEPTH);
> > pixelFormat = mbusCodeToPixelFormat(sensorFormat.mbus_code,
> > BayerFormat::Packing::CSI2);
> > ASSERT(pixelFormat.isValid());
> > @@ -622,6 +622,7 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > Size maxSize, sensorSize;
> > unsigned int maxIndex = 0;
> > bool rawStream = false;
> > + unsigned int bitDepth = DEFAULT_RAW_BIT_DEPTH;
> >
> > /*
> > * Look for the RAW stream (if given) size as well as the largest
> > @@ -638,7 +639,9 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > sensorSize = cfg.size;
> > rawStream = true;
> > /* Check if the user has explicitly set an unpacked format. */
> > - packing = BayerFormat::fromPixelFormat(cfg.pixelFormat).packing;
> > + BayerFormat bayerFormat = BayerFormat::fromPixelFormat(cfg.pixelFormat);
> > + packing = bayerFormat.packing;
> > + bitDepth = bayerFormat.bitDepth;
> > } else {
> > if (cfg.size > maxSize) {
> > maxSize = config->at(i).size;
> > @@ -664,7 +667,7 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > }
> >
> > /* First calculate the best sensor mode we can use based on the user request. */
> > - V4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize);
> > + V4L2SubdeviceFormat sensorFormat = findBestFormat(data->sensorFormats_, rawStream ? sensorSize : maxSize, bitDepth);
>
> This all looks like it's the right direction, so
>
> Reviewed-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
>
>
> What happens if the application requests a RAW stream configured with
> a size that is not the full sensor size, but one of the smaller modes
> that it can produce?
>
> At the moment, this looks like the conditional above will override the
> request for the smaller mode, and force output to the full sensor size?
>
> Should
> rawStream ? sensorSize : maxSize,
>
> be considered as well? (In another patch if so is fine).
>
> --
> Kieran
I think this is behaving properly, though perhaps the variable naming
here is sub-optimal. "sensorSize" is actually the size of the raw
stream that the application has requested, whereas "maxSize" is the
maximum size of any output (non-raw) streams. So it selects the size
the application requested for the raw stream, even if that is smaller
than the full sensor resolution.
In fact it has always done this - there's no change to that behaviour
here. This particular patch extends it to pay attention to the
requested bit depth and packing as well, which I think was an
omission previously.
Does that make sense?
Thanks!
David
>
>
>
> > ret = data->sensor_->setFormat(&sensorFormat);
> > if (ret)
> > return ret;
> > --
> > 2.30.2
> >