[libcamera-devel] [PATCH v4 2/6] pipeline: raspberrypi: Set the ISP Output1 to 1/4 resolution if unused
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Thu Feb 4 22:56:54 CET 2021
Hi Naush,
On Wed, Jan 27, 2021 at 08:14:29AM +0000, Naushir Patuck wrote:
> On Tue, 26 Jan 2021 at 22:53, Laurent Pinchart wrote:
> > On Tue, Jan 26, 2021 at 04:24:08PM +0000, Naushir Patuck wrote:
> > > In preparation for fast colour denoise, set the low resolution ISP
> > > output stream (Output1) to a 1/4 resolution of the application requested
> > > stream (Output0). This only happens if the application has not requested
> > > an additional YUV or RGB stream.
> > >
> > > We also constrain this 1/4 resolution to at most 1200 pixels in the
> > > largest dimension to avoid being too taxing on memory usage and system
> > > bandwidth.
> > >
> > > Also switch the default StreamRole::VideoRecording to YUV420 to allow
> > > fast colour denoise to run.
> >
> > Could you explain why all this is needed ? What happens if an
> > application requests the second output with a higher resolution than
> > this ? Will fast colour denoise still work correctly ?
>
> Fast colour denoise essentially works by doing some analysis on an
> image at a 1:4 or 1:1 ratio of the output image resolution, and then
> applying a correction directly to the output image based on that
> analysis. If the application were to request a second output stream
> that is not at a 1:4 or 1:1 resolution ratio, colour denoise will
> silently be bypassed.
That's useful information. Maybe this could be captured in a comment in
the code ?
I also wonder if we shouldn't consider this in the metadata we return.
When the CDN is silently bypassed, wouldn't it be better if the metadata
reflected that ?
> There is another constraint: the analysis image must be in YUV420
> format. Unfortunately, this is a limitation of our current
> implementation.
Only the analysis image, or also the main output image ? This would also
be useful to capture in a comment, so that it doesn't get ignored later
when the code is reworked.
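To make sure I've understood the constraint, it boils down to something
like this (purely illustrative, the helper name and header choices are
mine, not part of the patch):

#include <libcamera/formats.h>
#include <libcamera/geometry.h>
#include <libcamera/pixel_format.h>

using namespace libcamera;

/*
 * Illustrative helper only: fast colour denoise can run if the low
 * resolution (analysis) stream is YUV420 and its size is exactly 1:1
 * or 1:4 (half width, half height) of the main output.
 */
static bool cdnPossible(const Size &output0, const Size &output1,
			const PixelFormat &analysisFormat)
{
	if (analysisFormat != formats::YUV420)
		return false;

	bool fullRes = output1 == output0;
	bool quarterRes = output1.width == output0.width / 2 &&
			  output1.height == output0.height / 2;

	return fullRes || quarterRes;
}

If that's correct, a check along these lines could also drive the
metadata reporting mentioned above.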
> > > Signed-off-by: Naushir Patuck <naush at raspberrypi.com>
> > > Reviewed-by: David Plowman <david.plowman at raspberrypi.com>
> > > Reviewed-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
> > > ---
> > > .../pipeline/raspberrypi/raspberrypi.cpp | 35 ++++++++++++++++++-
> > > 1 file changed, 34 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > index d9895c779725..fe4c75f09925 100644
> > > --- a/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > +++ b/src/libcamera/pipeline/raspberrypi/raspberrypi.cpp
> > > @@ -496,7 +496,7 @@ CameraConfiguration *PipelineHandlerRPi::generateConfiguration(Camera *camera,
> > >
> > > case StreamRole::VideoRecording:
> > > fmts = data->isp_[Isp::Output0].dev()->formats();
> > > - pixelFormat = formats::NV12;
Here for instance, you could write
/*
* The colour denoise algorithm requires the analysis
* image, produced by the second ISP output, to be in
* YUV420 format. Select this format as the default, to
* maximize chances that it will be picked by
* applications and enable usage of the colour denoise
* algorithm.
*/
Or something more accurate, as I'm not sure I've captured all the
important information correctly :-)
> > > + pixelFormat = formats::YUV420;
> > > size = { 1920, 1080 };
> > > bufferCount = 4;
> > > outCount++;
> > > @@ -608,6 +608,7 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > > * StreamConfiguration appropriately.
> > > */
> > > V4L2DeviceFormat format;
> > > + bool output1Set = false;
> > > for (unsigned i = 0; i < config->size(); i++) {
> > > StreamConfiguration &cfg = config->at(i);
> > >
> > > @@ -632,6 +633,9 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > > format.size = cfg.size;
> > > format.fourcc = fourcc;
> > >
> > > + LOG(RPI, Info) << "Setting " << stream->name() << " to a resolution of "
> > > + << format.toString();
> > > +
> > > ret = stream->dev()->setFormat(&format);
> > > if (ret)
> > > return -EINVAL;
> > > @@ -645,6 +649,35 @@ int PipelineHandlerRPi::configure(Camera *camera, CameraConfiguration *config)
> > >
> > > cfg.setStream(stream);
> > > stream->setExternal(true);
> > > +
> > > + if (i != maxIndex)
> > > + output1Set = true;
> > > + }
> > > +
> > > + /*
> > > + * If ISP::Output1 stream has not been requested by the application, we
> > > + * set it up for internal use now. This second stream will be used for
> > > + * fast colour denoise, and must be a quarter resolution of the ISP::Output0
> > > + * stream. However, also limit the maximum size to 1200 pixels in the
> > > + * larger dimension, just to avoid being wasteful with buffer allocations
> > > + * and memory bandwidth.
> > > + */
> > > + if (!output1Set) {
> > > + V4L2DeviceFormat output1Format = format;
If only the analysis image has to be in YUV420 format, should YUV420 be
selected here in case output 0 uses a different format ?
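For example (untested, and assuming the ISP Output1 video node accepts
YUV420 here):

	V4L2DeviceFormat output1Format = format;
	/*
	 * Illustrative only: force the internal analysis stream to
	 * YUV420 ('YU12') even when Output0 uses a different format,
	 * so that colour denoise can still run.
	 */
	output1Format.fourcc = V4L2PixelFormat(V4L2_PIX_FMT_YUV420);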
> > > + constexpr unsigned int maxDimensions = 1200;
> > > + const Size limit = Size(maxDimensions, maxDimensions).boundedToAspectRatio(format.size);
> > > +
> > > + output1Format.size = (format.size / 2).boundedTo(limit).alignedDownTo(2, 2);
> > > +
> > > + LOG(RPI, Info) << "Setting ISP Output1 (internal) to a resolution of "
> > > + << output1Format.toString();
> > > +
> > > + ret = data->isp_[Isp::Output1].dev()->setFormat(&output1Format);
> > > + if (ret) {
> > > + LOG(RPI, Error) << "Failed to set format on ISP Output1: "
> > > + << ret;
> > > + return -EINVAL;
> > > + }
> > > }
> > >
> > > /* ISP statistics output format. */
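As a sanity check of the size calculation, here is a small standalone
example that mirrors the arithmetic with plain integer maths rather
than the Size helpers, so rounding may differ by a pixel in corner
cases:

#include <algorithm>
#include <cstdio>

/*
 * Mirror of the Output1 size calculation: halve each dimension of
 * Output0 (a 1/4 resolution image), bound the result to 1200 pixels
 * in the larger dimension while keeping the aspect ratio, and align
 * both dimensions down to a multiple of 2.
 */
static void output1Size(unsigned int w0, unsigned int h0,
			unsigned int *w1, unsigned int *h1)
{
	const unsigned int maxDimensions = 1200;
	unsigned int limitW = maxDimensions;
	unsigned int limitH = maxDimensions;

	/* Bound the 1200x1200 limit to the Output0 aspect ratio. */
	if (w0 >= h0)
		limitH = maxDimensions * h0 / w0;
	else
		limitW = maxDimensions * w0 / h0;

	*w1 = std::min(w0 / 2, limitW) & ~1U;
	*h1 = std::min(h0 / 2, limitH) & ~1U;
}

int main()
{
	unsigned int w, h;

	/* 1920x1080 -> 960x540, already under the 1200 pixel limit. */
	output1Size(1920, 1080, &w, &h);
	std::printf("1920x1080 -> %ux%u\n", w, h);

	/* 4056x3040 -> halved to 2028x1520, then bounded to 1200x898. */
	output1Size(4056, 3040, &w, &h);
	std::printf("4056x3040 -> %ux%u\n", w, h);

	return 0;
}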
--
Regards,
Laurent Pinchart