[libcamera-devel] [PATCH v5] android: libyuv: Introduce PostProcessorYuv
Niklas Söderlund
niklas.soderlund at ragnatech.se
Tue Feb 23 16:39:50 CET 2021
Hi Honda-san and Laurent,
Thanks for your feedback.
On 2021-02-22 10:58:10 +0900, Hirokazu Honda wrote:
> Hi Niklas,
>
>
> On Mon, Feb 22, 2021 at 4:57 AM Laurent Pinchart
> <laurent.pinchart at ideasonboard.com> wrote:
> >
> > Hi Niklas,
> >
> > On Fri, Feb 19, 2021 at 06:48:12PM +0100, Niklas Söderlund wrote:
> > > Hi Honda-san,
> > >
> > > This patch breaks compilation for me in CrOS.
> > >
> > > ../../../../../tmp/portage/media-libs/libcamera-9999/work/libcamera-9999/src/android/yuv/post_processor_yuv.cpp:64:20: error: no member named 'NV12Scale' in namespace 'libyuv'
> > >
> > > I'm sure this is some local issue on my side, but to speed things up I
> > > thought I'd ask if you needed to update the chroot or modify the ebuild
> > > somehow?
> >
>
> NV12Scale was only recently added to libyuv. I uprevved the libyuv
> version in ChromiumOS so that it includes NV12Scale, crrev.com/c/2670844.
> I expect your environment is old enough that its libyuv version doesn't
> have NV12Scale yet.
> I think you have to update your ChromiumOS repository with `repo sync`.
>
> Regards,
> -Hiro
>
> > I've rebuilt the cros sdk over the weekend (using the 13801.0.0-rc2.xml
> > manifest), and libcamera compiled fine (both 0.0.0-r390 and the live
> > ebuild).
I will try with a new chroot based on a later manifest, as this indeed
seems to be the root of my problem.
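
On a related note, if the HAL ever needs to keep building against older
libyuv snapshots, a build-time check along these lines could turn a
missing NV12Scale into a clearer error. This is only an illustration on
my side: the version cutoff below is a placeholder I have not verified
against the libyuv history.

#include <libyuv/version.h>

/*
 * Hypothetical guard (not part of the patch): fail the build early if
 * the libyuv checkout predates NV12Scale. 1768 is a placeholder value,
 * not the real cutoff.
 */
#if LIBYUV_VERSION < 1768
#error "libyuv is too old: NV12Scale is not available"
#endif
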
> >
> > > On 2021-02-04 22:04:20 +0000, Hirokazu Honda wrote:
> > > > This adds PostProcessorYuv. It supports NV12 buffer scaling
> > > > using libyuv.
> > > >
> > > > Signed-off-by: Hirokazu Honda <hiroh at chromium.org>
> > > > Reviewed-by: Jacopo Mondi <jacopo at jmondi.org>
> > > >
> > > > ---
> > > >  src/android/meson.build                |   1 +
> > > >  src/android/yuv/post_processor_yuv.cpp | 145 +++++++++++++++++++++++++
> > > >  src/android/yuv/post_processor_yuv.h   |  42 +++++++
> > > >  3 files changed, 188 insertions(+)
> > > > create mode 100644 src/android/yuv/post_processor_yuv.cpp
> > > > create mode 100644 src/android/yuv/post_processor_yuv.h
> > > >
> > > > diff --git a/src/android/meson.build b/src/android/meson.build
> > > > index 95d0f420..50481eeb 100644
> > > > --- a/src/android/meson.build
> > > > +++ b/src/android/meson.build
> > > > @@ -49,6 +49,7 @@ android_hal_sources = files([
> > > >      'jpeg/exif.cpp',
> > > >      'jpeg/post_processor_jpeg.cpp',
> > > >      'jpeg/thumbnailer.cpp',
> > > > +    'yuv/post_processor_yuv.cpp'
> > > >  ])
> > > >
> > > >  android_camera_metadata_sources = files([
> > > > diff --git a/src/android/yuv/post_processor_yuv.cpp b/src/android/yuv/post_processor_yuv.cpp
> > > > new file mode 100644
> > > > index 00000000..aecb921f
> > > > --- /dev/null
> > > > +++ b/src/android/yuv/post_processor_yuv.cpp
> > > > @@ -0,0 +1,145 @@
> > > > +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> > > > +/*
> > > > + * Copyright (C) 2021, Google Inc.
> > > > + *
> > > > + * post_processor_yuv.cpp - Post Processor using libyuv
> > > > + */
> > > > +
> > > > +#include "post_processor_yuv.h"
> > > > +
> > > > +#include <libyuv/scale.h>
> > > > +
> > > > +#include <libcamera/formats.h>
> > > > +#include <libcamera/geometry.h>
> > > > +#include <libcamera/internal/formats.h>
> > > > +#include <libcamera/internal/log.h>
> > > > +#include <libcamera/pixel_format.h>
> > > > +
> > > > +using namespace libcamera;
> > > > +
> > > > +LOG_DEFINE_CATEGORY(YUV)
> > > > +
> > > > +int PostProcessorYuv::configure(const StreamConfiguration &inCfg,
> > > > +                                const StreamConfiguration &outCfg)
> > > > +{
> > > > +        if (inCfg.pixelFormat != outCfg.pixelFormat) {
> > > > +                LOG(YUV, Error) << "Pixel format conversion is not supported"
> > > > +                                << " (from " << inCfg.toString()
> > > > +                                << " to " << outCfg.toString() << ")";
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        if (inCfg.size < outCfg.size) {
> > > > +                LOG(YUV, Error) << "Up-scaling is not supported"
> > > > +                                << " (from " << inCfg.toString()
> > > > +                                << " to " << outCfg.toString() << ")";
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        if (inCfg.pixelFormat != formats::NV12) {
> > > > +                LOG(YUV, Error) << "Unsupported format " << inCfg.pixelFormat
> > > > +                                << " (only NV12 is supported)";
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        calculateLengths(inCfg, outCfg);
> > > > +        return 0;
> > > > +}
> > > > +
> > > > +int PostProcessorYuv::process(const FrameBuffer &source,
> > > > +                              libcamera::MappedBuffer *destination,
> > > > +                              [[maybe_unused]] const CameraMetadata &requestMetadata,
> > > > +                              [[maybe_unused]] CameraMetadata *metadata)
> > > > +{
> > > > +        if (!isValidBuffers(source, *destination))
> > > > +                return -EINVAL;
> > > > +
> > > > +        const MappedFrameBuffer sourceMapped(&source, PROT_READ);
> > > > +        if (!sourceMapped.isValid()) {
> > > > +                LOG(YUV, Error) << "Failed to mmap camera frame buffer";
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        int ret = libyuv::NV12Scale(sourceMapped.maps()[0].data(),
> > > > +                                    sourceStride_[0],
> > > > +                                    sourceMapped.maps()[1].data(),
> > > > +                                    sourceStride_[1],
> > > > +                                    sourceSize_.width, sourceSize_.height,
> > > > +                                    destination->maps()[0].data(),
> > > > +                                    destinationStride_[0],
> > > > +                                    destination->maps()[1].data(),
> > > > +                                    destinationStride_[1],
> > > > +                                    destinationSize_.width,
> > > > +                                    destinationSize_.height,
> > > > +                                    libyuv::FilterMode::kFilterBilinear);
> > > > +        if (ret) {
> > > > +                LOG(YUV, Error) << "Failed NV12 scaling: " << ret;
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        return 0;
> > > > +}
> > > > +
> > > > +bool PostProcessorYuv::isValidBuffers(const FrameBuffer &source,
> > > > +                                      const libcamera::MappedBuffer &destination) const
> > > > +{
> > > > +        if (source.planes().size() != 2u) {
> > > > +                LOG(YUV, Error) << "Invalid number of source planes: "
> > > > +                                << source.planes().size();
> > > > +                return false;
> > > > +        }
> > > > +        if (destination.maps().size() != 2u) {
> > > > +                LOG(YUV, Error) << "Invalid number of destination planes: "
> > > > +                                << destination.maps().size();
> > > > +                return false;
> > > > +        }
> > > > +
> > > > +        if (source.planes()[0].length < sourceLength_[0] ||
> > > > +            source.planes()[1].length < sourceLength_[1]) {
> > > > +                LOG(YUV, Error) << "The source planes lengths are too small"
> > > > +                                << ", actual size: {"
> > > > +                                << source.planes()[0].length << ", "
> > > > +                                << source.planes()[1].length << "}"
> > > > +                                << ", expected size: {"
> > > > +                                << sourceLength_[0] << ", "
> > > > +                                << sourceLength_[1] << "}";
> > > > +                return false;
> > > > +        }
> > > > +        if (destination.maps()[0].size() < destinationLength_[0] ||
> > > > +            destination.maps()[1].size() < destinationLength_[1]) {
> > > > +                LOG(YUV, Error)
> > > > +                        << "The destination planes lengths are too small"
> > > > +                        << ", actual size: {" << destination.maps()[0].size()
> > > > +                        << ", " << destination.maps()[1].size() << "}"
> > > > +                        << ", expected size: {" << destinationLength_[0] << ", "
> > > > +                        << destinationLength_[1] << "}";
> > > > +                return false;
> > > > +        }
> > > > +
> > > > +        return true;
> > > > +}
> > > > +
> > > > +void PostProcessorYuv::calculateLengths(const StreamConfiguration &inCfg,
> > > > +                                        const StreamConfiguration &outCfg)
> > > > +{
> > > > +        ASSERT(inCfg.pixelFormat == formats::NV12);
> > > > +        ASSERT(outCfg.pixelFormat == formats::NV12);
> > > > +
> > > > +        sourceSize_ = inCfg.size;
> > > > +        destinationSize_ = outCfg.size;
> > > > +
> > > > +        const PixelFormatInfo &nv12Info = PixelFormatInfo::info(formats::NV12);
> > > > +        for (unsigned int i = 0; i < 2; i++) {
> > > > +                sourceStride_[i] = nv12Info.stride(sourceSize_.width, i, 1);
> > > > +                destinationStride_[i] = nv12Info.stride(destinationSize_.width, i, 1);
> > > > +
> > > > +                const unsigned int vertSubSample =
> > > > +                        nv12Info.planes[i].verticalSubSampling;
> > > > +                sourceLength_[i] =
> > > > +                        nv12Info.stride(sourceSize_.width, i, 1) *
> > > > +                        ((sourceSize_.height + vertSubSample - 1) / vertSubSample);
> > > > +                destinationLength_[i] =
> > > > +                        nv12Info.stride(destinationSize_.width, i, 1) *
> > > > +                        ((destinationSize_.height + vertSubSample - 1) / vertSubSample);
> > > > +        }
> > > > +}
> > > > diff --git a/src/android/yuv/post_processor_yuv.h b/src/android/yuv/post_processor_yuv.h
> > > > new file mode 100644
> > > > index 00000000..c58b4cf7
> > > > --- /dev/null
> > > > +++ b/src/android/yuv/post_processor_yuv.h
> > > > @@ -0,0 +1,42 @@
> > > > +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> > > > +/*
> > > > + * Copyright (C) 2021, Google Inc.
> > > > + *
> > > > + * post_processor_yuv.h - Post Processor using libyuv
> > > > + */
> > > > +#ifndef __ANDROID_POST_PROCESSOR_YUV_H__
> > > > +#define __ANDROID_POST_PROCESSOR_YUV_H__
> > > > +
> > > > +#include "../post_processor.h"
> > > > +
> > > > +#include <libcamera/geometry.h>
> > > > +
> > > > +class CameraDevice;
> > > > +
> > > > +class PostProcessorYuv : public PostProcessor
> > > > +{
> > > > +public:
> > > > +        PostProcessorYuv() = default;
> > > > +
> > > > +        int configure(const libcamera::StreamConfiguration &incfg,
> > > > +                      const libcamera::StreamConfiguration &outcfg) override;
> > > > +        int process(const libcamera::FrameBuffer &source,
> > > > +                    libcamera::MappedBuffer *destination,
> > > > +                    const CameraMetadata &requestMetadata,
> > > > +                    CameraMetadata *metadata) override;
> > > > +
> > > > +private:
> > > > +        bool isValidBuffers(const libcamera::FrameBuffer &source,
> > > > +                            const libcamera::MappedBuffer &destination) const;
> > > > +        void calculateLengths(const libcamera::StreamConfiguration &inCfg,
> > > > +                              const libcamera::StreamConfiguration &outCfg);
> > > > +
> > > > +        libcamera::Size sourceSize_;
> > > > +        libcamera::Size destinationSize_;
> > > > +        unsigned int sourceLength_[2] = {};
> > > > +        unsigned int destinationLength_[2] = {};
> > > > +        unsigned int sourceStride_[2] = {};
> > > > +        unsigned int destinationStride_[2] = {};
> > > > +};
> > > > +
> > > > +#endif /* __ANDROID_POST_PROCESSOR_YUV_H__ */
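
For reference, here is a rough sketch of how I expect the new post
processor to be driven from the HAL. The helper function, the include
path and the 1080p to 720p sizes are made up for illustration only; they
are not part of the patch.

#include <libcamera/formats.h>
#include <libcamera/stream.h>

#include "yuv/post_processor_yuv.h" /* assuming src/android is in the include path */

/*
 * Hypothetical helper: downscale a dmabuf-backed NV12 1080p frame into
 * an already mapped 720p destination. The caller owns both buffers and
 * the Android metadata packs.
 */
int downscaleNV12(const libcamera::FrameBuffer &source,
                  libcamera::MappedBuffer *destination,
                  const CameraMetadata &requestMetadata,
                  CameraMetadata *resultMetadata)
{
        libcamera::StreamConfiguration inCfg;
        inCfg.pixelFormat = libcamera::formats::NV12;
        inCfg.size = { 1920, 1080 };

        libcamera::StreamConfiguration outCfg;
        outCfg.pixelFormat = libcamera::formats::NV12;
        outCfg.size = { 1280, 720 };

        PostProcessorYuv yuv;

        /* configure() rejects format conversion, up-scaling and non-NV12 input. */
        int ret = yuv.configure(inCfg, outCfg);
        if (ret)
                return ret;

        /* Both buffers must expose two planes (Y and interleaved CbCr). */
        return yuv.process(source, destination, requestMetadata, resultMetadata);
}
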
> >
> > --
> > Regards,
> >
> > Laurent Pinchart
--
Regards,
Niklas Söderlund