[libcamera-devel] [PATCH 2/3] android: jpeg: Port to PostProcessor interface

Laurent Pinchart laurent.pinchart at ideasonboard.com
Fri Oct 16 06:57:16 CEST 2020


Hi Umang,

Thank you for the patch.

On Thu, Oct 15, 2020 at 08:32:29PM +0100, Kieran Bingham wrote:
> On 15/10/2020 18:14, Umang Jain wrote:
> > Port the CameraStream's JPEG-encoding bits to PostProcessorJpeg.
> > This encapsulates the encoder and EXIF generation code into the
> > PostProcessorJpeg layer and removes these specifics related to JPEG,
> > from the CameraStream itself.
> 
> Fantastic, getting all the JPEG specifics out of CameraStream is really
> good.

Yes, it looks much better.

> > Signed-off-by: Umang Jain <email at uajain.com>
> > ---
> >  src/android/camera_device.cpp            |   1 +
> >  src/android/camera_stream.cpp            |  77 ++++------------
> >  src/android/camera_stream.h              |   4 +-
> >  src/android/jpeg/encoder_libjpeg.cpp     |   2 +-
> >  src/android/jpeg/post_processor_jpeg.cpp | 110 +++++++++++++++++++++++
> >  src/android/jpeg/post_processor_jpeg.h   |  36 ++++++++
> >  src/android/meson.build                  |   1 +
> >  7 files changed, 169 insertions(+), 62 deletions(-)
> >  create mode 100644 src/android/jpeg/post_processor_jpeg.cpp
> >  create mode 100644 src/android/jpeg/post_processor_jpeg.h
> > 
> > diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
> > index c29fcb4..d6a8acb 100644
> > --- a/src/android/camera_device.cpp
> > +++ b/src/android/camera_device.cpp
> > @@ -7,6 +7,7 @@
> >  
> >  #include "camera_device.h"
> >  #include "camera_ops.h"
> > +#include "post_processor.h"
> >  
> >  #include <sys/mman.h>
> >  #include <tuple>
> > diff --git a/src/android/camera_stream.cpp b/src/android/camera_stream.cpp
> > index eac1480..2a0ba16 100644
> > --- a/src/android/camera_stream.cpp
> > +++ b/src/android/camera_stream.cpp
> > @@ -9,9 +9,9 @@
> >  
> >  #include "camera_device.h"
> >  #include "camera_metadata.h"
> > -#include "jpeg/encoder.h"
> > -#include "jpeg/encoder_libjpeg.h"
> > -#include "jpeg/exif.h"
> > +#include "jpeg/post_processor_jpeg.h"
> > +
> > +#include <libcamera/formats.h>
> >  
> >  using namespace libcamera;
> >  
> > @@ -24,8 +24,15 @@ CameraStream::CameraStream(CameraDevice *cameraDevice, Type type,
> >  {
> >  	config_ = cameraDevice_->cameraConfiguration();
> >  
> > -	if (type_ == Type::Internal || type_ == Type::Mapped)
> > -		encoder_ = std::make_unique<EncoderLibJpeg>();
> > +	if (type_ == Type::Internal || type_ == Type::Mapped) {
> > +		/*
> > +		 * \todo There might be multiple post-processors. The logic
> > +		 * which should be instantiated here, is deferred for future.

s/, is deferred for future/ is deferred to the future/

> > +		 * For now, we only have PostProcessorJpeg and that is what we
> > +		 * will instantiate here.

s/will //

> > +		 */
> > +		postProcessor_ = std::make_unique<PostProcessorJpeg>(cameraDevice_);
> 
> Agreed, we can leave it to the next PostProcessor to decide how we
> create these. We've got one - this is it ;-)
> 
> I anticipate we might also have to chain them together too, so we'll
> need to figure out how to handle multiple PostProcessors feeding into
> each other without turning into a full gstreamer-esque pipeline.

gstreamer is exactly what came to my mind :-) It's tempting to make it
overengineered, but let's try to keep it simple.
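Just to put a rough shape on what "simple" could look like (this is
completely untested, and postProcessors_ is a made-up member purely for
illustration), I would picture CameraStream holding an ordered list of
post-processors and running them in sequence:

	std::vector<std::unique_ptr<PostProcessor>> postProcessors_;

	/*
	 * Sketch only: intermediate buffers between the stages are
	 * hand-waved away here.
	 */
	for (const auto &proc : postProcessors_) {
		int ret = proc->process(&source, dest->maps()[0], metadata);
		if (ret < 0)
			return ret;
	}

A fixed list decided at configure() time should take us quite far before
we need anything resembling a pipeline graph.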

> (For instance, we'll have a Format Convertor to take YUYV->NV12, and
> then an Encoder to take NV12->MJPEG for UVC pipelines)

It would probably be more efficient to support encoding JPEG from YUYV
though. And we'll also need an MJPEG -> YUV/NV12 decompressor as webcams
often provide higher resolutions and frame rates in MJPEG. Anyway,
that's for later.

> > +	}
> >  
> >  	if (type == Type::Internal) {
> >  		allocator_ = std::make_unique<FrameBufferAllocator>(cameraDevice_->camera());
> > @@ -45,8 +52,10 @@ Stream *CameraStream::stream() const
> >  
> >  int CameraStream::configure()
> >  {
> > -	if (encoder_) {
> > -		int ret = encoder_->configure(configuration());
> > +	if (postProcessor_) {
> > +		StreamConfiguration output = configuration();
> > +		output.pixelFormat == formats::MJPEG;

s/==/=/

> > +		int ret = postProcessor_->configure(configuration(), output);
> >  		if (ret)
> >  			return ret;
> >  	}
> > @@ -69,60 +78,10 @@ int CameraStream::configure()
> >  int CameraStream::process(const libcamera::FrameBuffer &source,
> >  			  MappedCamera3Buffer *dest, CameraMetadata *metadata)
> >  {
> > -	if (!encoder_)
> > +	if (!postProcessor_)
> >  		return 0;
> >  
> > -	/* Set EXIF metadata for various tags. */
> > -	Exif exif;
> > -	/* \todo Set Make and Model from external vendor tags. */
> > -	exif.setMake("libcamera");
> > -	exif.setModel("cameraModel");
> > -	exif.setOrientation(cameraDevice_->orientation());
> > -	exif.setSize(configuration().size);
> > -	/*
> > -	 * We set the frame's EXIF timestamp as the time of encode.
> > -	 * Since the precision we need for EXIF timestamp is only one
> > -	 * second, it is good enough.
> > -	 */
> > -	exif.setTimestamp(std::time(nullptr));
> > -	if (exif.generate() != 0)
> > -		LOG(HAL, Error) << "Failed to generate valid EXIF data";
> > -
> > -	int jpeg_size = encoder_->encode(&source, dest->maps()[0], exif.data());
> > -	if (jpeg_size < 0) {
> > -		LOG(HAL, Error) << "Failed to encode stream image";
> > -		return jpeg_size;
> > -	}
> > -
> > -	/*
> > -	 * Fill in the JPEG blob header.
> > -	 *
> > -	 * The mapped size of the buffer is being returned as
> > -	 * substantially larger than the requested JPEG_MAX_SIZE
> > -	 * (which is referenced from maxJpegBufferSize_). Utilise
> > -	 * this static size to ensure the correct offset of the blob is
> > -	 * determined.
> > -	 *
> > -	 * \todo Investigate if the buffer size mismatch is an issue or
> > -	 * expected behaviour.
> > -	 */
> > -	uint8_t *resultPtr = dest->maps()[0].data() +
> > -			     cameraDevice_->maxJpegBufferSize() -
> > -			     sizeof(struct camera3_jpeg_blob);
> > -	auto *blob = reinterpret_cast<struct camera3_jpeg_blob *>(resultPtr);
> > -	blob->jpeg_blob_id = CAMERA3_JPEG_BLOB_ID;
> > -	blob->jpeg_size = jpeg_size;
> > -
> > -	/* Update the JPEG result Metadata. */
> > -	metadata->addEntry(ANDROID_JPEG_SIZE, &jpeg_size, 1);
> > -
> > -	const uint32_t jpeg_quality = 95;
> > -	metadata->addEntry(ANDROID_JPEG_QUALITY, &jpeg_quality, 1);
> > -
> > -	const uint32_t jpeg_orientation = 0;
> > -	metadata->addEntry(ANDROID_JPEG_ORIENTATION, &jpeg_orientation, 1);
> > -
> > -	return 0;
> > +	return postProcessor_->process(&source, dest->maps()[0], metadata);
> 
> I wonder if we should move this to the point that calls
> CameraStream::process() directly ... but equally, if we have multiple
> post-processors we might have to do more operations here ... so that
> makes me think we should keep this function for now..

I'd keep it for now.

> >  }
> >  
> >  FrameBuffer *CameraStream::getBuffer()
> > diff --git a/src/android/camera_stream.h b/src/android/camera_stream.h
> > index 8df0101..c55d90b 100644
> > --- a/src/android/camera_stream.h
> > +++ b/src/android/camera_stream.h
> > @@ -19,10 +19,10 @@
> >  #include <libcamera/geometry.h>
> >  #include <libcamera/pixel_format.h>
> >  
> > -class Encoder;
> >  class CameraDevice;
> >  class CameraMetadata;
> >  class MappedCamera3Buffer;
> > +class PostProcessor;
> >  
> >  class CameraStream
> >  {
> > @@ -135,7 +135,6 @@ private:
> >  	 */
> >  	unsigned int index_;
> >  
> > -	std::unique_ptr<Encoder> encoder_;
> >  	std::unique_ptr<libcamera::FrameBufferAllocator> allocator_;
> >  	std::vector<libcamera::FrameBuffer *> buffers_;
> >  	/*
> > @@ -143,6 +142,7 @@ private:
> >  	 * an std::vector in CameraDevice.
> >  	 */
> >  	 std::unique_ptr<std::mutex> mutex_;
> 
> Haha, Maybe I would have had the nitpick-fix patch first ;-) It really
> stands out here - but either way is fine.

I've just pushed it to the master branch :-)

> > +	std::unique_ptr<PostProcessor> postProcessor_;
> >  };
> >  
> >  #endif /* __ANDROID_CAMERA_STREAM__ */
> > diff --git a/src/android/jpeg/encoder_libjpeg.cpp b/src/android/jpeg/encoder_libjpeg.cpp
> > index a77f5b2..8995ba5 100644
> > --- a/src/android/jpeg/encoder_libjpeg.cpp
> > +++ b/src/android/jpeg/encoder_libjpeg.cpp
> > @@ -25,7 +25,7 @@
> >  
> >  using namespace libcamera;
> >  
> > -LOG_DEFINE_CATEGORY(JPEG)
> > +LOG_DECLARE_CATEGORY(JPEG);
> >  
> >  namespace {
> >  
> > diff --git a/src/android/jpeg/post_processor_jpeg.cpp b/src/android/jpeg/post_processor_jpeg.cpp
> > new file mode 100644
> > index 0000000..d1ec95b
> > --- /dev/null
> > +++ b/src/android/jpeg/post_processor_jpeg.cpp
> > @@ -0,0 +1,110 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +/*
> > + * Copyright (C) 2020, Google Inc.
> > + *
> > + * post_processor_jpeg.cpp - JPEG Post Processor
> > + */
> > +
> > +#include "post_processor_jpeg.h"
> > +
> > +#include "../camera_device.h"
> > +#include "../camera_metadata.h"
> > +#include "encoder_libjpeg.h"
> > +#include "exif.h"
> > +
> > +#include <libcamera/formats.h>
> > +
> > +#include "libcamera/internal/log.h"
> > +
> > +using namespace libcamera;
> > +
> > +LOG_DEFINE_CATEGORY(JPEG);
> 
> I'm wondering if we should separate the categories for the JPEG encoder
> and the PostProcessor layers - but I think it's all JPEG-related, so
> keeping fewer log categories is probably better. I think I'm happy
> with this.
> 
> > +
> > +PostProcessorJpeg::PostProcessorJpeg(CameraDevice *device)
> > +	: cameraDevice_(device)
> > +{
> > +}
> > +
> > +int PostProcessorJpeg::configure(const StreamConfiguration &inCfg,
> > +				 const StreamConfiguration &outCfg)
> > +{
> > +	int ret = 0;
> > +
> > +	if (inCfg.size != outCfg.size) {
> > +		LOG(JPEG, Error) << "Mismatch of input and output stream sizes";
> > +		return -1;
> 
> We could use (negative) errno values here.
> 
> either:
> 
> EXDEV 18 Invalid cross-device link
> or
> EINVAL 22 Invalid argument
> 
> > +	}
> > +
> > +	if (outCfg.pixelFormat != formats::MJPEG) {
> > +		LOG(JPEG, Error) << "Output stream pixel format is not JPEG";
> > +		return -1;
> 
> Same here of course.
> 
> > +	}
> > +
> > +	streamSize_ = outCfg.size;
> > +	encoder_ = std::make_unique<EncoderLibJpeg>();
> 
> We could also create the encoder_ during construction, but wherever it
> happens, later there will have to be a factory/condition to decide
> whether to create the libjpeg encoder or the hardware-accelerated one,
> so I don't mind it being here at the moment... however...
> 
> > +
> > +	if (encoder_)
> > +		ret = encoder_->configure(inCfg);
> > +
> 
> You need to initialise ret to an error value, above as here it will
> return success if the encoder was not created.

std::make_unique<>() will never return null, will it? Shouldn't the
check just be dropped?
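
If we drop it, and with negative errno values as Kieran suggested, the
whole function could boil down to something like this (untested, just to
show the shape I have in mind):

	int PostProcessorJpeg::configure(const StreamConfiguration &inCfg,
					 const StreamConfiguration &outCfg)
	{
		if (inCfg.size != outCfg.size) {
			LOG(JPEG, Error)
				<< "Mismatch of input and output stream sizes";
			return -EINVAL;
		}

		if (outCfg.pixelFormat != formats::MJPEG) {
			LOG(JPEG, Error)
				<< "Output stream pixel format is not JPEG";
			return -EINVAL;
		}

		streamSize_ = outCfg.size;
		encoder_ = std::make_unique<EncoderLibJpeg>();

		return encoder_->configure(inCfg);
	}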

> > +	return ret;
> > +}
> > +
> > +int PostProcessorJpeg::process(const libcamera::FrameBuffer *source,
> > +			       const libcamera::Span<uint8_t> &destination,
> > +			       CameraMetadata *metadata)
> > +{
> > +	if (!encoder_)
> > +		return 0;
> > +
> > +	/* Set EXIF metadata for various tags. */
> > +	Exif exif;
> > +	/* \todo Set Make and Model from external vendor tags. */
> > +	exif.setMake("libcamera");
> > +	exif.setModel("cameraModel");
> > +	exif.setOrientation(cameraDevice_->orientation());
> > +	exif.setSize(streamSize_);
> > +	/*
> > +	 * We set the frame's EXIF timestamp as the time of encode.
> > +	 * Since the precision we need for EXIF timestamp is only one
> > +	 * second, it is good enough.
> > +	 */
> > +	exif.setTimestamp(std::time(nullptr));
> > +	if (exif.generate() != 0)
> > +		LOG(JPEG, Error) << "Failed to generate valid EXIF data";
> > +
> > +	int jpeg_size = encoder_->encode(source, destination, exif.data());
> > +	if (jpeg_size < 0) {
> > +		LOG(JPEG, Error) << "Failed to encode stream image";
> > +		return jpeg_size;
> > +	}
> > +
> > +	/*
> > +	 * Fill in the JPEG blob header.
> > +	 *
> > +	 * The mapped size of the buffer is being returned as
> > +	 * substantially larger than the requested JPEG_MAX_SIZE
> > +	 * (which is referenced from maxJpegBufferSize_). Utilise
> > +	 * this static size to ensure the correct offset of the blob is
> > +	 * determined.
> > +	 *
> > +	 * \todo Investigate if the buffer size mismatch is an issue or
> > +	 * expected behaviour.
> > +	 */
> > +	uint8_t *resultPtr = destination.data() +
> > +			     cameraDevice_->maxJpegBufferSize() -
> > +			     sizeof(struct camera3_jpeg_blob);
> > +	auto *blob = reinterpret_cast<struct camera3_jpeg_blob *>(resultPtr);
> > +	blob->jpeg_blob_id = CAMERA3_JPEG_BLOB_ID;
> > +	blob->jpeg_size = jpeg_size;
> > +
> > +	/* Update the JPEG result Metadata. */
> > +	metadata->addEntry(ANDROID_JPEG_SIZE, &jpeg_size, 1);
> > +
> > +	const uint32_t jpeg_quality = 95;
> > +	metadata->addEntry(ANDROID_JPEG_QUALITY, &jpeg_quality, 1);
> > +
> > +	const uint32_t jpeg_orientation = 0;
> > +	metadata->addEntry(ANDROID_JPEG_ORIENTATION, &jpeg_orientation, 1);
> > +
> > +	return 0;
> > +}
> > diff --git a/src/android/jpeg/post_processor_jpeg.h b/src/android/jpeg/post_processor_jpeg.h
> > new file mode 100644
> > index 0000000..72aca9b
> > --- /dev/null
> > +++ b/src/android/jpeg/post_processor_jpeg.h
> > @@ -0,0 +1,36 @@
> > +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> > +/*
> > + * Copyright (C) 2020, Google Inc.
> > + *
> > + * post_processor_jpeg.h - JPEG Post Processor
> > + */
> > +#ifndef __ANDROID_POST_PROCESSOR_JPEG_H__
> > +#define __ANDROID_POST_PROCESSOR_JPEG_H__
> > +
> > +#include "../post_processor.h"
> > +#include "libcamera/internal/buffer.h"

This header...

> > +
> > +#include <libcamera/geometry.h>
> > +

... should go here.
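
That is, spelling out what I mean (and assuming I'm reading our usual
include grouping correctly), something like

	#include "../post_processor.h"

	#include <libcamera/geometry.h>

	#include "libcamera/internal/buffer.h"

with the public libcamera headers before the internal ones.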

> > +class Encoder;
> > +class CameraDevice;
> > +
> > +class PostProcessorJpeg : public PostProcessor
> > +{
> > +public:
> > +	PostProcessorJpeg(CameraDevice *device);
> > +
> > +	int configure(const libcamera::StreamConfiguration &incfg,
> > +		      const libcamera::StreamConfiguration &outcfg) override;
> > +	int process(const libcamera::FrameBuffer *source,
> > +		    const libcamera::Span<uint8_t> &destination,
> > +		    CameraMetadata *metadata) override;
> > +
> > +
> 
> There's an extra blank line here.
> 
> With negative errno values in  PostProcessorJpeg::configure, and the ret
> in that function handled correctly if the encoder isn't created:
> 
> Reviewed-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
> 
> > +private:
> > +	CameraDevice *cameraDevice_;
> > +	std::unique_ptr<Encoder> encoder_;
> > +	libcamera::Size streamSize_;
> > +};
> > +
> > +#endif /* __ANDROID_POST_PROCESSOR_JPEG_H__ */
> > diff --git a/src/android/meson.build b/src/android/meson.build
> > index 802bb89..eacd544 100644
> > --- a/src/android/meson.build
> > +++ b/src/android/meson.build
> > @@ -23,6 +23,7 @@ android_hal_sources = files([
> >      'camera_stream.cpp',
> >      'jpeg/encoder_libjpeg.cpp',
> >      'jpeg/exif.cpp',
> > +    'jpeg/post_processor_jpeg.cpp',
> >  ])
> >  
> >  android_camera_metadata_sources = files([

-- 
Regards,

Laurent Pinchart

