[libcamera-devel] [PATCH v5 12/19] libcamera: ipu3: Apply image format to the pipeline

Laurent Pinchart laurent.pinchart at ideasonboard.com
Tue Apr 2 12:02:10 CEST 2019


Hi Jacopo,

Thank you for the patch.

I really like the direction this is taking. There are quite a few comments
below, and a bit more refactoring will be needed than for the previous
patches, but I think you've done a good job here.

On Tue, Mar 26, 2019 at 09:38:55AM +0100, Jacopo Mondi wrote:
> Apply the requested image format to the CIO2 device, and apply the
> resulting adjusted one to the the ImgU subdevice and its input and

s/one/format/

Or maybe "and propagate formats through the ImgU." ?

> output video devices.
> 
> Signed-off-by: Jacopo Mondi <jacopo at jmondi.org>
> ---
>  src/libcamera/pipeline/ipu3/ipu3.cpp | 300 +++++++++++++++++++++++----
>  1 file changed, 265 insertions(+), 35 deletions(-)
> 
> diff --git a/src/libcamera/pipeline/ipu3/ipu3.cpp b/src/libcamera/pipeline/ipu3/ipu3.cpp
> index 0a059b75cadf..63b84706b9b2 100644
> --- a/src/libcamera/pipeline/ipu3/ipu3.cpp
> +++ b/src/libcamera/pipeline/ipu3/ipu3.cpp
> @@ -52,6 +52,20 @@ public:
>  	static constexpr unsigned int PAD_VF = 3;
>  	static constexpr unsigned int PAD_STAT = 4;
>  
> +	/* ImgU output identifiers: used as indexes for the below maps. */

s/indexes/indices/

> +	enum OutputId {
> +		MAIN_OUTPUT,
> +		SECONDARY_OUTPUT,
> +		STAT,
> +	};
> +
> +	/* ImgU output descriptor: group data specific to an ImgU output. */
> +	struct outputDesc {

s/outputDesc/OutputDesc/

> +		V4L2Device *dev;
> +		unsigned int pad;
> +		std::string name;
> +	};

This gets complicated. I think it would be more readable if you renamed
this to ImgUVideoDevice (feel free to bikeshed on the name), and
replaced the V4L2Device pointers in the ImgUDevice structure with it.
Something along the lines of

	struct ImgUVideoDevice {
		V4L2Device *dev;
		unsigned int pad;
		std::string name;
	};

	ImgUVideoDevice input_;
	ImgUVideoDevice output_;
	ImgUVideoDevice viewfinder_;
	ImgUVideoDevice stat_;

(you could possibly leave the input_ out of this, and then name the
structure ImgUOutputDevice or, better I think, just ImgUOutput)

> +
>  	ImgUDevice()
>  		: imgu_(nullptr), input_(nullptr), output_(nullptr),
>  		  viewfinder_(nullptr), stat_(nullptr)
> @@ -67,7 +81,25 @@ public:
>  		delete stat_;
>  	}
>  
> +	/* Imgu output map accessors. */
> +	V4L2Device *outputDevice(enum OutputId id)
> +	{
> +		return outputMap[id].dev;
> +	}
> +	unsigned int outputPad(enum OutputId id)
> +	{
> +		return outputMap[id].pad;
> +	}
> +	const std::string &outputName(enum OutputId id)
> +	{
> +		return outputMap[id].name;
> +	}

With the above change, you can remove the outputMap, and these methods
won't be needed anymore. The OutputId enum could go too, replaced by
direct usage of the four ImgUVideoDevice fields.
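
With that, init() would boil down to something along these lines for
each output (untested sketch, assuming the entity lookup stays as it is
today):

	output_.dev = V4L2Device::fromEntityName(media, name_ + " output");
	output_.pad = PAD_OUTPUT;
	output_.name = "output";

	ret = output_.dev->open();
	if (ret)
		return ret;

and similarly for the viewfinder and stat nodes.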

> +
>  	int init(MediaDevice *media, unsigned int index);
> +	int configureInput(const StreamConfiguration &config,
> +			   const V4L2SubdeviceFormat &cio2Format);
> +	int configureOutput(enum OutputId id,
> +			    const StreamConfiguration &config);

This function would take a pointer to the output device, or could
possibly become a member of the output device structure (although in
that case you'd have to store a backpointer to the ImgUDevice there,
which may not be worth it). This comment also applies to the other
functions that operate on outputs in the next patches.
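
Something along these lines (sketch only, using the ImgUOutput name
suggested above):

	int configureOutput(ImgUOutput *output,
			    const StreamConfiguration &config);

with the call sites in configureStreams() becoming e.g.

	ret = imgu->configureOutput(&imgu->output_, cfg);
	if (ret)
		return ret;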

>  
>  	unsigned int index_;
>  	std::string name_;
> @@ -79,6 +111,9 @@ public:
>  	V4L2Device *viewfinder_;
>  	V4L2Device *stat_;
>  	/* \todo Add param video device for 3A tuning */
> +
> +	/* ImgU output map: associate an output id with its descriptor. */
> +	std::map<enum OutputId, struct outputDesc> outputMap;
>  };
>  
>  class CIO2Device
> @@ -98,6 +133,8 @@ public:
>  
>  	const std::string &name() const;
>  	int init(const MediaDevice *media, unsigned int index);
> +	int configure(const StreamConfiguration &config,
> +		      V4L2SubdeviceFormat *format);
>  
>  	V4L2Device *output_;
>  	V4L2Subdevice *csi2_;
> @@ -203,61 +240,62 @@ int PipelineHandlerIPU3::configureStreams(Camera *camera,
>  					  std::map<Stream *, StreamConfiguration> &config)
>  {
>  	IPU3CameraData *data = cameraData(camera);
> -	StreamConfiguration *cfg = &config[&data->stream_];
> -	V4L2Subdevice *sensor = data->cio2_.sensor_;
> -	V4L2Subdevice *csi2 = data->cio2_.csi2_;
> -	V4L2Device *cio2 = data->cio2_.output_;
> -	V4L2SubdeviceFormat subdevFormat = {};
> -	V4L2DeviceFormat devFormat = {};
> +	const StreamConfiguration &cfg = config[&data->stream_];
> +	CIO2Device *cio2 = &data->cio2_;
> +	ImgUDevice *imgu = data->imgu_;
>  	int ret;
>  
> +	LOG(IPU3, Info)
> +		<< "Requested image format: " << cfg.width << "x"

s/format:/format/

> +		<< cfg.height << "-0x" << std::hex << std::setw(8)
> +		<< cfg.pixelFormat << " on camera:'" << camera->name() << "'";

s/on camera:/on camera /

> +
>  	/*
> -	 * FIXME: as of now, the format gets applied to the sensor and is
> -	 * propagated along the pipeline. It should instead be applied on the
> -	 * capture device and the sensor format calculated accordingly.
> +	 * Verify that the requested size respects the IPU3 alignement
> +	 * requirements (the image width shall be a multiple of 8 pixels and
> +	 * its height a multiple of 4 pixels) and the camera maximum sizes.
> +	 *
> +	 * \todo: consider the BDS scaling factor requirements:
> +	 * "the downscaling factor must be an integer value multiple of 1/32"
>  	 */
> +	if (cfg.width % 8 || cfg.height % 4) {
> +		LOG(IPU3, Error) << "Stream size not support: bad alignement";

s/support/supported/

Or maybe "Invalid stream size: bad alignment" ?

> +		return -EINVAL;
> +	}
> +
> +	SizeRange cio2MaxSize = cio2->maxSizes_.second;
> +	if (cfg.width > cio2MaxSize.maxWidth ||
> +	    cfg.height > cio2MaxSize.maxHeight) {
> +		LOG(IPU3, Error) << "Stream size not support: too big";

And here "Invalid stream size: larger than sensor resolution" ?

> +		return -EINVAL;
> +	}
>  
> -	ret = sensor->getFormat(0, &subdevFormat);
> +	/*
> +	 * Pass the requested output image size to the sensor and get back the
> +	 * adjusted one to be propagated to the to the ImgU devices.

s/to the to the/to the/

> +	 */
> +	V4L2SubdeviceFormat cio2Format = {};
> +	ret = cio2->configure(cfg, &cio2Format);
>  	if (ret)
>  		return ret;
>  
> -	subdevFormat.width = cfg->width;
> -	subdevFormat.height = cfg->height;
> -	ret = sensor->setFormat(0, &subdevFormat);
> +	ret = imgu->configureInput(cfg, cio2Format);
>  	if (ret)
>  		return ret;
>  
> -	/* Return error if the requested format cannot be applied to sensor. */
> -	if (subdevFormat.width != cfg->width ||
> -	    subdevFormat.height != cfg->height) {
> -		LOG(IPU3, Error)
> -			<< "Failed to apply image format "
> -			<< subdevFormat.width << "x" << subdevFormat.height
> -			<< " - got: " << cfg->width << "x" << cfg->height;
> -		return -EINVAL;
> -	}
> -
> -	ret = csi2->setFormat(0, &subdevFormat);
> +	/* Apply the format to the ImgU output, viewfinder and stat. */
> +	ret = imgu->configureOutput(ImgUDevice::MAIN_OUTPUT, cfg);
>  	if (ret)
>  		return ret;
>  
> -	ret = cio2->getFormat(&devFormat);
> +	ret = imgu->configureOutput(ImgUDevice::SECONDARY_OUTPUT, cfg);
>  	if (ret)
>  		return ret;
>  
> -	devFormat.width = subdevFormat.width;
> -	devFormat.height = subdevFormat.height;
> -	devFormat.fourcc = cfg->pixelFormat;
> -
> -	ret = cio2->setFormat(&devFormat);
> +	ret = imgu->configureOutput(ImgUDevice::STAT, cfg);
>  	if (ret)
>  		return ret;
>  
> -	LOG(IPU3, Info) << cio2->driverName() << ": "
> -			<< devFormat.width << "x" << devFormat.height
> -			<< "- 0x" << std::hex << devFormat.fourcc << " planes: "
> -			<< devFormat.planes;
> -
>  	return 0;
>  }
>  
> @@ -539,16 +577,140 @@ int ImgUDevice::init(MediaDevice *media, unsigned int index)
>  	if (ret)
>  		return ret;
>  
> +	struct outputDesc desc = {};
> +	desc.dev = output_;
> +	desc.pad = PAD_OUTPUT;
> +	desc.name = "output";
> +	outputMap[MAIN_OUTPUT] = desc;
> +
>  	viewfinder_ = V4L2Device::fromEntityName(media, name_ + " viewfinder");
>  	ret = viewfinder_->open();
>  	if (ret)
>  		return ret;
>  
> +	desc = {};
> +	desc.dev = viewfinder_;
> +	desc.pad = PAD_VF;
> +	desc.name = "viewfinder";
> +	outputMap[SECONDARY_OUTPUT] = desc;
> +
>  	stat_ = V4L2Device::fromEntityName(media, name_ + " 3a stat");
>  	ret = stat_->open();
>  	if (ret)
>  		return ret;
>  
> +	desc = {};
> +	desc.dev = stat_;
> +	desc.pad = PAD_STAT;
> +	desc.name = "stat";
> +	outputMap[STAT] = desc;
> +
> +	return 0;
> +}
> +
> +/**
> + * \brief Configure the ImgU unit input
> + * \param[in] config The requested GDC format configuration

Maybe just "The requested stream configuration" ? The fact that we
configure the GDC here is an internal detail, I think.

> + * \param[in] cio2Format The CIO2 output format to be applied to ImgU input

I would name the parameter inputFormat and update the documentation
accordingly, as in the future this may be used for memory-to-memory
reprocessing, without involving the CIO2. Let's try to keep the
CIO2Device and ImgUDevice independent, even in their respective
documentation.
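
I.e. just renaming the parameter, something like

	int configureInput(const StreamConfiguration &config,
			   const V4L2SubdeviceFormat &inputFormat);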

> + *
> + * FIXME: the IPU3 driver implementation shall be changed to use the
> + * CIO2 output sizes as 'ImgU Input' subdevice sizes, and use the
> + * GDC output sizes to configure the crop/compose rectangles.
> + * The current IPU3 driver implementation uses GDC output sizes as
> + * 'ImgU Input' sizes, and the CIO2 output sizes to configure the
> + * crop/compose rectangles, contradicting the V4L2 specification.

I would move this comment to the function, just after the "ImgU input
feeder and BDS rectangle" message.

> + *
> + * \return 0 on success or a negative error code otherwise
> + */
> +int ImgUDevice::configureInput(const StreamConfiguration &config,
> +			       const V4L2SubdeviceFormat &cio2Format)
> +{
> +	int ret;
> +
> +	/* Configure the ImgU subdevice input pad with the requested sizes. */
> +	V4L2DeviceFormat inputFormat = {};
> +	inputFormat.width = cio2Format.width;
> +	inputFormat.height = cio2Format.height;
> +	inputFormat.fourcc = mediaBusToCIO2Format(cio2Format.mbus_code);
> +	inputFormat.planesCount = 1;
> +
> +	ret = input_->setFormat(&inputFormat);
> +	if (ret)
> +		return ret;
> +
> +	LOG(IPU3, Debug) << "ImgU input format = " << inputFormat.toString();
> +
> +	Rectangle rect = {
> +		.x = 0,
> +		.y = 0,
> +		.w = cio2Format.width,
> +		.h = cio2Format.height,
> +	};
> +	ret = imgu_->setCrop(PAD_INPUT, &rect);
> +	if (ret)
> +		return ret;
> +
> +	ret = imgu_->setCompose(PAD_INPUT, &rect);
> +	if (ret)
> +		return ret;
> +
> +	LOG(IPU3, Debug) << "ImgU input feeder and BDS rectangle = "
> +			 << rect.toString();
> +
> +	V4L2SubdeviceFormat imguFormat = {};
> +	imguFormat.width = config.width;
> +	imguFormat.height = config.height;
> +	imguFormat.mbus_code = MEDIA_BUS_FMT_FIXED;
> +
> +	ret = imgu_->setFormat(PAD_INPUT, &imguFormat);
> +	if (ret)
> +		return ret;
> +
> +	LOG(IPU3, Debug) << "ImgU GDC format = " << imguFormat.toString();
> +
> +	return 0;
> +}
> +
> +/**
> + * \brief Configure the ImgU unit \a id video output
> + * \param[in] id The output device node identifier
> + * \param[in] config The requested configuration
> + *
> + * \return 0 on success or a negative error code otherwise
> + */
> +int ImgUDevice::configureOutput(enum OutputId id,
> +				const StreamConfiguration &config)
> +{
> +	V4L2Device *output = outputDevice(id);
> +	unsigned int pad = outputPad(id);
> +	const std::string name = outputName(id);
> +
> +	V4L2SubdeviceFormat imguFormat = {};
> +	imguFormat.width = config.width;
> +	imguFormat.height = config.height;
> +	imguFormat.mbus_code = MEDIA_BUS_FMT_FIXED;
> +
> +	int ret = imgu_->setFormat(pad, &imguFormat);
> +	if (ret)
> +		return ret;
> +
> +	/* No need to apply format to the stat node. */
> +	if (id == STAT)
> +		return 0;
> +
> +	V4L2DeviceFormat outputFormat = {};
> +	outputFormat.width = config.width;
> +	outputFormat.height = config.height;
> +	outputFormat.fourcc = V4L2_PIX_FMT_NV12;
> +	outputFormat.planesCount = 2;
> +
> +	ret = output->setFormat(&outputFormat);
> +	if (ret)
> +		return ret;
> +
> +	LOG(IPU3, Debug) << "ImgU " << name << " format = "
> +			 << outputFormat.toString();
> +
>  	return 0;
>  }
>  
> @@ -650,6 +812,74 @@ int CIO2Device::init(const MediaDevice *media, unsigned int index)
>  	return 0;
>  }
>  
> +/**
> + * \brief Configure the CIO2 unit
> + * \param[in] config The requested configuration
> + * \param[out] format The CIO2 unit output image format to be applied on ImgU

I'd remove "to be applied on ImgU", to keep CIO2Device self-contained.
In theory the caller could do anything with the format; it doesn't have
to involve the ImgU (I'm thinking for instance about raw capture, when
we'll support that).

> + *
> + * \return 0 on success or a negative error code otherwise
> + */
> +int CIO2Device::configure(const StreamConfiguration &config,
> +			  V4L2SubdeviceFormat *format)

I think it would make more sense to return a V4L2DeviceFormat instead of
a V4L2SubdeviceFormat, as you want to describe what actually gets
captured to memory.
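
Something along the lines of (untested sketch)

	int configure(const StreamConfiguration &config,
		      V4L2DeviceFormat *outputFormat);

with the function filling outputFormat with the format it applies to
output_, so the caller gets a description of what will actually be
captured to memory.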

> +{
> +	unsigned int imageSize = config.width * config.height;
> +	unsigned int best = ~0;
> +	int ret;
> +
> +	for (auto it : sensor_->formats(0)) {
> +		/* Only consider formats consumable by the CIO2 unit. */
> +		if (mediaBusToCIO2Format(it.first) < 0)
> +			continue;
> +
> +		for (const SizeRange &size : it.second) {
> +			/*
> +			 * Only select formats bigger than the requested sizes
> +			 * as the IPU3 cannot up-scale.
> +			 */
> +			if (size.maxWidth < config.width ||
> +			    size.maxHeight < config.height)
> +				continue;
> +
> +			unsigned int diff = size.maxWidth * size.maxHeight
> +					  - imageSize;
> +			if (diff >= best)
> +				continue;
> +
> +			best = diff;
> +
> +			format->width = size.maxWidth;
> +			format->height = size.maxHeight;
> +			format->mbus_code = it.first;
> +		}

So you scale on the sensor as much as possible instead of scaling only
in the ImgU. As discussed previously, there are pros and cons, and for
now this is fine, but I would add a comment to state this explicitly.
Something along the lines of

"Unconditionally scale on the sensor as much as possible. This will need
to be revisited when implementing the scaling policy."

> +	}
> +
> +	/*
> +	 * Apply the selected format to the sensor, the CSI-2 receiver and
> +	 * the CIO2 output device.
> +	 */
> +	ret = sensor_->setFormat(0, format);
> +	if (ret)
> +		return ret;
> +
> +	ret = csi2_->setFormat(0, format);
> +	if (ret)
> +		return ret;
> +
> +	V4L2DeviceFormat cio2Format = {};
> +	cio2Format.width = format->width;
> +	cio2Format.height = format->height;
> +	cio2Format.fourcc = mediaBusToCIO2Format(format->mbus_code);
> +	cio2Format.planesCount = 1;
> +
> +	ret = output_->setFormat(&cio2Format);
> +	if (ret)
> +		return ret;
> +
> +	LOG(IPU3, Debug) << "CIO2 output format = " << cio2Format.toString();
> +
> +	return 0;
> +}
> +
>  REGISTER_PIPELINE_HANDLER(PipelineHandlerIPU3);
>  
>  } /* namespace libcamera */

-- 
Regards,

Laurent Pinchart
