[PATCH v11 6/7] libcamera: virtual: Add ImageFrameGenerator

Jacopo Mondi jacopo.mondi at ideasonboard.com
Mon Sep 9 12:16:59 CEST 2024


Hi

On Sat, Sep 07, 2024 at 02:28:31PM GMT, Harvey Yang wrote:
> From: Konami Shu <konamiz at google.com>
>
> Besides TestPatternGenerator, this patch adds ImageFrameGenerator that
> loads real images (jpg / jpeg for now) as the source and generates
> scaled frames.
>
> Signed-off-by: Konami Shu <konamiz at google.com>
> Co-developed-by: Harvey Yang <chenghaoyang at chromium.org>
> Co-developed-by: Yunke Cao <yunkec at chromium.org>
> Co-developed-by: Tomasz Figa <tfiga at chromium.org>
> ---
>  src/libcamera/pipeline/virtual/README.md      |   9 +-
>  .../virtual/image_frame_generator.cpp         | 178 ++++++++++++++++++
>  .../pipeline/virtual/image_frame_generator.h  |  54 ++++++
>  src/libcamera/pipeline/virtual/meson.build    |   4 +
>  src/libcamera/pipeline/virtual/parser.cpp     |  76 +++++++-
>  src/libcamera/pipeline/virtual/parser.h       |   2 +
>  src/libcamera/pipeline/virtual/utils.h        |  17 ++
>  src/libcamera/pipeline/virtual/virtual.cpp    |  60 ++++--
>  src/libcamera/pipeline/virtual/virtual.h      |  24 ++-
>  9 files changed, 389 insertions(+), 35 deletions(-)
>  create mode 100644 src/libcamera/pipeline/virtual/image_frame_generator.cpp
>  create mode 100644 src/libcamera/pipeline/virtual/image_frame_generator.h
>  create mode 100644 src/libcamera/pipeline/virtual/utils.h
>
> diff --git a/src/libcamera/pipeline/virtual/README.md b/src/libcamera/pipeline/virtual/README.md
> index ef80bb48..18c8341b 100644
> --- a/src/libcamera/pipeline/virtual/README.md
> +++ b/src/libcamera/pipeline/virtual/README.md
> @@ -16,7 +16,13 @@ Each camera block is a dictionary, containing the following keys:
>      - `width` (`unsigned int`, default=1920): Width of the window resolution. This needs to be even.
>      - `height` (`unsigned int`, default=1080): Height of the window resolution.
>      - `frame_rates` (list of `int`, default=`[30,60]` ): Range of the frame rate (per second). If the list contains one value, it's the lower bound and the upper bound. If the list contains two values, the first is the lower bound and the second is the upper bound. No other number of values is allowed.
> -- `test_pattern` (`string`): Which test pattern to use as frames. The options are "bars", "lines".
> +- `test_pattern` (`string`): Which test pattern to use as frames. The options are "bars", "lines". Cannot be set with `frames`.
> +- `frames` (dictionary):
> +  - `path` (`string`): Path to an image, or path to a directory of a series of images. Cannot be set with `test_pattern`.
> +    - The test patterns are "bars" which means color bars, and "lines" which means diagonal lines.
> +    - The path to an image has ".jpg" extension.
> +    - The path to a directory ends with "/". The names of the images in the directory are "{n}.jpg", where {n} is the sequence number of the image, starting from 0.
> +  - `scale_mode`(`string`, default="fill"): Scale mode when the frames are images. The only scale mode supported now is "fill". This does not affect the scale mode for now.

Wouldn't it be more logical to add the frame generator support before
the config file documentation ? Otherwise you are extending here what
was added just one patch ago.

>  - `location` (`string`, default="front"): The location of the camera. Support "front" and "back". This is displayed in qcam camera selection window but this does not change the output.
>  - `model` (`string`, default="Unknown"): The model name of the camera. This is displayed in qcam camera selection window but this does not change the output.
>
> @@ -37,6 +43,7 @@ This is the procedure of the Parser class:
>  3. Parse each property and register the data.
>      - `parseSupportedFormats()`: Parses `supported_formats` in the config, which contains resolutions and frame rates.
>      - `parseTestPattern()`: Parses `test_pattern` in the config.
> +    - `parseFrame()`: Parses `frames` in the config.
>      - `parseLocation()`: Parses `location` in the config.
>      - `parseModel()`: Parses `model` in the config.
>  4. Back to `parseConfigFile()` and append the camera configuration.
> diff --git a/src/libcamera/pipeline/virtual/image_frame_generator.cpp b/src/libcamera/pipeline/virtual/image_frame_generator.cpp
> new file mode 100644
> index 00000000..db3efe15
> --- /dev/null
> +++ b/src/libcamera/pipeline/virtual/image_frame_generator.cpp
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * Copyright (C) 2024, Google Inc.
> + *
> + * image_frame_generator.cpp - Derived class of FrameGenerator for
> + * generating frames from images
> + */
> +
> +#include "image_frame_generator.h"
> +
> +#include <filesystem>
> +#include <memory>
> +#include <string>
> +
> +#include <libcamera/base/file.h>
> +#include <libcamera/base/log.h>
> +
> +#include <libcamera/framebuffer.h>
> +
> +#include "libcamera/internal/mapped_framebuffer.h"
> +
> +#include "libyuv/convert.h"
> +#include "libyuv/scale.h"
> +
> +namespace libcamera {
> +
> +LOG_DECLARE_CATEGORY(Virtual)
> +
> +/*
> + * Factory function to create an ImageFrameGenerator object.
> + * Read the images and convert them to buffers in NV12 format.
> + * Store the pointers to the buffers to a list (imageFrameDatas)
> + */
> +std::unique_ptr<ImageFrameGenerator>
> +ImageFrameGenerator::create(ImageFrames &imageFrames)
> +{
> +	std::unique_ptr<ImageFrameGenerator> imageFrameGenerator =
> +		std::make_unique<ImageFrameGenerator>();
> +	imageFrameGenerator->imageFrames_ = &imageFrames;
> +
> +	/*
> +         * For each file in the directory, load the image,
> +         * convert it to NV12, and store the pointer.

weird indent

> +	 */
> +	for (unsigned int i = 0; i < imageFrames.number.value_or(1); i++) {
> +		std::filesystem::path path;
> +		if (!imageFrames.number)
> +			/* If the path is to an image */
> +			path = imageFrames.path;
> +		else
> +			/* If the path is to a directory */
> +			path = imageFrames.path / (std::to_string(i) + ".jpg");
> +
> +		File file(path);
> +		if (!file.open(File::OpenModeFlag::ReadOnly)) {
> +			LOG(Virtual, Error) << "Failed to open image file " << file.fileName()
> +					    << ": " << strerror(file.error());
> +			return nullptr;
> +		}
> +
> +		/* Read the image file to data */
> +		auto fileSize = file.size();
> +		auto buffer = std::make_unique<uint8_t[]>(file.size());
> +		if (file.read({ buffer.get(), static_cast<size_t>(fileSize) }) != fileSize) {
> +			LOG(Virtual, Error) << "Failed to read file " << file.fileName()
> +					    << ": " << strerror(file.error());
> +			return nullptr;
> +		}
> +
> +		/* Get the width and height of the image */
> +		int width, height;
> +		if (libyuv::MJPGSize(buffer.get(), fileSize, &width, &height)) {
> +			LOG(Virtual, Error) << "Failed to get the size of the image file: "
> +					    << file.fileName();
> +			return nullptr;
> +		}
> +
> +		/* Convert to NV12 and write the data to tmpY and tmpUV */
> +		unsigned int halfWidth = (width + 1) / 2;
> +		unsigned int halfHeight = (height + 1) / 2;

Do you need this for rounding ?

> +		std::unique_ptr<uint8_t[]> dstY =
> +			std::make_unique<uint8_t[]>(width * height);
> +		std::unique_ptr<uint8_t[]> dstUV =
> +			std::make_unique<uint8_t[]>(halfWidth * halfHeight * 2);

otherwise using width * height / 2 would do

> +		int ret = libyuv::MJPGToNV12(buffer.get(), fileSize,
> +					     dstY.get(), width, dstUV.get(),
> +					     width, width, height, width, height);
> +		if (ret != 0)
> +			LOG(Virtual, Error) << "MJPGToNV12() failed with " << ret;
> +
> +		imageFrameGenerator->imageFrameDatas_.emplace_back(
> +			ImageFrameData{ std::move(dstY), std::move(dstUV),
> +					Size(width, height) });
> +	}
> +
> +	return imageFrameGenerator;
> +}
> +
> +/* Scale the buffers for image frames. */
> +void ImageFrameGenerator::configure(const Size &size)
> +{
> +	/* Reset the source images to prevent multiple configuration calls */
> +	scaledFrameDatas_.clear();
> +	frameCount_ = 0;
> +	parameter_ = 0;
> +
> +	for (unsigned int i = 0; i < imageFrames_->number.value_or(1); i++) {
> +		/* Scale the imageFrameDatas_ to scaledY and scaledUV */
> +		unsigned int halfSizeWidth = (size.width + 1) / 2;
> +		unsigned int halfSizeHeight = (size.height + 1) / 2;
> +		std::unique_ptr<uint8_t[]> scaledY =
> +			std::make_unique<uint8_t[]>(size.width * size.height);
> +		std::unique_ptr<uint8_t[]> scaledUV =
> +			std::make_unique<uint8_t[]>(halfSizeWidth * halfSizeHeight * 2);
> +		auto &src = imageFrameDatas_[i];
> +
> +		/*
> +		 * \todo Some platforms might enforce stride due to GPU, like
> +		 * ChromeOS ciri (64). The width needs to be a multiple of
> +		 * the stride to work properly for now.
> +		 */
> +		libyuv::NV12Scale(src.Y.get(), src.size.width,
> +				  src.UV.get(), src.size.width,
> +				  src.size.width, src.size.height,
> +				  scaledY.get(), size.width, scaledUV.get(), size.width,
> +				  size.width, size.height, libyuv::FilterMode::kFilterBilinear);
> +
> +		scaledFrameDatas_.emplace_back(
> +			ImageFrameData{ std::move(scaledY), std::move(scaledUV), size });
> +	}
> +}
> +
> +void ImageFrameGenerator::generateFrame(const Size &size, const FrameBuffer *buffer)
> +{
> +	/* Don't do anything when the list of buffers is empty */
> +	ASSERT(!scaledFrameDatas_.empty());
> +
> +	MappedFrameBuffer mappedFrameBuffer(buffer, MappedFrameBuffer::MapFlag::Write);
> +
> +	auto planes = mappedFrameBuffer.planes();
> +
> +	/* Make sure frameCount_ does not exceed the number of images */
> +	frameCount_ %= imageFrames_->number.value_or(1);
> +
> +	/* Write the scaledY and scaledUV to the mapped frame buffer */
> +	libyuv::NV12Copy(scaledFrameDatas_[frameCount_].Y.get(), size.width,
> +			 scaledFrameDatas_[frameCount_].UV.get(), size.width, planes[0].begin(),
> +			 size.width, planes[1].begin(), size.width,
> +			 size.width, size.height);
> +
> +	/* Advance to the next image every 4 frames */
> +	/* \todo read the parameter_ from the configuration file? */
> +	parameter_++;
> +	if (parameter_ % 4 == 0)

Make this a constexpr if not configurable

> +		frameCount_++;
> +}
> +
> +/**

We don't run doxygen on the pipeline handlers, but it doesn't hurt to
have comments. Maybe remove the /** markers though.


> + * \var ImageFrameGenerator::imageFrameDatas_
> + * \brief List of pointers to the not scaled image buffers
> + */
> +
> +/**
> + * \var ImageFrameGenerator::scaledFrameDatas_
> + * \brief List of pointers to the scaled image buffers
> + */
> +
> +/**
> + * \var ImageFrameGenerator::imageFrames_
> + * \brief Pointer to the imageFrames_ in VirtualCameraData
> + */
> +
> +/**
> + * \var ImageFrameGenerator::parameter_
> + * \brief Speed parameter. Change to the next image every parameter_ frames
> + */
> +
> +} /* namespace libcamera */
> diff --git a/src/libcamera/pipeline/virtual/image_frame_generator.h b/src/libcamera/pipeline/virtual/image_frame_generator.h
> new file mode 100644
> index 00000000..4ad8aad2
> --- /dev/null
> +++ b/src/libcamera/pipeline/virtual/image_frame_generator.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * Copyright (C) 2024, Google Inc.
> + *
> + * image_frame_generator.h - Derived class of FrameGenerator for
> + * generating frames from images
> + */
> +
> +#pragma once
> +
> +#include <filesystem>
> +#include <memory>
> +#include <optional>
> +#include <stdint.h>
> +#include <sys/types.h>
> +
> +#include "frame_generator.h"
> +
> +namespace libcamera {
> +
> +enum class ScaleMode : char {
> +	Fill = 0,
> +};

I know you want more scale modes, but as long as you only support one
there's no need for a type

> +
> +/* Frame configuration provided by the config file */
> +struct ImageFrames {
> +	std::filesystem::path path;
> +	ScaleMode scaleMode;

As this can only be "Fill", nothing else is valid atm

> +	std::optional<unsigned int> number;
> +};
> +
> +class ImageFrameGenerator : public FrameGenerator
> +{
> +public:
> +	static std::unique_ptr<ImageFrameGenerator> create(ImageFrames &imageFrames);
> +
> +private:
> +	struct ImageFrameData {
> +		std::unique_ptr<uint8_t[]> Y;
> +		std::unique_ptr<uint8_t[]> UV;
> +		Size size;
> +	};
> +
> +	void configure(const Size &size) override;
> +	void generateFrame(const Size &size, const FrameBuffer *buffer) override;
> +
> +	std::vector<ImageFrameData> imageFrameDatas_;
> +	std::vector<ImageFrameData> scaledFrameDatas_;
> +	ImageFrames *imageFrames_;
> +	unsigned int frameCount_;
> +	unsigned int parameter_;
> +};
> +
> +} /* namespace libcamera */
> diff --git a/src/libcamera/pipeline/virtual/meson.build b/src/libcamera/pipeline/virtual/meson.build
> index d72ac5be..395919b3 100644
> --- a/src/libcamera/pipeline/virtual/meson.build
> +++ b/src/libcamera/pipeline/virtual/meson.build
> @@ -1,9 +1,13 @@
>  # SPDX-License-Identifier: CC0-1.0
>
>  libcamera_internal_sources += files([
> +    'image_frame_generator.cpp',
>      'parser.cpp',
>      'test_pattern_generator.cpp',
>      'virtual.cpp',
>  ])
>
> +libjpeg = dependency('libjpeg', required : false)
> +
>  libcamera_deps += [libyuv_dep]
> +libcamera_deps += [libjpeg]
> diff --git a/src/libcamera/pipeline/virtual/parser.cpp b/src/libcamera/pipeline/virtual/parser.cpp
> index d861a52a..5076e71c 100644
> --- a/src/libcamera/pipeline/virtual/parser.cpp
> +++ b/src/libcamera/pipeline/virtual/parser.cpp
> @@ -52,12 +52,12 @@ Parser::parseConfigFile(File &file, PipelineHandler *pipe)
>  			continue;
>  		}
>
> -		data->id_ = cameraId;
> +		data->config_.id = cameraId;
>  		ControlInfoMap::Map controls;
>  		/* todo: Check which resolution's frame rate to be reported */
>  		controls[&controls::FrameDurationLimits] =
> -			ControlInfo(int64_t(1000000 / data->supportedResolutions_[0].frameRates[1]),
> -				    int64_t(1000000 / data->supportedResolutions_[0].frameRates[0]));
> +			ControlInfo(int64_t(1000000 / data->config_.resolutions[0].frameRates[1]),
> +				    int64_t(1000000 / data->config_.resolutions[0].frameRates[0]));
>  		data->controlInfo_ = ControlInfoMap(std::move(controls), controls::controls);
>  		configurations.push_back(std::move(data));
>  	}
> @@ -75,7 +75,8 @@ Parser::parseCameraConfigData(const YamlObject &cameraConfigData,
>  	std::unique_ptr<VirtualCameraData> data =
>  		std::make_unique<VirtualCameraData>(pipe, resolutions);
>
> -	if (parseTestPattern(cameraConfigData, data.get()))
> +	if (parseTestPattern(cameraConfigData, data.get()) &&
> +	    parseFrame(cameraConfigData, data.get()))

This is fragile and only works because short-circuit evaluation
guarantees parseFrame() is never called when parseTestPattern()
returns 0.

>  		return nullptr;
>
>  	if (parseLocation(cameraConfigData, data.get()))
> @@ -148,16 +149,75 @@ int Parser::parseTestPattern(const YamlObject &cameraConfigData, VirtualCameraDa
>  {
>  	std::string testPattern = cameraConfigData["test_pattern"].get<std::string>("");
>
> -	/* Default value is "bars" */
>  	if (testPattern == "bars") {
> -		data->testPattern_ = TestPattern::ColorBars;
> +		data->config_.frame = TestPattern::ColorBars;
>  	} else if (testPattern == "lines") {
> -		data->testPattern_ = TestPattern::DiagonalLines;
> +		data->config_.frame = TestPattern::DiagonalLines;
>  	} else {
> -		LOG(Virtual, Error) << "Test pattern: " << testPattern
> +		LOG(Virtual, Debug) << "Test pattern: " << testPattern
>  				    << "is not supported";
>  		return -EINVAL;
>  	}
> +
> +	return 0;
> +}
> +
> +int Parser::parseFrame(const YamlObject &cameraConfigData, VirtualCameraData *data)

You can unify these two functions.

Get "test_pattern" and, if it is specified, make sure its value is
either "bars" or "lines", failing on any unsupported value.

Then check for "frames". If both "frames" and "test_pattern" are
specified, it's an error. Then parse "frames" like you do here.

> +{
> +	const YamlObject &frames = cameraConfigData["frames"];
> +
> +	/* When there is no frames provided in the config file, use color bar test pattern */
> +	if (frames.size() == 0) {
> +		data->config_.frame = TestPattern::ColorBars;
> +		return 0;
> +	}
> +
> +	if (!frames.isDictionary()) {
> +		LOG(Virtual, Error) << "'frames' is not a dictionary.";
> +		return -EINVAL;
> +	}
> +
> +	std::string path = frames["path"].get<std::string>("");
> +
> +	ScaleMode scaleMode;
> +	if (auto ext = std::filesystem::path(path).extension();
> +	    ext == ".jpg" || ext == ".jpeg") {
> +		if (parseScaleMode(frames, &scaleMode))
> +			return -EINVAL;
> +		data->config_.frame = ImageFrames{ path, scaleMode, std::nullopt };
> +	} else if (std::filesystem::is_directory(std::filesystem::symlink_status(path))) {
> +		if (parseScaleMode(frames, &scaleMode))
> +			return -EINVAL;

Could you parse the scale mode before checking the file extension ?

> +
> +		using std::filesystem::directory_iterator;
> +		unsigned int numOfFiles = std::distance(directory_iterator(path), directory_iterator{});
> +		if (numOfFiles == 0) {
> +			LOG(Virtual, Error) << "Empty directory";
> +			return -EINVAL;
> +		}
> +		data->config_.frame = ImageFrames{ path, scaleMode, numOfFiles };
> +	} else {
> +		LOG(Virtual, Error) << "Frame: " << path << " is not supported";
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +int Parser::parseScaleMode(
> +	const YamlObject &framesConfigData, ScaleMode *scaleMode)

weird indent

> +{
> +	std::string mode = framesConfigData["scale_mode"].get<std::string>("");
> +
> +	/* Default value is fill */
> +	if (mode == "fill" || mode == "") {
> +		*scaleMode = ScaleMode::Fill;

You don't need a type as suggested above, you can either assume "Fill"
or fail during parsing.

> +	} else {
> +		LOG(Virtual, Error) << "scaleMode: " << mode
> +				    << " is not supported";
> +		return -EINVAL;
> +	}
> +
>  	return 0;
>  }
>
> diff --git a/src/libcamera/pipeline/virtual/parser.h b/src/libcamera/pipeline/virtual/parser.h
> index 09c3c56b..f65616e3 100644
> --- a/src/libcamera/pipeline/virtual/parser.h
> +++ b/src/libcamera/pipeline/virtual/parser.h
> @@ -35,8 +35,10 @@ private:
>  	int parseSupportedFormats(const YamlObject &cameraConfigData,
>  				  std::vector<VirtualCameraData::Resolution> *resolutions);
>  	int parseTestPattern(const YamlObject &cameraConfigData, VirtualCameraData *data);
> +	int parseFrame(const YamlObject &cameraConfigData, VirtualCameraData *data);
>  	int parseLocation(const YamlObject &cameraConfigData, VirtualCameraData *data);
>  	int parseModel(const YamlObject &cameraConfigData, VirtualCameraData *data);
> +	int parseScaleMode(const YamlObject &framesConfigData, ScaleMode *scaleMode);
>  };
>
>  } /* namespace libcamera */
> diff --git a/src/libcamera/pipeline/virtual/utils.h b/src/libcamera/pipeline/virtual/utils.h
> new file mode 100644
> index 00000000..43a14d4b
> --- /dev/null
> +++ b/src/libcamera/pipeline/virtual/utils.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * Copyright (C) 2024, Google Inc.
> + *
> + * utils.h - Utility types for Virtual Pipeline Handler
> + */
> +
> +namespace libcamera {
> +
> +template<class... Ts>
> +struct overloaded : Ts... {
> +	using Ts::operator()...;
> +};
> +template<class... Ts>
> +overloaded(Ts...) -> overloaded<Ts...>;
> +

Does using a standard construct like std::visit require a custom
definition like this one ? Also, this header is included from a
single cpp file; move its content there if you can.


> +} /* namespace libcamera */
> diff --git a/src/libcamera/pipeline/virtual/virtual.cpp b/src/libcamera/pipeline/virtual/virtual.cpp
> index 55bc30df..98aed412 100644
> --- a/src/libcamera/pipeline/virtual/virtual.cpp
> +++ b/src/libcamera/pipeline/virtual/virtual.cpp
> @@ -27,6 +27,7 @@
>  #include "libcamera/internal/yaml_parser.h"
>
>  #include "parser.h"
> +#include "utils.h"
>
>  namespace libcamera {
>
> @@ -49,17 +50,18 @@ uint64_t currentTimestamp()
>
>  VirtualCameraData::VirtualCameraData(PipelineHandler *pipe,
>  				     std::vector<Resolution> supportedResolutions)
> -	: Camera::Private(pipe), supportedResolutions_(std::move(supportedResolutions))
> +	: Camera::Private(pipe)
>  {
> -	for (const auto &resolution : supportedResolutions_) {
> -		if (minResolutionSize_.isNull() || minResolutionSize_ > resolution.size)
> -			minResolutionSize_ = resolution.size;
> +	config_.resolutions = std::move(supportedResolutions);
> +	for (const auto &resolution : config_.resolutions) {
> +		if (config_.minResolutionSize.isNull() || config_.minResolutionSize > resolution.size)
> +			config_.minResolutionSize = resolution.size;
>
> -		maxResolutionSize_ = std::max(maxResolutionSize_, resolution.size);
> +		config_.maxResolutionSize = std::max(config_.maxResolutionSize, resolution.size);
>  	}
>
>  	properties_.set(properties::PixelArrayActiveAreas,
> -			{ Rectangle(maxResolutionSize_) });
> +			{ Rectangle(config_.maxResolutionSize) });
>
>  	/* \todo Support multiple streams and pass multi_stream_test */
>  	streamConfigs_.resize(kMaxStream);
> @@ -87,7 +89,7 @@ CameraConfiguration::Status VirtualCameraConfiguration::validate()
>
>  	for (StreamConfiguration &cfg : config_) {
>  		bool found = false;
> -		for (const auto &resolution : data_->supportedResolutions_) {
> +		for (const auto &resolution : data_->config_.resolutions) {
>  			if (resolution.size.width == cfg.size.width &&
>  			    resolution.size.height == cfg.size.height) {
>  				found = true;
> @@ -102,7 +104,7 @@ CameraConfiguration::Status VirtualCameraConfiguration::validate()
>  			 * Defining the default logic in PipelineHandler to
>  			 * find the closest resolution would be nice.
>  			 */
> -			cfg.size = data_->maxResolutionSize_;
> +			cfg.size = data_->config_.maxResolutionSize;
>  			status = Adjusted;
>  		}
>
> @@ -145,11 +147,11 @@ PipelineHandlerVirtual::generateConfiguration(Camera *camera,
>  	for (const StreamRole role : roles) {
>  		std::map<PixelFormat, std::vector<SizeRange>> streamFormats;
>  		PixelFormat pixelFormat = formats::NV12;
> -		streamFormats[pixelFormat] = { { data->minResolutionSize_, data->maxResolutionSize_ } };
> +		streamFormats[pixelFormat] = { { data->config_.minResolutionSize, data->config_.maxResolutionSize } };
>  		StreamFormats formats(streamFormats);
>  		StreamConfiguration cfg(formats);
>  		cfg.pixelFormat = pixelFormat;
> -		cfg.size = data->maxResolutionSize_;
> +		cfg.size = data->config_.maxResolutionSize;
>  		cfg.bufferCount = VirtualCameraConfiguration::kBufferCount;
>
>  		switch (role) {
> @@ -181,6 +183,7 @@ int PipelineHandlerVirtual::configure(Camera *camera,
>  	VirtualCameraData *data = cameraData(camera);
>  	for (size_t i = 0; i < config->size(); ++i) {
>  		config->at(i).setStream(&data->streamConfigs_[i].stream);
> +		/* Start reading the images/generating test patterns */
>  		data->streamConfigs_[i].frameGenerator->configure(
>  			data->streamConfigs_[i].stream.configuration().size);
>  	}
> @@ -274,10 +277,14 @@ bool PipelineHandlerVirtual::match([[maybe_unused]] DeviceEnumerator *enumerator
>  		std::set<Stream *> streams;
>  		for (auto &streamConfig : data->streamConfigs_)
>  			streams.insert(&streamConfig.stream);
> -		std::string id = data->id_;
> +		std::string id = data->config_.id;
>  		std::shared_ptr<Camera> camera = Camera::create(std::move(data), id, streams);
>
> -		initFrameGenerator(camera.get());
> +		if (!initFrameGenerator(camera.get())) {
> +			LOG(Virtual, Error) << "Failed to initialize frame "
> +					    << "generator for camera: " << id;
> +			continue;
> +		}
>
>  		registerCamera(std::move(camera));
>  	}
> @@ -285,15 +292,30 @@ bool PipelineHandlerVirtual::match([[maybe_unused]] DeviceEnumerator *enumerator
>  	return true;
>  }
>
> -void PipelineHandlerVirtual::initFrameGenerator(Camera *camera)
> +bool PipelineHandlerVirtual::initFrameGenerator(Camera *camera)
>  {
>  	auto data = cameraData(camera);
> -	for (auto &streamConfig : data->streamConfigs_) {
> -		if (data->testPattern_ == TestPattern::DiagonalLines)
> -			streamConfig.frameGenerator = std::make_unique<DiagonalLinesGenerator>();
> -		else
> -			streamConfig.frameGenerator = std::make_unique<ColorBarsGenerator>();
> -	}
> +	auto &frame = data->config_.frame;
> +	std::visit(overloaded{
> +			   [&](TestPattern &testPattern) {
> +				   for (auto &streamConfig : data->streamConfigs_) {
> +					   if (testPattern == TestPattern::DiagonalLines)
> +						   streamConfig.frameGenerator = std::make_unique<DiagonalLinesGenerator>();
> +					   else
> +						   streamConfig.frameGenerator = std::make_unique<ColorBarsGenerator>();
> +				   }
> +			   },
> +			   [&](ImageFrames &imageFrames) {
> +				   for (auto &streamConfig : data->streamConfigs_)
> +					   streamConfig.frameGenerator = ImageFrameGenerator::create(imageFrames);
> +			   } },
> +		   frame);
> +
> +	for (auto &streamConfig : data->streamConfigs_)
> +		if (!streamConfig.frameGenerator)
> +			return false;
> +
> +	return true;
>  }
>
>  REGISTER_PIPELINE_HANDLER(PipelineHandlerVirtual, "virtual")
> diff --git a/src/libcamera/pipeline/virtual/virtual.h b/src/libcamera/pipeline/virtual/virtual.h
> index 8830e00f..efa97e88 100644
> --- a/src/libcamera/pipeline/virtual/virtual.h
> +++ b/src/libcamera/pipeline/virtual/virtual.h
> @@ -8,6 +8,8 @@
>  #pragma once
>
>  #include <memory>
> +#include <string>
> +#include <variant>
>  #include <vector>
>
>  #include <libcamera/base/file.h>
> @@ -16,10 +18,14 @@
>  #include "libcamera/internal/dma_buf_allocator.h"
>  #include "libcamera/internal/pipeline_handler.h"
>
> +#include "frame_generator.h"
> +#include "image_frame_generator.h"
>  #include "test_pattern_generator.h"
>
>  namespace libcamera {
>
> +using VirtualFrame = std::variant<TestPattern, ImageFrames>;
> +
>  class VirtualCameraData : public Camera::Private
>  {
>  public:
> @@ -33,18 +39,22 @@ public:
>  		Stream stream;
>  		std::unique_ptr<FrameGenerator> frameGenerator;
>  	};
> +	/* The config file is parsed to the Configuration struct */
> +	struct Configuration {
> +		std::string id;
> +		std::vector<Resolution> resolutions;
> +		VirtualFrame frame;
> +
> +		Size maxResolutionSize;
> +		Size minResolutionSize;
> +	};
>
>  	VirtualCameraData(PipelineHandler *pipe,
>  			  std::vector<Resolution> supportedResolutions);
>
>  	~VirtualCameraData() = default;
>
> -	std::string id_;
> -	TestPattern testPattern_ = TestPattern::ColorBars;
> -
> -	const std::vector<Resolution> supportedResolutions_;
> -	Size maxResolutionSize_;
> -	Size minResolutionSize_;
> +	Configuration config_;
>
>  	std::vector<StreamConfig> streamConfigs_;
>  };
> @@ -89,7 +99,7 @@ private:
>  		return static_cast<VirtualCameraData *>(camera->_d());
>  	}
>
> -	void initFrameGenerator(Camera *camera);
> +	bool initFrameGenerator(Camera *camera);
>
>  	DmaBufAllocator dmaBufAllocator_;
>  };
> --
> 2.46.0.469.g59c65b2a67-goog
>
