[libcamera-devel] [PATCH v3 3/3] android: camera_device: Configure one stream for identical stream requests
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Tue Aug 31 22:56:33 CEST 2021
Hi Hiro,
Thank you for the patch.
On Tue, Aug 31, 2021 at 06:34:39PM +0900, Hirokazu Honda wrote:
> An Android HAL client may request identical stream requests. It is
> redundant that a native camera device produces a separate stream for
> each of the identical requests.
> Configure a single camera stream configuration for the identical
> stream requests.
>
> Signed-off-by: Hirokazu Honda <hiroh at chromium.org>
> Reviewed-by: Jacopo Mondi <jacopo at jmondi.org>
> ---
> src/android/camera_device.cpp | 28 +++++++++++++++++++++++-----
> 1 file changed, 23 insertions(+), 5 deletions(-)
>
> diff --git a/src/android/camera_device.cpp b/src/android/camera_device.cpp
> index 74a95a2a..1cb4e675 100644
> --- a/src/android/camera_device.cpp
> +++ b/src/android/camera_device.cpp
> @@ -618,14 +618,32 @@ int CameraDevice::configureStreams(camera3_stream_configuration_t *stream_list)
> continue;
> }
>
> + /* This stream will be produced by hardware. */
> + stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE;
Strictly speaking this isn't true, given that the software stream will
not be produced by hardware, but by a memcpy, right? Is this needed
because otherwise the gralloc module will not pick the right format? If
so, could you capture it in the comment?
> +
> + /*
> + * If a CameraStream with the same size and format of the
> + * current stream has already been requested, associate the two.
> + */
> + auto iter = std::find_if(
> + streamConfigs.begin(), streamConfigs.end(),
> + [size, format](const Camera3StreamConfig &streamConfig) {
> + return streamConfig.config.size == size &&
> + streamConfig.config.pixelFormat == format;
> + });
> + if (iter != streamConfigs.end()) {
> + /* Add usage to copy the buffer in streams[0] to stream. */
> + iter->streams[0].stream->usage |= GRALLOC_USAGE_SW_READ_OFTEN;
> + stream->usage |= GRALLOC_USAGE_SW_WRITE_OFTEN;
> + iter->streams.push_back({ stream, CameraStream::Type::Mapped });
> + continue;
> + }
> +
> Camera3StreamConfig streamConfig;
> streamConfig.streams = { { stream, CameraStream::Type::Direct } };
> streamConfig.config.size = size;
> streamConfig.config.pixelFormat = format;
> streamConfigs.push_back(std::move(streamConfig));
> -
> - /* This stream will be produced by hardware. */
> - stream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE;
> }
>
> /* Now handle the MJPEG streams, adding a new stream if required. */
> @@ -1144,12 +1162,12 @@ void CameraDevice::requestComplete(Request *request)
> resultMetadata = std::make_unique<CameraMetadata>(0, 0);
> }
>
> - /* Handle any JPEG compression. */
> + /* Handle post-processing. */
> for (camera3_stream_buffer_t &buffer : descriptor.buffers_) {
> CameraStream *cameraStream =
> static_cast<CameraStream *>(buffer.stream->priv);
>
> - if (cameraStream->camera3Stream().format != HAL_PIXEL_FORMAT_BLOB)
> + if (cameraStream->type() == CameraStream::Type::Direct)
> continue;
I would have moved those changes to 2/3 as that's where you introduce
usage of the YUV post-processor, but it's not enabled before this patch,
so it doesn't matter much.
With the above comment expanded (or my understanding corrected),
Reviewed-by: Laurent Pinchart <laurent.pinchart at ideasonboard.com>
>
> FrameBuffer *src = request->findBuffer(cameraStream->stream());
--
Regards,
Laurent Pinchart