[PATCH] gstreamer: Add GstVideoMeta support

Nicolas Dufresne nicolas at ndufresne.ca
Fri Oct 25 17:23:28 CEST 2024


Thanks for looking into this.

On Friday, 25 October 2024 at 12:37 +0900, Hou Qi wrote:
> This change is to provide correct stride and offset for downstream
> plugin who needs GstVideoMeta when camera is connected to waylandsink
> or others.

s/stride and offset/strides and offsets/

> Without GstVideoMeta, waylandsink uses video-info calculated stride an
> offset to create wlbuffer. The calculated stride is 4 aligned with width.

s/stride an offset/strides and offsets/

Just state that the strides computed by GStreamer's libgstvideo may differ
from the ones used by the camera driver. The commit message is otherwise too
specific to your lab setup.

> While camera stride got from driver reported bytesperline maybe the
> same size with width. There will be mismatch between camera output
> buffer and wlbuffer if camera generated size is not 4 aligned for
> multi-plane formats such as NV12 and NV16. It causes display abnormal.

There is room for wording improvement here too.

> 
> ---
>  src/gstreamer/gstlibcamerasrc.cpp | 22 +++++++++++++++++++++-
>  1 file changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/src/gstreamer/gstlibcamerasrc.cpp b/src/gstreamer/gstlibcamerasrc.cpp
> index 912a8d55..c357dda1 100644
> --- a/src/gstreamer/gstlibcamerasrc.cpp
> +++ b/src/gstreamer/gstlibcamerasrc.cpp
> @@ -285,10 +285,30 @@ int GstLibcameraSrcState::processRequest()
>  	GstFlowReturn ret = GST_FLOW_OK;
>  	gst_flow_combiner_reset(src_->flow_combiner);
>  
> -	for (GstPad *srcpad : srcpads_) {
> +	for (gsize i = 0; i < srcpads_.size(); i++) {
> +		GstPad *srcpad = srcpads_[i];
>  		Stream *stream = gst_libcamera_pad_get_stream(srcpad);
>  		GstBuffer *buffer = wrap->detachBuffer(stream);
> +		StreamConfiguration &stream_cfg = src_->state->config_->at(i);
> +		GstVideoInfo info;
> +		g_autoptr(GstCaps) caps = gst_pad_get_current_caps(srcpad);
>  
> +		if (!gst_video_info_from_caps(&info, caps)) {
> +			GST_WARNING("Failed to create video info");
> +		} else {
> +			guint k, stride;
> +			gsize offset = 0;
> +			for (k = 0; k < GST_VIDEO_INFO_N_PLANES(&info); k++) {
> +				stride = gst_video_format_info_extrapolate_stride(info.finfo, k, stream_cfg.stride);
> +				info.stride[k] = stride;
> +				info.offset[k] = offset;
> +				offset += stride * GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT(info.finfo, k, GST_VIDEO_INFO_HEIGHT(&info));
> +			}

> +			gst_buffer_add_video_meta_full(buffer, GST_VIDEO_FRAME_FLAG_NONE,
> +						       GST_VIDEO_INFO_FORMAT(&info), GST_VIDEO_INFO_WIDTH(&info),
> +						       GST_VIDEO_INFO_HEIGHT(&info), GST_VIDEO_INFO_N_PLANES(&info),
> +						       info.offset, info.stride);

Once we are streaming, this will never change. Please pre-calculate the
information and encapsulate it in the buffer pool implementation. This will
also avoid doing a malloc for every request, and ensure there is only one
video meta per buffer.

Note that downstream may not support GstVideoMeta (think of filesink); this is
detected through the allocation query. In that case you must copy the buffer
to system memory before pushing it. Otherwise, images saved to disk will vary
in padding, making them impossible to read back.

Nicolas

> +		}
>  		FrameBuffer *fb = gst_libcamera_buffer_get_frame_buffer(buffer);
>  
>  		if (GST_CLOCK_TIME_IS_VALID(wrap->pts_)) {



More information about the libcamera-devel mailing list