[libcamera-devel] gstreamer libcamerasrc to v4l2h264enc hardware encoder

Nicolas Dufresne nicolas at ndufresne.ca
Sat Jun 13 16:24:42 CEST 2020


On Fri, Jun 12, 2020, 11:27 a.m., Xishan Sun <sunxishan at gmail.com> wrote:

> I am trying to link libcamerasrc with v4l2h264enc to use hardware encoder
> on Raspberry Pi.
>
> 1) if I feed directly into v4l2h264enc with something like:
>
> gst-launch-1.0 libcamerasrc ! video/x-raw,width=1600,height=1200 !
> v4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 ! fakesink
>
> GStreamer will complain that the format cannot be transformed. Debug info:
>
> ...
> 0:00:00.671398274 22045  0x12d5400 WARN           basetransform
> gstbasetransform.c:1355:gst_base_transform_setcaps:<capsfilter0> transform
> could not transform video/x-raw, format=(string)NV21, width=(int)1600,
> height=(int)1200 in anything we support
> 0:00:00.671467254 22045  0x12d5400 WARN            libcamerasrc
> gstlibcamerasrc.cpp:321:gst_libcamera_src_task_run:<libcamerasrc0> error:
> Internal data stream error.
> 0:00:00.671491235 22045  0x12d5400 WARN            libcamerasrc
> gstlibcamerasrc.cpp:321:gst_libcamera_src_task_run:<libcamerasrc0> error:
> streaming stopped, reason not-negotiated (-4)
> ...
>
> It is strange, since
> gst-inspect-1.0 v4l2h264enc
> shows that v4l2h264enc accepts the NV21 and NV12 formats:
>
> ...
> Pad Templates:
>   SINK template: 'sink'
>     Availability: Always
>     Capabilities:
>       video/x-raw
>                  format: { (string)I420, (string)YV12, (string)NV12,
> (string)NV21, (string)RGB16, (string)RGB, (string)BGR, (string)BGRx,
> (string)BGRA, (string)YUY2, (string)YVYU, (string)UYVY }
>                   width: [ 1, 32768 ]
>                  height: [ 1, 32768 ]
>               framerate: [ 0/1, 2147483647/1 ]
> ...
>
>  2) if I add videoconvert into the pipeline as:
>   gst-launch-1.0 libcamerasrc ! video/x-raw,width=1600,height=1200 !
> videoconvert ! v4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 !
> fakesink
> then everything works just fine at first. However, after I insert the
> pipeline into an RTSP server and run it long term (about 4-5 hours), I
> see a segmentation fault like:
> ...
> Thread 13 "pool" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0xb4ce23a0 (LWP 6879)]
> std::_Rb_tree<libcamera::Stream*, std::pair<libcamera::Stream* const,
> libcamera::FrameBuffer*>, std::_Select1st<std::pair<libcamera::Stream*
> const, libcamera::FrameBuffer*> >, std::less<libcamera::Stream*>,
> std::allocator<std::pair<libcamera::Stream* const, libcamera::FrameBuffer*>
> > >::_M_lower_bound (this=0xb4316690, __x=0x517060b, __y=0xb4316694,
> __k=@0xb4ce1728: 0xb4319060) at /usr/include/c++/8/bits/stl_tree.h:1904
> 1904 if (!_M_impl._M_key_compare(_S_key(__x), __k))
> ...
>

This shouldn't happen. Does the "..." mean the backtrace is truncated? If
so, can you post the complete backtrace?
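
In case it helps, a full backtrace of every thread can usually be
captured under gdb along these lines (a generic sketch; substitute your
actual pipeline):

  gdb --args gst-launch-1.0 libcamerasrc ! videoconvert ! v4l2h264enc ! fakesink
  (gdb) run
  ... reproduce the crash ...
  (gdb) thread apply all bt full

Since the crash happens in a "pool" worker thread, "thread apply all"
matters here.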

>
> Question:
> 1) Can we have libcamerasrc output directly to v4l2h264enc without
> videoconvert? If we could set the "output-io-mode" of libcamerasrc to
> "dmabuf", it would be even better.
>

output-io-mode=dmabuf-import enables the experimental buffer import
support. That support was very weak in 1.14 and 1.16, and should be safe
starting from the upcoming 1.18. That said, the V4L2 API still lacks
important features to make this work reliably in all situations.
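
As an illustration only, import mode would be requested on the encoder
side, along these lines (an untested sketch; you may still need a
converter or a caps filter in front if the encoder does not accept the
camera format):

  gst-launch-1.0 libcamerasrc ! video/x-raw,width=1600,height=1200 ! \
      v4l2h264enc output-io-mode=dmabuf-import ! h264parse ! fakesink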


> 2) Is videoconvert the best solution right now? I tried
> v4l2convert and it didn't work. I think libcamerasrc is already using the
> device.
>

v4l2convert works fine for me on Exynos and i.MX6. Which SoC are you
running on? How does it not work for you? How are you using it?
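
For reference, here is roughly what I would try on your side, simply
substituting v4l2convert for videoconvert (a sketch; the NV12 caps
filter is my assumption to keep the conversion cheap for the encoder):

  gst-launch-1.0 libcamerasrc ! video/x-raw,width=1600,height=1200 ! \
      v4l2convert ! video/x-raw,format=NV12 ! v4l2h264enc ! h264parse ! \
      rtph264pay name=pay0 pt=96 ! fakesink

v4l2convert opens its own memory-to-memory device node, not the camera
node, so it should not conflict with libcamerasrc holding the camera.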


> Thanks,
>
> --
> Xishan Sun