[libcamera-devel] [PATCH 3/8] cam: Use SensorTimestamp rather than buffer metadata

Umang Jain umang.jain at ideasonboard.com
Wed Jun 1 10:34:52 CEST 2022


Hi All,

On 12/8/21 21:59, Kieran Bingham wrote:
> Quoting Laurent Pinchart (2021-12-08 18:57:10)
>> Hi Umang,
>>
>> On Tue, Dec 07, 2021 at 07:07:46PM +0530, Umang Jain wrote:
>>> On 12/7/21 6:51 PM, Umang Jain wrote:
>>>> On 12/7/21 5:39 AM, Laurent Pinchart wrote:
>>>>> On Mon, Dec 06, 2021 at 11:39:43PM +0000, Kieran Bingham wrote:
>>>>>> The SensorTimestamp is defined to be the time of capture of the image.
>>>>>> While all streams should have the same timestamp, this is not always
>>>>>> defined or guaranteed as ISP drivers may not forward sequence numbers
>>>>>> and timestamps from their input buffer.
>>>>> That should then be solved by the pipeline handler, which should store
>>>>> the correct timestamp in the buffer metadata.
>>>>>
>>>>>> Use the Request metadata to get the SensorTimestamp which must be
>>>>>> set by the pipeline handlers according to the data from the capture
>>>>>> device.
>>>>>>
>>>>>> Signed-off-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
>>>>>> ---
>>>>>>    src/cam/camera_session.cpp | 7 ++++---
>>>>>>    1 file changed, 4 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> diff --git a/src/cam/camera_session.cpp b/src/cam/camera_session.cpp
>>>>>> index 1bf460fa3fb7..50170723c30f 100644
>>>>>> --- a/src/cam/camera_session.cpp
>>>>>> +++ b/src/cam/camera_session.cpp
>>>>>> @@ -359,10 +359,11 @@ void CameraSession::processRequest(Request *request)
>>>>>>        const Request::BufferMap &buffers = request->buffers();
>>>>>>          /*
>>>>>> -     * Compute the frame rate. The timestamp is arbitrarily retrieved from
>>>>>> -     * the first buffer, as all buffers should have matching timestamps.
>>>>>> +     * Compute the frame rate. The timestamp is retrieved from the
>>>>>> +     * SensorTimestamp property, though all streams should have the
>>>>>> +     * same timestamp.
>>>>>>         */
>>>>>> -    uint64_t ts = buffers.begin()->second->metadata().timestamp;
>>>>>> +    uint64_t ts = request->metadata().get(controls::SensorTimestamp);
>>>>> This seems reasonable. Why do we have timestamps in the buffer metadata
>>>>> ? :-)
>>> Strong chance I have misunderstood the question; I later realized this
>>> is a cam-related patch.
>>>
To me, the question translated to:
>>>
>>>       why do we have timestamps in the FrameBuffer.metadata_.timestamp ?
>> To clarify, I meant to ask why we have both controls::SensorTimestamp
>> *and* timestamps in individual buffers
>> (FrameBuffer::metadata_.timestamp). I suppose it's historical, the
>> timestamp member comes from
>>
>> commit e94e52c0cb27f92c085da6e776af8b3d3172bbb2
>> Author: Niklas Söderlund <niklas.soderlund at ragnatech.se>
>> Date:   Thu Jan 24 23:34:51 2019 +0100
>>
>>      libcamera: v4l2_device: Update dequeued buffer information
>>
>> while controls::SensorTimestamp has been introduced in
>>
>> commit 84e79bd8b5347d91309cbd759bbd988b5144fb8d
>> Author: Jacopo Mondi <jacopo at jmondi.org>
>> Date:   Thu Jul 30 15:18:50 2020 +0200
>>
>>      libcamera: control_ids: Define draft controls
>>
>>> So ignore the discussion (if you want) :-P
>>>
>>>> Because there is no buffer assigned at the point in time where we
>>>> want to capture the timestamp.
>>>>
>>>> The exact location of the timestamp is a \todo
>>>>
>>>>               * \todo The sensor timestamp should be better estimated
>>>>               * by connecting to the V4L2Device::frameStart signal.
>>>>
>>>> in all the pipeline-handlers as of now.
>>>>
>>>> We *want* to capture at the frameStart signal, which is emitted in
>>>> response to VIDIOC_DQEVENT, but we can't as no buffer is assigned (yet).
>>>>
>>>> We probably need a container keeping track of timestamps and
>>>> sequence numbers until a buffer is assigned down the line, and then
>>>> set the buffer metadata by reading the sequence number & timestamp
>>>> from that container.
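
To make that container idea concrete, here is a rough sketch of the
bookkeeping I have in mind. All names here (FrameStartEvent,
TimestampQueue) are hypothetical, not existing libcamera API:

#include <cstdint>
#include <queue>

/* Timestamp and sequence number recorded when frameStart is emitted. */
struct FrameStartEvent {
	uint32_t sequence;
	uint64_t timestamp;	/* nanoseconds, from VIDIOC_DQEVENT */
};

class TimestampQueue
{
public:
	/* Called from the V4L2Device::frameStart signal handler. */
	void recordFrameStart(uint32_t sequence, uint64_t timestamp)
	{
		events_.push({ sequence, timestamp });
	}

	/*
	 * Called once a buffer for this frame is known. Retrieves the
	 * matching timestamp, discarding events for dropped frames.
	 */
	bool fetch(uint32_t sequence, uint64_t &timestamp)
	{
		while (!events_.empty() && events_.front().sequence < sequence)
			events_.pop();

		if (events_.empty() || events_.front().sequence != sequence)
			return false;

		timestamp = events_.front().timestamp;
		events_.pop();
		return true;
	}

private:
	std::queue<FrameStartEvent> events_;
};

The pipeline handler would fill the queue from frameStart and drain it
when setting the buffer (or request) metadata at dequeue time.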
>> Should we just drop FrameBuffer::metadata_.timestamp in favour of
>> controls::SensorTimestamp ? What would be the drawbacks, if any ?


Understanding the 'why' of the per-framebuffer timestamp made me realize
that it's part of the partial request completion implementation. From the
application developer guide:

"
Receiving notifications about the single buffer completion event allows
applications to implement partial request completion support, and to
inspect the buffer content before the request it is part of has fully
completed.
"

So that's why we have the timestamps in buffers. But it is also something
that can be done on the application side (as discussed further in the
series).
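
For illustration, application-side handling of per-buffer timestamps via
partial completion could look roughly like this. It's a sketch against
the public API (error handling omitted, and the exact signatures may
differ between versions):

#include <iostream>
#include <libcamera/libcamera.h>

using namespace libcamera;

/* Called for each buffer as it completes, before requestCompleted. */
static void onBufferCompleted(Request *request, FrameBuffer *buffer)
{
	const FrameMetadata &meta = buffer->metadata();
	std::cout << "request " << request->cookie()
		  << ": buffer seq " << meta.sequence
		  << " ts " << meta.timestamp << " ns" << std::endl;
}

/* Called once all buffers of the request have completed. */
static void onRequestCompleted(Request *request)
{
	uint64_t ts = request->metadata().get(controls::SensorTimestamp);
	std::cout << "request " << request->cookie()
		  << ": SensorTimestamp " << ts << " ns" << std::endl;
}

/*
 * Wiring, given a std::shared_ptr<Camera> camera:
 *
 *	camera->bufferCompleted.connect(&onBufferCompleted);
 *	camera->requestCompleted.connect(&onRequestCompleted);
 */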

> Drawbacks for using Metadata control:
>   - Have to parse the metadata control list to get timestamps
>     (/sequences)
>
> Pros for Metadata control:
>   - Only one place for it, all buffers in the same request are defined as
>     having the same value (sequence and timestamp).
>
>
> Drawbacks for per-buffer metadata:
>   - Must be explicitly assigned to keep them in sync, even though they
>     may be the output of multiple video devices.
>
> Pros for per-buffer metadata:
>   - Quicker to retrieve the values? (defined structure?)
>
> Any more on either side?
>
>
> Personally, I'm currently thinking these are better defined by the
> metadata() control list. After all, that's the purpose of the list,
> right?
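
To put the two options side by side in code (a sketch; the
request-level form matches the hunk above):

#include <libcamera/libcamera.h>

using namespace libcamera;

/* Request-level: one timestamp for the whole request, via the control. */
uint64_t timestampFromRequest(Request *request)
{
	return request->metadata().get(controls::SensorTimestamp);
}

/*
 * Per-buffer: each FrameBuffer carries its own copy, which the pipeline
 * handler must keep in sync across video devices.
 */
uint64_t timestampFromBuffer(Request *request)
{
	return request->buffers().begin()->second->metadata().timestamp;
}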


I might be pulling the discussion away towards the partial request
completion side, but a single metadata() control list could be returned
at various completion points to get a 'clean' partial completion
implementation. That might need some work with ControlList::merge() or
ControlList::update(), so it's a very separate topic indeed.

So I do think the metadata() control list is the better place to define
these, as you suggested.
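
For instance, a hypothetical shape for the merge() approach, where only
ControlList and the controls themselves come from libcamera and the
helper functions are made up:

#include <libcamera/control_ids.h>
#include <libcamera/controls.h>

using namespace libcamera;

/* Early completion point: only the sensor timestamp is known. */
ControlList sensorMetadata(int64_t timestamp)
{
	ControlList metadata(controls::controls);
	metadata.set(controls::SensorTimestamp, timestamp);
	return metadata;
}

/*
 * Later completion point: fold further results into the request's
 * metadata. merge() keeps existing entries and adds new ones.
 */
void completeMetadata(ControlList &requestMetadata,
		      const ControlList &partial)
{
	requestMetadata.merge(partial);
}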

>
>
>>>>>>        double fps = ts - last_;
>>>>>>        fps = last_ != 0 && fps ? 1000000000.0 / fps : 0.0;
>>>>>>        last_ = ts;
>> -- 
>> Regards,
>>
>> Laurent Pinchart

