[libcamera-devel] [PATCH-Resend] Add getting started guide for application developers

Umang Jain email at uajain.com
Thu Jun 18 14:58:16 CEST 2020


Hi Chris,

Thank you for the intensive work.
For the most part, the guide looks good as a set of first steps and
provides a decent foundation to improve incrementally over time.

The comments below represent my own view on the guide as an application
developer. It is certainly headed in the right direction, providing the
major concepts of how to go about implementing/integrating with
libcamera. There are a few "bumps" and/or rough corners that I noticed
along the way - but we can surely work to smooth them out to make sure
the flow does not break.

On 6/15/20 4:20 PM, Kieran Bingham wrote:
> From: Chris Chinchilla <chris at gregariousmammal.com>
>
> ---
>
> [Kieran:]
> Resending this inline to ease review on the libcamera mailing-list.
>
>
>   .../guides/libcamera-application-author.rst   | 472 ++++++++++++++++++
>   1 file changed, 472 insertions(+)
>   create mode 100644 Documentation/guides/libcamera-application-author.rst
>
> diff --git a/Documentation/guides/libcamera-application-author.rst b/Documentation/guides/libcamera-application-author.rst
> new file mode 100644
> index 000000000000..c5f723820004
> --- /dev/null
> +++ b/Documentation/guides/libcamera-application-author.rst
> @@ -0,0 +1,472 @@
> +Supporting libcamera in your application
> +========================================
> +
> +This tutorial shows you how to create an application that uses libcamera
> +to connect to a camera on a system, capture frames from it for 3
> +seconds, and write metadata about the frames to standard out.
> +
> +.. TODO: How much of the example code runs before camera start etc?
> +
> +Create a pointer to the camera
> +------------------------------
> +
> +Before the ``int main()`` function, create a global shared pointer
> +variable for the camera:
> +
> +.. code:: cpp
> +
> +   std::shared_ptr<Camera> camera;
> +
> +   int main()
> +   {
> +       // Code to follow
> +   }
> +
> +Camera Manager
> +--------------
> +
> +Every libcamera-based application needs an instance of a
> +`CameraManager <https://u15657259.ct.sendgrid.net/ls/click?upn=6Fkx53fyxf8QbQSo3n6XwVvpJlL3Ck-2FiT-2BzZvLbjxYi22g2nmOtpnzDUQVty8E1U2YtX7D6Xiv4TCLS8GCycB9oWwYg3tDzL4mTdnBmeIJg-3DD9b__C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rB33oB3-2FfEP5a0jYOMGaYSC-2B1NPQEwZrA2WYD75R1F48De8BJrWsncw7HQ0ytx0MSDcpZf2c40QCugah38dmfPyMpFkTtb29jfL7UuigMHRzQa3UkKiKfK0vTWLmzFNsx6bNN6MuoJVI7q2LbpQdRP9Xt4nBrHqdkCLZhHQRntWq>`_
> +that runs for the life of the application. When you start the Camera
> +Manager, it finds all the cameras available to the current system.
> +Behind the scenes, the libcamera Pipeline Handler abstracts and manages
> +the complex pipelines that kernel drivers expose through the `Linux
> +Media
> +Controller <https://u15657259.ct.sendgrid.net/ls/click?upn=8H1KCc2bev8KdIveckpOED7JN61TE8ZE6fRA1sds43cLJ-2BKWg0K-2FfHSIn7y3Zv8wZr4JjY4qaNpDDqKrML97LAmGbrcll-2Fid-2Fawr0xqB-2FsWSme11klvsujepH9rSbEPjBZhB_C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rPUheUY78qOAfGwcs1bQ9IKfPQQe9l9y5aNYWsrwbcTlV0q3vO7Tcl7uSqtMDAsEFhEvYqXinRVZ70B9MwCMf2rN029C7hXwepnNmapCHhIHygT9fwTmdVEZnm3OtRxV3KMaGMwn0-2BE2VJNJ-2FuEja9yiB-2FyOeOpv-2F3isLDH6-2BhV3>`__
> +and `V4L2 <https://u15657259.ct.sendgrid.net/ls/click?upn=8H1KCc2bev8KdIveckpOEIT-2Bh57j-2Bls7LlbNtVnJCQidKCNLJwzAcLCkSRRu7uJuKERu_C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rELvzyodEYWtm2327u1j2Ke3sozfSaylMTxSWeBqqXVZIBYzADUfl87YZSTSxUtLcXLocH2pwWhdAnJF45wYzfCz5tgCYPRDV-2FrzRnL4UDY88YlGdZT3oNfqneQ8wU8jTiUwC-2B-2B-2FC3Yl9sJGKP1qgDb04YA6BPgQvORsXesYWMuP>`__ APIs, meaning that an
> +application doesn’t need to handle device or driver specifics.
Certainly, I get your point here, but I would refrain from mentioning
terms like "pipeline handler" and V4L2 specifics/links in the first
paragraph itself. I would just say libcamera "abstracts away all the
complex device driver specifics so the application doesn't need to
bother about them".

It's just that the information is helpful, no doubt, but it's an
implementation detail of libcamera. Maybe we can hook up another
section like "Under the hood" at the end, give a bird's eye view of
libcamera's architecture and how it interacts/integrates with V4L2,
and end with a sub-section around "Further suggested reading" or such.
> +
> +To create and start a new Camera Manager, create a new pointer variable
> +to the instance, and then start it:
> +
> +.. code:: cpp
> +
> +   CameraManager *cm = new CameraManager();
> +   cm->start();
> +
> +When you build the application, the Camera Manager identifies all
> +supported devices and creates cameras the application can interact with.
> +
> +The code below identifies all available cameras, and for this example,
> +writes them to standard output:
> +
> +.. code:: cpp
> +
> +   for (auto const &camera : cm->cameras())
> +       std::cout << camera->name() << std::endl;
> +
> +For example, the output on Ubuntu running in a VM on macOS is
> +``FaceTime HD Camera (Built-in):``, and for x y, etc.
Yes, I like the way you wrote a code snippet and also included the
output from your machine. Can you also include the expected output for
the other code snippets below, wherever you use a snippet to explain a
concept? I think it simply makes things easier to grasp. I will point
them out below.
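For instance, directly under a snippet, something along these lines
(reusing the example output you already give above; the actual name
will of course differ per machine):

    for (auto const &camera : cm->cameras())
        std::cout << camera->name() << std::endl;

    // Possible output, e.g. on Ubuntu running in a VM on macOS:
    // FaceTime HD Camera (Built-in)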
> +
> +.. TODO: Better examples
> +
> +Create and acquire a camera
> +---------------------------
> +
> +What libcamera considers a camera
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The libcamera library supports fixed and hot-pluggable cameras,
> +including cameras plugged and unplugged after initializing the library.
> +The libcamera library supports point-and-shoot still image and video
> +capture, either controlled directly by the CPU or exposed through an
> +internal USB bus as a UVC device designed for video conferencing usage.
> +The libcamera library considers any device that includes independent
> +camera sensors, such as front and back sensors as multiple different
> +camera devices.
> +
> +Once you know what camera you want to use, your application needs to
> +acquire a lock to it so no other application can use it.
> +
> +This example application uses a single camera that the Camera Manager
> +reports as available to applications.
> +
> +The code below creates the name of the first available camera as a
> +convenience variable, fetches that camera, and acquires the device for
> +exclusive access:
> +
> +.. code:: cpp
> +
> +   std::string cameraName = cm->cameras()[0]->name();
> +   camera = cm->get(cameraName);
> +   camera->acquire();
> +
> +Configure the camera
> +--------------------
> +
> +Before the application can do anything with the camera, you need to know
> +what its capabilities are. These capabilities include scalars,
> +resolutions, supported formats and converters. The libcamera library
> +uses ``StreamRole``\ s to define four predefined ways an application
I think we should add a line or two to explain StreamRole better.
To me, it is basically the "mode" the application wants to use the
camera as.
Something like:
"Depending on the way your application intends to use the camera (e.g.
still capture, video streaming or recording), one needs to check the
camera's capabilities and configure it accordingly prior to using it.
This is where `StreamRole` ... "

My point here is that once you give an "overall" concept to the
application developer beforehand, it's much easier to follow through
and they start connecting the dots instantaneously.
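To illustrate what I mean, a minimal sketch along these lines could
help (the two roles below are just examples picked from the StreamRole
list in the API documentation):

    /* Ask for configurations suited to two different use cases. */
    std::unique_ptr<CameraConfiguration> stillConfig =
        camera->generateConfiguration( { StreamRole::StillCapture } );
    std::unique_ptr<CameraConfiguration> videoConfig =
        camera->generateConfiguration( { StreamRole::VideoRecording } );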
> +intends to use a camera (`You can read the full list in the API
> +documentation <https://u15657259.ct.sendgrid.net/ls/click?upn=6Fkx53fyxf8QbQSo3n6XwVvpJlL3Ck-2FiT-2BzZvLbjxYgjmdniwHUIAFJbhJNnqivwgevvTgjgfehxOPAVk91hbBh8o9KrFVwcHj-2F-2FRW3L389lXvmJ4pIkJ5fI7dCY5gfSPYq0_C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rExoagPKvPf7KXxnIZ4V-2BJwnRz6kQAr-2BvM-2BH27kqnq1K0ek9EJlsIz9NFR-2F6qLk1ewCyRu6a7iOOrbWK5epWegitdA3fUO0aLZfsFQElFKN6PQYenomqbn4i2W4VrAr0DHvWcsT7mgYV-2BLsHBLp47TPOfTV-2FSnZYrng-2FkQn-2Bf2VM>`__).
> +
> +To find out whether the way your application wants to use the camera is possible,
> +generate a new configuration using a vector of ``StreamRole``\ s, and
> +send that vector to the camera. To do this, create a new configuration
> +variable and use the ``generateConfiguration`` function to produce a
> +``CameraConfiguration`` for it. If the camera can handle the
> +configuration it returns a full ``CameraConfiguration``, and if it
> +can’t, a null pointer.
> +
> +.. code:: cpp
> +
> +   std::unique_ptr<CameraConfiguration> config = camera->generateConfiguration( { StreamRole::Viewfinder } );
> +
> +A ``CameraConfiguration`` has a ``StreamConfiguration`` instance for
> +each ``StreamRole`` the application requested, and that the camera can
> +support. Each of these has a default size and format that the camera
s/format/pixel-data format/ maybe?
> +assigned, depending on the ``StreamRole`` requested.
> +
> +The code below creates a new ``StreamConfiguration`` variable and
> +populates it with the value of the first (and only) ``StreamRole`` in
> +the camera configuration. It then outputs the value to standard out.
> +
> +.. code:: cpp
> +
> +   StreamConfiguration &streamConfig = config->at(0);
> +   std::cout << "Default viewfinder configuration is: " << streamConfig.toString() << std::endl;
This is where I would like to see a demo output too, like you did above 
with camera->name().
> +
> +Change and validate the configuration
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Once you have a ``StreamConfiguration``, your application can make
> +changes to the parameters it contains, for example, to change the width
> +and height you could use the following code:
> +
> +.. code:: cpp
> +
> +   streamConfig.size.width = 640;
> +   streamConfig.size.height = 480;
> +
> +If your application makes changes to any parameters, validate them
> +before applying them to the camera by using the ``validate`` function.
> +If the new value is invalid, the validation process adjusts the
> +parameter to what it considers a valid value. An application should
> +check that the adjusted configuration is something you expect (you can
> +use the
> +`Status <https://u15657259.ct.sendgrid.net/ls/click?upn=6Fkx53fyxf8QbQSo3n6XwVvpJlL3Ck-2FiT-2BzZvLbjxYi22g2nmOtpnzDUQVty8E1UlWA8zVzvP7nZi5be89y9qNZuk4V-2B9mz1-2FmA8GXsWvi4DmNzsTpoiD7BRXNzFADe0zFdSSlMKcgItqvJCKLpufHJb0oV4X5q3MSXkMnlcT4Q-3DFz5d_C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rH5JgUtCHL-2BwQMFGf5eKQHuA6ZZDitmYJJp3pDpMA0m7UCo7JgNNktkoiHia9Yinfj4DnDTaTv29h4yRYDFrJL5uUKWuesnTNzyrltrSzRRoSxADXrpiSo5-2Fx0gUEZo96EwtAKRz0ZKxpJ9poUj758hx8x8DYz5pu27LzbNJc7v1>`_
> +method to check if the Pipeline Handler adjusted the configuration).
> +
> +For example, above you set the width and height to 640x480, but if the
> +camera cannot produce an image that large, it might return the
> +configuration with a new size of 320x240 and a status of ``Adjusted``.
> +
> +For this example application, the code below prints the adjusted values
> +to standard out.
> +
> +.. code:: cpp
> +
> +   config->validate();
> +   std::cout << "Validated viewfinder configuration is: " << streamConfig.toString() << std::endl;
Demo output on your machine will help here too.
> +
> +With a validated ``CameraConfiguration``, send it to the camera to
> +confirm the new configuration:
> +
> +.. code:: cpp
> +
> +   camera->configure(config.get());
> +
> +If you don’t first validate the configuration before calling
> +``configure``, there’s a chance that calling the function fails.
> +
> +Allocate FrameBuffers
> +---------------------
> +
This section, I feel, gets a bit steep with the FrameBufferAllocator
etc. If you can summarize a bit first, like:
'the application needs to arrange placeholders (or buffers) for the
incoming pixel data of each image frame from the camera, which
libcamera can write to and the application can read from. This is done
with FrameBuffer instances ...'
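On a related note, the allocation loop further down uses an
``allocator`` variable, but unless I missed it, the guide never shows
it being created. It is probably worth showing that explicitly; a
minimal sketch, assuming the default ``FrameBufferAllocator``
constructed from the camera:

    #include <libcamera/framebuffer_allocator.h>

    /* Create the allocator for the camera acquired earlier. */
    FrameBufferAllocator *allocator = new FrameBufferAllocator(camera);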
> +The libcamera library consumes buffers provided by applications as
> +``FrameBuffer`` instances, which makes libcamera a consumer of buffers
> +exported by other devices (such as displays or video encoders), or
> +allocated from an external allocator (such as ION on Android).
> +
> +The libcamera library uses ``FrameBuffer`` instances to buffer frames of
> +data from memory, but first, your application should reserve enough
> +memory for the ``FrameBuffer``\ s your streams need based on the sizes
> +and formats you configured.
> +
> +In some situations, applications do not have any means to allocate or
> +get hold of suitable buffers, for instance, when no other device is
> +involved, or on Linux platforms that lack a centralized allocator. The
> +``FrameBufferAllocator`` class provides a buffer allocator that you can
> +use in these situations.
> +
> +An application doesn’t have to use the default ``FrameBufferAllocator``
> +that libcamera provides, and can instead allocate memory manually, and
> +pass the buffers in ``Request``\ s (read more about ``Request``\ s in
> +`the frame capture section <#frame-capture>`__ of this guide). The
> +example in this guide covers using the ``FrameBufferAllocator`` that
> +libcamera provides.
> +
> +Using the libcamera ``FrameBufferAllocator``
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +As the camera manager knows what configuration is available, it can
> +allocate all the resources for you with a single method, and de-allocate
> +with another.
> +
> +Applications create a ``FrameBufferAllocator`` for a Camera, and use it
> +to allocate buffers for streams of a ``CameraConfiguration`` with the
> +``allocate()`` function.
> +
> +.. code:: cpp
> +
> +   for (StreamConfiguration &cfg : *config) {
> +       int ret = allocator->allocate(cfg.stream());
> +       if (ret < 0) {
> +           std::cerr << "Can't allocate buffers" << std::endl;
> +           return -ENOMEM;
> +       }
> +
> +       unsigned int allocated = allocator->buffers(cfg.stream()).size();
> +       std::cout << "Allocated " << allocated << " buffers for stream" << std::endl;

Can we print the stream role here, for which the `allocated` buffers
were allocated, in this output?
Also a demo output here too, maybe?
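Something like this is what I have in mind (a sketch only - I'm not
sure the role itself is still reachable at this point, so I fall back
to the stream configuration description):

    unsigned int allocated = allocator->buffers(cfg.stream()).size();
    std::cout << "Allocated " << allocated << " buffers for stream "
              << cfg.toString() << std::endl;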

> +   }
> +
> +Frame Capture
> +~~~~~~~~~~~~~
> +
> +The libcamera library follows a familiar streaming request model for
> +data (frames in this case). For each frame a camera captures, your
> +application must queue a request for it to the camera.
This section is another "bump" which I felt disrupted the flow.
I recommend summarizing here the relationship between Streams, their
associated FrameBuffers (allocated from the allocator) and `Request`
objects - how they play out together.

You can explain these points sequentially (see the sketch after this
list):
- State that the allocator will allocate a number of buffers for each
stream configured
- A Request object is the basic building block for requesting data
from the camera
        - "Each Request can only have one buffer per stream" (quoting
from Request::addBuffer)
- A Request object is created using camera->createRequest() and
associated with a buffer via request->addBuffer()
- After creating the `Request` objects, we need to queue them to the
camera to read the pixel data as per the Stream's pixelFormat.

Something along these lines gives me an 'overview' of how buffer
allocation for streams is done, through to reading the pixel data
received in those buffers. That then feels like a good, self-contained
block of the guide. After that I can get into more specifics and code
as you have done below.

One question (that I have), and I feel it needs a bit more explanation
here: why does the allocator allocate multiple buffers for each stream?
Why do we need multiple? What are the implications of it? - I will keep
researching on it and/or ask the overlords.
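To make the suggestion concrete, the compact flow I have in mind is
roughly the following (just a sketch condensing the code you already
have below, with the error handling dropped for brevity):

    /* One Request per allocated buffer; one buffer per stream per Request. */
    Stream *stream = streamConfig.stream();
    std::vector<Request *> requests;
    for (const std::unique_ptr<FrameBuffer> &buffer : allocator->buffers(stream)) {
        Request *request = camera->createRequest();
        request->addBuffer(stream, buffer.get());
        requests.push_back(request);
    }

    /* Once the camera has been started, hand the requests over to it. */
    camera->start();
    for (Request *request : requests)
        camera->queueRequest(request);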
> +
> +In the case of libcamera, a ‘Request’ is at least one Stream (one source
> +from a Camera), with a FrameBuffer full of image data.
I didn't understand the phrase 'full of image data' here.
> +
> +First, create an instance of a ``Stream`` from the ``StreamConfig``,
> +assign a vector of ``FrameBuffer``\ s to the allocation created above,
> +and create a vector of the requests the application will make.
> +
> +.. code:: cpp
> +
> +   Stream *stream = streamConfig.stream();
> +   const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator->buffers(stream);
> +   std::vector<Request *> requests;
> +
> +Create ``Request``\ s for the size of the ``FrameBuffer`` by using the
> +``createRequest`` function and adding all the requests created to a
> +vector. For each request add a buffer to it with the ``addBuffer``
> +function, passing the stream the buffer belongs to, and the ``FrameBuffer``.
> +
> +.. code:: cpp
> +
> +       for (unsigned int i = 0; i < buffers.size(); ++i) {
> +           Request *request = camera->createRequest();
> +           if (!request)
> +           {
> +               std::cerr << "Can't create request" << std::endl;
> +               return -ENOMEM;
> +           }
> +
> +           const std::unique_ptr<FrameBuffer> &buffer = buffers[i];
> +           int ret = request->addBuffer(stream, buffer.get());
> +           if (ret < 0)
> +           {
> +               std::cerr << "Can't set buffer for request"
> +                     << std::endl;
> +               return ret;
> +           }
> +
> +           requests.push_back(request);
> +
> +           /*
> +            * todo: Set controls
> +            *
> +            * ControlList &Request::controls();
> +            * controls.set(controls::Brightness, 255);
> +            */
> +       }
> +
> +.. TODO: Controls
> +.. TODO: A request can also have controls or parameters that you can apply to the image. -->
> +
> +Event handling and callbacks
> +----------------------------
> +
> +The libcamera library uses the concept of signals and slots (`similar to
> +Qt <https://u15657259.ct.sendgrid.net/ls/click?upn=8H1KCc2bev8KdIveckpOEPt1XdSmgqg0qCQgL3YM68mk8IOfyUDY23gNG8Sj1oMejdaSA46YcKRVKB10Hkl-2F9g-3D-3DCdAf_C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rJyS8ZfGonTJFhp-2BPFtoaJaTXmcp5eaXVAobjbQ0aKGgsNXbslkZibaTF4BY43o8fMMOECizXYlOdqssnUGSQbwmz-2FBz6h7ScjcvsqcEovJEEcg-2BDh2wqlaDvYhSsK7WtsX6mXNZLmnIRXfDZ-2BxWZBIlfb0kqWfEnixktvbdiJw6>`__) to connect events
> +with callbacks to handle those events.
> +
> +Signals
> +~~~~~~~
> +
> +Signals are emitted when the buffer has been completed (image data
> +written into), and because a Request can contain multiple buffers - the
I read in the source code documentation:
"Each Request can only have one buffer per stream"

So maybe we can explicitly clarify that 'contain multiple buffers'
means multiple buffers, but each belonging to a different stream. Do
you think we should pass on this information here? (Being explicit
avoids confusion/assumptions.)
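Purely for illustration (the stream and buffer names here are
hypothetical, assuming a camera configured with two roles):

    /* One request carrying one buffer for each of two configured streams. */
    Request *request = camera->createRequest();
    request->addBuffer(viewfinderStream, viewfinderBuffer);
    request->addBuffer(recordingStream, recordingBuffer);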
> +Request completed signal is emitted when all buffers within the request
> +are completed.
> +
> +A camera class instance emits a completed request signal to report when
> +all the buffers in a request are complete with image data written to
> +them. To receive these signals, connect a slot function to the signal
> +you are interested in. For this example application, that’s when the
> +camera completes a request.
> +
> +.. code:: cpp
> +
> +   camera->requestCompleted.connect(requestComplete);
> +
> +Slots
> +~~~~~
> +
> +Every time the camera request completes, it emits a signal, and the
> +connected slot is invoked, passing the Request as a parameter.
> +
> +For this example application, the ``requestComplete`` slot outputs
> +information about the ``FrameBuffer`` to standard out, but the callback is
> +typically where your application accesses the image data from the camera
> +and does something with it.
> +
> +Signals operate in the libcamera ``CameraManager`` thread context, so it
> +is important not to block the thread for a long time, as this blocks
> +internal processing of the camera pipelines, and can affect realtime
> +performances, leading to skipped frames etc.
+1 For relaying this information in the guide!
> +
> +First, create the function that matches the slot:
> +
> +.. code:: cpp
> +
> +   static void requestComplete(Request *request)
> +   {
> +       // Code to follow
> +   }
> +
> +The signal/slot flow is the only way to pass requests and buffers from
> +libcamera back to the application. There are times when a request can
> +emit a ``requestComplete`` signal, but this request is actually
> +cancelled, for example by application shutdown. To avoid an application
> +processing image data that doesn’t exist, it’s worth checking that the
> +request is still in the state you expect (You can find `a full list of
> +the completion statuses in the
> +documentation <https://u15657259.ct.sendgrid.net/ls/click?upn=8H1KCc2bev8KdIveckpOEHfchHZMjEb-2BRRylkutFY1Y4OP-2B5wTAGKXtXa-2BQygIkj0lCMjr0RBDl64EEaC-2BsiLfXoDZlyCihx1EQ6DDEkpNLfRmrF0K9z0WiUIxsBOBRdDsXbJLGf4FmW7xEI2qB23A-3D-3D3kvo_C3wFy2Q4UgRsRLDAYieRZ5Z3EhAWyy0-2FkOzyYc6FPc1dn6ROcAJqKXb9hjP566uPQRc1RrXj1WIpwXGZdY33rB5eFtFgDeEcNDEcJqqcMZ-2BX-2BoL2Jkt4PJYC2e84-2FOU4Ls8tuUh2Bw839rTsi0fhccysKBjrrrZHHCFf4YQLVvOqb0e5wmlYevZWpEEAhyF7-2BMCBVtVhe9yNgRC-2BIrrY3HaxvINp6xBsM9AN-2Fb7oSZKrdYZpZ27cJwIP2HlM320N>`__).
> +
> +.. code:: cpp
> +
> +   if (request->status() == Request::RequestCancelled) return;
> +
> +When the request completes, you can access the buffers from the request
> +using the ``buffers()`` function which returns a map of each buffer, and
> +the stream it is associated with.
> +
> +.. code:: cpp
> +
> +   const std::map<Stream *, FrameBuffer *> &buffers = request->buffers();
> +
> +Iterating through the map allows you to inspect each buffer from each
> +stream completed in this request, and access the metadata for each frame
> +the camera captured. The buffer metadata contains information such as
> +capture status, a timestamp and the bytes used.
> +
> +.. code:: cpp
> +
> +   for (auto bufferPair : buffers) {
> +       FrameBuffer *buffer = bufferPair.second;
> +       const FrameMetadata &metadata = buffer->metadata();
> +   }
> +
> +The buffer describes the image data, but a buffer can consist of more
> +than one image plane, which hold the image data of the frame in memory.
> +For example, the Y, U, and V components of a YUV-encoded image are each
> +described by a plane.
This is nice, except I would expect this explanation a bit earlier in
the guide. Maybe combine this concept into the "Frame Capture" section.
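If the planes concept does get introduced earlier, a tiny illustration
could go with it; a sketch, assuming I read the FrameBuffer::Plane
structure correctly:

    /* A FrameBuffer is backed by one or more memory planes. */
    for (const FrameBuffer::Plane &plane : buffer->planes())
        std::cout << "plane length: " << plane.length << std::endl;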
> +
> +For this example application, still inside the ``for`` loop from above,
> +print the Frame sequence number and details of the planes.
> +
> +.. code:: cpp
> +
> +   std::cout << " seq: " << std::setw(6) << std::setfill('0') << metadata.sequence << " bytesused: ";
> +
> +   unsigned int nplane = 0;
> +   for (const FrameMetadata::Plane &plane : metadata.planes)
> +   {
> +       std::cout << plane.bytesused;
> +       if (++nplane < metadata.planes.size()) std::cout << "/";
> +   }
> +
> +   std::cout << std::endl;
Demo output here too maybe?
> +
> +With the handling of this request complete, reuse the buffer by adding
> +it back to the request with its matching stream, and create a new
> +request using the ``createRequest`` function.
> +
> +.. code:: cpp
> +
> +       request = camera->createRequest();
> +       if (!request)
> +       {
> +           std::cerr << "Can't create request" << std::endl;
> +           return;
> +       }
> +
> +       for (auto it = buffers.begin(); it != buffers.end(); ++it)
> +       {
> +           Stream *stream = it->first;
> +           FrameBuffer *buffer = it->second;
> +
> +           request->addBuffer(stream, buffer);
> +       }
> +
> +       camera->queueRequest(request);
> +
> +Start the camera and event loop
> +-------------------------------
> +
> +If you build and run the application at this point, none of the code in
> +the slot method above runs. While most of the code to handle processing
> +camera data is in place, you need first to start the camera to begin
> +capturing frames and queuing requests to it.

Hmm, I expected this information about queuing requests to the camera
earlier (I have even mentioned it above in the review). I would still
mention it both there and here - it helps keep the dots connected.

> +
> +.. code:: cpp
> +
> +   camera->start();
> +   for (Request *request : requests)
> +       camera->queueRequest(request);
> +
> +To emit signals that slots can respond to, your application needs an
> +event loop. You can use the ``EventDispatcher`` class as an event loop
> +for your application to listen to signals from resources libcamera
> +handles.
> +
> +The libcamera library does this by creating instances of the
> +``EventNotifier`` class, which models a file descriptor event source
> +that an application can monitor, and registering them with the
> +``EventDispatcher``. Whenever the ``EventDispatcher`` detects an event
> +it is monitoring, it emits an ``EventNotifier::activated`` signal. The
> +``Timer`` class controls how long an event loop runs for, and you can
> +register a timer with a dispatcher using the ``registerTimer`` function.
> +
I would also link some basic documentation here to help understand the
event loop. I think libcamera's is similar to the Qt event loop?
Laurent might know better.
> +The code below creates a new instance of the ``EventDispatcher`` class,
> +adds it to the camera manager, creates a timer to run for 3 seconds, and
> +during the length of that timer, the ``EventDispatcher`` processes
> +events that occur, and calls the relevant signals.
> +
> +.. code:: cpp
> +
> +   EventDispatcher *dispatcher = cm->eventDispatcher();
> +   Timer timer;
> +   timer.start(3000);
> +   while (timer.isRunning())
> +       dispatcher->processEvents();
> +
> +Clean up and stop application
> +-----------------------------
> +
> +The application is now finished with the camera and the resources the
> +camera uses, so you need to do the following:
> +
> +-  stop the camera
> +-  free the stream from the ``FrameBufferAllocator``
> +-  delete the ``FrameBufferAllocator``
> +-  release the lock on the camera and reset the pointer to it
> +-  stop the camera manager
> +-  exit the application
> +
> +.. code:: cpp
> +
> +   camera->stop();
> +   allocator->free(stream);
> +   delete allocator;
> +   camera->release();
> +   camera.reset();
> +   cm->stop();
> +
> +   return 0;
> +
> +Conclusion
> +----------

