Trouble capturing still images with StillCapture role on Raspberry Pi
Laurent Pinchart
laurent.pinchart@ideasonboard.com
Thu May 15 10:22:32 CEST 2025
On Thu, May 15, 2025 at 10:07:21AM +0200, Jacopo Mondi wrote:
> On Thu, May 15, 2025 at 03:58:23AM +0000, dominik greitmann wrote:
> > Hello,
> >
> > I am currently setting up a Raspberry Pi Camera using the libcamera
> > library.
> >
> > I would like to capture still images and therefore want to use the
> > StillCapture role. However, I am having trouble finding the correct
> > format or reading the image data from the buffer and saving it as an
> > image file.
> >
> > With the Viewfinder role, everything works fine, but since I care
> > about the quality of the images, I would prefer to use the
> > StillCapture role.
> >
> > Here is my setup:
> >
> > * Raspberry Pi Camera v2.1
> > * Raspberry Pi 4 Model B Rev 1.2
> >
> > I've also attached the relevant code files and an example image.
> > I would be very grateful for any help or suggestions.
>
> I'm not going to review your code in detail, but I would suggest
> decoupling the PNG generation from the capture operations.
>
> Try saving the raw YUV420 frames and inspecting them. The Raspberry Pi
> camera applications (rpicam-apps) might serve as a reference, as I'm
> pretty sure the StillCapture role works well there on the Pi 4.
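
Saving the raw frames for inspection needs very little code. A minimal
sketch of such a helper (untested, and saveRawFrame() is just a name made
up for this example) could be:

#include <fstream>
#include <string>
#include <sys/mman.h>

#include <libcamera/libcamera.h>

static bool saveRawFrame(const libcamera::FrameBuffer *buffer,
                         const std::string &path)
{
    std::ofstream file(path, std::ios::binary);
    if (!file)
        return false;

    for (const libcamera::FrameBuffer::Plane &plane : buffer->planes()) {
        /*
         * Planes may share a single dmabuf at different offsets, and
         * mmap() offsets must be page-aligned, so map from the start of
         * the fd and apply the plane offset manually.
         */
        size_t mapLength = plane.offset + plane.length;
        void *mem = mmap(nullptr, mapLength, PROT_READ, MAP_SHARED,
                         plane.fd.get(), 0);
        if (mem == MAP_FAILED)
            return false;

        file.write(static_cast<const char *>(mem) + plane.offset,
                   plane.length);
        munmap(mem, mapLength);
    }

    return true;
}

The resulting file can then be inspected with, for instance,

    ffplay -f rawvideo -pixel_format yuv420p -video_size 3280x2464 frame.yuv

keeping in mind that padded rows (stride larger than the width) will make
the picture look skewed.
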
Also note that roles are only used to set default values. On Raspberry
Pi platforms, the difference between StillCapture and Viewfinder is the
format (YUV420 vs. RGB888), size (full sensor resolution vs. 800x600)
and number of buffers (1 vs. 4). All those values can be overridden in
your application after generating the configuration.
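
For instance, something along these lines should let you keep the
StillCapture defaults but override individual values (untested sketch,
the size and buffer count are picked arbitrarily):

std::unique_ptr<CameraConfiguration> config =
    camera->generateConfiguration({ StreamRole::StillCapture });

StreamConfiguration &cfg = config->at(0);

// Start from the StillCapture defaults, then override what you need.
cfg.pixelFormat = formats::RGB888; // default is YUV420
cfg.size = Size(1920, 1080);       // default is the full sensor resolution
cfg.bufferCount = 4;               // default is 1

// validate() adjusts the values to the closest supported configuration.
config->validate();
camera->configure(config.get());
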
> > [16:40:48.052609462] [16098] INFO Camera camera_manager.cpp:327 libcamera v0.4.0+53-29156679
> > [16:40:48.097310862] [16110] WARN RPiSdn sdn.cpp:40 Using legacy SDN tuning - please consider moving SDN inside rpi.denoise
> > [16:40:48.100302028] [16110] INFO RPI vc4.cpp:447 Registered camera /base/soc/i2c0mux/i2c@1/imx219@10 to Unicam device /dev/media0 and ISP device /dev/media2
> > Camera ID: /base/soc/i2c0mux/i2c@1/imx219@10
> > Default StillCapture configuration is: 3280x2464-YUV420
> > Supported: 3280x2464-YUV420
> > Validated StillCapture configuration is: 3280x2464-YUV420
> > [16:40:48.101485298] [16098] INFO Camera camera.cpp:1202 configuring streams: (0) 3280x2464-YUV420
> > [16:40:48.103915320] [16110] INFO RPI vc4.cpp:622 Sensor: /base/soc/i2c0mux/i2c@1/imx219@10 - Selected sensor format: 3280x2464-SBGGR10_1X10 - Selected unicam format: 3280x2464-pBAA
> > Allocated 2 buffers for stream
> > Camera started
> > Captured frame sequence: 7
> > Image capture completed!
> > Image saved as PNG: ./captured_images/captured_image.png
> > Processed image saved as binary_image.png
> > Program completed successfully
> > All requests freed.
> > Memory region successfully freed.
> > Frame buffers freed.
> > Camera stopped.
> > Resource cleanup completed.
>
>
> > #include "CameraHandler.hpp"
> >
> > #include <iomanip>
> > #include <iostream>
> > #include <thread>
> > #include <fstream>
> > #include <sys/mman.h> // For mmap and munmap
> > #include <fcntl.h> // For file descriptors
> >
> > using namespace libcamera;
> > using namespace std::chrono_literals;
> >
> > void CameraHandler::requestComplete(Request *request)
> > {
> >     if (request->status() == Request::RequestCancelled)
> >         return;
> >
> >     const std::map<const Stream *, FrameBuffer *> &buffers = request->buffers();
> >     for (auto bufferPair : buffers) {
> >         FrameBuffer *buffer = bufferPair.second;
> >         const FrameMetadata &metadata = buffer->metadata();
> >         std::cout << " seq: " << std::setw(6) << std::setfill('0') << metadata.sequence << " bytesused: ";
> >
> >         unsigned int nplane = 0;
> >         for (const FrameMetadata::Plane &plane : metadata.planes()) {
> >             std::cout << plane.bytesused;
> >             if (++nplane < metadata.planes().size())
> >                 std::cout << "/";
> >         }
> >
> >         std::cout << std::endl;
> >     }
> > }
> >
> > CameraHandler::CameraHandler()
> >     : stream(nullptr),
> >       allocator(nullptr),
> >       buffers(nullptr),
> >       streamConfig(nullptr),
> >       imageData(nullptr),
> >       dataSize(0),
> >       cameraInitialized(false),
> >       captureCompleted(false)
> > {
> > }
> >
> > CameraHandler::~CameraHandler()
> > {
> >     cleanup();
> > }
> >
> > bool CameraHandler::initialize()
> > {
> >     // Create the camera manager
> >     cm = std::make_unique<CameraManager>();
> >     cm->start();
> >
> >     for (auto const &cam : cm->cameras())
> >         std::cout << "Camera ID: " << cam->id() << std::endl;
> >
> >     auto cameras = cm->cameras();
> >     if (cameras.empty()) {
> >         std::cerr << "No cameras were identified on the system." << std::endl;
> >         cm->stop();
> >         return false;
> >     }
> >
> >     std::string cameraId = cameras[0]->id();
> >     camera = cm->get(cameraId);
> >
> >     if (!camera) {
> >         std::cerr << "Failed to get camera." << std::endl;
> >         return false;
> >     }
> >
> >     camera->acquire();
> >     cameraInitialized = true;
> >
> >     return true;
> > }
> >
> > bool CameraHandler::configureCamera(int width, int height)
> > {
> >     if (!cameraInitialized) {
> >         std::cerr << "Camera not initialized." << std::endl;
> >         return false;
> >     }
> >
> >     // Use the StillCapture role instead of Viewfinder
> >     config = camera->generateConfiguration({ StreamRole::StillCapture });
> >
> >     streamConfig = &config->at(0);
> >     std::cout << "Default StillCapture configuration is: " << streamConfig->toString() << std::endl;
> >
> >     // Check supported configurations for the StillCapture role
> >     for (const StreamRole &role : { StreamRole::StillCapture }) {
> >         std::unique_ptr<CameraConfiguration> testConfig = camera->generateConfiguration({ role });
> >         for (const StreamConfiguration &cfg : *testConfig)
> >             std::cout << "Supported: " << cfg.toString() << std::endl;
> >     }
> >
> >     // Configure the resolution and format for the Pi Camera v2
> >     streamConfig->size.width = width;   // Use the width parameter
> >     streamConfig->size.height = height; // Use the height parameter
> >     streamConfig->pixelFormat = libcamera::formats::YUV420;
> >     streamConfig->bufferCount = 2; // Ensure adequate buffers for still capture
> >
> >     config->validate();
> >     std::cout << "Validated StillCapture configuration is: " << streamConfig->toString() << std::endl;
> >
> >     camera->configure(config.get());
> >
> >     allocator = new FrameBufferAllocator(camera);
> >
> >     for (StreamConfiguration &cfg : *config) {
> >         int ret = allocator->allocate(cfg.stream());
> >         if (ret < 0) {
> >             std::cerr << "Can't allocate buffers" << std::endl;
> >             return false;
> >         }
> >
> >         size_t allocated = allocator->buffers(cfg.stream()).size();
> >         std::cout << "Allocated " << allocated << " buffers for stream" << std::endl;
> >     }
> >
> >     stream = streamConfig->stream();
> >     buffers = &allocator->buffers(stream);
> >
> >     for (unsigned int i = 0; i < buffers->size(); ++i) {
> >         std::unique_ptr<Request> request = camera->createRequest();
> >         if (!request) {
> >             std::cerr << "Can't create request" << std::endl;
> >             return false;
> >         }
> >
> >         const std::unique_ptr<FrameBuffer> &buffer = (*buffers)[i];
> >         int ret = request->addBuffer(stream, buffer.get());
> >         if (ret < 0) {
> >             std::cerr << "Can't set buffer for request" << std::endl;
> >             return false;
> >         }
> >
> >         requests.push_back(std::move(request));
> >     }
> >
> >     camera->start();
> >     std::cout << "Camera started" << std::endl;
> >     std::this_thread::sleep_for(1s); // Give the camera time to stabilize
> >
> >     return true;
> > }
> >
> > bool CameraHandler::captureImage(const std::string& outputFolder)
> > {
> >     // Create the output directory
> >     std::system(("mkdir -p " + outputFolder).c_str());
> >
> >     // Create the capture request
> >     std::unique_ptr<Request> captureRequest = camera->createRequest();
> >     if (!captureRequest) {
> >         std::cerr << "Can't create capture request" << std::endl;
> >         return false;
> >     }
> >
> >     const std::unique_ptr<FrameBuffer> &buffer = (*buffers)[0]; // Use the first buffer
> >     int ret = captureRequest->addBuffer(stream, buffer.get());
> >     if (ret < 0) {
> >         std::cerr << "Can't set buffer for capture request" << std::endl;
> >         return false;
> >     }
> >
> >     // Queue the request
> >     ret = camera->queueRequest(captureRequest.get());
> >     if (ret < 0) {
> >         std::cerr << "Can't queue capture request" << std::endl;
> >         return false;
> >     }
> >
> >     // Wait for completion (increased wait time for high-res still capture)
> >     std::this_thread::sleep_for(4s);
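> >
> >     // (Untested alternative sketch: rather than sleeping for a fixed
> >     // time, connect camera->requestCompleted to a handler before
> >     // camera->start() and block here on a condition variable that the
> >     // handler notifies, e.g.:
> >     //
> >     //     camera->requestCompleted.connect(requestComplete);
> >     //     std::unique_lock<std::mutex> lock(mutex);
> >     //     cv.wait(lock, [&] { return requestDone; });
> >     //
> >     // where mutex, cv and requestDone are hypothetical members set by
> >     // the completion handler.)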
> >
> >     // Process the frame metadata
> >     const FrameMetadata &metadata = buffer->metadata();
> >     std::cout << "Captured frame sequence: " << metadata.sequence << std::endl;
> >     std::cout << "Image capture completed!" << std::endl;
> >
> >     // Access the image data
> >     const std::vector<FrameBuffer::Plane> &planes = buffer->planes();
> >     if (planes.empty()) {
> >         std::cerr << "No planes available in the buffer." << std::endl;
> >         return false;
> >     }
> >
> >     // The YUV420 format has multiple planes
> >     if (planes.size() < 3) {
> >         std::cerr << "Expected at least 3 planes for YUV420 format, got " << planes.size() << std::endl;
> >         return false;
> >     }
> >
> >     // Map the Y plane (luminance)
> >     int fd_y = planes[0].fd.get();
> >     size_t length_y = planes[0].length;
> >     void *data_y = mmap(nullptr, length_y, PROT_READ, MAP_SHARED, fd_y, 0);
> >     if (data_y == MAP_FAILED) {
> >         std::cerr << "Error mapping Y plane." << std::endl;
> >         return false;
> >     }
> >
> >     // Map the U plane
> >     int fd_u = planes[1].fd.get();
> >     size_t length_u = planes[1].length;
> >     void *data_u = mmap(nullptr, length_u, PROT_READ, MAP_SHARED, fd_u, 0);
> >     if (data_u == MAP_FAILED) {
> >         munmap(data_y, length_y);
> >         std::cerr << "Error mapping U plane." << std::endl;
> >         return false;
> >     }
> >
> >     // Map the V plane
> >     int fd_v = planes[2].fd.get();
> >     size_t length_v = planes[2].length;
> >     void *data_v = mmap(nullptr, length_v, PROT_READ, MAP_SHARED, fd_v, 0);
> >     if (data_v == MAP_FAILED) {
> >         munmap(data_y, length_y);
> >         munmap(data_u, length_u);
> >         std::cerr << "Error mapping V plane." << std::endl;
> >         return false;
> >     }
> >
> >     // Create OpenCV Mat objects for each plane
> >     int width = streamConfig->size.width;
> >     int height = streamConfig->size.height;
> >
> >     cv::Mat y_plane(height, width, CV_8UC1, data_y);
> >     cv::Mat u_plane(height / 2, width / 2, CV_8UC1, data_u);
> >     cv::Mat v_plane(height / 2, width / 2, CV_8UC1, data_v);
> >
> >     // Create a YUV image by merging the planes (currently unused)
> >     cv::Mat yuv_image;
> >     std::vector<cv::Mat> yuv_planes = { y_plane, u_plane, v_plane };
> >
> >     // For now, expand only the Y plane (grayscale) to BGR
> >     cv::cvtColor(y_plane, capturedImage, cv::COLOR_GRAY2BGR);
> >
> >     // A proper YUV420 to BGR conversion would require more careful
> >     // plane handling; only the Y channel is used here for simplicity.
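> >
> >     // A fuller conversion (untested sketch) could pack the three
> >     // planes into one contiguous I420 buffer and let OpenCV convert
> >     // it. This assumes tightly packed rows (stride equal to the
> >     // width); on Raspberry Pi the stride is aligned, so a robust
> >     // version would copy row by row using streamConfig->stride:
> >     //
> >     //     cv::Mat i420(height * 3 / 2, width, CV_8UC1);
> >     //     memcpy(i420.data, data_y, width * height);
> >     //     memcpy(i420.data + width * height, data_u, width * height / 4);
> >     //     memcpy(i420.data + width * height * 5 / 4, data_v, width * height / 4);
> >     //     cv::cvtColor(i420, capturedImage, cv::COLOR_YUV2BGR_I420);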
> >
> >     // Save as PNG
> >     std::string fileName = outputFolder + "captured_image.png";
> >     if (cv::imwrite(fileName, capturedImage)) {
> >         std::cout << "Image saved as PNG: " << fileName << std::endl;
> >     } else {
> >         std::cerr << "Error saving image as PNG." << std::endl;
> >         return false;
> >     }
> >
> >     // Store a reference to the memory region for cleanup
> >     imageData = data_y;
> >     dataSize = length_y;
> >
> >     // Unmap the U and V planes; only the Y plane is kept, to be
> >     // unmapped by the destructor
> >     munmap(data_u, length_u);
> >     munmap(data_v, length_v);
> >
> >     captureCompleted = true;
> >     return true;
> > }
> >
> > bool CameraHandler::processImage(const std::string& outputFolder)
> > {
> >     if (!captureCompleted || capturedImage.empty()) {
> >         std::cerr << "No image has been captured yet!" << std::endl;
> >         return false;
> >     }
> >
> >     // Convert to grayscale
> >     cv::Mat gray;
> >     cv::cvtColor(capturedImage, gray, cv::COLOR_BGR2GRAY);
> >
> >     // Reduce noise
> >     cv::Mat blurred;
> >     cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0, 2.0);
> >
> >     // Canny edge detection
> >     cv::Mat edges;
> >     cv::Canny(blurred, edges, 35, 55);
> >
> >     // Optional: thicken the edges (morphological operation)
> >     cv::Mat dilated;
> >     cv::dilate(edges, dilated, cv::Mat(), cv::Point(-1, -1), 1);
> >
> >     // Thresholding
> >     cv::Mat binary;
> >     cv::threshold(dilated, binary, 128, 255, cv::THRESH_BINARY);
> >
> >     // Save the processed image
> >     if (cv::imwrite(outputFolder + "binary_image.png", binary)) {
> >         std::cout << "Processed image saved as binary_image.png" << std::endl;
> >         return true;
> >     } else {
> >         std::cerr << "Error saving processed image." << std::endl;
> >         return false;
> >     }
> > }
> >
> > void CameraHandler::cleanup()
> > {
> >     // Free all requests
> >     requests.clear();
> >     std::cout << "All requests freed." << std::endl;
> >
> >     // Free the mapped memory region
> >     if (imageData && dataSize > 0) {
> >         if (munmap(imageData, dataSize) != 0) {
> >             std::cerr << "Error freeing memory region." << std::endl;
> >         } else {
> >             std::cout << "Memory region successfully freed." << std::endl;
> >         }
> >         imageData = nullptr;
> >     }
> >
> >     // Free the frame buffers
> >     if (allocator) {
> >         delete allocator;
> >         allocator = nullptr;
> >         std::cout << "Frame buffers freed." << std::endl;
> >     }
> >
> >     // Stop the camera
> >     if (camera) {
> >         camera->stop();
> >         camera->release();
> >         camera.reset();
> >         std::cout << "Camera stopped." << std::endl;
> >     }
> >
> >     std::cout << "Resource cleanup completed." << std::endl;
> > }
>
> > #pragma once
> >
> > #include <memory>
> > #include <vector>
> > #include <string>
> > #include <opencv2/opencv.hpp>
> > #include <libcamera/libcamera.h>
> >
> > class CameraHandler {
> > public:
> >     CameraHandler();
> >     ~CameraHandler();
> >
> >     // Initialize the camera system
> >     bool initialize();
> >
> >     // Configure the camera with a specific resolution
> >     bool configureCamera(int width = 3280, int height = 2464);
> >
> >     // Capture a single image
> >     bool captureImage(const std::string& outputFolder = "./captured_images/");
> >
> >     // Process the captured image (grayscale conversion, effects, etc.)
> >     bool processImage(const std::string& outputFolder = "./captured_images/");
> >
> >     // Clean up resources
> >     void cleanup();
> >
> > private:
> >     static void requestComplete(libcamera::Request *request);
> >
> >     // Internal camera objects
> >     std::unique_ptr<libcamera::CameraManager> cm;
> >     std::shared_ptr<libcamera::Camera> camera;
> >     libcamera::Stream *stream;
> >
> >     // Buffer handling
> >     libcamera::FrameBufferAllocator *allocator;
> >     std::vector<std::unique_ptr<libcamera::Request>> requests;
> >     const std::vector<std::unique_ptr<libcamera::FrameBuffer>> *buffers;
> >
> >     // Configuration
> >     std::unique_ptr<libcamera::CameraConfiguration> config;
> >     libcamera::StreamConfiguration *streamConfig;
> >
> >     // Image data
> >     void *imageData;
> >     size_t dataSize;
> >     cv::Mat capturedImage;
> >
> >     // Status flags
> >     bool cameraInitialized;
> >     bool captureCompleted;
> > };
>
> > #include "CameraHandler.hpp"
> > #include <iostream>
> >
> > int main()
> > {
> >     CameraHandler cameraHandler;
> >
> >     // Initialize the camera
> >     if (!cameraHandler.initialize()) {
> >         std::cerr << "Failed to initialize camera." << std::endl;
> >         return EXIT_FAILURE;
> >     }
> >
> >     // Configure the camera (using the default high resolution)
> >     if (!cameraHandler.configureCamera()) {
> >         std::cerr << "Failed to configure camera." << std::endl;
> >         return EXIT_FAILURE;
> >     }
> >
> >     // Capture an image
> >     if (!cameraHandler.captureImage()) {
> >         std::cerr << "Failed to capture image." << std::endl;
> >         return EXIT_FAILURE;
> >     }
> >
> >     // Process the captured image
> >     if (!cameraHandler.processImage()) {
> >         std::cerr << "Failed to process image." << std::endl;
> >         return EXIT_FAILURE;
> >     }
> >
> >     // Resources are automatically cleaned up when cameraHandler goes out of scope
> >
> >     std::cout << "Program completed successfully" << std::endl;
> >     return EXIT_SUCCESS;
> > }
--
Regards,
Laurent Pinchart