[PATCH 04/10] ipa: libipa: Add MeanLuminanceAgc base class
Jacopo Mondi
jacopo.mondi at ideasonboard.com
Mon Mar 25 21:16:00 CET 2024
Hi Dan
On Fri, Mar 22, 2024 at 01:14:45PM +0000, Daniel Scally wrote:
> The Agc algorithms for the RkIsp1 and IPU3 IPAs do the same thing in
> very large part; following the Rpi IPA's algorithm in spirit with a
> few tunable values in that IPA being hardcoded in the libipa ones.
> Add a new base class for MeanLuminanceAgc which implements the same
nit: I would rather call this one AgcMeanLuminance.
One other note, not sure if it applies here, from the experience of
trying to upstream an auto-focus algorithm for RkISP1. There we had a
base class that defined the algorithm interface, one derived class for
the common calculations and one platform-specific part for statistics
collection and IPA module interfacing.
The base class was there mostly to handle the algorithm state machine
by handling the controls that influence the algorithm's behaviour. In
the case of AEGC I can think, for example, of handling the switch
between enabling/disabling auto mode (and consequently handling a
manually set ExposureTime and AnalogueGain), switching between
different ExposureModes, etc.
This is the latest attempt for AF:
https://patchwork.libcamera.org/patch/18510/
and I'm wondering if it would be desirable to separate out of the
MeanLuminance part the code that is tightly coupled with libcamera's
controls definitions.
Let's take a concrete example: look at how the rkisp1 and the ipu3 AGC
implementations handle AeEnable (or rather, look at how the rkisp1
does and the IPU3 does not).
Ideally, my goal would be to abstract the handling of the control, and
all of the state machine that decides whether the manual or
auto-computed values should be used, into an AEGCAlgorithm base class.
Your series already does that for the tuning file parsing, but it does
so in the MeanLuminance implementation, while it should or could be
common to all AEGC methods.
One very interesting experiment could be to start with this series,
then plumb AeEnable support into the IPU3, for example, and move
everything common to a base class.
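To make the idea more concrete, here is a rough, standalone sketch (all names hypothetical, deliberately not the actual libcamera API) of how such an AEGCAlgorithm base class could own the AeEnable state machine, so that method implementations only provide the auto-computed values:

```cpp
#include <cassert>
#include <optional>

/*
 * Hypothetical sketch: the base class handles the enable/disable state
 * machine and manual overrides; derived classes only implement the
 * method-specific computation (e.g. mean luminance).
 */
class AEGCAlgorithm
{
public:
	virtual ~AEGCAlgorithm() = default;

	/* Controls handling, common to all AEGC methods. */
	void setAeEnable(bool enable) { aeEnabled_ = enable; }
	void setManualExposure(double exposureUs) { manualExposure_ = exposureUs; }

	/* The manually set value wins when auto mode is disabled. */
	double exposure()
	{
		if (!aeEnabled_ && manualExposure_)
			return *manualExposure_;
		return computeExposure();
	}

protected:
	/* Method-specific part, e.g. the mean-luminance computation. */
	virtual double computeExposure() = 0;

private:
	bool aeEnabled_ = true;
	std::optional<double> manualExposure_;
};

class MeanLuminanceMethod : public AEGCAlgorithm
{
protected:
	double computeExposure() override { return 10000.0; /* stub */ }
};
```

The point being that the switch between manual and auto-computed values would then be identical across all platforms, instead of being re-implemented (or forgotten) per IPA.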
> algorithm and additionally parses yaml tuning files to inform an IPA
> module's Agc algorithm about valid constraint and exposure modes and
> their associated bounds.
>
> Signed-off-by: Daniel Scally <dan.scally at ideasonboard.com>
> ---
> src/ipa/libipa/agc.cpp | 526 +++++++++++++++++++++++++++++++++++++
> src/ipa/libipa/agc.h | 82 ++++++
> src/ipa/libipa/meson.build | 2 +
> 3 files changed, 610 insertions(+)
> create mode 100644 src/ipa/libipa/agc.cpp
> create mode 100644 src/ipa/libipa/agc.h
>
> diff --git a/src/ipa/libipa/agc.cpp b/src/ipa/libipa/agc.cpp
> new file mode 100644
> index 00000000..af57a571
> --- /dev/null
> +++ b/src/ipa/libipa/agc.cpp
> @@ -0,0 +1,526 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * Copyright (C) 2024 Ideas on Board Oy
> + *
> + * agc.cpp - Base class for libipa-compliant AGC algorithms
> + */
> +
> +#include "agc.h"
> +
> +#include <cmath>
> +
> +#include <libcamera/base/log.h>
> +#include <libcamera/control_ids.h>
> +
> +#include "exposure_mode_helper.h"
> +
> +using namespace libcamera::controls;
> +
> +/**
> + * \file agc.h
> + * \brief Base class implementing mean luminance AEGC.
nit: no '.' at the end of briefs
> + */
> +
> +namespace libcamera {
> +
> +using namespace std::literals::chrono_literals;
> +
> +LOG_DEFINE_CATEGORY(Agc)
> +
> +namespace ipa {
> +
> +/*
> + * Number of frames to wait before calculating stats on minimum exposure
> + * \todo should this be a tunable value?
Does this depend on the ISP (and so comes from the IPA), on the sensor
(and so comes from the tuning file) or... both ? :)
> + */
> +static constexpr uint32_t kNumStartupFrames = 10;
> +
> +/*
> + * Default relative luminance target
> + *
> + * This value should be chosen so that when the camera points at a grey target,
> + * the resulting image brightness looks "right". Custom values can be passed
> + * as the relativeLuminanceTarget value in sensor tuning files.
> + */
> +static constexpr double kDefaultRelativeLuminanceTarget = 0.16;
> +
> +/**
> + * \struct MeanLuminanceAgc::AgcConstraint
> + * \brief The boundaries and target for an AeConstraintMode constraint
> + *
> + * This structure describes an AeConstraintMode constraint for the purposes of
> + * this algorithm. The algorithm will apply the constraints by calculating the
> + * Histogram's inter-quantile mean between the given quantiles and ensure that
> + * the resulting value is the right side of the given target (as defined by the
> + * boundary and luminance target).
> + */
Here, for example.
controls::AeConstraintMode and its supported values are defined as
(core|vendor) controls in control_ids_*.yaml
The tuning file expresses the constraint modes using the control
definition (I wonder if it has always been like this), which
definitely ties the tuning file to the controls definition.
Applications use controls::AeConstraintMode to select one of the
constraint modes and have the algorithm use it.
In all of this, how much is part of the MeanLuminance implementation
and how much is shared between possibly multiple implementations ?
> +
> +/**
> + * \enum MeanLuminanceAgc::AgcConstraint::Bound
> + * \brief Specify whether the constraint defines a lower or upper bound
> + * \var MeanLuminanceAgc::AgcConstraint::LOWER
> + * \brief The constraint defines a lower bound
> + * \var MeanLuminanceAgc::AgcConstraint::UPPER
> + * \brief The constraint defines an upper bound
> + */
> +
> +/**
> + * \var MeanLuminanceAgc::AgcConstraint::bound
> + * \brief The type of constraint bound
> + */
> +
> +/**
> + * \var MeanLuminanceAgc::AgcConstraint::qLo
> + * \brief The lower quantile to use for the constraint
> + */
> +
> +/**
> + * \var MeanLuminanceAgc::AgcConstraint::qHi
> + * \brief The upper quantile to use for the constraint
> + */
> +
> +/**
> + * \var MeanLuminanceAgc::AgcConstraint::yTarget
> + * \brief The luminance target for the constraint
> + */
> +
> +/**
> + * \class MeanLuminanceAgc
> + * \brief a mean-based auto-exposure algorithm
> + *
> + * This algorithm calculates a shutter time, analogue and digital gain such that
> + * the normalised mean luminance value of an image is driven towards a target,
> + * which itself is discovered from tuning data. The algorithm is a two-stage
> + * process:
> + *
> + * In the first stage, an initial gain value is derived by iteratively comparing
> + * the gain-adjusted mean luminance across an entire image against a target, and
> + * selecting a value which pushes it as closely as possible towards the target.
> + *
> + * In the second stage we calculate the gain required to drive the average of a
> + * section of a histogram to a target value, where the target and the boundaries
> + * of the section of the histogram used in the calculation are taken from the
> + * values defined for the currently configured AeConstraintMode within the
> + * tuning data. The gain from the first stage is then clamped to the gain from
> + * this stage.
> + *
> + * The final gain is used to adjust the effective exposure value of the image,
> + * and that new exposure value divided into shutter time, analogue gain and
> + * digital gain according to the selected AeExposureMode.
> + */
> +
> +MeanLuminanceAgc::MeanLuminanceAgc()
> + : frameCount_(0), filteredExposure_(0s), relativeLuminanceTarget_(0)
> +{
> +}
> +
> +/**
> + * \brief Parse the relative luminance target from the tuning data
> + * \param[in] tuningData The YamlObject holding the algorithm's tuning data
> + */
> +void MeanLuminanceAgc::parseRelativeLuminanceTarget(const YamlObject &tuningData)
> +{
> + relativeLuminanceTarget_ =
> + tuningData["relativeLuminanceTarget"].get<double>(kDefaultRelativeLuminanceTarget);
How do you expect this to be computed in the tuning file ?
> +}
> +
> +/**
> + * \brief Parse an AeConstraintMode constraint from tuning data
> + * \param[in] modeDict the YamlObject holding the constraint data
> + * \param[in] id The constraint ID from AeConstraintModeEnum
> + */
> +void MeanLuminanceAgc::parseConstraint(const YamlObject &modeDict, int32_t id)
> +{
> + for (const auto &[boundName, content] : modeDict.asDict()) {
> + if (boundName != "upper" && boundName != "lower") {
> + LOG(Agc, Warning)
> + << "Ignoring unknown constraint bound '" << boundName << "'";
> + continue;
> + }
> +
> + unsigned int idx = static_cast<unsigned int>(boundName == "upper");
> + AgcConstraint::Bound bound = static_cast<AgcConstraint::Bound>(idx);
> + double qLo = content["qLo"].get<double>().value_or(0.98);
> + double qHi = content["qHi"].get<double>().value_or(1.0);
> + double yTarget =
> + content["yTarget"].getList<double>().value_or(std::vector<double>{ 0.5 }).at(0);
> +
> + AgcConstraint constraint = { bound, qLo, qHi, yTarget };
> +
> + if (!constraintModes_.count(id))
> + constraintModes_[id] = {};
> +
> + if (idx)
> + constraintModes_[id].push_back(constraint);
> + else
> + constraintModes_[id].insert(constraintModes_[id].begin(), constraint);
> + }
> +}
> +
> +/**
> + * \brief Parse tuning data file to populate AeConstraintMode control
> + * \param[in] tuningData the YamlObject representing the tuning data for Agc
> + *
> + * The Agc algorithm's tuning data should contain a dictionary called
> + * AeConstraintMode containing per-mode setting dictionaries with the key being
> + * a value from \ref controls::AeConstraintModeNameValueMap. Each mode dict may
> + * contain either a "lower" or "upper" key, or both, in this format:
> + *
> + * \code{.unparsed}
> + * algorithms:
> + * - Agc:
> + * AeConstraintMode:
> + * ConstraintNormal:
> + * lower:
> + * qLo: 0.98
> + * qHi: 1.0
> + * yTarget: 0.5
Ok, so this ties the tuning file not just to the libcamera controls
definition, but to this specific implementation of the algorithm ?
Not that it was unexpected, and I think it's fine, as using a
libcamera-defined control value as the 'index' makes sure applications
will deal with the same interface, but it largely conflicts with the
idea of having shared parsing for all algorithms in a base class.
Also, we made AeConstraintMode a 'core control' because, at the time
when RPi first implemented AGC support, there was no alternative to
that. The RPi implementation has since been copied to all the other
platforms, and it is still fine as a 'core control'. This however
seems to be a tuning parameter for this specific algorithm
implementation, doesn't it ?
> + * ConstraintHighlight:
> + * lower:
> + * qLo: 0.98
> + * qHi: 1.0
> + * yTarget: 0.5
> + * upper:
> + * qLo: 0.98
> + * qHi: 1.0
> + * yTarget: 0.8
> + *
> + * \endcode
> + *
> + * The parsed dictionaries are used to populate an array of available values for
> + * the AeConstraintMode control and stored for later use in the algorithm.
> + *
> + * \return -EINVAL Where a defined constraint mode is invalid
> + * \return 0 on success
> + */
> +int MeanLuminanceAgc::parseConstraintModes(const YamlObject &tuningData)
> +{
> + std::vector<ControlValue> availableConstraintModes;
> +
> + const YamlObject &yamlConstraintModes = tuningData[controls::AeConstraintMode.name()];
> + if (yamlConstraintModes.isDictionary()) {
> + for (const auto &[modeName, modeDict] : yamlConstraintModes.asDict()) {
> + if (AeConstraintModeNameValueMap.find(modeName) ==
> + AeConstraintModeNameValueMap.end()) {
> + LOG(Agc, Warning)
> + << "Skipping unknown constraint mode '" << modeName << "'";
> + continue;
> + }
> +
> + if (!modeDict.isDictionary()) {
> + LOG(Agc, Error)
> + << "Invalid constraint mode '" << modeName << "'";
> + return -EINVAL;
> + }
> +
> + parseConstraint(modeDict,
> + AeConstraintModeNameValueMap.at(modeName));
> + availableConstraintModes.push_back(
> + AeConstraintModeNameValueMap.at(modeName));
> + }
> + }
> +
> + /*
> + * If the tuning data file contains no constraints then we use the
> + * default constraint that the various Agc algorithms were adhering to
> + * anyway before centralisation.
> + */
> + if (constraintModes_.empty()) {
> + AgcConstraint constraint = {
> + AgcConstraint::Bound::LOWER,
> + 0.98,
> + 1.0,
> + 0.5
> + };
> +
> + constraintModes_[controls::ConstraintNormal].insert(
> + constraintModes_[controls::ConstraintNormal].begin(),
> + constraint);
> + availableConstraintModes.push_back(
> + AeConstraintModeNameValueMap.at("ConstraintNormal"));
> + }
> +
> + controls_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);
> +
> + return 0;
> +}
> +
> +/**
> + * \brief Parse tuning data file to populate AeExposureMode control
> + * \param[in] tuningData the YamlObject representing the tuning data for Agc
> + *
> + * The Agc algorithm's tuning data should contain a dictionary called
> + * AeExposureMode containing per-mode setting dictionaries with the key being
> + * a value from \ref controls::AeExposureModeNameValueMap. Each mode dict should
> + * contain an array of shutter times with the key "shutter" and an array of gain
> + * values with the key "gain", in this format:
> + *
> + * \code{.unparsed}
> + * algorithms:
> + * - Agc:
> + * AeExposureMode:
Same reasoning as for the constraints, really.
There's nothing bad here, apart from me realizing that our controls
definition is intimately tied to the algorithm implementations, so I
wonder again whether even handling things like AeEnable in a common
form makes any sense...
Not going to review the actual implementation now, as it comes from
the existing ones...
> + * ExposureNormal:
> + * shutter: [ 100, 10000, 30000, 60000, 120000 ]
> + * gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]
> + * ExposureShort:
> + * shutter: [ 100, 10000, 30000, 60000, 120000 ]
> + * gain: [ 1.0, 2.0, 4.0, 6.0, 6.0 ]
> + *
> + * \endcode
> + *
> + * The parsed dictionaries are used to populate an array of available values for
> + * the AeExposureMode control and to create ExposureModeHelpers
> + *
> + * \return -EINVAL Where a defined exposure mode is invalid
> + * \return 0 on success
> + */
> +int MeanLuminanceAgc::parseExposureModes(const YamlObject &tuningData)
> +{
> + std::vector<ControlValue> availableExposureModes;
> + int ret;
> +
> + const YamlObject &yamlExposureModes = tuningData[controls::AeExposureMode.name()];
> + if (yamlExposureModes.isDictionary()) {
> + for (const auto &[modeName, modeValues] : yamlExposureModes.asDict()) {
> + if (AeExposureModeNameValueMap.find(modeName) ==
> + AeExposureModeNameValueMap.end()) {
> + LOG(Agc, Warning)
> + << "Skipping unknown exposure mode '" << modeName << "'";
> + continue;
> + }
> +
> + if (!modeValues.isDictionary()) {
> + LOG(Agc, Error)
> + << "Invalid exposure mode '" << modeName << "'";
> + return -EINVAL;
> + }
> +
> + std::vector<uint32_t> shutters =
> + modeValues["shutter"].getList<uint32_t>().value_or(std::vector<uint32_t>{});
> + std::vector<double> gains =
> + modeValues["gain"].getList<double>().value_or(std::vector<double>{});
> +
> + std::vector<utils::Duration> shutterDurations;
> + std::transform(shutters.begin(), shutters.end(),
> + std::back_inserter(shutterDurations),
> + [](uint32_t time) { return std::chrono::microseconds(time); });
> +
> + std::shared_ptr<ExposureModeHelper> helper =
> + std::make_shared<ExposureModeHelper>();
> + if ((ret = helper->init(shutterDurations, gains)) < 0) {
> + LOG(Agc, Error)
> + << "Failed to parse exposure mode '" << modeName << "'";
> + return ret;
> + }
> +
> + exposureModeHelpers_[AeExposureModeNameValueMap.at(modeName)] = helper;
> + availableExposureModes.push_back(AeExposureModeNameValueMap.at(modeName));
> + }
> + }
> +
> + /*
> + * If we don't have any exposure modes in the tuning data we create an
> + * ExposureModeHelper using empty shutter time and gain arrays, which
> + * will then internally simply drive the shutter as high as possible
> + * before touching gain
> + */
> + if (availableExposureModes.empty()) {
> + int32_t exposureModeId = AeExposureModeNameValueMap.at("ExposureNormal");
> + std::vector<utils::Duration> shutterDurations = {};
> + std::vector<double> gains = {};
> +
> + std::shared_ptr<ExposureModeHelper> helper =
> + std::make_shared<ExposureModeHelper>();
> + if ((ret = helper->init(shutterDurations, gains)) < 0) {
> + LOG(Agc, Error)
> + << "Failed to create default ExposureModeHelper";
> + return ret;
> + }
> +
> + exposureModeHelpers_[exposureModeId] = helper;
> + availableExposureModes.push_back(exposureModeId);
> + }
> +
> + controls_[&controls::AeExposureMode] = ControlInfo(availableExposureModes);
> +
> + return 0;
> +}
> +
> +/**
> + * \fn MeanLuminanceAgc::constraintModes()
> + * \brief Get the constraint modes that have been parsed from tuning data
> + */
> +
> +/**
> + * \fn MeanLuminanceAgc::exposureModeHelpers()
> + * \brief Get the ExposureModeHelpers that have been parsed from tuning data
> + */
> +
> +/**
> + * \fn MeanLuminanceAgc::controls()
> + * \brief Get the controls that have been generated after parsing tuning data
> + */
> +
> +/**
> + * \fn MeanLuminanceAgc::estimateLuminance(const double gain)
> + * \brief Estimate the luminance of an image, adjusted by a given gain
> + * \param[in] gain The gain with which to adjust the luminance estimate
> + *
> + * This function is a pure virtual function because estimation of luminance is a
> + * hardware-specific operation, which depends wholly on the format of the stats
> + * that are delivered to libcamera from the ISP. Derived classes must implement
> + * an overriding function that calculates the normalised mean luminance value
> + * across the entire image.
> + *
> + * \return The normalised relative luminance of the image
> + */
> +
> +/**
> + * \brief Estimate the initial gain needed to achieve a relative luminance
> + * target
nit: we don't usually indent in briefs, or in general when breaking
lines in doxygen, as far as I can tell
> + *
> + * To account for non-linearity caused by saturation, the value needs to be
> + * estimated in an iterative process, as multiplying by a gain will not increase
> + * the relative luminance by the same factor if some image regions are saturated
> + *
> + * \return The calculated initial gain
> + */
> +double MeanLuminanceAgc::estimateInitialGain()
> +{
> + double yTarget = relativeLuminanceTarget_;
> + double yGain = 1.0;
> +
> + for (unsigned int i = 0; i < 8; i++) {
> + double yValue = estimateLuminance(yGain);
> + double extra_gain = std::min(10.0, yTarget / (yValue + .001));
> +
> + yGain *= extra_gain;
> + LOG(Agc, Debug) << "Y value: " << yValue
> + << ", Y target: " << yTarget
> + << ", gives gain " << yGain;
> +
> + if (utils::abs_diff(extra_gain, 1.0) < 0.01)
> + break;
> + }
> +
> + return yGain;
> +}
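As an aside for readers: the need for iteration here is easy to demonstrate with a standalone toy model (not the libcamera code; the two-region luminance curve below is made up). With saturated regions, luminance does not scale linearly with gain, so a single division towards the target would overshoot, while the loop converges:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

/* Toy scene: the mean of a bright region and a dark region, where
 * pixel values clip at 1.0 (the non-linearity). */
static double estimateLuminance(double gain)
{
	double bright = std::min(0.8 * gain, 1.0);
	double dark = std::min(0.05 * gain, 1.0);
	return (bright + dark) / 2.0;
}

/* Same iterative scheme as in the patch, on the toy model. */
static double estimateInitialGain(double yTarget)
{
	double yGain = 1.0;

	for (unsigned int i = 0; i < 8; i++) {
		double yValue = estimateLuminance(yGain);
		double extraGain = std::min(10.0, yTarget / (yValue + .001));

		yGain *= extraGain;
		if (std::abs(extraGain - 1.0) < 0.01)
			break;
	}

	return yGain;
}
```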
> +
> +/**
> + * \brief Clamp gain within the bounds of a defined constraint
> + * \param[in] constraintModeIndex The index of the constraint to adhere to
> + * \param[in] hist A histogram over which to calculate inter-quantile means
> + * \param[in] gain The gain to clamp
> + *
> + * \return The gain clamped within the constraint bounds
> + */
> +double MeanLuminanceAgc::constraintClampGain(uint32_t constraintModeIndex,
> + const Histogram &hist,
> + double gain)
> +{
> + std::vector<AgcConstraint> &constraints = constraintModes_[constraintModeIndex];
> + for (const AgcConstraint &constraint : constraints) {
> + double newGain = constraint.yTarget * hist.bins() /
> + hist.interQuantileMean(constraint.qLo, constraint.qHi);
> +
> + if (constraint.bound == AgcConstraint::Bound::LOWER &&
> + newGain > gain)
> + gain = newGain;
> +
> + if (constraint.bound == AgcConstraint::Bound::UPPER &&
> + newGain < gain)
> + gain = newGain;
> + }
> +
> + return gain;
> +}
> +
> +/**
> + * \brief Apply a filter on the exposure value to limit the speed of changes
> + * \param[in] exposureValue The target exposure from the AGC algorithm
> + *
> + * The speed of the filter is adaptive, and will produce the target quicker
> + * during startup, or when the target exposure is within 20% of the most recent
> + * filter output.
> + *
> + * \return The filtered exposure
> + */
> +utils::Duration MeanLuminanceAgc::filterExposure(utils::Duration exposureValue)
> +{
> + double speed = 0.2;
> +
> + /* Adapt instantly if we are in startup phase. */
> + if (frameCount_ < kNumStartupFrames)
> + speed = 1.0;
> +
> + /*
> + * If we are close to the desired result, go faster to avoid making
> + * multiple micro-adjustments.
> + * \todo Make this customisable?
> + */
> + if (filteredExposure_ < 1.2 * exposureValue &&
> + filteredExposure_ > 0.8 * exposureValue)
> + speed = sqrt(speed);
> +
> + filteredExposure_ = speed * exposureValue +
> + filteredExposure_ * (1.0 - speed);
> +
> + return filteredExposure_;
> +}
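For reference, the filter's behaviour is easy to reason about in isolation. A standalone sketch (plain doubles instead of utils::Duration, not the libcamera code) of the same adaptive IIR: instant adaptation during startup, 20% steps on large changes, faster convergence near the current output:

```cpp
#include <cassert>
#include <cmath>

class ExposureFilter
{
public:
	double apply(double exposure)
	{
		double speed = 0.2;

		/* Adapt instantly during the startup phase. */
		if (frame_++ < kStartupFrames)
			speed = 1.0;

		/* Within 20% of the output, speed up (sqrt(1.0) stays 1.0). */
		if (filtered_ < 1.2 * exposure && filtered_ > 0.8 * exposure)
			speed = std::sqrt(speed);

		filtered_ = speed * exposure + filtered_ * (1.0 - speed);
		return filtered_;
	}

private:
	static constexpr unsigned int kStartupFrames = 10;
	unsigned int frame_ = 0;
	double filtered_ = 0.0;
};
```

After startup, a 2x jump in the target is damped to a 20% step per frame, which is exactly the "not too jarring" property the brief describes.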
> +
> +/**
> + * \brief Calculate the new exposure value
> + * \param[in] constraintModeIndex The index of the current constraint mode
> + * \param[in] exposureModeIndex The index of the current exposure mode
> + * \param[in] yHist A Histogram from the ISP statistics to use in constraining
> + * the calculated gain
> + * \param[in] effectiveExposureValue The EV applied to the frame from which the
> + * statistics in use derive
> + *
> + * Calculate a new exposure value to try to obtain the target. The calculated
> + * exposure value is filtered to prevent rapid changes from frame to frame, and
> + * divided into shutter time, analogue and digital gain.
> + *
> + * \return Tuple of shutter time, analogue gain, and digital gain
> + */
> +std::tuple<utils::Duration, double, double>
> +MeanLuminanceAgc::calculateNewEv(uint32_t constraintModeIndex,
> + uint32_t exposureModeIndex,
> + const Histogram &yHist,
> + utils::Duration effectiveExposureValue)
> +{
> + /*
> + * The pipeline handler should validate that we have received an allowed
> + * value for AeExposureMode.
> + */
> + std::shared_ptr<ExposureModeHelper> exposureModeHelper =
> + exposureModeHelpers_.at(exposureModeIndex);
> +
> + double gain = estimateInitialGain();
> + gain = constraintClampGain(constraintModeIndex, yHist, gain);
> +
> + /*
> + * We don't check whether we're already close to the target, because
> + * even if the effective exposure value is the same as the last frame's
> + * we could have switched to an exposure mode that would require a new
> + * pass through the splitExposure() function.
> + */
> +
> + utils::Duration newExposureValue = effectiveExposureValue * gain;
> + utils::Duration maxTotalExposure = exposureModeHelper->maxShutter()
> + * exposureModeHelper->maxGain();
> + newExposureValue = std::min(newExposureValue, maxTotalExposure);
> +
> + /*
> + * We filter the exposure value to make sure changes are not too jarring
> + * from frame to frame.
> + */
> + newExposureValue = filterExposure(newExposureValue);
> +
> + frameCount_++;
> + return exposureModeHelper->splitExposure(newExposureValue);
> +}
> +
> +}; /* namespace ipa */
> +
> +}; /* namespace libcamera */
> diff --git a/src/ipa/libipa/agc.h b/src/ipa/libipa/agc.h
> new file mode 100644
> index 00000000..902a359a
> --- /dev/null
> +++ b/src/ipa/libipa/agc.h
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: LGPL-2.1-or-later */
> +/*
> + * Copyright (C) 2024 Ideas on Board Oy
> + *
> + * agc.h - Base class for libipa-compliant AGC algorithms
> + */
> +
> +#pragma once
> +
> +#include <tuple>
> +#include <vector>
> +
> +#include <libcamera/controls.h>
> +
> +#include "libcamera/internal/yaml_parser.h"
> +
> +#include "exposure_mode_helper.h"
> +#include "histogram.h"
> +
> +namespace libcamera {
> +
> +namespace ipa {
> +
> +class MeanLuminanceAgc
> +{
> +public:
> + MeanLuminanceAgc();
> + virtual ~MeanLuminanceAgc() = default;
> +
> + struct AgcConstraint {
> + enum class Bound {
> + LOWER = 0,
> + UPPER = 1
> + };
> + Bound bound;
> + double qLo;
> + double qHi;
> + double yTarget;
> + };
> +
> + void parseRelativeLuminanceTarget(const YamlObject &tuningData);
> + void parseConstraint(const YamlObject &modeDict, int32_t id);
> + int parseConstraintModes(const YamlObject &tuningData);
> + int parseExposureModes(const YamlObject &tuningData);
> +
> + std::map<int32_t, std::vector<AgcConstraint>> constraintModes()
> + {
> + return constraintModes_;
> + }
> +
> + std::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers()
> + {
> + return exposureModeHelpers_;
> + }
> +
> + ControlInfoMap::Map controls()
> + {
> + return controls_;
> + }
> +
> + virtual double estimateLuminance(const double gain) = 0;
> + double estimateInitialGain();
> + double constraintClampGain(uint32_t constraintModeIndex,
> + const Histogram &hist,
> + double gain);
> + utils::Duration filterExposure(utils::Duration exposureValue);
> + std::tuple<utils::Duration, double, double>
> + calculateNewEv(uint32_t constraintModeIndex, uint32_t exposureModeIndex,
> + const Histogram &yHist, utils::Duration effectiveExposureValue);
> +private:
> + uint64_t frameCount_;
> + utils::Duration filteredExposure_;
> + double relativeLuminanceTarget_;
> +
> + std::map<int32_t, std::vector<AgcConstraint>> constraintModes_;
> + std::map<int32_t, std::shared_ptr<ExposureModeHelper>> exposureModeHelpers_;
> + ControlInfoMap::Map controls_;
> +};
> +
> +}; /* namespace ipa */
> +
> +}; /* namespace libcamera */
> diff --git a/src/ipa/libipa/meson.build b/src/ipa/libipa/meson.build
> index 37fbd177..31cc8d70 100644
> --- a/src/ipa/libipa/meson.build
> +++ b/src/ipa/libipa/meson.build
> @@ -1,6 +1,7 @@
> # SPDX-License-Identifier: CC0-1.0
>
> libipa_headers = files([
> + 'agc.h',
> 'algorithm.h',
> 'camera_sensor_helper.h',
> 'exposure_mode_helper.h',
> @@ -10,6 +11,7 @@ libipa_headers = files([
> ])
>
> libipa_sources = files([
> + 'agc.cpp',
> 'algorithm.cpp',
> 'camera_sensor_helper.cpp',
> 'exposure_mode_helper.cpp',
> --
> 2.34.1
>