[libcamera-devel] Thoughts on Bayer re-processing

David Plowman david.plowman at raspberrypi.com
Wed Jan 11 12:41:17 CET 2023


Hi everyone

I've been meaning to write something down on the subject of Bayer
re-processing for ages, but have never managed to get my thoughts into
a presentable order. Nonetheless I think this subject really does want
discussing, so what follows is going to be a bit rambling, for which I
apologise.

1. Use Cases

The principal use cases I'm thinking of include:

* Zero shutter lag capture
* Processing images from DNG (or other) files
* Temporal denoise
* HDR

In general, many of these use cases are likely to involve multi-frame
image processing or fusion of various kinds, and I'm sure there will
be many more such scenarios than I have listed!

2. How might we use Bayer re-processing?

My feeling is that sometimes I might want to re-process an image "like
the camera" (zero shutter lag might be a bit like this). But often it
will need to be processed differently, to account for whatever
processing I've already applied to the raw image before re-submitting
it.

In the latter case I'd probably have custom "tunings" for my use
cases, designed to work with the pre-processing that I've done. I
might well want to be able to produce multiple different versions of
an image, each with different pre-processing, and different "tuning".

Often I think I'd want to be able to fix certain processing parameters
(colour gains, for example) because I might know what they are, or I
might have recorded them while the camera was running, or they might
be stored in a DNG file.

In other cases I could imagine wanting to let the algorithms run
"normally". That certainly seems likely for LSC (lens shading), and I
guess it's possible for some of the other algorithms too, perhaps
AWB. Even AEC/AGC might need to run, if only to calculate a digital
gain to apply.

3. What might libcamera APIs look like?

I expect that platforms that support Bayer re-processing might
advertise a "re-processing" or "memory" camera (nomenclature TBD). I
would want to be able to open this "camera" multiple times (if the
platform supports it), each time with a different "tuning file". It
would obviously be good to get away from environment variables for
this purpose!
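
To make that a little more concrete, here's a very rough C++ sketch.
To be clear, the "memory" camera id and the per-instance tuning idea
are inventions for illustration, not anything that exists today:

    #include <libcamera/libcamera.h>
    #include <memory>

    using namespace libcamera;

    /* Illustrative only: the "memory" camera id is pure invention,
     * and no per-instance tuning mechanism exists today. */
    std::shared_ptr<Camera> openMemoryCamera(CameraManager &cm)
    {
        std::shared_ptr<Camera> camera = cm.get("memory");
        if (!camera || camera->acquire())
            return nullptr;
        /* Hypothetically, the tuning file would be selected here,
         * per instance, rather than through an environment variable. */
        return camera;
    }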

I'd probably configure this "camera" much as I do now, with multiple
output streams in whatever formats and sizes I want. The key
difference is that it would be mandatory to specify a raw stream, and
in this case the **raw stream is an input**. The configuration of the
raw stream defines exactly the format of the buffers you're going to
supply.
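
In code, the configuration step might look much as it does today,
perhaps something like this (formats and sizes are arbitrary examples;
only the raw-stream-as-input semantics would be new):

    /* Configuration sketch, using today's API unchanged. */
    std::unique_ptr<CameraConfiguration>
    configureForReprocessing(Camera &camera)
    {
        std::unique_ptr<CameraConfiguration> config =
            camera.generateConfiguration({ StreamRole::Raw,
                                           StreamRole::StillCapture });

        /* The raw stream describes the buffers we're going to supply. */
        config->at(0).pixelFormat = formats::SBGGR12_CSI2P;
        config->at(0).size = { 4056, 3040 };

        /* An ordinary output stream. */
        config->at(1).pixelFormat = formats::YUV420;
        config->at(1).size = { 1920, 1080 };

        if (config->validate() == CameraConfiguration::Invalid)
            return nullptr;

        camera.configure(config.get());
        return config;
    }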

Thereafter we simply submit requests, passing in the raw buffer that
we have, and wait for them to complete.
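
Perhaps along these lines, where rawBuffer stands for a FrameBuffer
wrapping the frame we want re-processed (how such a buffer gets
created or imported is itself an open question):

    /* Request sketch. The caller must keep the returned request
     * alive until it completes. */
    std::unique_ptr<Request> submit(Camera &camera,
                                    CameraConfiguration &config,
                                    FrameBuffer *rawBuffer,
                                    FrameBuffer *outBuffer)
    {
        std::unique_ptr<Request> request = camera.createRequest();
        if (!request)
            return nullptr;

        request->addBuffer(config.at(0).stream(), rawBuffer); /* input */
        request->addBuffer(config.at(1).stream(), outBuffer);

        camera.queueRequest(request.get());
        /* Completion arrives through Camera::requestCompleted as usual. */
        return request;
    }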

Fixing the colour gains, for example, is easily accomplished with our
existing controls. We might want a few more controls like this, for
the colour matrix perhaps, so that we could (for example) pass in the
exact matrix we have obtained from a DNG file.
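
For instance, something like the following. ColourGains exists today;
ColourCorrectionMatrix is defined in control_ids.yaml, but whether any
pipeline would accept it as an input control is exactly the sort of
thing we'd have to decide:

    /* Fixing colour parameters per request, e.g. with values read
     * out of a DNG file. */
    void fixColourParameters(Request &request,
                             float redGain, float blueGain,
                             const float (&ccm)[9])
    {
        ControlList &ctrls = request.controls();

        const float gains[2] = { redGain, blueGain };
        ctrls.set(controls::ColourGains, Span<const float, 2>(gains));
        /* Hypothetical as an *input* control, see above. */
        ctrls.set(controls::ColourCorrectionMatrix,
                  Span<const float, 9>(ccm));
    }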

When control algorithms are left to run "normally", we will have a
choice whether to let them adapt to the computed values gradually or
jump to them immediately. On the Pi, many of these algorithms run
asynchronously, so there'd probably need to be some mechanism whereby
we can wait for them to finish before using the calculated values. I'm
not currently sure exactly how we'd want to indicate those behaviours.

Furthermore, because you need statistics before the IPAs can run, we
may find that a number of use cases require us to submit the image
twice - the first time to sort out the algorithms, and the second time
to get the correct image result. This is perhaps slightly awkward, but
I don't think it's a real problem, and it simply reflects the way
things actually work.
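
So the flow might be: queue the frame once, wait for completion, copy
what the algorithms computed out of the metadata, then queue the same
frame again with those values fixed. Sketching the middle step (and
assuming values like DigitalGain can be fixed as input controls):

    /* Two-pass sketch: request1 has completed; copy what the
     * algorithms computed out of its metadata into request2 (which
     * carries the same raw buffer), so that the second pass produces
     * the final image. */
    void fixFromFirstPass(Request &request1, Request &request2)
    {
        const ControlList &meta = request1.metadata();

        auto gains = meta.get(controls::ColourGains);
        if (gains)
            request2.controls().set(controls::ColourGains, *gains);

        /* e.g. the digital gain that AEC/AGC decided to apply */
        auto digitalGain = meta.get(controls::DigitalGain);
        if (digitalGain)
            request2.controls().set(controls::DigitalGain, *digitalGain);
    }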


Anyway, that's as far as I've got, so sorry if it's all a bit
vague. But I'd be very interested to hear about the directions other
folks have perhaps been considering... similar, or very different?

All contributions would be very interesting!

Thanks

David

