[libcamera-devel] [RFC v4 1/1] ipa: ipu3: af: Auto focus for dw9719 Surface Go2 VCM
Kate Hsuan
hpa at redhat.com
Fri Jan 14 11:46:17 CET 2022
Hi Jean-Michel,
On Thu, Jan 13, 2022 at 8:58 PM Jean-Michel Hautbois
<jeanmichel.hautbois at ideasonboard.com> wrote:
>
> Hi Kate,
>
> Before I forget it again :-) could you please answer in plain-text and
> not HTML ;-) ?
Oops, my fault. I'll double-check my Gmail settings. Sorry for the trouble.
>
> On 13/01/2022 12:35, Kate Hsuan wrote:
> > Hi Jean-Michel,
> >
> >
>
> <snip>
>
> > > >>>>>
> > > >>>>>>
> > > >>>>>>> +
> > > >>>>>>> +/* settings for Auto Focus from the kernel */
> > > >>>>>>> +static struct ipu3_uapi_af_config_s imgu_css_af_defaults = {
> > > >>>>>>> + .filter_config = {
> > > >>>>>>> + { 0, 0, 0, 0 },
> > > >>>>>>> + { 0, 0, 0, 0 },
> > > >>>>>>> + { 0, 0, 0, 128 },
> > > >>>>>>> + 0,
> > > >>>>>>> + { 0, 0, 0, 0 },
> > > >>>>>>> + { 0, 0, 0, 0 },
> > > >>>>>>> + { 0, 0, 0, 128 },
> > > >>>>>>> + 0,
> > > >>>>>>> + .y_calc = { 8, 8, 8, 8 },
> > > >>>>>>> + .nf = { 0, 7, 0, 7, 0 },
> > > >>>>>>> + },
> > > >>>>
> > > >>>> I would like your thinking here (and perhaps someone else). It looks
> > > >>>> like the ImgU implements a Gabor filter, but I can't see all the
> > > >>>> parameters I would expect from it. Putting it down, I can't see with
> > > >>>> this default filter how y1 and y2 could represent low-pass and high
> > > >>>> pass, as the filters are the same. But, it could be a horizontal and a
> > > >>>> vertical filter instead, with y1 the horizontal and y2 the vertical
> > > >>>> resulting filtered values.
> > > >>>>
> > > >>>> It would be very interesting to display the y1_avg and y2_avg values as
> > > >>>> an image, I just don't have lots of time for it. We have 16 bits values,
> > > >>>> so we could dump those into a binary file and use python to display the
> > > >>>> values as a raw gray image or something similar.
> > > >>>
> > > >>> You could check out the URL below to see the grey scale image of
> > > >>> the AF buffer.
> > > >>>
> > https://drive.google.com/file/d/15rrfeRKA1hgGngzRRk-a9Uzhf8tYEU1T/view?usp=sharing
> > > >>>
> > > >>> Since the result is a tiny 16x16 image, all the detail of the
> > > >>> original image was compressed into a black block.
> > > >>>
> > > >>
> > > >> Nice :-).
> > > >> As expected, as y1 and y2 are the same filter, the images are the same.
> > > >> Could you test those values ?
> > > >>
> > > >> y1 =>
> > > >> (0, 1, 3, 7
> > > >> 11, 13, 1, 2
> > > >> 8, 19, 34, 242)
> > > >> vector: 7fdffbfe - normalization factor: 9
> > > >> y2 =>
> > > >> (0, 1, 6, 6
> > > >> 13, 25, 3, 0
> > > >> 25, 3, 117, 254)
> > > >> vector: 4e53ca72 - normalization factor: 9
> > > >> Channels coefficients: (gr: 8, r: 8, gb: 8, b: 8)
> > > >>
> > > >
> > > > Please check out the results from the URL below.
> > > >
> > > >
> > https://drive.google.com/file/d/1jePzgsw0zwO0EtDnLBrOaCKucx4QWLIg/view?usp=sharing
> > > >
> > > > Y1 became a black image, and it was difficult to determine the
> > > > coarse focus value.
> > > >
> > >
> >
> > > Sounds surprising... Could you share the way you are generating the
> > > images of Y1 and Y2 BTW ?
> > >
> > > > Happy new year :)
> > >
> > > Best wishes, hoping for a far away Covid next year !
> > I hope too :)
> >
> > After some checks, I found I made some mistakes when converting the
> > image to a PNG file.
> >
> > Check out the following URL for the new results.
> > https://drive.google.com/file/d/1o4EcY9EsYrBYu5YTZ-_tlxP8sevDd3QY/view?usp=sharing
> >
> > But for Y2, the result becomes very bright and sometimes the
> > image is all white. Are the results right?
> >
> > My Python code for converting the pixels to TIFF is shown below.
> >
> > =========
> > import numpy as np
> > from PIL import Image
> > import sys
> >
> > # Load the 16x16 grid of 16-bit AF filter responses
> > rawfile = np.fromfile(sys.argv[1], "uint16")
> > rawfile.shape = (16, 16)
> > new_image = Image.fromarray(rawfile, mode='I;16')
> > new_image.convert('L').save('{}.tiff'.format(sys.argv[1][:-4]))
> > ==========
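Re-checking this script, one thing I'm not sure about is the 16-bit to
8-bit conversion: if it clips rather than rescales, large Y2 responses
would all come out white. Below is a sketch (my own, not tested against
the driver) that normalizes by the per-image peak before saving, assuming
NumPy and Pillow; the 16x16 shape matches the grid above:

```python
import numpy as np
from PIL import Image


def af_grid_to_tiff(path, shape=(16, 16)):
    """Save a raw 16-bit AF response dump as an 8-bit greyscale TIFF."""
    raw = np.fromfile(path, "uint16").reshape(shape)
    # Normalize by the per-image peak so the full 8-bit range is used,
    # instead of letting large responses all clip to white.
    peak = int(raw.max()) or 1
    scaled = (raw.astype(np.float64) * 255.0 / peak).astype(np.uint8)
    Image.fromarray(scaled, mode='L').save(path[:-4] + '.tiff')
```

e.g. af_grid_to_tiff('y2_dump.raw') would write y2_dump.tiff with the
brightest cell at 255 and the rest scaled proportionally.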
> >
>
> <snip>
>
> >
> > I also did some experiments on grid configuration of AF scene.
> >
> > You can see the picture below.
> >
> > https://drive.google.com/file/d/1UQc6emy4Qx-TAMqN3Bk-BO7lOOzoPRyq/view?usp=sharing
> > https://drive.google.com/file/d/1RC5JmCrLFetw5PEo_hlGnjbJXnplsc6F/view?usp=sharing
> >
> > The first image shows the configuration of the AF grid; the green box
> > is the range of the AF scene.
> > The Y1 image matched the AF scene, but Y2 was completely white :(.
> >
>
> The image used for AF calculations is the BDS output one, so you should
> configure your grid based on this.
>
> See:
> https://elixir.bootlin.com/linux/latest/source/drivers/staging/media/ipu3/ipu3-css-params.c#L2541
>
> > The grid configuration is shown below:
> >
> > {
> >     .width = 16,
> >     .height = 16,
> >     .block_width_log2 = 3,
> >     .block_height_log2 = 3,
> >     .height_per_slice = 2,
> >     .x_start = 640,
> >     .y_start = 360 | IPU3_UAPI_GRID_Y_START_EN,
> >     .x_end = 0,
> >     .y_end = 0
> > },
>
> This grid configuration is probably not what you really want :-).
> You are configuring a 128x128 grid, and your BDS output is (fill in the
> blank please :-)). So you want:
> x_start = (BdsOutput.width / 2 - 64),
> y_start = (BdsOutput.height / 2 - 64) | IPU3_UAPI_GRID_Y_START_EN,
>
> > Moreover, it seems x_start can't be set greater than 650 (image
> > width / 2 + 10). If it is > 650, the AF scene falls back to (10, 2),
> > the default configuration.
>
> From:
> https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/refs/heads/master/hal/intel/include/ia_imaging/af_public.h#74
>
> "margins of 10 per horizontal direction and 2 per vertical direction" =>
> It means the real AF grid is using 10 pixels before your x_start, so it
> can't be less than 10, and 2 pixels before your y_start, so it can't be
> less than 2.
>
> You can obviously take it into account if you really want to, but you
> are trying to center the grid, so it should not be an issue.
>
> You should set height_per_slice to 1, as it is calculated by the kernel,
> and I would be interested by this: could it be a limitation related to:
> https://patchwork.linuxtv.org/project/linux-media/patch/20210916172504.677919-1-jeanmichel.hautbois@ideasonboard.com/
>
> Can you try building the kernel with this patch by any chance ?
Thanks for the reply. I'll test all of them and then share the results
with you; please allow me some time.
One thing I discovered: the grid coordinates seem to start from the
center of the AF grid if x_end and y_end are set to 0. Once I confirm
the behaviour, I'll let you know.
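To double-check the centering math you suggested, I wrote the quick
sketch below. The helper is just my own illustration (the 1280x720 BDS
size is an assumed example, and the enable-bit value follows the
intel-ipu3 uAPI header):

```python
# Enable bit for y_start, per the intel-ipu3 uAPI header.
IPU3_UAPI_GRID_Y_START_EN = 1 << 15


def centered_af_grid(bds_width, bds_height,
                     block_width_log2=3, block_height_log2=3,
                     width=16, height=16):
    """Compute x_start/y_start to center the AF grid on the BDS output."""
    # Grid size in pixels: number of blocks times block size.
    grid_w = width << block_width_log2    # 16 blocks * 8 px = 128
    grid_h = height << block_height_log2  # 16 blocks * 8 px = 128
    x_start = bds_width // 2 - grid_w // 2
    y_start = (bds_height // 2 - grid_h // 2) | IPU3_UAPI_GRID_Y_START_EN
    return x_start, y_start


# For an assumed 1280x720 BDS output:
x, y = centered_af_grid(1280, 720)
```

For 1280x720 this gives x_start = 576 and y_start = 296 (plus the
enable bit), which matches the BdsOutput / 2 - 64 formula above.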
>
> Thanks for your feedback !
> JM
>
--
BR,
Kate