[libcamera-devel] [PATCH v7 00/16] Intel IPU3 ImgU patchset

Tomasz Figa tfiga at chromium.org
Tue Jan 29 09:56:35 CET 2019


On Mon, Jan 28, 2019 at 7:08 PM Jacopo Mondi <jacopo at jmondi.org> wrote:
>
> Hi Sakari, everyone,
>
> On Wed, Jan 02, 2019 at 10:26:56PM +0200, Sakari Ailus wrote:
> > Hi Laurent,
> >
> > On Wed, Jan 02, 2019 at 10:20:13AM +0200, Laurent Pinchart wrote:
> > > Hello Bingbu,
> > >
> > > On Wednesday, 2 January 2019 04:38:33 EET Bingbu Cao wrote:
> > > > On 12/26/2018 07:03 PM, Laurent Pinchart wrote:
> > > > > On Monday, 17 December 2018 05:14:44 EET Bingbu Cao wrote:
> > > > >> On 12/14/2018 06:24 AM, Laurent Pinchart wrote:
> > > > >>> On Wednesday, 12 December 2018 06:55:53 EET Bingbu Cao wrote:
> > > > >>>> On 12/11/2018 09:43 PM, Laurent Pinchart wrote:
> > > > >>>>> On Tuesday, 11 December 2018 15:34:49 EET Laurent Pinchart wrote:
> > > > >>>>>> On Wednesday, 5 December 2018 02:30:46 EET Mani, Rajmohan wrote:
> > > > >>>>>>
> > > > >>>>>> [snip]
> > > > >>>>>>
> > > > >>>>>>> I can see a couple of steps missing in the script below.
> > > > >>>>>>> (https://lists.libcamera.org/pipermail/libcamera-devel/2018-November
> > > > >>>>>>> /000040.html)
> > > > >>>>>>>
> > > > >>>>>>>    From patch 02 of this v7 series "doc-rst: Add Intel IPU3
> > > > >>>>>>>    documentation", under section "Configuring ImgU V4L2 subdev for
> > > > >>>>>>>    image processing"...
> > > > >>>>>>>
> > > > >>>>>>> 1. The pipe mode needs to be configured for the V4L2 subdev.
> > > > >>>>>>>
> > > > >>>>>>> Also the pipe mode of the corresponding V4L2 subdev should be set as
> > > > >>>>>>> desired (e.g. 0 for video mode or 1 for still mode) through the
> > > > >>>>>>> control ID 0x009819a1, as below:
> > > > >>>>>>>
> > > > >>>>>>> e.g. v4l2n -d /dev/v4l-subdev7 --ctrl=0x009819A1=1
> > > > >>>>>>
> > > > >>>>>> I assume the control takes a valid default value? It's better to set
> > > > >>>>>> it explicitly anyway, so I'll do so.
> > > > >>>>
> > > > >>>> The video mode is set by default. If you want to set still mode or
> > > > >>>> change the mode, you need to set the subdev control.
> > > > >>>>
> > > > >>>>>>> 2. ImgU pipeline needs to be configured for image processing as
> > > > >>>>>>> below.
> > > > >>>>>>>
> > > > >>>>>>> RAW bayer frames go through the following ISP pipeline HW blocks to
> > > > >>>>>>> have the processed image output to the DDR memory.
> > > > >>>>>>>
> > > > >>>>>>> RAW bayer frame -> Input Feeder -> Bayer Down Scaling (BDS) ->
> > > > >>>>>>> Geometric Distortion Correction (GDC) -> DDR
> > > > >>>>>>>
> > > > >>>>>>> The ImgU V4L2 subdev has to be configured with the supported
> > > > >>>>>>> resolutions in all the above HW blocks, for a given input
> > > > >>>>>>> resolution.
> > > > >>>>>>>
> > > > >>>>>>> For a given supported resolution for an input frame, the Input
> > > > >>>>>>> Feeder, Bayer Down Scaling and GDC blocks should be configured with
> > > > >>>>>>> the supported resolutions. This information can be obtained by
> > > > >>>>>>> looking at the following IPU3 ISP configuration table for the
> > > > >>>>>>> ov5670 sensor:
> > > > >>>>>>>
> > > > >>>>>>> https://chromium.googlesource.com/chromiumos/overlays/board-overlays
> > > > >>>>>>> /+/master/baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/
> > > > >>>>>>> files/gcss/graph_settings_ov5670.xml
> > > > >>>>>>>
> > > > >>>>>>> For the ov5670 example, for an input frame with a resolution of
> > > > >>>>>>> 2592x1944 (which is input to the ImgU subdev pad 0), the
> > > > >>>>>>> corresponding resolutions for input feeder, BDS and GDC are
> > > > >>>>>>> 2592x1944, 2592x1944 and 2560x1920 respectively.
> > > > >>>>>>
> > > > >>>>>> How is the GDC output resolution computed from the input resolution?
> > > > >>>>>> Does the GDC always consume 32 columns and 22 lines?
> > > > >>>>
> > > > >>>> All the intermediate resolutions in the pipeline are determined by the
> > > > >>>> actual use case, in other words by the ImgU input resolution (the
> > > > >>>> sensor output) and the final output and viewfinder resolutions. The
> > > > >>>> BDS mainly does Bayer downscaling; its limitation is that the
> > > > >>>> downscaling factor must be an integer multiple of 1/32. The GDC
> > > > >>>> output depends on the input: the width must be a multiple of 8 and
> > > > >>>> the height a multiple of 4.
> > > > >>>
> > > > >>> Thank you for the information. This will need to be captured in the
> > > > >>> documentation, along with information related to how each block in the
> > > > >>> hardware pipeline interacts with the image size. It should be possible
> > > > >>> for a developer to compute the output and viewfinder resolutions based
> > > > >>> on the parameters of the image processing algorithms just with the
> > > > >>> information contained in the driver documentation.
> > > > >>>
> > > > >>>>>>> The following steps prepare the ImgU ISP pipeline for the image
> > > > >>>>>>> processing.
> > > > >>>>>>>
> > > > >>>>>>> 1. The ImgU V4L2 subdev data format should be set by calling
> > > > >>>>>>> VIDIOC_SUBDEV_S_FMT on pad 0, using the GDC width and height
> > > > >>>>>>> obtained above.
> > > > >>>>>>
> > > > >>>>>> If I understand things correctly, the GDC resolution is the pipeline
> > > > >>>>>> output resolution. Why is it configured on pad 0?
> > > > >>>>
> > > > >>>> We see the GDC output resolution as the input of the output system;
> > > > >>>> the sink pad format is used for the output and viewfinder resolutions.
> > > > >>>
> > > > >>> The ImgU subdev is supposed to represent the ImgU. Pad 0 should thus be
> > > > >>> the ImgU input, the format configured there should correspond to the
> > > > >>> format on the connected video node, and should thus be the sensor
> > > > >>> format. You can then use the crop and compose rectangles on pad 0, along
> > > > >>> with the format, crop and compose rectangles on the output and
> > > > >>> viewfinder pads, to configure the device. This should be fixed in the
> > > > >>> driver, and the documentation should then be updated accordingly.
> > > > >>
> > > > >> Hi, Laurent,
> > > > >>
> > > > >> Thanks for your review.
> > > > >>
> > > > >> I think it makes sense to use pad 0 as the ImgU input (IF).
> > > > >> However, I prefer using the two source pads for output and viewfinder.
> > > > >> That makes more sense because the output and viewfinder are
> > > > >> independent outputs.
> > > > >>
> > > > >> The whole pipeline in ImgU looks like:
> > > > >> IF --> BDS --> GDC --> OUTPUT
> > > > >>                 |----> VF
> > > > >>
> > > > >> The BDS is used to do Bayer downscaling and GDC can do cropping.
> > > > >
> > > > > Does this mean that the main output and the viewfinder output share the
> > > > > same scaler, and that the only difference in size between the two outputs
> > > > > is solely due to cropping ?
> > > >
> > > > Laurent,
> > > > No, the output can only crop, while the viewfinder supports both crop and
> > > > scaling; they share the same input.
> > >
> > > Then you can't support this with a single subdev for the ImgU; you need at
> > > least two subdevs. I can offer more guidance, but I'll need more information
> > > about the GDC.
> >
> > While the current documentation only defines the functionality of the
> > compose target for sink pads, there are a few sensor drivers supporting it
> > on source pads already. Some drivers such as the OMAP3 ISP also use the
> > format on source pads to configure scaling.
> >
> > The current API certainly allows exposing the compose rectangle also on the
> > source pads, but to make that generic we'd need to amend the API to tell in
> > which order these steps take place. In the meantime the behaviour remains
> > device specific.
> >
>
> My understanding is that what is currently missing is support for the
> viewfinder's ability to scale, as the scaler would be programmed by
> configuring a compose rectangle on a source pad, which is not
> supported by the V4L2 APIs at the moment. Is my understanding correct?
>
> As the compose rectangle is set for both 'output' and 'viewfinder'
> through the image format sizes configured on the first sink pad (*),
> the viewfinder output is obtained by cropping only, to the image format
> sizes configured on source pad number 3 (through SUBDEV_S_FMT, not
> SUBDEV_S_SELECTION, as SUBDEV_S_SELECTION is only allowed on sink pad
> 0 in the driver: see "ipu3_subdev_set_selection()").
>
> As you mentioned "device specific behaviour", what is the intended one
> for the ipu3? I assumed the viewfinder scaling/cropping was configured
> on the 'viewfinder' video device node, through the VIDIOC_S_SELECTION
> ioctl, but looking at the code, that doesn't seem to be listed as
> supported in "ipu3_v4l2_ioctl_ops".
>
> How am I supposed to configure scaling on the viewfinder output? Would
> adding support for crop/compose to the 'output' and 'viewfinder' video
> devices be supported by the V4L2 APIs? That would work with the single
> subdevice model that is currently implemented in these patches...

Isn't this what the driver actually implements? My understanding was
that the format on the VF video node determined the scaling settings,
based on the source pad.

That was my understanding of how we should make things work, i.e. try
to put as much as possible into core V4L2 interfaces and only defer to
subdevices or media controller if that's impossible. Laurent didn't
seem to agree with that when we talked last time, though, and I can
see some reasons why simplifying the video devices as much as possible
and moving as much as possible into subdevices could make sense
(simpler interfaces at the endpoints, finer granularity for feature
description, less redundancy - only subdevices do processing, video
nodes are only DMAs, etc.).

Best regards,
Tomasz

