[libcamera-devel] [PATCH] libcamera: ipu3: Always use sensor full frame size
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Fri Sep 18 15:48:00 CEST 2020
Hi Jacopo,
On Fri, Sep 18, 2020 at 12:48:56PM +0200, Jacopo Mondi wrote:
> On Fri, Sep 18, 2020 at 09:24:51AM +0100, Kieran Bingham wrote:
> > Hi Jacopo,
> >
> [snip]
>
> > >> - .boundedTo(data_->cio2_.sensor()->resolution());
> > >> -
> > >> /*
> > >> * Generate raw configuration from CIO2.
> > >> *
> > >> - * The output YUV streams will be limited in size to the maximum
> > >> - * frame size requested for the RAW stream.
> > >> + * \todo The image sensor frame size should be calculated as the
> > >> + * smallest possible resolution larger enough to accommodate the
> > >> + * requested stream sizes. However such a selection makes the pipeline
> > >
> > > Smallest possible size may not always be the best, we haven't really
> > > thought about this. I would write "The image sensor output size should
> > > be selected to optimize operation based on the sizes of the requested
> > > streams.".
> >
> > Indeed, we really need to think about how we present this as an option
> > to the users/applications as well.
> >
> > This is /use case/ dependent.
>
> Is it something applications should be in control of ?
Possibly, but not at this point. We'll have to gather use cases first.
Note that applications are already indirectly in control, but only
partly. A pipeline handler will likely bin on the sensor side for high
speed video, and get the full frame out of the sensor for still capture.
So to some extent applications can already influence the decision, but
we don't offer any guarantee.
> I'm not sure we can always provide applications a way to select the
> sensor size in a generic way. We should indeed report it, mostly for
> digital zoom implementation purposes, but on some platforms (e.g.
> RPi) the selection of the sensor size is not even available to
> userspace. For IPU3 you can see how fragile that is, and I guess all
> platforms have their own specificities there. Furthermore, and here I
> might be very wrong, I don't see a real reason to do so. See below,
> mostly for the sake of discussion on the application space our API
> should aim to cover.
>
> > A mobile platform wants to reduce power for instance, and might want the
> > lowest reasonable sensor size to capture the required results, but other
> > use-cases (perhaps a digital microscope) would want to use as high a
> > resolution as possible from the sensor to capture as much light
> > information as is possible, and deal with any size constraints at the
> > rescaler.
>
> I'm not sure I see how using a smaller frame size would prevent
> longer exposure times.
>
> The frame exposure time is bounded by the frame rate (the higher the
> frame rate the shorter the max exposure time) and the frame rate is
> bounded by the frame size (the larger the frame size the lower the
> frame rate) so it seems to me the smaller the frame is, the more it
> can be exposed without impacting the frame rate. I might be wrong on
> this, but to me this is not even the point.
>
> Manual control of exposure time and frame duration is already
> complicated enough by how those parameters are handled by the
> (usually mode-based and feature-limited) sensor driver, and offering
> any guarantee that they work in a generic way is hard enough. The
> frame size that is input to the ISP processing pipeline is yet
> another parameter that makes it even harder to guarantee that what
> works on one platform makes sense on another. I'm not saying we
> should not allow it, but that's basically what an IPA does, and only
> very specific applications might benefit from it; a 'generic camera
> API' is probably not what they want to use?
>
> That said, we are also considering the need for a way to control,
> through platform-specific configuration files, at which point of the
> processing pipeline any sub-sampling or scaling happens, whether on
> the sensor or on the ISP, and throwing application-controllable
> parameters into the mix makes things quite hard to handle.
>
> I'm not saying I'm totally against providing that control to
> applications, and I would love it if we could get to that level of
> detail, but to me it seems to be quite low on the list...
>
> > And of course it can also depend on how the resolution selection affects
> > the field of view ...
> >
> > >> + * configuration procedure fail for small resolution (in example:
> >
> > s/resolution/resolutions/
> >
> > > s/in example/for example/
> > >
> > >> + * 640x480 with OV5670) and causes the capture operations to stall for
> > >> + * some streams size combinations (see the commit message of the patch
> > >
> > > s/streams/stream/
> > >
> > >> + * that introduced this comment for more failure examples).
> > >> + *
> > >> + * Until the sensor frame size calculation criteria are not clarified,
> > >
> > > s/are not clarified/are clarified/
> > >
> > > Reviewed-by: Laurent Pinchart <laurent.pinchart at ideasonboard.com>
> > >
> > >> + * always use the largest possible one which guarantees better results
> > >> + * at the expense of the frame rate and CSI-2 bus bandwidth.
> >
> > Reviewed-by: Kieran Bingham <kieran.bingham at ideasonboard.com>
> >
> > >> */
> > >> - cio2Configuration_ = data_->cio2_.generateConfiguration(maxRawSize);
> > >> + cio2Configuration_ = data_->cio2_.generateConfiguration({});
> > >> if (!cio2Configuration_.pixelFormat.isValid())
> > >> return Invalid;
> > >>
--
Regards,
Laurent Pinchart