[libcamera-devel] [PATCH 1/2] cam: event_loop: Stop queuing calls when the event loop is exiting
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Tue Jan 19 17:04:14 CET 2021
Hi Niklas,
On Tue, Jan 19, 2021 at 03:07:16PM +0100, Niklas Söderlund wrote:
> On 2021-01-19 14:42:27 +0100, Niklas Söderlund wrote:
> > On 2021-01-19 14:43:17 +0200, Laurent Pinchart wrote:
> > > On Tue, Jan 19, 2021 at 01:31:09PM +0100, Niklas Söderlund wrote:
> > > > Terminating the event loop with EventLoop::exit() does not guarantee
> > > > that it will terminate. If the event loop's 'call later' queue can be
> > > > kept non-empty by continuously queuing new calls using
> > > > EventLoop::callLater(), either from a different thread or from
> > > > callbacks in the loop itself, EventLoop::dispatchCalls() will never
> > > > complete and the loop will run forever.
> > > >
> > > > Solve this by not accepting new callbacks once the event loop is
> > > > exiting.
> > > >
> > > > Reported-by: Sebastian Fricke <sebastian.fricke at posteo.net>
> > > > Fixes: f49e93338b6309a6 ("cam: event_loop: Add deferred calls support")
> > > > Signed-off-by: Niklas Söderlund <niklas.soderlund at ragnatech.se>
> > > > ---
> > > > src/cam/event_loop.cpp | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > > index 94c5d1d362455f33..34a55da30c537ac7 100644
> > > > --- a/src/cam/event_loop.cpp
> > > > +++ b/src/cam/event_loop.cpp
> > > > @@ -62,7 +62,7 @@ void EventLoop::interrupt()
> > > >
> > > > void EventLoop::callLater(const std::function<void()> &func)
> > > > {
> > > > - {
> > > > + if (!exit_.load(std::memory_order_acquire)) {
> > > > std::unique_lock<std::mutex> locker(lock_);
> > > > calls_.push_back(func);
> > > > }
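[Editorial aside: a minimal self-contained sketch of the approach taken by this patch, for illustration only. The class and member names below are hypothetical stand-ins, not the actual cam EventLoop code: an atomic exit flag, set with release semantics by exit() and read with acquire semantics in callLater(), gates the deferred-call queue so that nothing new is accepted once the loop is shutting down.]

```cpp
#include <atomic>
#include <functional>
#include <list>
#include <mutex>

/*
 * Hypothetical sketch of an exit-gated deferred-call queue. Once
 * exit() is called, callLater() refuses new callbacks, so a dispatch
 * loop draining the queue is guaranteed to eventually run dry.
 */
class ExitGatedQueue
{
public:
	ExitGatedQueue() : exit_(false) {}

	void exit()
	{
		/* Publish the exit request to all queuing threads. */
		exit_.store(true, std::memory_order_release);
	}

	bool callLater(const std::function<void()> &func)
	{
		/* Drop new deferred calls once exit() has been requested. */
		if (exit_.load(std::memory_order_acquire))
			return false;

		std::unique_lock<std::mutex> locker(lock_);
		calls_.push_back(func);
		return true;
	}

	size_t pending()
	{
		std::unique_lock<std::mutex> locker(lock_);
		return calls_.size();
	}

private:
	std::atomic<bool> exit_;
	std::mutex lock_;
	std::list<std::function<void()>> calls_;
};
```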
> > >
> > > That's a bit of a layering violation. Would it make sense to fix this
> > > differently, by ensuring that EventLoop::dispatchCalls() will only
> > > handle deferred calls that are on the list when the function is called ?
> > > This would also ensure that callLater(), when called from the same
> > > thread, will guarantee that pending events get processed before the
> > > delayed call is processed.
> > >
> > > I mean something along those lines (only compile-tested).
> > >
> > > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > > index 94c5d1d36245..320ade525a3f 100644
> > > --- a/src/cam/event_loop.cpp
> > > +++ b/src/cam/event_loop.cpp
> > > @@ -74,7 +74,7 @@ void EventLoop::dispatchCalls()
> > > {
> > > std::unique_lock<std::mutex> locker(lock_);
> > >
> > > - for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > > + for (auto iter = calls_.begin(), end = calls_.end(); iter != end; ) {
> > > std::function<void()> call = std::move(*iter);
> > >
> > > iter = calls_.erase(iter);
> >
> > I tried something like this but thought it was nastier than what I
> > proposed in this patch. I do however agree that what is proposed in my
> > patch is not the cleanest of solutions either. The change you propose
> > here does not work as intended: it is still vulnerable to an
> > ever-growing calls_ list.
> >
> > If we would like to go with this solution, how about something like
> > this? It would also reduce lock_ contention as a side effect.
> >
> > diff --git a/src/cam/event_loop.cpp b/src/cam/event_loop.cpp
> > index 94c5d1d362455f33..4bfcc503ac1f3b83 100644
> > --- a/src/cam/event_loop.cpp
> > +++ b/src/cam/event_loop.cpp
> > @@ -72,15 +72,13 @@ void EventLoop::callLater(const std::function<void()> &func)
> >
> > void EventLoop::dispatchCalls()
> > {
> > - std::unique_lock<std::mutex> locker(lock_);
> > + std::list<std::function<void()>> calls;
> >
> > - for (auto iter = calls_.begin(); iter != calls_.end(); ) {
> > - std::function<void()> call = std::move(*iter);
> > + {
> > + std::unique_lock<std::mutex> locker(lock_);
> > + calls = std::move(calls_);
> > + }
> >
> > - iter = calls_.erase(iter);
> > -
> > - locker.unlock();
> > + for (std::function<void()> call : calls)
> > call();
> > - locker.lock();
> > - }
> > }
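[Editorial aside: a self-contained sketch of the swap-based dispatch proposed in the patch above, using a hypothetical stand-in class rather than the actual cam EventLoop. The key point is that dispatchCalls() moves the whole pending list out under the lock in one step, then invokes the calls without holding it, so callbacks queued during dispatch land in the next batch instead of extending the current one indefinitely.]

```cpp
#include <functional>
#include <list>
#include <mutex>

/*
 * Hypothetical sketch of swap-based deferred-call dispatch. Each
 * dispatchCalls() invocation processes exactly the calls that were
 * pending when it started, bounding the amount of work per pass.
 */
class DeferredCalls
{
public:
	void callLater(const std::function<void()> &func)
	{
		std::unique_lock<std::mutex> locker(lock_);
		calls_.push_back(func);
	}

	void dispatchCalls()
	{
		std::list<std::function<void()>> calls;

		{
			/* Take the whole pending batch in one locked step. */
			std::unique_lock<std::mutex> locker(lock_);
			calls = std::move(calls_);
		}

		/* Run the batch lock-free; new calls queue for the next pass. */
		for (const std::function<void()> &call : calls)
			call();
	}

private:
	std::mutex lock_;
	std::list<std::function<void()>> calls_;
};
```

Because the lock is not held while the callbacks run, a callback may safely call callLater() on the same object without deadlocking.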
That looks good too.
> I take this back, as both solutions that modify dispatchCalls() suffer
> from the same flaw. Events queued before EventLoop::exit() are not
> executed if they are not already part of the active list of calls being
> processed.
Is that a problem though ? If we need to ensure that the deferred calls
queue is flushed, maybe we could make dispatchCalls() public and call it
after EventLoop::exit() returns ?
> What do you think of injecting an 'end of call processing' marker into
> the calls_ list and breaking the loop in dispatchCalls() if it's
> encountered? Off the top of my head I'm thinking of a 'nullptr' marker,
> but it's untested.
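[Editorial aside: a self-contained sketch of the sentinel-marker idea floated above, with hypothetical names, not the actual cam EventLoop. An empty std::function (constructed from nullptr) is enqueued by exit() as the 'end of call processing' marker; dispatchCalls() pops calls one at a time and stops when it reaches the marker, so calls queued before exit() still run while anything queued afterwards is left behind.]

```cpp
#include <functional>
#include <list>
#include <mutex>

/*
 * Hypothetical sketch of sentinel-terminated dispatch. exit() queues
 * an empty std::function as a stop marker; dispatchCalls() drains the
 * queue up to (and consuming) that marker.
 */
class SentinelDispatcher
{
public:
	void callLater(const std::function<void()> &func)
	{
		std::unique_lock<std::mutex> locker(lock_);
		calls_.push_back(func);
	}

	void exit()
	{
		/* An empty function acts as the end-of-processing marker. */
		callLater(nullptr);
	}

	void dispatchCalls()
	{
		while (true) {
			std::function<void()> call;

			{
				std::unique_lock<std::mutex> locker(lock_);
				if (calls_.empty())
					break;
				call = std::move(calls_.front());
				calls_.pop_front();
			}

			/* Stop at the sentinel queued by exit(). */
			if (!call)
				break;

			call();
		}
	}

private:
	std::mutex lock_;
	std::list<std::function<void()>> calls_;
};
```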
What's the problem that needs to be solved here ? There's an inherent
race condition if we queue calls from a different thread; how best to
handle it depends on what the use case is.
--
Regards,
Laurent Pinchart