On Sat, 2022-04-16 at 23:27 +0100, Jose Quaresma wrote:
> Richard Purdie <richard.purdie@...> wrote on Saturday,
> 16/04/2022 at 22:57:
> > On Sat, 2022-04-16 at 21:24 +0100, Jose Quaresma wrote:
> > > For the FetchConnectionCache, use a queue where each thread can
> > > get an unused connection_cache that is properly initialized before
> > > we fire up the ThreadPoolExecutor.
> > >
> > > For the progress bar we need an additional task counter
> > > that is protected with a thread lock, as it runs inside the
> > > ThreadPoolExecutor.
> > >
> > > Fixes [YOCTO #14775] -- https://bugzilla.yoctoproject.org/show_bug.cgi?id=14775
> > >
> > > Signed-off-by: Jose Quaresma <quaresma.jose@...>
> > > ---
> > > meta/classes/sstate.bbclass | 44 +++++++++++++++++++++++--------------
> > > 1 file changed, 28 insertions(+), 16 deletions(-)
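To make the idea in the commit message above concrete, here is a simplified sketch of the approach (illustrative names only, not the actual sstate.bbclass code):

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeConnectionCache:
    """Stand-in for the real fetcher connection cache (illustrative only)."""
    pass

nthreads = 4
tasks = list(range(20))

# Pre-initialize one connection cache per thread before starting the pool,
# and hand them out through a thread-safe queue.
connection_caches = queue.Queue()
for _ in range(nthreads):
    connection_caches.put(FakeConnectionCache())

# Progress counter shared across workers, protected by a lock.
progress_lock = threading.Lock()
progress = 0

def check_task(task):
    global progress
    cache = connection_caches.get()       # borrow an unused cache
    try:
        pass                              # ... use the cache to check the mirror ...
    finally:
        connection_caches.put(cache)      # return it for reuse
    with progress_lock:
        progress += 1                     # safe update of the shared counter

with ThreadPoolExecutor(max_workers=nthreads) as executor:
    list(executor.map(check_task, tasks))

print(progress)  # 20
```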
> > Are there specific issues you see with oe.utils.ThreadedPool that this
> > change
> > addresses? Were you able to reproduce the issue in 14775?
> Looking deeper while testing the patch, I think I found another bug in the
> sstate mirror handling.
> The Python set() is not thread safe and we use it inside the thread pool, so I
> added a new thread-safe set class for that in my V2.
That might explain things.
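For reference, the thread-safe set in V2 is essentially a lock around each operation. Roughly like this (a simplified sketch, not the exact class from the patch):

```python
import threading

class ThreadSafeSet:
    """A minimal set wrapper that serializes access with a lock.
    Simplified illustration, not the exact class from the V2 patch."""

    def __init__(self, iterable=()):
        self._lock = threading.Lock()
        self._set = set(iterable)

    def add(self, item):
        with self._lock:
            self._set.add(item)

    def discard(self, item):
        with self._lock:
            self._set.discard(item)

    def __contains__(self, item):
        with self._lock:
            return item in self._set

    def __len__(self):
        with self._lock:
            return len(self._set)
```

Note that each method holds the lock only for its single operation; compound check-then-act sequences would still need external locking.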
> I don't know if it is related to 14775, but it could be. I can't reproduce
> 14775 on my side, so maybe it's better to remove the 14775 mention from my
> commits. What do you think?
I think you shouldn't say it fixes it, as we simply don't know that. It may be
related, so perhaps say that instead.
I will do that.
> > I'm a little concerned we swap one implementation where we know roughly what
> > the issues are for another where we dont :/.
> I think there are some issues in ThreadedPool's worker_init and worker_end:
> these functions are called in all workers, and it seems to me that the
> right thing to do is to reuse the previously created connection_cache;
> otherwise the connection_cache does nothing.
It creates a connection_cache in each thread that is created. Once created in a
given thread, that connection cache is reused there? I'm not sure you can say it
does nothing.
I may have misunderstood this part and you may be right. I have to re-analyze more carefully.
> > I notice that ThreadPoolExecutor can take an initializer but you're doing
> > this
> > using the queue instead. Is that because you suspect some issue with those
> > being
> > setup in the separate threads?
> I am using a queue as it is the easiest way I found to reuse the
> connection_cache.
I think that piece of code was working already?
It may be working fine in the OE ThreadPool and I misunderstood that part.
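For comparison, the ThreadPoolExecutor initializer route you mention would keep one cache per worker thread via thread-local storage. A sketch with stand-in names:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeConnectionCache:
    """Stand-in for the real fetcher connection cache (illustrative only)."""
    pass

tls = threading.local()

def init_worker():
    # Runs once in each worker thread when the pool spins it up.
    tls.connection_cache = FakeConnectionCache()

def check_task(task):
    # Every task executed by this thread reuses the same cache object.
    return id(tls.connection_cache)

with ThreadPoolExecutor(max_workers=2, initializer=init_worker) as executor:
    cache_ids = set(executor.map(check_task, range(100)))

# At most one distinct cache per worker thread.
print(len(cache_ids) <= 2)  # True
```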
> > You also mentioned the debug messages not showing. That suggests something
> > is
> > wrong with the event handlers in the new threading model and that errors
> > wouldn't propagate either so we need to check into that.
> This is fixed in V2
What was the issue out of interest?
I don't know, but it started working when I added the thread-safe collections.
> > This is definitely an interesting idea but I'm nervous about it :/.
> It would be interesting if you could test it in the autobuilder.
> On my side it is working well now; I will send a V2.
The challenge with autobuilder testing of this is that there is only a small
portion of the autobuilder tests which exercise this code (testsdkext for
images). The current issues only occur intermittently so it is hard to know if
any given change fixes anything (or introduces a new race).
One more interesting test which may more quickly find issues would be to make
everything use the http mirror on the autobuilder I guess. We'd need to figure
out the configuration for that though.
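If it helps, I assume the configuration would be something along these lines in the autobuilder's local.conf (untested; the mirror URL is a placeholder):

```
SSTATE_MIRRORS ?= "file://.* http://sstate.example.invalid/PATH;downloadfilename=PATH"
```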
Do you mean that there are builds on the autobuilder that use the sstate mirror
and others that use some shared sstate cache filesystem?
As you previously said, and since the sstate mirror is available to the community,
I will try to add some tests for that.
I still don't know how to do it, but I'll think about it.
Another thing about this RFC series: I think I need to do it in a way
that can be backported to dunfell if we need to do that.
I will spend more time on this during this week.
Thanks for your always valuable comments.