Re: promoting Rust to first class citizen in oe-core

Andreas Müller
 

On Thu, Sep 10, 2020 at 9:52 PM Alexander Kanavin
<alex.kanavin@...> wrote:

Hello all,

I just read this article, called "Supporting Linux kernel development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto project should prepare for it.

Thoughts?
As a 'GNOME in Yocto' enthusiast there is not much to say but: yes, yes,
yes. We have all these Rust blockers such as librsvg and mozjs, which in
turn block gnome-shell/mutter.

Andreas


Re: promoting Rust to first class citizen in oe-core

Otavio Salvador
 

On Thu, Sep 10, 2020 at 4:51 PM Alexander Kanavin
<alex.kanavin@...> wrote:
I just read this article, called "Supporting Linux kernel development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto project should prepare for it.

Thoughts?
I support this for sure. I've been using Rust with the Yocto Project for a
while now and it would fit well in OE-Core.
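
For anyone who wants to try it today, the meta-rust layer is all you need
to get started. Roughly (a sketch; the clone location and the example
recipe name should be double-checked against meta-rust itself):

git clone https://github.com/meta-rust/meta-rust.git
. poky/oe-init-build-env build
bitbake-layers add-layer ../meta-rust
bitbake rust-hello-world

That pulls in the Rust/cargo toolchain recipes and builds the layer's
sample "hello world" crate.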


--
Otavio Salvador O.S. Systems
http://www.ossystems.com.br http://code.ossystems.com.br
Mobile: +55 (53) 9 9981-7854 Mobile: +1 (347) 903-9750


promoting Rust to first class citizen in oe-core

Alexander Kanavin
 

Hello all,

I just read this article, called "Supporting Linux kernel development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto project should prepare for it.

Thoughts?

Alex


Re: Stable release testing - notes from the autobuilder perspective

Tom Rini
 

On Mon, Sep 07, 2020 at 10:30:20PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 17:19 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 10:03:36PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
The autobuilder is set up for speed so there aren't VMs involved; it's
'bare metal'. Containers would be possible, but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.
Which issues do you run in to with qemu networking? I honestly don't
know if the U-Boot networking tests we run via qemu under Docker are
more or less complex than what you're running in to.
It's the tun/tap device requirement that tends to be the pain point.
Being able to ssh from the host OS into the qemu target image is a
central requirement of oeqa. Everyone tells me it should use
port mapping and slirp instead to avoid the privilege problems and the
container issues, which is great, but that isn't implemented.
Ah, OK. Yes, we're using "user" networking not tap.
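For reference, the relevant part of our invocation looks roughly like this
(a sketch: the kernel, image file and port numbers are just placeholders
for what our scripts actually pass):

qemu-system-x86_64 -m 512 -nographic \
    -kernel bzImage \
    -drive file=test-image.ext4,if=virtio,format=raw \
    -append "root=/dev/vda console=ttyS0" \
    -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22 \
    -device virtio-net-pci,netdev=net0

ssh -p 2222 root@127.0.0.1

So the "ssh from the host into the target" requirement is met by forwarding
a host port to the guest's port 22, without needing a tap device or extra
privileges.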

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races; adding containers
into the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.
The reason I was thinking about containers is that it should remove some
of what you have to face.
Removes some, yes, but creates a whole set of other issues.

Paul may or may not want to chime in on how
workable it ended up being for a particular customer, but leveraging
CROPS to set up a build environment of a supported host and then running it
on whatever the available build hardware is, was good. It sounds like
part of the autobuilder problem is that it has to be a specific set of
hand-crafted machines and that in turn feels like we've lost the
thread, so to speak,
The machines are in fact pretty much off the shelf distro installs so
not hand crafted.
Sorry, what I meant by hand-crafted is that for it to work for older
installs, you have to have this particular dance to provide various host
tools that weren't required at the time.

about having a reproducible build system. 6 hours
even beats my U-Boot world before/after times, so I do get the dread of
"now it might take 5% longer", which is very real additional wallclock time.
But if it means more builders could be available as they're easy to spin
up, that could bring the overall time down.
Here we get onto infrastructure as we're not talking containers on our
workers but on general cloud systems which is a different proposition.

We *heavily* rely on the fast network fabric between the workers and
our NAS for sstate (NFS mounted). This is where we get a big chunk of
speed. So "easy to spin up" isn't actually the case for different
reasons.

So this plan is the best practical approach we can come up with to
allow us to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this; I just assume people kind of know it, sorry.
Since I don't want to put even more on your plate, what is a
reasonable test to try here? Or is it hard to say, since it's not just
"MACHINE=qemux86-64 bitbake world" but also "run this and that and
something else"?
It's quite simple:

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testimage

and

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testsdkext

are the two to start with. If those work, the other "nasty" ones are
oe-selftest and the toolchain test suites. We also need to check that KVM is
working.
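
For completeness, the selftest run is along the lines of:

oe-selftest -a

(we split the test selection up on the autobuilder, so treat that as a
sketch), and a quick sanity check that KVM is usable is simply confirming
that /dev/kvm exists and is accessible, e.g.:

ls -l /dev/kvm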

We have gone around in circles on this several times as you're not the
first to suggest it :/.
Thanks for explaining it again. I'll go off and do some tests.

--
Tom


Re: Stable release testing - notes from the autobuilder perspective

Richard Purdie
 

On Mon, 2020-09-07 at 17:19 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 10:03:36PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
The autobuilder is set up for speed so there aren't VMs involved; it's
'bare metal'. Containers would be possible, but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.
Which issues do you run in to with qemu networking? I honestly don't
know if the U-Boot networking tests we run via qemu under Docker are
more or less complex than what you're running in to.
It's the tun/tap device requirement that tends to be the pain point.
Being able to ssh from the host OS into the qemu target image is a
central requirement of oeqa. Everyone tells me it should use
port mapping and slirp instead to avoid the privilege problems and the
container issues, which is great, but that isn't implemented.

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races; adding containers
into the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.
The reason I was thinking about containers is that it should remove some
of what you have to face.
Removes some, yes, but creates a whole set of other issues.

Paul may or may not want to chime in on how
workable it ended up being for a particular customer, but leveraging
CROPS to set up a build environment of a supported host and then running it
on whatever the available build hardware is, was good. It sounds like
part of the autobuilder problem is that it has to be a specific set of
hand-crafted machines and that in turn feels like we've lost the
thread, so to speak,
The machines are in fact pretty much off the shelf distro installs so
not hand crafted.

about having a reproducible build system. 6 hours
even beats my U-Boot world before/after times, so I do get the dread of
"now it might take 5% longer", which is very real additional wallclock time.
But if it means more builders could be available as they're easy to spin
up, that could bring the overall time down.
Here we get onto infrastructure as we're not talking containers on our
workers but on general cloud systems which is a different proposition.

We *heavily* rely on the fast network fabric between the workers and
our NAS for sstate (NFS mounted). This is where we get a big chunk of
speed. So "easy to spin up" isn't actually the case for different
reasons.

So this plan is the best practical approach we can come up with to
allow us to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this; I just assume people kind of know it, sorry.
Since I don't want to put even more on your plate, what is a
reasonable test to try here? Or is it hard to say, since it's not just
"MACHINE=qemux86-64 bitbake world" but also "run this and that and
something else"?
It's quite simple:

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testimage

and

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testsdkext

are the two to start with. If those work, the other "nasty" ones are
oe-selftest and the toolchain test suites. We also need to check that KVM is
working.

We have gone around in circles on this several times as you're not the
first to suggest it :/.

Cheers,

Richard


Re: Stable release testing - notes from the autobuilder perspective

Tom Rini
 

On Mon, Sep 07, 2020 at 10:03:36PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 02:59:41PM -0300, Otavio Salvador wrote:
Hello all,

On Mon, Sep 7, 2020 at 1:14 PM Richard Purdie
<richard.purdie@...> wrote:
...
Any thoughts from anyone on this?
I second this and at least at O.S. Systems we've been using Docker
containers to keep maintenance easier for old releases. It'd be great
if we could alleviate this and reduce its use as much as possible.

The CI builder maintenance is indeed a time-consuming task; the
easier it gets, the easier it is to convince people to set them up for
their own use, and in the end this helps to improve the quality of
submitted patches and reduces the maintenance effort as well.
Excuse what may be a dumb question, but why are we not just building
pyro, for example, in an Ubuntu 16.04 or CentOS 7 container (or anything else
with official containers available)? Is the performance hit too much,
even with good volume management? And extend that for other branches
of course. But as we look at why people care about such old releases
(or, supporting a current release into the future) it seems like "our
build environment is a container / VM so we can support this on
modern HW" pops up.
The autobuilder is set up for speed so there aren't VMs involved; it's
'bare metal'. Containers would be possible, but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.
Which issues do you run in to with qemu networking? I honestly don't
know if the U-Boot networking tests we run via qemu under Docker are
more or less complex than what you're running in to.

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races; adding containers
into the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.
The reason I was thinking about containers is that it should remove some
of what you have to face. Paul may or may not want to chime in on how
workable it ended up being for a particular customer, but leveraging
CROPS to set up a build environment of a supported host and then running it
on whatever the available build hardware is, was good. It sounds like
part of the autobuilder problem is that it has to be a specific set of
hand-crafted machines and that in turn feels like we've lost the
thread, so to speak, about having a reproducible build system. 6 hours
even beats my U-Boot world before/after times, so I do get the dread of
"now it might take 5% longer", which is very real additional wallclock time.
But if it means more builders could be available as they're easy to spin
up, that could bring the overall time down.

So this plan is the best practical approach we can come up with to
allow us to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this; I just assume people kind of know it, sorry.
Since I don't want to put even more on your plate, what is a
reasonable test to try here? Or is it hard to say, since it's not just
"MACHINE=qemux86-64 bitbake world" but also "run this and that and
something else"?

--
Tom


Re: Stable release testing - notes from the autobuilder perspective

Richard Purdie
 

On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 02:59:41PM -0300, Otavio Salvador wrote:
Hello all,

On Mon, Sep 7, 2020 at 1:14 PM Richard Purdie
<richard.purdie@...> wrote:
...
Any thoughts from anyone on this?
I second this and at least at O.S. Systems we've been using Docker
containers to keep maintenance easier for old releases. It'd be great
if we could alleviate this and reduce its use as much as possible.

The CI builder maintenance is indeed a time-consuming task; the
easier it gets, the easier it is to convince people to set them up for
their own use, and in the end this helps to improve the quality of
submitted patches and reduces the maintenance effort as well.
Excuse what may be a dumb question, but why are we not just building
pyro, for example, in an Ubuntu 16.04 or CentOS 7 container (or anything else
with official containers available)? Is the performance hit too much,
even with good volume management? And extend that for other branches
of course. But as we look at why people care about such old releases
(or, supporting a current release into the future) it seems like "our
build environment is a container / VM so we can support this on
modern HW" pops up.
The autobuilder is set up for speed so there aren't VMs involved; it's
'bare metal'. Containers would be possible, but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races; adding containers
into the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.

So this plan is the best practical approach we can come up with to
allow us to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this; I just assume people kind of know it, sorry.

Cheers,

Richard


Re: Stable release testing - notes from the autobuilder perspective

Tom Rini
 

On Mon, Sep 07, 2020 at 02:59:41PM -0300, Otavio Salvador wrote:
Hello all,

On Mon, Sep 7, 2020 at 1:14 PM Richard Purdie
<richard.purdie@...> wrote:
...
Any thoughts from anyone on this?
I second this and at least at O.S. Systems we've been using Docker
containers to keep maintenance easier for old releases. It'd be great
if we could alleviate this and reduce its use as much as possible.

The CI builder maintenance is indeed a time-consuming task; the
easier it gets, the easier it is to convince people to set them up for
their own use, and in the end this helps to improve the quality of
submitted patches and reduces the maintenance effort as well.
Excuse what may be a dumb question, but why are we not just building
pyro, for example, in an Ubuntu 16.04 or CentOS 7 container (or anything else
with official containers available)? Is the performance hit too much, even
with good volume management? And extend that for other branches of
course. But as we look at why people care about such old releases (or,
supporting a current release into the future) it seems like "our build
environment is a container / VM so we can support this on modern HW"
pops up.
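
To make that concrete, the sort of thing I had in mind is the CROPS
workflow, roughly (a sketch; the image tag, volume path and build target
are only examples):

docker run --rm -it -v $HOME/yocto:/workdir crops/poky:ubuntu-16.04 --workdir=/workdir

and then, inside the container:

. poky/oe-init-build-env build
MACHINE=qemux86-64 bitbake core-image-minimal

The container provides the (old) supported host userspace while the
checkout and sstate live on the mounted volume.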

--
Tom


Re: Stable release testing - notes from the autobuilder perspective

Otavio Salvador
 

Hello all,

On Mon, Sep 7, 2020 at 1:14 PM Richard Purdie
<richard.purdie@...> wrote:
...
Any thoughts from anyone on this?
I second this and at least at O.S. Systems we've been using Docker
containers to keep maintenance easier for old releases. It'd be great
if we could alleviate this and reduce its use as much as possible.

The CI builder maintenance is indeed a time-consuming task; the
easier it gets, the easier it is to convince people to set them up for
their own use, and in the end this helps to improve the quality of
submitted patches and reduces the maintenance effort as well.

--
Otavio Salvador O.S. Systems
http://www.ossystems.com.br http://code.ossystems.com.br
Mobile: +55 (53) 9 9981-7854 Mobile: +1 (347) 903-9750


Stable release testing - notes from the autobuilder perspective

Richard Purdie
 

I wanted to write down my findings on trying to get and keep
stable branch builds working on the autobuilder. I also have a proposal
in mind for moving this forward.

Jeremy did good work in getting thud nearly building, building upon
work I'd done in getting buildtools-extended-tarball working for older
releases. It's not as simple a problem as it would first appear.

We have two versions of the buildtools tarball. In simple terms, one has
the basic utils needed to run builds without gcc and the other includes
gcc.

Our current policy was to install a buildtools tarball on certain
problematic autobuilders, but this doesn't work since a given release
usually has a set of tools it's known to work with and it won't work
with tools outside that set. We therefore suffer "bitrot" as new workers
are added and older ones are replaced with new distro installs.

In particular:
* gcc 10 doesn't work with older releases
* gcc 4.8 and 4.9 don't work with newer releases
* we no longer install makeinfo onto new autobuilder workers
* we no longer install python2 onto new autobuilder workers
* some older autobuilder workers have old versions of python3
* newer autobuilder workers need newer uninative versions
* some things changed like crypt() being moved out of glibc

This means that for a given release we want to use the standard
buildtools tarball on "old" systems and the extended buildtools tarball
on "new" systems that didn't exist at the time the release was made.
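
For anyone who hasn't used one, installing a buildtools tarball on a
worker is just a self-extracting installer plus an environment file,
roughly (the version number and install path here are only illustrative):

./x86_64-buildtools-extended-nativesdk-standalone-3.1.2.sh -d /opt/buildtools -y
. /opt/buildtools/environment-setup-x86_64-pokysdk-linux

after which anything run from that shell, bitbake included, picks up the
newer host tools from the buildtools install rather than from the distro.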

My thoughts are that we should:

a) Remove all the current buildtools installs from the autobuilder

b) teach autobuilder-helper to install buildtools tarballs in all the
older release branches

c) backport most of the autobuilder-helper changes to older releases so
it's easier to maintain things

d) backport buildtools-extended-tarball to older releases

e) backport the necessary fixes to older releases to allow them to
build on the current infrastructure with buildtools.

Dunfell is in a good state and OK.

Zeus needs poky:zeus-next and yocto-autobuilder-helper:contrib/rpurdie/zeus

Thud has branches available that need updating against the zeus
changes I've figured out, which should get that working too.

Pyro has example code at poky-contrib:rpurdie/pyro to allow a
buildtools tarball that old to be built.

As things stand the branches are all just going to bitrot, so if we can
get them to build cleanly it would seem to make sense to me
to merge this approximate set of changes, in the hope that stable
maintenance in the case of a major security fix (for example) becomes
much more possible.

Any thoughts from anyone on this?

Cheers,

Richard


Re: Support for OpenRC

Rich Persaud
 

On Sep 5, 2020, at 13:55, Khem Raj <raj.khem@...> wrote:

On Sat, Sep 5, 2020 at 8:28 AM Richard Purdie
<richard.purdie@...> wrote:

On Sat, 2020-09-05 at 08:15 -0700, Achara, Jagdish P wrote:
Hi,

Currently we have the option to choose either sysvinit or systemd.
Would, at some point, openrc be included in this list of options to
choose from?

It comes down to the demand for it, whether there are people willing
to maintain it, how much of the system it's planned to support and so
on. It has implications for the testing matrix for example.

The hope is individual layers could add support for things like this
and that would let people use it and let us gauge demand too.

Init systems are quite taxing and intrusive to implement, hence I
agree about the increase in testing complexity and, in general, the
higher maintenance work. meta-openrc seems a good solution for now;
ideally recipes should provide OpenRC scripts via bbappends, and
perhaps make it a DISTRO_FEATURE, but I think that's a good starting
point for OpenRC, and if many users show interest in the future we
should definitely review it.

Devuan has also been testing support for multiple init systems, which should improve upstream package readiness. So far they have:

  - sysvinit
  - openrc
  - runit


Rich


Re: Support for OpenRC

Khem Raj
 

On Sat, Sep 5, 2020 at 8:28 AM Richard Purdie
<richard.purdie@...> wrote:

On Sat, 2020-09-05 at 08:15 -0700, Achara, Jagdish P wrote:
Hi,

Currently we have the option to choose either sysvinit or systemd.
Would, at some point, openrc be included in this list of options to
choose from?
It comes down to the demand for it, whether there are people willing
to maintain it, how much of the system it's planned to support and so
on. It has implications for the testing matrix for example.

The hope is individual layers could add support for things like this
and that would let people use it and let us gauge demand too.
Init systems are quite taxing and intrusive to implement, hence I
agree about the increase in testing complexity and, in general, the
higher maintenance work. meta-openrc seems a good solution for now;
ideally recipes should provide OpenRC scripts via bbappends, and
perhaps make it a DISTRO_FEATURE, but I think that's a good starting
point for OpenRC, and if many users show interest in the future we
should definitely review it.
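
To sketch what I mean by providing the scripts via bbappends (the recipe
name, file name and the 'openrc' feature name here are all just
illustrative):

# foo_%.bbappend, e.g. carried in meta-openrc
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://foo.initd"

do_install_append() {
    if ${@bb.utils.contains('DISTRO_FEATURES', 'openrc', 'true', 'false', d)}; then
        # OpenRC service scripts are plain files under /etc/init.d
        install -d ${D}${sysconfdir}/init.d
        install -m 0755 ${WORKDIR}/foo.initd ${D}${sysconfdir}/init.d/foo
    fi
}

That keeps the OpenRC bits out of OE-Core until we know there is real
demand.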


Cheers,

Richard


Re: Support for OpenRC

Paul Barker

On Sat, 5 Sep 2020 at 16:28, Richard Purdie
<richard.purdie@...> wrote:

On Sat, 2020-09-05 at 08:15 -0700, Achara, Jagdish P wrote:
Hi,

Currently we have the option to choose either sysvinit or systemd.
Would, at some point, openrc be included in this list of options to
choose from?
It comes down to the demand for it, whether there are people willing
to maintain it, how much of the system it's planned to support and so
on. It has implications for the testing matrix for example.

The hope is individual layers could add support for things like this
and that would let people use it and let us gauge demand too.
This does exist in https://github.com/jsbronder/meta-openrc, I haven't
tested it myself though.

--
Paul Barker
Konsulko Group


Re: Support for OpenRC

Richard Purdie
 

On Sat, 2020-09-05 at 08:15 -0700, Achara, Jagdish P wrote:
Hi,

Currently we have the option to choose either sysvinit or systemd.
Would, at some point, openrc be included in this list of options to
choose from?
It comes down to the demand for it, whether there are people willing
to maintain it, how much of the system it's planned to support and so
on. It has implications for the testing matrix for example.

The hope is individual layers could add support for things like this
and that would let people use it and let us gauge demand too.

Cheers,

Richard


Support for OpenRC

Achara, Jagdish P <jagdishpachara@...>
 

Hi,

Currently we have the option to choose either sysvinit or systemd. Would, at some point, openrc be included in this list of options to choose from?

Jagdish


OpenEmbedded Happy Hour July 29 9pm/2100 UTC

Denys Dmytriyenko
 

Just a reminder about our upcoming OpenEmbedded Happy Hour on July 29 for
Oceania/Asia timezones @ 2100/9pm UTC (5pm EDT):

https://www.openembedded.org/wiki/Calendar
https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenEmbedded+Happy+Hour+July+29&iso=20200729T21

--
Denys


Re: Yocto Project Future Direction(s)

Rich Persaud
 

On Jul 28, 2020, at 10:32, Richard Purdie <richard.purdie@...> wrote:

Hi,

The YP TSC has been discussing the topic of future development
directions for a while. We've written up a summary of those on the
wiki:

https://wiki.yoctoproject.org/wiki/Future_Directions

Thanks for making this available!  

These have been shared and discussed with the YP members. Most of these
topics are resource constrained: we believe them to be valuable, but we
don't have the right people with the time to spend to make them happen.

Our aim is that if/as/when there are resources available we'd move
forward in these areas. We wanted to try to combine the TSC's
thoughts, the current status and tribal knowledge on these topic areas
into one place.

It is an evolving document. If anyone does want to discuss any of these
areas further or contribute, please do!

Would backward compatibility (e.g. via buildtools-extended-tarball) be covered by the "Other future topics" section on multiple toolchains?  

Is it worth adding hash equivalence to one of the roadmap sections, or is that an implicit capability required by several roadmap items?

On Layer setup/config, it may be useful to work with other communities by proposing an enhancement to upstream git, applying the experience from multiple OE approaches as case studies and sources of requirements. While it would take longer, it would reduce the risk of "now we have N+1 problems".

For QA automation, Code Submission and Usability topics, there was good discussion at LPC 2019 (https://lwn.net/Articles/799134/) and a subsequent mailing list,  https://lore.kernel.org/workflows/.  There are upcoming safety, security & testing tracks for LPC 2020, https://www.linuxplumbersconf.org/event/7/page/80-accepted-microconferences, for collaboration with upstream efforts.

Rich


Yocto Project Future Direction(s)

Richard Purdie
 

Hi,

The YP TSC has been discussing the topic of future development
directions for a while. We've written up a summary of those on the
wiki:

https://wiki.yoctoproject.org/wiki/Future_Directions

These have been shared and discussed with the YP members. Most of these
topics are resource constrained: we believe them to be valuable, but we
don't have the right people with the time to spend to make them happen.

Our aim is that if/as/when there are resources available we'd move
forward in these areas. We wanted to try to combine the TSC's
thoughts, the current status and tribal knowledge on these topic areas
into one place.

It is an evolving document. If anyone does want to discuss any of these
areas further or contribute, please do!

Cheers,

Richard
(on behalf of the YP TSC)


Inclusive Language summary from the OE TSC

Richard Purdie
 

The OE TSC recognises there are issues related to inclusive language
which the project needs to address and that we need a plan for doing so
moving forward. It is unclear how much change the project members wish
to see or can cope with at this point in time, nor how much help is
available to make changes. It is noted that whilst steps were proposed
in the email thread discussion, those have as yet not been acted upon.

There are some steps the TSC believes the project can take:

a) Going forward all new code and new variables should use inclusive
language. We will rely on the usual peer review process of changes to
help catch issues and request the community's help in doing so, but this
becomes standard policy with immediate effect.

b) We defer any potential "master" branch name change until upstream
git's direction becomes clearer. This is one of the most invasive
potential changes and if we do change it, we need to get it right and
make a decision based upon tooling support and general wider community
consensus.

c) We start looking at the function names and patch filenames for
problematic language and accept patches to change those straight away.
This area is much less invasive and lower risk.

d) We create a list of the potentially problematic variable names on
the wiki so we can understand the scope and what kind of work is needed to
form a better plan, including understanding the potential migration
paths for changes.

e) We decide not to port any of these changes to the current LTS and
focus on these changes for the next project releases and future LTS due
to limited resources and for current LTS stability.

f) We aim to ensure the OE and YP TSCs are aligned on our approach to
addressing this and that the changes in OE and YP match.

This is intended as an initial response/path forward and may need to
adapt over time as circumstances dictate. It gives us a place to start
from and move forward.

Richard on behalf of:

OpenEmbedded TSC

This was also discussed and agreed by:

Yocto Project TSC
OpenEmbedded Board


Re: Pull requests on GitHub repository mirrors

Richard Purdie
 

Hi Paul,

On Mon, 2020-07-20 at 19:58 +0100, Paul Barker wrote:
I took a look at our mirrored repositories on GitHub under
https://github.com/openembedded and considered the experience of new
potential contributors and others from outside our project community.
As many projects use GitHub Pull Requests to handle contributions,
and
the "Pull Requests" feature of GitHub can't be disabled, people often
wrongly conclude that the way to contribute to the project is to open
a pull request. This is evident from the pull requests which have
been
opened over the years on
https://github.com/openembedded/openembedded-core/pulls and
https://github.com/openembedded/bitbake/pulls.

To someone unfamiliar with our project and our workflows, this can
give a bad impression. Potential contributors may be lost as their
pull requests usually don't get replied to. People trying to gauge
the
activity of the project can get a bad impression from the number of
old pull requests left open with no comments.

I propose that we set up the "Repo Lockdown" GitHub App
(https://github.com/apps/repo-lockdown) for these repositories. With
this installed we can create a config repository in
https://github.com/openembedded/.github to set a helpful comment
which
will then be automatically posted as a reply to all new pull requests on
our mirrored repositories. This comment can redirect the contributor
to the correct submission process for OpenEmbedded (i.e. the mailing
lists) and provide a link to our contribution guidelines. The pull
request would then automatically be closed.

This is a one-time setup task and looks to be an easy win - it
shouldn't take up any of our time going forward but would help
improve
how things look to folks who are only used to GitHub/GitLab web
interface based workflows. Hopefully that will bring in some new
contributors over time.

I'm happy to set this up and clean up the currently open pull
requests
on these repositories but this would require admin permissions to
https://github.com/openembedded. If the OE TSC is happy to grant me
the relevant access I'll sort this out and try to help out with any
other admin required for the GitHub mirrors in the future (time
permitting).
The OE TSC had a meeting today and discussed this; we agreed it seemed
like a reasonable idea in principle and that you could go ahead and set
it up, as it should be better than the current experience people have.
We think Philip can help with the permissions side of things on GitHub.

Cheers,

Richard
