
Verifiable Builds

Richard Purdie
 

Being able to verify build artefacts is a hot topic at the moment. I
think we need to promote the fact that the project can do this. I did
that a little here on the reproducible-builds list:

https://lists.reproducible-builds.org/pipermail/rb-general/2021-January/002175.html

Thought it might be of interest to others using OE. As a community we
need to raise awareness of the project's capabilities in this regard!
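For anyone wanting to try this themselves, the build system ships a
selftest exercising reproducibility: it builds target packages twice
under different conditions and diffs the results (using diffoscope for
the detailed report). A sketch, assuming an already-initialised build
environment:

```shell
# Run from an initialised build directory (e.g. after sourcing
# oe-init-build-env); builds twice and compares the package output.
oe-selftest -r reproducible
```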

Cheers,

Richard


LTS - What it is and what it is not

Richard Purdie
 

I (and the YP TSC) are seeing various interesting things happening with
the LTS. I think in general it's good for the project, it is helping
users and it is being positively received. We are starting to see
various "pressures" on how it is being used as master and dunfell
diverge naturally over time, and I want to spell these issues out so
that people can try to avoid being drawn into traps.

Most of the issues are around the fact that our LTS now does not have
the latest components in it. In particular, it doesn't have the latest
toolchain (gcc 10 isn't there), the latest kernel (no 5.10) or the
latest graphics stack (no new wayland features).

From an engineering perspective, most things are ultimately possible
with a build system, but that doesn't mean they're a good idea. In
isolation, it is possible to change or fix some of these things, but
usually only for some smaller subset of the ecosystem. It's unlikely
that gcc 10 patches are going to be accepted in all dunfell layers, for
example (and quite rightly too). What that means is that you then
fragment the ecosystem into your island, where it works, and the rest
of it, where it quite likely will not.

In some cases this may be fine, but people do need to be extremely
aware that once on your own island, you lose the benefit of
co-travellers and compatibility, and you have to do your own testing
and validation.

I think that people need to think long and hard about whether they are
maintaining an existing release of something, or actively developing
something which isn't yet in a stable lifecycle and therefore may not
be a good fit for the LTS.

In many cases older releases of a BSP or a software stack can "fall"
off onto the LTS, but the LTS isn't a way to keep developing the latest
and greatest free from the interruption of upstream changes. This means
the LTS isn't a way to stop testing/developing against master.

We probably do need to develop some documentation about what the LTS
is and isn't and how best to use it, so any help with doing that would
be much appreciated. Please do help us educate people that whilst the
LTS is a good thing, it doesn't solve every problem, and help them spot
the signs that it may not be the right solution!

Cheers,

Richard


OpenEmbedded Happy Hour January 27 5pm/1700 UTC

Denys Dmytriyenko
 

Hi,

Just a reminder about our upcoming OpenEmbedded Happy Hour on January 27 for
Europe/US timezones @ 1700/5pm UTC (12pm ET):

https://www.openembedded.org/wiki/Calendar
https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenEmbedded+Happy+Hour+January+27&iso=20210127T17

--
Regards,
Denys Dmytriyenko <denis@...>
PGP: 0x420902729A92C964 - https://denix.org/0x420902729A92C964
Fingerprint: 25FC E4A5 8A72 2F69 1186 6D76 4209 0272 9A92 C964


Canceled: OpenEmbedded Happy Hour December 30

Denys Dmytriyenko
 

All,

FYI, our OpenEmbedded Happy Hour is being canceled for December 30 due to the
Holiday season. We will resume the normal schedule in January.

Thank you and Happy Holidays!

--
Regards,
Denys Dmytriyenko <denis@...>
PGP: 0x420902729A92C964 - https://denix.org/0x420902729A92C964
Fingerprint: 25FC E4A5 8A72 2F69 1186 6D76 4209 0272 9A92 C964


Re: [OE-core] OpenEmbedded Happy Hour November 25 9pm/2100 UTC

Denys Dmytriyenko
 

Just a reminder, Happy Hour is in 1 hour.

9pm UTC, 4pm EST (not EDT) or use time conversion link below for your
location. See you there!

--
Denys

On Thu, Nov 19, 2020 at 01:12:35PM -0500, Denys Dmytriyenko wrote:
Hi,

Just a reminder about our upcoming OpenEmbedded Happy Hour on November 25 for
Oceania/Asia timezones @ 2100/9pm UTC (4pm EDT):

https://www.openembedded.org/wiki/Calendar
https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenEmbedded+Happy+Hour+November+25&iso=20201125T21

--
Denys


OpenEmbedded Happy Hour November 25 9pm/2100 UTC

Denys Dmytriyenko
 

Hi,

Just a reminder about our upcoming OpenEmbedded Happy Hour on November 25 for
Oceania/Asia timezones @ 2100/9pm UTC (4pm EDT):

https://www.openembedded.org/wiki/Calendar
https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenEmbedded+Happy+Hour+November+25&iso=20201125T21

--
Denys


OpenEmbedded Happy Hour September 30 9pm/2100 UTC

Denys Dmytriyenko
 

Just a reminder about our upcoming OpenEmbedded Happy Hour on September 30 for
Oceania/Asia timezones @ 2100/9pm UTC (5pm EDT):

https://www.openembedded.org/wiki/Calendar
https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenEmbedded+Happy+Hour+September+30&iso=20200930T21

--
Denys


Re: promoting Rust to first class citizen in oe-core

Randy MacLeod
 

Added Jan from Pengutronix since Richard said he might be interested
in the rust in oe-core work.

See below if you want to kick the tires using my poky-contrib branch.

On 2020-09-14 12:24 p.m., Randy MacLeod wrote:
My filters hide this email in a folder even though I was CCed. Updated now.

On 2020-09-12 10:19 p.m., Khem Raj wrote:
On Sat, Sep 12, 2020 at 12:07 PM Alexander Kanavin
<alex.kanavin@...> wrote:
On Thu, 10 Sep 2020 at 22:24, Richard Purdie <richard.purdie@...> wrote:
This has been talked about a lot but there is work to be done to get
this into core. Not many people seem willing to step up and do that
work so progress has been slow.

The hardest part may be getting the crate fetcher into bitbake in an
acceptable form.
I was stuck on the librsvg build error for a while so
I haven't yet looked into how the cargo fetcher works.

The cargo-bitbake tool can generate bitbake recipes from a Cargo.toml file:

   https://github.com/meta-rust/cargo-bitbake

It certainly lists all the crates in the generated recipe's SRC_URI
and it looks like they are fetched during do_fetch but I haven't
done the fetch and then disabled the network to be sure yet.
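For reference, the recipes it emits look roughly like the sketch below;
the crate names, versions and repository URL are invented for
illustration, not taken from a real generated recipe. Each crate://
entry is resolved by the crate fetcher during do_fetch:

```bitbake
# Hypothetical sketch of a cargo-bitbake style recipe; names and
# versions are illustrative only.
inherit cargo

SRC_URI = " \
    git://github.com/example/myapp.git;protocol=https;branch=main \
    crate://crates.io/libc/0.2.77 \
    crate://crates.io/serde/1.0.116 \
"
```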

I'm all in favour too, as long as it really is sorted to be a first
class citizen.
That's why I specifically CCd Randy: he's done some work towards this, so I was hoping for some kind of current update or maybe remaining items where help is needed.
Rust (and to a certain extent Go) has somewhat different dynamics: the
language tools are pretty much inherently cross-compilers. They provide
easy installers and updaters, they release very often, and they have
their own package management systems. The programs are quite standalone
(like static programs), so there is not much they need from the system.
End-users update compilers very often; more often than not they are
using the latest compilers for the above reasons. These are real
concerns when you consider timed release schedules like Yocto's, and
now we have the LTS too.
Yes, I've been mulling that over as well.
People could do app development using their distro's rust or the
'rustup' toolchain by specifying the target:

$ cargo build --target TRIPLE

and then for production releases, use bitbake.
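Concretely, that developer workflow might look like the following
sketch; the aarch64 triple is just an example target:

```shell
# Day-to-day app development with rustup's prebuilt cross toolchain;
# the target triple here is only an example.
rustup target add aarch64-unknown-linux-gnu
cargo build --release --target aarch64-unknown-linux-gnu

# ...and for production releases, build the same crate through bitbake.
```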
I agree with Richard's sentiment that we need robust fetcher
integration as a starting point, and perhaps full knowledge of
dependency management, to offer a compelling solution. I would like to
see it used once it is in core, and we need answers for the above
topics; currently meta-rust, for example, follows a release cadence of
its own which is a good fit for developers, maybe not as much for
release engineering.


After months of neglect, I did update the merge of
meta-rust to oe-core over the long weekend. Of course now
different things are broken than before!

Here's the oneline summary:

d3d419e11b (HEAD -> rust-wip-sept-5) librsvg: update to 2.49.5
dd921fee61 librsvg: Update from 2.40.20 to 2.46.4
ea484c5069 ripgrep: add temporarily
6a00ee0909 Add rust 1.46.0
7332db1316 rust: use PARALLEL_MAKE instead of BB_NUMBER_THREADS
8fab4132ee rust.inc: whitelist BB_NUMBER_THREADS in do_compile
dbf873714f Bump to Rust version 1.43
b40c54e810 Revert "cargo: fix progress output"
d223ab9b58 cargo: fix progress output
06e16e3475 rust.inc: cut build time in half
71dd219d97 rust.inc: run bootstrap.py in parallel
9e92ceda37 rust.inc: make max-atomic-width an integer
60ab501447 rust-native shouldn't depend on TARGET variables
afed138555 rustfmt: Upgrade to 1.4.2
50cde902c9 Avoid extra sh process from shell wrapper
0851bb0f1d Update 0001-Disable-http2.patch for cargo 1.41.0
787d064ca1 Update to Rust 1.41.0
756b950b5e rust: add a language demo image to test reproducibility
f543ee0909 cargo: Refresh http2 disable patch
e6ea4ca57c Update 0001-Disable-http2.patch for Cargo shipped with Rust 1.40.0
4189c968df Update to Rust and Cargo 1.40.0.
18af1ae487 rust: Use Python3 native for build
76de2d7175 rust: Improve TUNE_FEATURE parsing
043446750a Update to Rust and Cargo 1.39.0
5a962934f4 rust: mv README.md to recipes-devtools/rust/README-rust.md
b7f42be3f3 meta-rust: move code to oe-core from meta-rust layer
6d4d6b888f Add libgit2, libssh2 from meta-oe for rust
bdca4796ff (origin/master, origin/HEAD, master) weston-init: Enable RDP screen share


I'll get things in somewhat better order and push what I have to poky-contrib
or wherever so that other people can join in the fun or see what's going on.


Well, it's not in 'better order' but it builds and runs rust-hello-world and ripgrep.

I've pushed what I have to poky-contrib in case anyone wants to debug
the librsvg build over the weekend! :)

 * [new branch]              rmacleod/rust-wip-sept-5 -> rmacleod/rust-wip-sept-5

http://git.yoctoproject.org/cgit/cgit.cgi/poky-contrib/log/?h=rmacleod/rust-wip-sept-5

../Randy

The latest error when doing 'bitbake librsvg' on the newer librsvg is:

ERROR: librsvg-2.49.5-r0 do_compile: Execution of '/ala-lpggp31/rmacleod/src/distro/yocto/b/rust-sep-5/tmp-glibc/work/core2-64-oe-linux/librsvg/2.49.5-r0/temp/run.do_co:
error: more than one source location specified for `source.crates-io`
WARNING: exit code 101 from a shell command.

but bitbake rust-hello-world  or bitbake ripgrep works fine.
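A guess at what that cargo error means (an assumption, not a confirmed
diagnosis): cargo rejects a merged configuration in which
`source.crates-io` is redirected to more than one replacement, e.g.
once by the recipe's generated .cargo/config and once by a vendoring
setup. Sketched as:

```toml
# Illustrative only: if two config files cargo merges both contain a
# [source.crates-io] table with different replace-with targets, cargo
# fails with "more than one source location specified".
[source.crates-io]
replace-with = "bitbake-crates"   # hypothetical name from one config

# (a second merged config redirecting crates-io again, e.g. with
#  replace-with = "vendored-sources", triggers the conflict)
```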


After I (we?) get librsvg to build again, I believe there were
problems with gstreamer as well.

../Randy

Alex


-- 
# Randy MacLeod
# Wind River Linux






Re: Inclusive Language summary from the OE TSC

Richard Purdie
 

On Wed, 2020-09-16 at 14:13 -0700, akuster808 wrote:

On 7/27/20 6:09 AM, Richard Purdie wrote:
The OE TSC recognises there are issues related to inclusive language
which the project needs to address and that we need a plan for doing so
moving forward. It is unclear how much change the project members wish
to see or can cope with at this point in time, nor how much help is
available to make changes. It is noted that whilst steps were proposed
in the email thread discussion, those have as yet not been acted upon.

There are some steps the TSC believes the project can take:

a) Going forward all new code and new variables should use inclusive
language. We will rely on the usual peer review process of changes to
help catch issues and request the communities help in doing so but this
becomes standard policy with immediate effect.

b) We defer any potential "master" branch name change until upstream
git's direction becomes clearer. This is one of the most invasive
potential changes and if we do change it, we need to get it right and
make a decision based upon tooling support and general wider community
consensus.

c) We start looking at the function names and patch filenames for
problematic language and accept patches to change those straight away.
This area is much less invasive and lower risk.

d) We create a list of the potentially problematic variable names on
the wiki so we can understand scope and what kinds of work is needed to
form a better plan, including understanding the potential migration
paths for changes.
Where do we stand on a plan? I noticed a patch already got applied to
change names. The name change in that patch, is that the "wording"
change we should adopt?
Was there a patch applied?

Cheers,

Richard


Re: Inclusive Language summary from the OE TSC

Armin Kuster
 

On 7/27/20 6:09 AM, Richard Purdie wrote:
The OE TSC recognises there are issues related to inclusive language
which the project needs to address and that we need a plan for doing so
moving forward. It is unclear how much change the project members wish
to see or can cope with at this point in time, nor how much help is
available to make changes. It is noted that whilst steps were proposed
in the email thread discussion, those have as yet not been acted upon.

There are some steps the TSC believes the project can take:

a) Going forward all new code and new variables should use inclusive
language. We will rely on the usual peer review process of changes to
help catch issues and request the communities help in doing so but this
becomes standard policy with immediate effect.

b) We defer any potential "master" branch name change until upstream
git's direction becomes clearer. This is one of the most invasive
potential changes and if we do change it, we need to get it right and
make a decision based upon tooling support and general wider community
consensus.

c) We start looking at the function names and patch filenames for
problematic language and accept patches to change those straight away.
This area is much less invasive and lower risk.

d) We create a list of the potentially problematic variable names on
the wiki so we can understand scope and what kinds of work is needed to
form a better plan, including understanding the potential migration
paths for changes.
Where do we stand on a plan? I noticed a patch already got applied to
change names. The name change in that patch, is that the "wording"
change we should adopt?

-armin

e) We decide not to port any of these changes to the current LTS and
focus on these changes for the next project releases and future LTS due
to limited resources and for current LTS stability.

f) We aim to ensure the OE and YP TSCs are aligned on our approach to
address this and that changes in OE and YP match.

This is intended as an initial response/path forward and may need to
adapt over time as circumstances dictate. It gives us a place to start
from and move forward.

Richard on behalf of:

OpenEmbedded TSC

This was also discussed and agreed by:

Yocto Project TSC
OpenEmbedded Board



Re: promoting Rust to first class citizen in oe-core

Randy MacLeod
 

My filters hide this email in a folder even though I was CCed. Updated now.

On 2020-09-12 10:19 p.m., Khem Raj wrote:
On Sat, Sep 12, 2020 at 12:07 PM Alexander Kanavin
<alex.kanavin@...> wrote:
On Thu, 10 Sep 2020 at 22:24, Richard Purdie <richard.purdie@...> wrote:
This has been talked about a lot but there is work to be done to get
this into core. Not many people seem willing to step up and do that
work so progress has been slow.

The hardest part may be getting the crate fetcher into bitbake in an
acceptable form.
I was stuck on the librsvg build error for a while so
I haven't yet looked into how the cargo fetcher works.

The cargo-bitbake tool can generate bitbake recipes from a Cargo.toml file:

   https://github.com/meta-rust/cargo-bitbake

It certainly lists all the crates in the generated recipe's SRC_URI
and it looks like they are fetched during do_fetch but I haven't
done the fetch and then disabled the network to be sure yet.


I'm all in favour too, as long as it really is sorted to be a first
class citizen.

That's why I specifically CCd Randy: he's done some work towards this, so I was hoping for some kind of current update or maybe remaining items where help is needed.

Rust (and to a certain extent Go) has somewhat different dynamics: the
language tools are pretty much inherently cross-compilers. They provide
easy installers and updaters, they release very often, and they have
their own package management systems. The programs are quite standalone
(like static programs), so there is not much they need from the system.
End-users update compilers very often; more often than not they are
using the latest compilers for the above reasons. These are real
concerns when you consider timed release schedules like Yocto's, and
now we have the LTS too.
Yes, I've been mulling that over as well.
People could do app development using their distro's rust or the
'rustup' toolchain by specifying the target:

$ cargo build --target TRIPLE

and then for production releases, use bitbake.

I agree with Richard's sentiment that we need robust fetcher
integration as a starting point, and perhaps full knowledge of
dependency management, to offer a compelling solution. I would like to
see it used once it is in core, and we need answers for the above
topics; currently meta-rust, for example, follows a release cadence of
its own which is a good fit for developers, maybe not as much for
release engineering.


After months of neglect, I did update the merge of
meta-rust to oe-core over the long weekend. Of course now
different things are broken than before!

Here's the oneline summary:

d3d419e11b (HEAD -> rust-wip-sept-5) librsvg: update to 2.49.5
dd921fee61 librsvg: Update from 2.40.20 to 2.46.4
ea484c5069 ripgrep: add temporarily
6a00ee0909 Add rust 1.46.0
7332db1316 rust: use PARALLEL_MAKE instead of BB_NUMBER_THREADS
8fab4132ee rust.inc: whitelist BB_NUMBER_THREADS in do_compile
dbf873714f Bump to Rust version 1.43
b40c54e810 Revert "cargo: fix progress output"
d223ab9b58 cargo: fix progress output
06e16e3475 rust.inc: cut build time in half
71dd219d97 rust.inc: run bootstrap.py in parallel
9e92ceda37 rust.inc: make max-atomic-width an integer
60ab501447 rust-native shouldn't depend on TARGET variables
afed138555 rustfmt: Upgrade to 1.4.2
50cde902c9 Avoid extra sh process from shell wrapper
0851bb0f1d Update 0001-Disable-http2.patch for cargo 1.41.0
787d064ca1 Update to Rust 1.41.0
756b950b5e rust: add a language demo image to test reproducibility
f543ee0909 cargo: Refresh http2 disable patch
e6ea4ca57c Update 0001-Disable-http2.patch for Cargo shipped with Rust 1.40.0
4189c968df Update to Rust and Cargo 1.40.0.
18af1ae487 rust: Use Python3 native for build
76de2d7175 rust: Improve TUNE_FEATURE parsing
043446750a Update to Rust and Cargo 1.39.0
5a962934f4 rust: mv README.md to recipes-devtools/rust/README-rust.md
b7f42be3f3 meta-rust: move code to oe-core from meta-rust layer
6d4d6b888f Add libgit2, libssh2 from meta-oe for rust
bdca4796ff (origin/master, origin/HEAD, master) weston-init: Enable RDP screen share


I'll get things in somewhat better order and push what I have to poky-contrib
or wherever so that other people can join in the fun or see what's going on.

The latest error when doing 'bitbake librsvg' on the newer librsvg is:

ERROR: librsvg-2.49.5-r0 do_compile: Execution of '/ala-lpggp31/rmacleod/src/distro/yocto/b/rust-sep-5/tmp-glibc/work/core2-64-oe-linux/librsvg/2.49.5-r0/temp/run.do_co:
error: more than one source location specified for `source.crates-io`
WARNING: exit code 101 from a shell command.

but bitbake rust-hello-world  or bitbake ripgrep works fine.


After I (we?) get librsvg to build again, I believe there were
problems with gstreamer as well.

../Randy


Alex


-- 
# Randy MacLeod
# Wind River Linux


Re: promoting Rust to first class citizen in oe-core

Khem Raj
 

On Sat, Sep 12, 2020 at 12:07 PM Alexander Kanavin
<alex.kanavin@...> wrote:

On Thu, 10 Sep 2020 at 22:24, Richard Purdie <richard.purdie@...> wrote:

This has been talked about a lot but there is work to be done to get
this into core. Not many people seem willing to step up and do that
work so progress has been slow.

The hardest part may be getting the crate fetcher into bitbake in an
acceptable form.

I'm all in favour too, as long as it really is sorted to be a first
class citizen.

That's why I specifically CCd Randy: he's done some work towards this, so I was hoping for some kind of current update or maybe remaining items where help is needed.
Rust (and to a certain extent Go) has somewhat different dynamics: the
language tools are pretty much inherently cross-compilers. They provide
easy installers and updaters, they release very often, and they have
their own package management systems. The programs are quite standalone
(like static programs), so there is not much they need from the system.
End-users update compilers very often; more often than not they are
using the latest compilers for the above reasons. These are real
concerns when you consider timed release schedules like Yocto's, and
now we have the LTS too.

I agree with Richard's sentiment that we need robust fetcher
integration as a starting point, and perhaps full knowledge of
dependency management, to offer a compelling solution. I would like to
see it used once it is in core, and we need answers for the above
topics; currently meta-rust, for example, follows a release cadence of
its own which is a good fit for developers, maybe not as much for
release engineering.

Alex


Re: promoting Rust to first class citizen in oe-core

Alexander Kanavin
 

On Thu, 10 Sep 2020 at 22:24, Richard Purdie <richard.purdie@...> wrote:
This has been talked about a lot but there is work to be done to get
this into core. Not many people seem willing to step up and do that
work so progress has been slow.

The hardest part may be getting the crate fetcher into bitbake in an
acceptable form.

I'm all in favour too, as long as it really is sorted to be a first
class citizen.

That's why I specifically CCd Randy: he's done some work towards this, so I was hoping for some kind of current update or maybe remaining items where help is needed.

Alex


Re: promoting Rust to first class citizen in oe-core

Richard Purdie
 

On Thu, 2020-09-10 at 22:03 +0200, Andreas Müller wrote:
On Thu, Sep 10, 2020 at 9:52 PM Alexander Kanavin
<alex.kanavin@...> wrote:
Hello all,

I just read this article, called "Supporting Linux kernel
development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto
project should prepare for it.

Thoughts?
As a GNOME in Yocto 'enthusiast' there is not much to say but: yes,
yes, yes. We have all these rust blockers like librsvg and mozjs ->
gnome-shell/mutter.
This has been talked about a lot but there is work to be done to get
this into core. Not many people seem willing to step up and do that
work so progress has been slow.

The hardest part may be getting the crate fetcher into bitbake in an
acceptable form.

I'm all in favour too, as long as it really is sorted to be a first
class citizen.

Cheers,

Richard


Re: promoting Rust to first class citizen in oe-core

Andreas Müller
 

On Thu, Sep 10, 2020 at 9:52 PM Alexander Kanavin
<alex.kanavin@...> wrote:

Hello all,

I just read this article, called "Supporting Linux kernel development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto project should prepare for it.

Thoughts?
As a GNOME in Yocto 'enthusiast' there is not much to say but: yes,
yes, yes. We have all these rust blockers like librsvg and mozjs ->
gnome-shell/mutter.

Andreas


Re: promoting Rust to first class citizen in oe-core

Otavio Salvador
 

On Thu, 10 Sep 2020 at 16:51, Alexander Kanavin
<alex.kanavin@...> wrote:
I just read this article, called "Supporting Linux kernel development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto project should prepare for it.

Thoughts?
I support this for sure. I've been using Rust with the Yocto Project
for a while now and it would fit well in OE-Core.


--
Otavio Salvador O.S. Systems
http://www.ossystems.com.br http://code.ossystems.com.br
Mobile: +55 (53) 9 9981-7854 Mobile: +1 (347) 903-9750


promoting Rust to first class citizen in oe-core

Alexander Kanavin
 

Hello all,

I just read this article, called "Supporting Linux kernel development in Rust"
https://lwn.net/Articles/829858/
and it looks like the future is set, and particularly the Yocto project should prepare for it.

Thoughts?

Alex


Re: Stable release testing - notes from the autobuilder perspective

Tom Rini
 

On Mon, Sep 07, 2020 at 10:30:20PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 17:19 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 10:03:36PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
The autobuilder is set up for speed so there aren't VMs involved, it's
'baremetal'. Containers would be possible but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.
Which issues do you run in to with qemu networking? I honestly don't
know if the U-Boot networking tests we run via qemu under Docker are
more or less complex than what you're running in to.
It's the tun/tap device requirement that tends to be the pain point.
Being able to ssh from the host OS into the qemu target image is a
central requirement of oeqa. Everyone tells me it should use
port mapping and slirp instead to avoid the privs problems and the
container issues, which is great but not implemented.
Ah, OK. Yes, we're using "user" networking not tap.

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races, and adding containers
into the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.
The reason I was thinking about containers is that it should remove some
of what you have to face.
Removes some, yes, but creates a whole set of other issues.

Paul may or may not want to chime in on how
workable it ended up being for a particular customer, but leveraging
CROPS to setup build environment of a supported host and then running it
on whatever the available build hardware is, was good. It sounds like
part of the autobuilder problem is that it has to be a specific set of
hand-crafted machines and that in turn feels like we've lost the
thread, so to speak,
The machines are in fact pretty much off the shelf distro installs so
not hand crafted.
Sorry, what I meant by hand-crafted is that for it to work for older
installs, you have to have this particular dance to provide various host
tools, that weren't required at the time.

about having a reproducible build system. 6 hours
even beats my U-Boot world before/after times, so I do get the dread of
"now it might take 5% longer", which is very real extra wallclock time.
But if it means more builders could be available as they're easy to spin
up, that could bring the overall time down.
Here we get onto infrastructure as we're not talking containers on our
workers but on general cloud systems which is a different proposition.

We *heavily* rely on the fast network fabric between the workers and
our nas for sstate (NFS mounted). This is where we get a big chunk of
speed. So "easy to spin up" isn't actually the case for different
reasons.

So this plan is the best practical approach we can come up with to
allow us to be able to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this, I just assume people kind of know this, sorry.
Since I don't want to put even more on your plate, what kind of is the
reasonable test to try here? Or is it hard to say since it's not just
"MACHINE=qemux86-64 bitbake world" but also "run this and that and
something else" ?
It's quite simple:

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testimage

and

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testsdkext

are the two to start with. If those work, the other "nasty" ones are
oe-selftest and the toolchain test suites. Also need to check kvm is
working.

We have gone around in circles on this several times as you're not the
first to suggest it :/.
Thanks for explaining it again. I'll go off and do some tests.

--
Tom


Re: Stable release testing - notes from the autobuilder perspective

Richard Purdie
 

On Mon, 2020-09-07 at 17:19 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 10:03:36PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
The autobuilder is set up for speed so there aren't VMs involved, it's
'baremetal'. Containers would be possible but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.
Which issues do you run in to with qemu networking? I honestly don't
know if the U-Boot networking tests we run via qemu under Docker are
more or less complex than what you're running in to.
It's the tun/tap device requirement that tends to be the pain point.
Being able to ssh from the host OS into the qemu target image is a
central requirement of oeqa. Everyone tells me it should use
port mapping and slirp instead to avoid the privs problems and the
container issues, which is great but not implemented.
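For illustration, the slirp/port-mapping approach being suggested would
look something like the sketch below; the image filename and port
numbers are placeholders:

```shell
# User-mode ("slirp") networking with a host port forward: the host
# can ssh into the guest without tun/tap devices or extra privileges.
qemu-system-x86_64 \
    -m 512 -nographic \
    -drive file=core-image-sato-sdk.ext4,format=raw \
    -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22 \
    -device virtio-net-pci,netdev=net0

# Then, from the host:
ssh -p 2222 root@127.0.0.1
```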

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races, and adding containers
into the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.
The reason I was thinking about containers is that it should remove some
of what you have to face.
Removes some, yes, but creates a whole set of other issues.

Paul may or may not want to chime in on how
workable it ended up being for a particular customer, but leveraging
CROPS to setup build environment of a supported host and then running it
on whatever the available build hardware is, was good. It sounds like
part of the autobuilder problem is that it has to be a specific set of
hand-crafted machines and that in turn feels like we've lost the
thread, so to speak,
The machines are in fact pretty much off the shelf distro installs so
not hand crafted.

about having a reproducible build system. 6 hours
even beats my U-Boot world before/after times, so I do get the dread of
"now it might take 5% longer", which is very real extra wallclock time.
But if it means more builders could be available as they're easy to spin
up, that could bring the overall time down.
Here we get onto infrastructure as we're not talking containers on our
workers but on general cloud systems which is a different proposition.

We *heavily* rely on the fast network fabric between the workers and
our nas for sstate (NFS mounted). This is where we get a big chunk of
speed. So "easy to spin up" isn't actually the case for different
reasons.

So this plan is the best practical approach we can come up with to
allow us to be able to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this, I just assume people kind of know this, sorry.
Since I don't want to put even more on your plate, what kind of is the
reasonable test to try here? Or is it hard to say since it's not just
"MACHINE=qemux86-64 bitbake world" but also "run this and that and
something else" ?
It's quite simple:

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testimage

and

MACHINE=qemux86-64 bitbake core-image-sato-sdk -c testsdkext

are the two to start with. If those work, the other "nasty" ones are
oe-selftest and the toolchain test suites. We also need to check that
kvm is working.
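Pulling those together, a minimal smoke-test sequence might look like the sketch below. This assumes an already-initialised OE build directory (oe-init-build-env sourced, so bitbake and oe-selftest are on PATH) and that you want the "nasty" extras too:

```shell
#!/bin/sh
# Hedged sketch of the test sequence above; assumes an initialised
# OE build environment.
set -e
export MACHINE=qemux86-64

bitbake core-image-sato-sdk -c testimage    # boot + runtime tests
bitbake core-image-sato-sdk -c testsdkext   # extensible SDK tests

oe-selftest -a                              # the full selftest run

# Quick kvm sanity check before relying on accelerated qemu:
[ -r /dev/kvm ] && [ -w /dev/kvm ] || echo "warning: kvm unavailable" >&2
```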

We have gone around in circles on this several times as you're not the
first to suggest it :/.

Cheers,

Richard


Re: Stable release testing - notes from the autobuilder perspective

Tom Rini
 

On Mon, Sep 07, 2020 at 10:03:36PM +0100, Richard Purdie wrote:
On Mon, 2020-09-07 at 16:55 -0400, Tom Rini wrote:
On Mon, Sep 07, 2020 at 02:59:41PM -0300, Otavio Salvador wrote:
Hello all,

Em seg., 7 de set. de 2020 às 13:14, Richard Purdie
<richard.purdie@...> escreveu:
...
Any thoughts from anyone on this?
I second this; at O.S. Systems we've been using Docker
containers to make maintenance of old releases easier. It would be
great if we could alleviate this and reduce its use as much as
possible.

CI builder maintenance is indeed a time-consuming task; the easier
it gets, the easier it is to convince people to set up builders for
their own use, and in the end this helps to improve the quality of
submitted patches and reduces the maintenance effort as well.
Excuse what may be a dumb question, but why are we not just building
pyro, for example, in an Ubuntu 16.04 or centos7 (or anything else with
official containers available) ? Is the performance hit too much,
even with good volume management? And extend that for other branches
of course. But as we look at why people care about such old releases
(or, supporting a current release into the future) it seems like "our
build environment is a container / VM so we can support this on
modern HW" pops up.
The autobuilder is set up for speed so there aren't VMs involved; it's
'baremetal'. Containers would be possible, but at that point the kernel
isn't the distro kernel and you have permission issues with the qemu
networking, for example.
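To make the permission point concrete: tap networking for qemu needs /dev/net/tun and the NET_ADMIN capability inside the container, and kvm acceleration needs /dev/kvm passed through. A hedged sketch using standard Docker flags (the image name is hypothetical, not a published container):

```shell
# Sketch only: the devices/capabilities a container would need for
# kvm-accelerated qemu with tap networking. "builder-image" is a
# hypothetical image name.
docker run --rm -it \
    --device /dev/kvm \
    --device /dev/net/tun \
    --cap-add NET_ADMIN \
    builder-image
```

Without these, qemu inside the container typically falls back to slirp (user-mode) networking or loses kvm, which is one class of the issues being described.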
Which issues do you run into with qemu networking? I honestly don't
know if the U-Boot networking tests we run via qemu under Docker are
more or less complex than what you're running in to.

Speed is extremely important as we have about a 6 hour build test time
but a *massive* test range (e.g. all the gcc/glibc test suites on each
arch, build+boot test all the arches under qemu for sysvinit+systemd,
oe-selftest on each distro). I am already tearing my hair out trying to
maintain what we have and deal with the races; adding containers into
the mix simply isn't something I can face.

We do have older distros in the cluster for a time, e.g. centos7 is
still there although we've replaced the OS on some of the original
centos7 workers as the hardware had disk failures so there aren't as
many of them as there were. Centos7 gives us problems trying to build
master.
The reason I was thinking about containers is that it should remove some
of what you have to face. Paul may or may not want to chime in on how
workable it ended up being for a particular customer, but leveraging
CROPS to set up a build environment on a supported host and then running
it on whatever build hardware is available was good. It sounds like
part of the autobuilder problem is that it has to be a specific set of
hand-crafted machines and that in turn feels like we've lost the
thread, so to speak, about having a reproducible build system. 6 hours
even beats my U-Boot world before/after times, so I do get the dread of
"now it might take 5% longer, which is a very real more wallclock time.
But if it means more builders could be available as they're easy to spin
up, that could bring the overall time down.

So this plan is the best practical approach we can come up with to
allow us to be able to build older releases yet not change the
autobuilders too much and cause new sets of problems. I should have
mentioned this; I just assumed people kind of knew this, sorry.
Since I don't want to put even more on your plate, what would be a
reasonable test to try here? Or is it hard to say, since it's not just
"MACHINE=qemux86-64 bitbake world" but also "run this and that and
something else" ?

--
Tom
