On Mon, 2022-01-17 at 13:50 +0100, Stefan Herbrechtsmeier wrote:
Hi Mark,
On 2022-01-14 21:09, Mark Asselstine wrote:
On 2022-01-14 14:38, Stefan Herbrechtsmeier wrote:
Hi Mark,
On 2022-01-14 17:58, Mark Asselstine wrote:
On 2022-01-14 11:35, Stefan Herbrechtsmeier wrote:
On 2022-01-14 16:22, Mark Asselstine wrote:
On 2022-01-14 10:05, Stefan Herbrechtsmeier wrote:
On 2022-01-14 15:15, Mark Asselstine via lists.openembedded.org wrote:
On 2022-01-14 07:18, Alexander Kanavin wrote:
If we do seriously embark on making npm/go better, the first
step could be to make npm/go write out a reproducible manifest
for the licenses and sub-packages, one that can be verified against
two recipe checksums during fetch and that ensures no further
network access is necessary. That alone would make it a viable
fetcher. Those manifests could contain all the information needed
for further processing (e.g. versions, what depends on what,
etc.). And yes, it's a bundled, self-contained approach, but that
matches how the rest of the world is using npm.
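For npm specifically, bitbake's shrinkwrap-based fetching already works roughly this way. As a sketch (the package name, version and shrinkwrap path are illustrative), a recipe can pin the whole dependency tree through a checked-in npm-shrinkwrap.json whose integrity hashes cover every sub-package:

    # Sketch only; ${BPN}, ${PV} and the shrinkwrap location are illustrative.
    # The checked-in npm-shrinkwrap.json records resolved URLs and integrity
    # hashes for every sub-package, so the fetch is verifiable and no further
    # network access is needed afterwards.
    SRC_URI = "npm://registry.npmjs.org;package=${BPN};version=${PV} \
               npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json"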
I can't speak to npm, but for go this was where I wanted to see
things go. Just as work was done to avoid unexpected downloads of
Python eggs, I always felt the key to improving go integration was
some form of automated SRC_URI generation. Once that was
available it could be leveraged for licensing and such.
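Such a generator would emit one pinned entry per Go module, along these lines (the module path, name, destsuffix and revision below are all placeholders):

    # Placeholder values throughout; a real generator would derive the module
    # list and exact commit ids from the project's go.mod/go.sum.
    SRC_URI += "git://github.com/spf13/cobra;name=cobra;protocol=https;destsuffix=${BP}/vendor/github.com/spf13/cobra"
    SRCREV_cobra = "b46d16b..."  # truncated placeholder commit id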
Stefan, by the way, the reason (a) is not possible is that
multiple go applications can use a shared 'library' but at
different versions (or even at different git commit ids).
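As a hypothetical illustration, two application recipes can pin the very same module at incompatible revisions, so no single shared recipe can serve both:

    # Sketch with placeholder ids.
    # app-a_git.bb:
    SRCREV_x-sys = "aaaa111..."  # golang.org/x/sys as pinned by app A
    # app-b_git.bb:
    SRCREV_x-sys = "bbbb222..."  # the same module at a different commit in app B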
Why is this simpler? The recipes need to list all the information
about their dependencies. That means you repeat a lot of code and
need to change a lot of files if you update a single dependency.
We went through this with go recipes in meta-virt. It didn't work.
You end up producing a lot of Yocto Project-specific files
containing information that is already available in other forms.
Throw in the multiple-versions issue I described before and you get
a mess.
I assume you want to use the versions the project recommends and not a
single major version. What makes Go so special that the reasons for
a single major version are irrelevant? Why don't we use multiple
versions for C/C++ projects?
Sure, go projects can opt to only use released versions of
dependencies, but this doesn't always happen. For now, and possibly
into the future, we have to accept that this is not the case.
Not using the versions recommended by a go project will result in a
degree of invalidation of their testing and validation, as well as
of their CVE tracking.
Who does this work, and for how long will they do it?
This work is done at the project level when we use the upstream
configuration verbatim. Only if we decide to deviate in the YP would
there be additional work, which is why I propose we stick to what we
know the project is using, testing with, and shipping in its releases.
[snip]
But why do OE and distributions like Debian use a single project version
instead of individual dependency versions? I think we have good reasons
to use a single version, and these reasons are independent of the
language or of any existing package manager.
I really miss a comment from an npm user and from a TSC member, because
you and Alex are proposing fundamental changes to OE.
The TSC had a meeting today and talked about this a little. I'm going to give my
opinion and the others on the TSC can chime in if they want to add or differ.
Firstly, the c) option of highlighting issues with upstreams and working with
them is important, and we need to do that. I'm taking it as a given that we
will talk to upstreams and try to work with them.
In parallel, we need solutions which work today. In many ways there
aren't perfect answers here, and a) or b) may be appropriate in different cases.
What we in OE care about in particular is that:
* recipes precisely define the thing they're building such that it is uniquely
identified
* builds from mirrors/archives work: you don't need an online upstream to make a
build (and hence it can be reproduced from an archive)
From that perspective I don't really care if the SRC_URI in a recipe is long
and ugly, as long as it precisely defines what is being built, so that it is
reproducible and offline builds/caches work.
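To make that concrete, here is a sketch of what "precisely defines" means in recipe terms (every name, URL and checksum below is a placeholder): each input is pinned by an immutable revision or checksum, so a source mirror or archive plus BB_NO_NETWORK = "1" can reproduce the build with no online upstream:

    # All values are placeholders; the point is that nothing is left floating.
    SRC_URI = "https://example.org/foo-${PV}.tar.gz;name=tarball \
               git://github.com/example/dep;name=dep;protocol=https;branch=main"
    SRC_URI[tarball.sha256sum] = "0123...placeholder...cdef"
    SRCREV_dep = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit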
Obviously individual recipes are nice to have in some cases, but in the npm
case, where that results in 5000 recipes, it simply won't work with bitbake
the way bitbake works today. We have no plans that will let us scale bitbake
to 5000 recipes, so we will need to look at the other solutions.
Using language-specific tools and language-specific fetchers is OK, and we are
seeing that with the npm shrinkwrap and cargo plugins; this is likely the
direction we'll have to move in going forward.
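On the cargo side, the crate fetcher pins each Rust dependency by exact version in the same spirit; a sketch, with illustrative crate names and versions (a generator such as cargo-bitbake can emit one such line per dependency):

    # Illustrative pins; each crate is fetched at exactly this version.
    SRC_URI += "crate://crates.io/serde/1.0.130 \
                crate://crates.io/serde_json/1.0.68"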
I appreciate there are challenges both ways, but does that give an idea of the
direction the TSC envisages?
Cheers,
Richard