Re: WORKDIR fetcher interaction issue


Alberto Pianon

On 2022-12-29 16:06, Richard Purdie wrote:
On Thu, 2022-12-29 at 08:50 -0600, Joshua Watt wrote:
On Thu, Dec 29, 2022 at 7:56 AM Richard Purdie
<richard.purdie@...> wrote:

I was asked about a WORKDIR/fetcher interaction problem and the bugs it
results in. I've tried to write down my thoughts.

The unpack task writes its output to WORKDIR as base.bbclass says:

fetcher = bb.fetch2.Fetch(src_uri, d)
fetcher.unpack(d.getVar('WORKDIR'))

We historically dealt with tarballs, which usually have a NAME-VERSION
directory within them, so when you extract them, the contents go into a
subdirectory which tar creates. We usually call that subdirectory "S".

When we wrote the git fetcher, we emulated this by using a "git"
directory to extract into rather than WORKDIR.

For local files, there is no subdirectory, so they go into WORKDIR.
This includes patches, which do_patch looks for in WORKDIR and applies
them from there.

What issues does this cause? If you have an existing WORKDIR and run a
build with:

SRC_URI = "file://a file://b"

then change it to:

SRC_URI = "file://a"

and rebuild the recipe, the fetch and unpack tasks will rerun and their
hashes will change but the file "b" is still in WORKDIR. Nothing in the
codebase knows that it should delete "b" from there. If you have code
which does "if exists(b)", which is common, it will break.
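As a hypothetical illustration of the kind of check that breaks (the
file name "b" is from the scenario above; the function and return
values are made up, this is not real recipe code):

import os

def do_configure(workdir):
    # "b" was dropped from SRC_URI, but a previous unpack left it in
    # WORKDIR, so on an incremental build this check still succeeds.
    if os.path.exists(os.path.join(workdir, "b")):
        return "feature-enabled"   # stale file: wrong result
    return "feature-disabled"

A clean build and an incremental build of the same recipe then
configure differently, which is exactly the inconsistency described
above.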

There are variations on this, such as a conditional append to SRC_URI
on some override, but the fundamental problem is one of cleanup when
unpack has to rerun.

The naive approach is then to think "let's just delete WORKDIR" when
running do_unpack. There is the small problem of WORKDIR/temp with logs
in. There is also the pseudo database and other state that tasks could
have left behind. Basically, whilst tempting, it doesn't work out well
in practice, particularly because unpack might rerun while not all
other tasks do.

I did also try a couple of other ideas. We could fetch into a
subdirectory, then either copy or symlink into place, depending on which
set of performance/usability challenges you want to deal with. You could
involve a manifest of the files and then move them into position, so
later you'd know which ones to delete.
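A rough sketch of the cleanup half of that idea, assuming a manifest
written by the previous unpack (the function name, manifest path and
layout are assumptions, not existing code):

import json
import os

def clean_previous_unpack(workdir, manifest_path):
    # Remove only the files the previous unpack recorded, leaving
    # WORKDIR/temp, the pseudo database and other task state alone.
    if not os.path.exists(manifest_path):
        return
    with open(manifest_path) as f:
        for relpath in json.load(f):
            path = os.path.join(workdir, relpath)
            if os.path.isfile(path):
                os.remove(path)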

Part of the problem is that in some cases recipes do:

S = "${WORKDIR}"

for simplicity. This means that you also can't wipe out S as it might
point at WORKDIR.

SPDX users have requested a JSON file of files and checksums after the
unpack and before do_patch. Such a manifest could also be useful for
attempting cleanup of an existing WORKDIR, so I suspect the solution
probably lies in that direction: probably unpacking into a subdir,
indexing it, then moving into position.
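A minimal sketch of what that indexing step could look like (the
function name, the manifest layout and the "sources-unpack" directory
are assumptions, not an agreed format):

import hashlib
import json
import os

def index_unpack_dir(unpackdir):
    # Record relpath -> sha256 for every unpacked file, before
    # do_patch modifies anything.
    manifest = {}
    for root, _, files in os.walk(unpackdir):
        for name in files:
            path = os.path.join(root, name)
            relpath = os.path.relpath(path, unpackdir)
            with open(path, "rb") as f:
                manifest[relpath] = hashlib.sha256(f.read()).hexdigest()
    return manifest

# "sources-unpack" is a hypothetical unpack subdirectory.
with open("unpack-manifest.json", "w") as f:
    json.dump(index_unpack_dir("sources-unpack"), f, indent=2)

The same manifest could then serve both the SPDX use case and cleanup
when unpack reruns.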
By "moving it into position" do you mean moving the files from the
clean subdirectory to the locations they would occupy today?
If so... I don't understand why that's strictly necessary. It seems
like almost all of the complexity of this will be to support a
use case we don't really like anyway (S = "${WORKDIR}"). Manifests are
great and all, but they cause a lot of problems if they get out of sync,
and I suspect that would happen more often than we would like, e.g.
with devtool, make config, manual editing, etc. If we can keep it
simple and not rely on external state (e.g. a manifest), I think it
will be a lot easier to maintain in the long run.
Dropping S = "${WORKDIR}" doesn't solve the problem being described
here; it just removes something which complicates the current code and
makes the problem harder to solve. Even without supporting S =
"${WORKDIR}", do_unpack still unpacks to WORKDIR, with the S directory
created by the tarball.
Cheers,
Richard
Hi Richard,

I'm not sure I'm one of the SPDX guys you intended to refer to :) but I definitely support the idea of having a separate manifest with file relpaths and checksums (and also download location) for each item in SRC_URI, before do_patch (and before everything is put together into S).
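To make that concrete, the entry for one SRC_URI item could look
something like this (field names and values are purely illustrative,
not a schema proposal):

{
  "download_location": "https://example.com/foo-1.0.tar.gz",
  "files": [
    {
      "relpath": "foo-1.0/src/main.c",
      "sha256": "..."
    }
  ]
}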

That would be a game changer for software composition analysis and IP compliance, and it would also help the creation of an SPDX license calculation tool I was asked to contribute[^1]: such a manifest would allow analyzing real upstream sources and matching them with license metadata coming from existing online resources[^2].

I understand Joshua's concerns about using such a manifest to handle source cleanup when SRC_URI is modified, and I don't have an answer for that (it is not my field). That said, IMHO the requirement from SPDX users would be a strong enough motivation to implement such manifest generation.

I would be glad to contribute if you decide to do it.

Cheers,

Alberto

[^1]: https://bugzilla.yoctoproject.org/show_bug.cgi?id=4517#c2
[^2]: Apart from the well-known ones (ClearlyDefined, Software Heritage, Debian, etc.), there's an interesting new project by osadl.org which may become a good source of trusted license metadata for Yocto as well: have a look at osselot.org.
