Historically we dealt with tarballs, which usually have a NAME-VERSION directory within them, so when you extract them, the contents go into a subdirectory which tar creates. We usually call that subdirectory "S".
When we wrote the git fetcher, we emulated this by unpacking into a "git" directory rather than directly into WORKDIR.
For local files, there is no subdirectory, so they go into WORKDIR. This includes patches, which do_patch looks for in WORKDIR and applies from there.
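For example, with a recipe along these lines (names purely illustrative):

    SRC_URI = "https://example.com/foo-1.0.tar.gz file://fix.patch"
    S = "${WORKDIR}/foo-1.0"

the tarball contents unpack into ${WORKDIR}/foo-1.0, while fix.patch lands directly in ${WORKDIR}, which is where do_patch picks it up.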
What issues does this cause? If you have an existing WORKDIR and run a build with:
SRC_URI = "file://a file://b"
then change it to:
SRC_URI = "file://a"
and rebuild the recipe, the fetch and unpack tasks will rerun and their hashes will change, but the file "b" is still left in WORKDIR. Nothing in the codebase knows that it should delete "b" from there. If you have code which does "if exists(b)", which is common, it will break.
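As a concrete illustration, a fragment like this (hypothetical, but representative of the pattern) keeps behaving as if "b" were still in SRC_URI:

    do_configure:prepend() {
        # A stale "b" left over from the earlier SRC_URI still satisfies
        # this test, even though the recipe no longer fetches it.
        if [ -e "${WORKDIR}/b" ]; then
            cp "${WORKDIR}/b" "${S}/"
        fi
    }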
There are variations on this, such as a conditional append to SRC_URI on some override, but the fundamental problem is one of cleanup when unpack has to rerun.
The naive approach is then to think "let's just delete WORKDIR" when running do_unpack. There is the small problem of WORKDIR/temp, which holds the task logs. There is also the pseudo database and other state which tasks may have written. Basically, whilst tempting, it doesn't work out well in practice, particularly since unpack might rerun when not all other tasks do.
I did also try a couple of other ideas. We could fetch into a subdirectory, then either copy or symlink into place, depending on which set of performance/usability challenges you want to deal with. You could also build a manifest of the files and then move them into position, so later you'd know which ones to delete, as sketched below.
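As a rough sketch of that manifest idea in plain Python (the helper names are invented; nothing like this exists in the fetcher today):

    import json
    import os

    def record_unpack_manifest(unpackdir, manifest_path):
        # Walk the freshly unpacked tree and record every file it contains,
        # so a later unpack knows exactly what a previous one created.
        files = []
        for root, dirs, names in os.walk(unpackdir):
            for name in names:
                files.append(os.path.relpath(os.path.join(root, name), unpackdir))
        with open(manifest_path, "w") as f:
            json.dump(files, f)

    def clean_previous_unpack(workdir, manifest_path):
        # Remove only what the previous unpack placed in WORKDIR, leaving
        # WORKDIR/temp, the pseudo database and other task output alone.
        # (Directories left empty would need similar bookkeeping.)
        if not os.path.exists(manifest_path):
            return
        with open(manifest_path) as f:
            for relpath in json.load(f):
                path = os.path.join(workdir, relpath)
                if os.path.lexists(path):
                    os.remove(path)

The weakness, as noted further down in this thread, is that the manifest is external state which can drift from what is actually on disk.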
Part of the problem is that in some cases recipes do:
S = "${WORKDIR}"
for simplicity. This means that you also can't wipe out S, as it might point at WORKDIR.
SPDX users have requested a JSON file of filenames and checksums taken after do_unpack and before do_patch. Such a manifest could also be useful for attempting cleanup of an existing WORKDIR, so I suspect the solution lies in that direction: unpack into a subdirectory, index it, then move it into position.
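To illustrate the shape such a file might take (the format here is invented; the real design is undecided):

    import hashlib
    import json
    import os

    def unpack_checksums(unpackdir):
        # Map each unpacked file to its sha256, captured after do_unpack
        # and before do_patch modifies the tree.
        manifest = {}
        for root, dirs, names in os.walk(unpackdir):
            for name in names:
                path = os.path.join(root, name)
                if os.path.islink(path) or not os.path.isfile(path):
                    continue  # skip symlinks and special files for simplicity
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                manifest[os.path.relpath(path, unpackdir)] = digest
        return manifest

    with open("unpack-manifest.json", "w") as f:
        json.dump(unpack_checksums("/path/to/unpackdir"), f, indent=2)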
By "moving it into position" do you mean moving the files from the clean subdirectory to the locations they would occupy today?
If so... I don't understand why that's strictly necessary. It seems like almost all of the complexity of this will be to support a use-case we don't really like anyway (S = "${WORKDIR}"). Manifests are great and all, but they cause a lot of problems if they get out of sync, and I suspect that would happen more often than we would like, e.g. with devtool, make config, manual editing, etc. If we can keep it simple and not rely on external state (e.g. a manifest), I think it will be a lot easier to maintain in the long run.
Dropping S = "${WORKDIR}" doesn't solve the problem being described here; it just removes something which complicates the current code and makes that problem harder to solve. Even without supporting S = "${WORKDIR}", do_unpack still unpacks into WORKDIR, with the S directory created by the tarball.