Re: Adding more information to the SBOM


Richard Purdie
 

On Fri, 2022-09-16 at 17:18 +0200, Alberto Pianon wrote:
> On 2022-09-15 14:16 Richard Purdie wrote:

>> For the source issues above it basically comes down to how much
>> "pain" we want to push onto all users for the sake of adding in this
>> data. Unfortunately it is data which many won't need or use, and
>> different legal departments do have different requirements.
>
> We didn't paint the overall picture sufficiently well, so our
> requirements may come across as coming from a particularly pedantic
> legal department; my fault :)

> Oniro is not "yet another commercial Yocto project", and we are not a
> legal department (even if we are experienced FLOSS lawyers and
> auditors, the most prominent of whom is Carlo Piana -- cc'ed --
> former general counsel of the FSFE and member of the OSI Board).
>
> Our rather ambitious goal is not limited to Oniro: it consists in
> doing compliance the open source way, both setting an example and
> providing guidance and material for others to benefit from our
> effort. Our work will therefore be shared (and possibly improved by
> others) not only with Oniro-based projects but also with any Yocto
> project. Among other things, the most relevant bit of work that we
> want to share is **fully reviewed license information** and other
> legal metadata about a whole bunch of open source components commonly
> used in Yocto projects.

I certainly love the goal. I presume you're going to share your review
criteria somehow? There must be some further set of steps,
documentation and results beyond what we're discussing here?

I think the challenge will be whether you can publish that review with
sufficient "proof" that other legal departments can leverage it. I
wouldn't underestimate how different the requirements and process can
be between different people/teams/companies.

> To do that in a **scalable and fully automated way**, we need Yocto
> to collect some information that is currently discarded (or simply
> not collected) at build time.
>
> Oniro Project Leader, Davide Ricci -- cc'ed -- strongly encouraged us
> to seek feedback from you in order to find out the best way to do it.
>
> Maybe organizing a call would be more convenient than discussing
> background and requirements here, if you (and others) are available.

I don't mind having a call, but the discussion in its current form has
an important element we shouldn't overlook: it isn't just me you need
to convince on some of this.

If, for example, we should radically change the unpack/patch process,
we need to have a good explanation for why people need to take that
build time/space/resource hit. If we conclude that on a call, the case
to the wider community would still have to be made.

>> Experience with archiver.bbclass shows that multiple codepaths doing
>> these things is a nightmare to keep working, particularly for corner
>> cases which do interesting things with the code (externalsrc, gcc
>> shared workdir, the kernel and more).

I had a look at this and was a bit puzzled by some of it.

I can see the issues you'd have if you want to separate the unpatched
source from the patches and know which files had patches applied, as
that is hard to track. There would be significant overhead in trying
to process and store that information in the unpack/patch steps; the
archiver class does some of that already and it is messy, hard and
doesn't perform well. I'm reluctant to force everyone to do it as a
result, but making it optional means multiple code paths, and when you
have those, one of them always ends up breaking :(.
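
To give a feel for the cost, here is a rough sketch of what tracking
patched files at do_patch time could look like (hypothetical code, all
function and variable names invented, and not how the archiver
actually works):

def srctree_checksums(topdir):
    # Map each file path (relative to topdir) to its sha256. Walking
    # and hashing the whole of ${S} is the overhead in question.
    import hashlib, os
    sums = {}
    for root, _, files in os.walk(topdir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            sums[os.path.relpath(path, topdir)] = digest
    return sums

python record_prepatch_state () {
    d.setVar("PREPATCH_SUMS", srctree_checksums(d.getVar("S")))
}

python record_postpatch_state () {
    before = d.getVar("PREPATCH_SUMS") or {}
    after = srctree_checksums(d.getVar("S"))
    patched = sorted(p for p, s in after.items() if before.get(p) != s)
    bb.note("Files modified by patching: %s" % ", ".join(patched))
}

do_patch[prefuncs] += "record_prepatch_state"
do_patch[postfuncs] += "record_postpatch_state"

Even then, that only tells you which files changed, not which patch
changed them.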

I also can see the issue with multiple sources in SRC_URI, although you
should be able to map those back if you assume subtrees are "owned" by
given SRC_URI entries. I suspect there may be a SPDX format limit in
documenting that piece?
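
Something like this toy helper is what I mean by "ownership" (purely
illustrative; the subdir-to-entry mapping would still have to be
recorded at unpack time):

def owning_src_uri(relpath, subdir_to_uri):
    # subdir_to_uri maps unpack destinations (e.g. "git/",
    # "libfoo-1.2/") to the SRC_URI entry that produced them. Pick the
    # longest matching prefix so nested trees attribute correctly.
    best = None
    for subdir, uri in subdir_to_uri.items():
        if relpath.startswith(subdir):
            if best is None or len(subdir) > len(best[0]):
                best = (subdir, uri)
    return best[1] if best else None
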
> I'm replying in reverse order:
>
> - there is a SPDX format limit, but it is by design: a SPDX package
>   entity is a single software distribution unit, so it may have only
>   one downloadLocation; if you have more than one downloadLocation,
>   you must have more than one SPDX package, according to the SPDX
>   specs;

I think we may need to talk to the SPDX people about that as I'm not
convinced it always holds that you can divide software into such
units. You can certainly construct a situation where there are two
repositories, each containing a source file, where the two files are
only ever linked together into one binary.
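
In SPDX terms that ends up as two source packages feeding one binary,
something like the following fragment (illustrative only; the names
and URLs are made up, and I'm writing it as Python data for brevity):

spdx_packages = [
    {"SPDXID": "SPDXRef-Source-repo1", "name": "foo-core",
     "downloadLocation": "git://example.com/foo-core.git"},
    {"SPDXID": "SPDXRef-Source-repo2", "name": "foo-extras",
     "downloadLocation": "git://example.com/foo-extras.git"},
]

# One binary claiming descent from both source packages.
spdx_relationships = [
    {"spdxElementId": "SPDXRef-Binary-foo",
     "relationshipType": "GENERATED_FROM",
     "relatedSpdxElement": pkg["SPDXID"]}
    for pkg in spdx_packages
]

Each downloadLocation stays single-valued, but neither source package
on its own describes what the binary was built from.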

> - I understand that my solution is a bit hacky; but IMHO any other
>   *post-mortem* solution would be far more hacky; the real solution
>   would be collecting the required information directly in do_fetch
>   and do_unpack

Agreed, this needs to be done at unpack/patch time. Don't
underestimate the impact of this on general users though, as many
won't appreciate slowing down their builds generating this
information :/.

There is also a pile of information some legal departments want which
you've not mentioned here, such as build scripts and configuration
information. Some previous discussions with other parts of the wider
open source community rejected the Yocto Project's efforts as
insufficient since we didn't mandate and capture all of this too (the
archiver could optionally do some of it, iirc). Is this just the first
step, with more data to be dumped later? Or is this sufficient and all
any legal department should need?

> - I also understand that we should reduce pain, otherwise nobody
>   would use our solution; the simplest and cleanest way I can think
>   of is collecting just package (in the SPDX sense) files' relative
>   paths and checksums at every stage (fetch, unpack, patch, package),
>   and leaving data processing (i.e. mapping upstream source packages
>   -> recipe's WORKDIR package -> debug source package -> binary
>   packages -> binary image) to a separate tool, which may use (just a
>   thought) a graph database to process things more efficiently.
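
To check I follow, that separate tool would essentially chase
checksums across the per-stage snapshots, along these lines (a toy
sketch, every name hypothetical):

def trace_file(checksum, stages):
    # 'stages' is an ordered mapping like
    # {"fetch": {...}, "unpack": {...}, "patch": {...},
    #  "package": {...}}
    # where each value maps a relative path to its sha256.
    trail = []
    for stage, sums in stages.items():
        hits = [path for path, s in sums.items() if s == checksum]
        if hits:
            trail.append((stage, hits))
    return trail

The catch is that a checksum only survives stages where the file is
untouched; anything patched, generated or compiled breaks the chain,
which is why the mapping data matters.
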
I'd suggest stepping back and working out whether the SPDX requirement
of a "single download location", which some of this stems from, really
makes sense.

>> Where I became puzzled is where you say "Information about debug
>> sources for each actual binary file is then taken from
>> tmp/pkgdata/<machine>/extended/*.json.zstd". This is the data we
>> added and use for the spdx class so you shouldn't need to reinvent
>> that piece. It should be the exact same data the spdx class uses.
>
> You're right, but in the context of a POC it was easier to extract
> them directly from the json files than from SPDX data :) It's just a
> POC to show that the required information can be retrieved in some
> way; implementation details do not matter
Fair enough, I just want to be clear we don't want to duplicate this.
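
For reference, pulling data straight out of those files is only a few
lines either way; a minimal sketch, assuming the python zstandard
module is available:

def read_extended_pkgdata(path):
    # Decompress tmp/pkgdata/<machine>/extended/<pkg>.json.zstd and
    # return the parsed dictionary.
    import json, zstandard
    with open(path, "rb") as f:
        with zstandard.ZstdDecompressor().stream_reader(f) as reader:
            return json.load(reader)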


>> I was also puzzled about the difference between rpm and the other
>> package backends. The exact same files are packaged by all the
>> package backends so the checksums from do_package should be fine.
>
> Here I may be missing some piece of information. I looked at the
> files in tmp/pkgdata but I couldn't find package file checksums
> anywhere; that is why I parsed rpm packages. But if such checksums
> were already available somewhere in tmp/pkgdata, it wouldn't be
> necessary to parse rpm packages at all... Could you point me to what
> I'm (maybe) missing here? Thanks!
In some ways this is quite simple: at do_package time the output
packages don't exist, only their content. The final output packages
are generated in do_package_write_{ipk|deb|rpm}.

You'd probably have to add a stage to the package_write tasks which
wrote out more checksum data, since the checksums are only known at
the end of those tasks. I would question whether adding this
additional checksum into the SPDX output actually helps much in the
real world though. I guess it means you could look an RPM up against
its checksum, but is that something people need to do?
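
If it is, the mechanics would be roughly a postfunc on the write tasks
(a hypothetical, untested sketch; treat the output directory variable
as an assumption):

python record_rpm_checksums () {
    # Checksum each .rpm the task just wrote. PKGWRITEDIRRPM is
    # assumed here to be where do_package_write_rpm places its output.
    import hashlib, os
    outdir = d.getVar("PKGWRITEDIRRPM")
    listing = os.path.join(d.getVar("WORKDIR"), "rpm-checksums.txt")
    with open(listing, "w") as out:
        for root, _, files in os.walk(outdir):
            for name in files:
                if name.endswith(".rpm"):
                    with open(os.path.join(root, name), "rb") as f:
                        digest = hashlib.sha256(f.read()).hexdigest()
                    out.write("%s  %s\n" % (digest, name))
}

do_package_write_rpm[postfuncs] += "record_rpm_checksums"

Equivalent hooks on the ipk/deb write tasks would cover the other
backends.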

Cheers,

Richard
