[oe] Checksums in Bitbake
fransmeulenbroeks at gmail.com
Wed Mar 24 20:17:43 UTC 2010
2010/3/24 Chris Larson <clarson at kergoth.com>:
> On Wed, Mar 24, 2010 at 8:15 AM, Frans Meulenbroeks <
> fransmeulenbroeks at gmail.com> wrote:
>> Interesting ideas.
>> I need to let this digest a little bit.
>> Some initial thoughts.
>> The checksum should also depend on the checksum of the underlying
>> packages. E.g. if A depends on B and the checksum of B changes it
>> should trigger a rebuild of A.
> I don't think this is a very good idea, personally. As an option, perhaps,
> but we do things the way we do for a reason: just because a dep of mine is
> rebuilt doesn't automatically mean I must be rebuilt. I'd suggest moving
> to an alternative which encodes the library ABI and incorporates that into
> the hashes of things that depend upon it, but we can certainly do what you
> want as an optional feature.
If a dep is rebuilt, there is a reason for it (a bug fix, a packaging
change, changes in exported files, etc.).
This might impact the recipes that use it.
If a dependency changes and that does not trigger a rebuild of its
users, a warning should probably be given.
Encoding the library ABI is only part of the job. You'd also have to
take the .h files a package exports into account, as constants in them
could be changed.
And even the using package could change its behaviour (e.g. because
configure runs differently).
Note also that if we abandon PR we do not really have an easy
mechanism to force recompilation of a package (if a dependency changed
and we want to force a rebuild).
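The propagation Frans describes (a change in B's checksum forcing a rebuild of A) can be sketched as a hash that folds in the signatures of its dependencies. This is only an illustration of the idea, not BitBake's actual signature code; the function name and inputs are hypothetical.

```python
import hashlib

def recipe_signature(recipe_hash, dep_signatures):
    """Combine a recipe's own content hash with the signatures of its
    dependencies, so a change anywhere in the dependency chain
    propagates upward and invalidates the dependent's signature."""
    h = hashlib.sha256(recipe_hash.encode())
    # Sort so that dependency ordering does not affect the result.
    for dep in sorted(dep_signatures):
        h.update(dep.encode())
    return h.hexdigest()

# If B's signature changes, A's signature changes too, which would
# trigger a rebuild of A under this scheme.
b_old = recipe_signature("content-hash-of-B", [])
b_new = recipe_signature("content-hash-of-B-after-bugfix", [])
a_old = recipe_signature("content-hash-of-A", [b_old])
a_new = recipe_signature("content-hash-of-A", [b_new])
assert a_old != a_new
```

Whether this cascading behaviour should be the default or an option is exactly the point under discussion above.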
>> A first crude approach would be to have a hash of the concatenation of
>> the unfolded recipe (so with all includes/requires expanded) and the
>> hashes of the recipes it depends on). Of course this is very rough as
>> even changing whitespace in a recipe will lead to a recompile.
>> A different approach would be to let it depend on PV + PR. That'll put
>> the developer in control (with all related issues, like the developer
>> not bumping PR).
>> And yet a different one would be to use variables and functions from the
>> I have mixed feelings on whether checksums also would depend on global
>> vars (e.g. code generated by the classes or variables in e.g.
>> On the one hand it seems pretty neat, on the other hand I worry about
>> performance (calculating the checksum).
> Global variables should absolutely be included, imo. The reason for going
> with a blacklist rather than a whitelist approach is to, as richard says,
> make it less error prone. It ensures that the failure mode is something
> being rebuilt, rather than using possibly incorrect binaries. I'd rather it
> take a bit longer to build than result in questionable output. If
> calculating the checksum time becomes a concern, which I doubt, you could
> hash the configuration metadata at ConfigParsed time and incorporate that
> hash into the hash generated of the recipe. This could increase the
> likelihood of collisions, but I'm not too worried. Let's get things
> working, and determine the bottlenecks at that point.
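The blacklist approach Chris favours can be sketched as follows: hash every global variable except those explicitly excluded, so that a forgotten variable causes at worst an unnecessary rebuild rather than a stale binary. The variable names and blacklist here are illustrative assumptions, not BitBake's real exclusion list.

```python
import hashlib

# Hypothetical blacklist: variables whose value should NOT influence
# the signature (illustrative names, not BitBake's actual list).
BLACKLIST = {"TMPDIR", "DL_DIR", "DATE", "TIME"}

def config_hash(metadata):
    """Hash all global variables except blacklisted ones. Anything not
    explicitly excluded is included, so the failure mode of a missing
    blacklist entry is a rebuild, not incorrect reuse."""
    h = hashlib.sha256()
    for name in sorted(metadata):
        if name in BLACKLIST:
            continue
        h.update(name.encode() + b"=" + str(metadata[name]).encode() + b"\n")
    return h.hexdigest()
```

Under this sketch, changing TMPDIR leaves the hash untouched, while changing, say, CFLAGS alters it; the configuration-level hash could be computed once at ConfigParsed time and folded into each recipe's hash, as suggested above.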
Agreed, but as changes in global vars are less likely, we could
consider something akin to DISTRO_PR.
My nightmare is that building console-image (about 3000 tasks) would
check 3000 times whether my TMPDIR has changed.
Some caching will definitely be needed.
(By the way, if we have a checksum per file and a rule that a checksum
newer than the file is still valid and need not be recomputed, that
could help, but it might bring back some of the issues we now have
with stamps.)
> Christopher Larson
> clarson at kergoth dot com
> Founder - BitBake, OpenEmbedded, OpenZaurus
> Maintainer - Tslib
> Senior Software Engineer, Mentor Graphics