- == Derived Archive Rebuilds ==
- Two ways to do rebuilds:
- * Launchpad
- * some arrangement involving a cluster managed by a friendly Debian developer
- Problems:
- * rebuild tests compete with PPA builds for builder time
- * builder hardware is a constraint, though it is pretty good
- * Soyuz scaling issues
- * process-upload serialization (should be fixed soon)
- * we would like to do this for ARM
- * cross-compiling will not work
- * native ARM hardware is slow (OpenOffice.org takes ~2 days to build)
- * qemu, even on very fast hardware, is too slow
- * the approach of running qemu with a cross-compiler underneath it has legs, but isn't ready yet
- Use cases for rebuilds:
- * verifying that the toolchain works
- * e.g. that all of lucid builds with its own toolchain
- * how much does the new version of gcc break?
- * we want to rebuild the archive with a new toolchain
- * here we keep the results
- * can be because of optimizations, or a bad compiler flag in previous builds
- * needs something like a binNMU if the results go back into the same archive
- * important to be able to measure the differences
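- To make "measure the differences" concrete, a rough launchpadlib sketch
- follows. It compares build failures in the primary archive against a
- hypothetical copy archive named ubuntu/test-rebuild holding the
- new-toolchain rebuild; the archive reference, the anonymous login and the
- source_package_name attribute are assumptions, not an agreed design.
{{{#!python
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('rebuild-report', 'production')
ubuntu = lp.distributions['ubuntu']
primary = ubuntu.main_archive
# Hypothetical copy archive holding the test rebuild results.
rebuild = lp.archives.getByReference(reference='ubuntu/test-rebuild')

def failed_sources(archive):
    """Set of source package names whose builds failed in this archive."""
    records = archive.getBuildRecords(build_state='Failed to build')
    # Attribute name assumed; build titles could be parsed instead.
    return set(build.source_package_name for build in records)

new_failures = failed_sources(rebuild) - failed_sources(primary)
print('%d sources fail only in the rebuild:' % len(new_failures))
for name in sorted(new_failures):
    print('  ' + name)
}}}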
- "scorched earth" rebuilds to do with build dependency loops -- session later in the week
- The context here is doing rebuilds for Launchpad-managed archives.
- Iterated rebuilds are useful.
- Part of the process involves running a script on a data-centre (DC) machine
- that takes an hour, which is difficult to expose through Launchpad.
- Diskless archives?
- The variation within the ARM architecture leads to a requirement to assign
- builds to particular builders. Builder pools are related, but the existing
- spec does not cover this. In general, there is a need to store more data
- about buildds.
- There was also discussion of checking whether a build uses swap, and of
- general performance monitoring of builds.
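- A minimal sketch of the swap check, assuming the builder runs Linux and the
- build is driven by dpkg-buildpackage (both illustrative choices): sample
- /proc/meminfo while the build runs and report peak swap usage.
{{{#!python
import subprocess
import time

def swap_used_kb():
    """Current swap usage in kB, read from /proc/meminfo."""
    info = {}
    with open('/proc/meminfo') as meminfo:
        for line in meminfo:
            key, value = line.split(':', 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info['SwapTotal'] - info['SwapFree']

# Illustrative build command; a real buildd would wrap sbuild instead.
build = subprocess.Popen(['dpkg-buildpackage', '-b', '-uc', '-us'])
peak = 0
while build.poll() is None:
    peak = max(peak, swap_used_kb())
    time.sleep(5)
print('peak swap used during the build: %d kB' % peak)
}}}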
- We could always build into a derived archive, and possibly copy back into the
- source archives.
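- A hedged sketch of the "copy back" half, assuming a copyPackage-style call
- on the destination archive and the same hypothetical ubuntu/test-rebuild
- copy archive; the package name, version and pocket are placeholders, and
- version handling (the binNMU point above) is not addressed here.
{{{#!python
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('rebuild-copy', 'production')
ubuntu = lp.distributions['ubuntu']
primary = ubuntu.main_archive
rebuild = lp.archives.getByReference(reference='ubuntu/test-rebuild')

# Copy one rebuilt source, together with its binaries, back into the
# primary archive.  Name, version and pocket are illustrative only.
primary.copyPackage(
    source_name='hello', version='2.4-3',
    from_archive=rebuild,
    to_pocket='Updates',
    include_binaries=True)
}}}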
- == ACTIONS ==
- * implement binNMUs in Launchpad
- * API exposure for copy-archive in Launchpad
- * implementing derived archives would help too