PackageTesting


Summary

Packages in Ubuntu should be tested automatically to ensure that they meet basic standards. Packages should at least be installable and uninstallable, and the files they install should not conflict with those in other packages.

Various tools exist to ensure package quality and should be evaluated for regular package checks.

Release Note

Ubuntu is now automatically testing all packages in the Ubuntu repositories to ensure that they will install, upgrade, and remove cleanly and without error.

Rationale

Packages should always be cleanly installable and uninstallable. If a user cannot install or remove a package without encountering an error, the package is broken, which is a serious usability concern.

Automated testing to ensure that packages meet basic requirements will improve the quality of the software distributed by Ubuntu and ensure a clean user experience.

User stories

  • Aaron uses Synaptic to install an application from one of the Ubuntu repositories. This package obsoletes an already installed package. Because the packages have been tested to ensure clean installation and uninstallation, the change from one package to another proceeds as expected and Aaron is able to use his newly installed software.
  • Bob uses Update Manager to upgrade his packages to the latest version. No installation or update errors are encountered as the packages have been checked to ensure that they upgrade cleanly.
  • Carrie is the packager for an application. She uploads a new version of the package and is notified if the package fails automated testing. The new version of the package is not placed into the repository until the errors in the package are fixed.

Assumptions

  • Packages can be tested as part of the build process.
  • Packages can be rejected after the build if they fail the package test.
  • Packagers can be notified when package tests fail.

Design

As a final step during the build of packages, a series of package tests will be run on the freshly built package. If these tests fail, the package will not be added to the archive. Instead, the uploader will be notified that the package tests failed (and will be provided with the output from the tests).
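The accept/reject gate described above can be sketched as follows. This is a minimal illustration, not an existing Launchpad API; the test-result format and function names are assumptions made for the example.

```python
# Sketch of the post-build gate: run the package tests, then either
# accept the package into the archive or reject it and hand back the
# test output so the uploader can be notified.
# The (test_name, passed, output) tuple format is hypothetical.

def gate_package(package, test_results):
    """test_results: list of (test_name, passed, output) tuples."""
    failures = [(name, output) for name, passed, output in test_results
                if not passed]
    if not failures:
        return ("accepted", "%s added to the archive" % package)
    report = "\n".join("%s failed:\n%s" % (name, output)
                       for name, output in failures)
    return ("rejected", "%s rejected; notifying uploader:\n%s"
            % (package, report))
```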

Implementation

Initial Step / Feasibility Test

As a first step, QA will set up a server with a chroot environment and conflictchecker. The packages already in the archive will be tested with conflictchecker and a list of conflicts will be created.
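The core check conflictchecker performs, finding files shipped by more than one package, can be illustrated with a short sketch. The real tool works against the archive's package contents data; this simplified version models that data as a plain dict.

```python
def find_file_conflicts(package_files):
    """package_files: dict mapping package name -> iterable of installed
    paths. Returns {path: sorted list of packages} for every path that
    is claimed by two or more packages."""
    owners = {}
    for pkg, paths in package_files.items():
        for path in paths:
            owners.setdefault(path, []).append(pkg)
    # Keep only paths with multiple owners: these are the conflicts.
    return {path: sorted(pkgs) for path, pkgs in owners.items()
            if len(pkgs) > 1}
```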

Automated Implementation

If the results of the initial step prove to be useful and the initial test is working properly, QA will work with the Foundations team to develop a plan for conducting tests after package builds, possibly using virtual machines in the cloud.

Test Extension

After the automated environment is working to test packages for file conflicts, further tests may be added. piuparts is the most likely candidate for use but the specific tool(s) to be used will be determined during this phase.
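As an illustration, a wrapper that assembles a piuparts invocation for a freshly built .deb might look like the sketch below. piuparts accepts `-d` for the target distribution and `-b` for a cached base tarball; treat the exact flag set as an assumption to verify against the installed version.

```python
def piuparts_command(deb_path, distro="karmic", basetgz=None):
    """Build the argv for an install/remove test of one package."""
    cmd = ["piuparts", "-d", distro]
    if basetgz:
        # Reuse a cached chroot tarball instead of bootstrapping each run.
        cmd += ["-b", basetgz]
    cmd.append(deb_path)
    return cmd
```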

Test/Demo Plan

Testing will be performed in phases as the system is implemented. As the true value of automated package testing is unknown, the initial phase itself is a test of feasibility for the specification as a whole.

  1. Initial testing will be done on an isolated server in a chroot environment. This will include testing the packages in the archive for file conflicts.
  2. During the second phase, testing will be performed by manually monitoring the tests on initial package builds.
  3. After an initial period of manual monitoring, audits of the test logs should be performed to ensure that the package tests are performing properly.

Unresolved issues

  • Further development of more in-depth package tests should be addressed in a separate specification: https://blueprints.edge.launchpad.net/ubuntu/+spec/server-karmic-automate-pkg-testing-in-the-cloud

BoF agenda and discussion

Goal

  • Catch package issues more quickly

Tools

  • lintian
    • checks for package bugs
    • will not catch post-install script errors
    • already done by Debian, so might not give us a large bang for buck
  • apport
    • reports bugs when install scripts fail or crash
    • we have some data on this, but nobody has really fixed any of it
  • conflictchecker
  • piuparts
  • autopkgtest

Want to move piuparts sessions into the cloud

  • currently chroot-based
  • need to do this to do things like run services, do upgrades, etc.

Historically we get most bang for our buck by doing large-scale system testing

  • Install a system, do install/upgrade of a lot of packages, test
  • Is it now time to start testing at a package level?

It would be great to pool resources and do all this package testing together; currently people just do it ad hoc.

When should we run these tools?

  • Package testing probably does not fall into Checkbox
  • Could this be tied into the build process? As part of build process, test install and uninstall of package
    • Need to run conflict-checker on packages
  • At times we need to touch every package in the archive
    • E.g. "If you call a certain function, how many applications use that function?"
    • Can we use a similar process?
      • Need a local mirror... use a machine to run this in the datacenter
  • Should run lintian on everything, whenever a package gets updated
    • Can specify lintian verbosity
    • we should probably start with just errors and leave warnings off to minimize data firehose
  • Should we set up some VMs and install a ton of packages?
    • we'd have to set the debconf level to critical or be overwhelmed by debconf questions
      • we'll still probably get some questions, so someone will have to monitor it

When do we not want to run these?

  • Do not want to make it a requirement to go into the archive
    • Only block a package when it fails when we are coming up on a milestone?
    • Even if we don't block in a milestone, we should notify the developer

Where do we run these?

  • A QA server to be determined
    • will probably run in a VM

Proposals

lintian
  • Run lintian on all packages in archive (once)
    • or how about only packages with Ubuntu delta, as we assume Debian tests all of theirs
  • After that run it whenever a package is checked in
    • watch archive for package updates
    • download and run lintian
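The errors-only filtering suggested earlier (run with warnings off to minimize the data firehose) can be done by filtering lintian's textual output, which prefixes each line with its severity (`E:` for errors, `W:` for warnings, `I:` for info). A sketch:

```python
def lintian_errors(lintian_output):
    """Return only the E: (error) lines from lintian's textual output,
    dropping warnings and info to keep the report volume manageable."""
    return [line for line in lintian_output.splitlines()
            if line.startswith("E:")]
```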

Upgrade Testing
  • Create a VM of the previous release (once) and clone it every day
  • Upgrade to latest version of previous release
  • Dist-upgrade to latest release
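The daily VM upgrade test above amounts to a fixed sequence of shell steps; a sketch that generates that sequence is below. The release names are illustrative examples, and rewriting sources.list with sed is one simple way to switch releases, not the only one.

```python
def upgrade_test_plan(prev_release="jaunty", new_release="karmic"):
    """Ordered shell steps for the cloned-VM upgrade test:
    bring the previous release fully up to date, then dist-upgrade
    across to the new release."""
    return [
        "apt-get update",
        "apt-get -y upgrade",  # latest packages of the previous release
        "sed -i 's/%s/%s/g' /etc/apt/sources.list"
        % (prev_release, new_release),  # point apt at the new release
        "apt-get update",
        "apt-get -y dist-upgrade",  # the cross-release upgrade under test
    ]
```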

First Step
  • Henrik Omma and Ronald McCollam will coordinate and learn how to use lintian
    • Coordinate with Steve Beattie and Juanje Ojeda from Guadalinex (juanje on Launchpad)
    • Further discussions reveal that conflictchecker would have a greater impact and would be a better start
  • Ronald McCollam will begin running these tests and evaluate how much value they provide
    • run tests as described in the "lintian" proposal and provide output on a webpage somewhere
    • Marc Tardif requests having conflict-checker as well
  • NB: There is similar work going on here: https://blueprints.edge.launchpad.net/ubuntu/+spec/server-karmic-automate-pkg-testing-in-the-cloud
    • we should coordinate work to avoid duplicating effort

Next Steps
  • include piuparts or autopkgtest
    • these tools may need some updates or work and bugfixes before they are useful
    • another issue here will be receiving a number of false positives
    • piuparts is/will be better maintained than autopkgtest for the immediate future and gives results that are more immediately relevant


CategorySpec

QATeam/Specs/PackageTesting (last edited 2009-06-10 10:40:46 by cpc4-oxfd8-0-0-cust39)