SharingTestingInfrastructure
Launchpad Entry: foo
Created:
Contributors:
Packages affected:
Summary
This should provide an overview of the issue/functionality/change proposed here. Focus here on what will actually be DONE, summarising that so that other people don't have to read the whole spec. See also CategorySpec for examples.
Release Note
This section should include a paragraph describing the end-user impact of this change. It is meant to be included in the release notes of the first release in which it is implemented. (Not all of these will actually be included in the release notes, at the release manager's discretion; but writing them is a useful exercise.)
It is mandatory.
Rationale
This should cover the _why_: why is this change being proposed, what justifies it, and where we see it as justified.
User stories
Assumptions
Design
You can have subsections that better describe specific parts of the issue.
Implementation
This section should describe a plan of action (the "how") to implement the changes discussed. Could include subsections like:
UI Changes
Should cover changes required to the UI, or specific UI that is required to implement this change.
Code Changes
Code changes should include an overview of what needs to change, and in some cases even the specific details.
Migration
Include:
- data migration, if any
- redirects from old URLs to new ones, if any
- how users will be pointed to the new way of doing things, if necessary.
Test/Demo Plan
It's important that we are able to test new features, and demonstrate them to users. Use this section to describe a short plan that anybody can follow that demonstrates the feature is working. This can then be used during testing, and to show off after release. Please add an entry to http://testcases.qa.ubuntu.com/Coverage/NewFeatures for tracking test coverage.
This need not be added or completed until the specification is nearing beta.
Unresolved issues
This should highlight any issues that should be addressed in further specifications, and not problems with the specification itself, since any specification with problems cannot be approved.
BoF agenda and discussion
Notes from OEM and Colin Watson:
 * More stress testing before release
 * Improving our testing procedures
 * Using more automated tests
 * Further brainstorming

Some other automatic tests we have that may need improvement:
 * Package install testing (working, but needs a feature to auto-kill hanging installs): http://people.canonical.com/~mvo/automatic-upgrade-testing/auto-install-tester/
 * Automatic upgrade testing (no GUI): http://people.canonical.com/~mvo/automatic-upgrade-testing/current/

Notes from Evan: http://people.canonical.com/~scott/daily-installer/
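The auto-kill behaviour the install tester needs could be sketched along these lines. This is a minimal illustration under assumptions, not the actual auto-install-tester code: the `run_step` and `install_remove_cycle` names are made up here, and in practice the apt-get/dpkg commands would run inside a chroot or VM rather than on the host.

```python
import subprocess

def run_step(cmd, timeout=600):
    """Run one test step, killing it if it hangs.

    Returns (returncode, timed_out); returncode is None on timeout.
    """
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout)
        return proc.returncode, False
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child for us when the timeout expires
        return None, True

def install_remove_cycle(pkg, runner=run_step):
    """Install, verify, and remove a package; return the first failing
    step's command, or None if the whole cycle succeeded.

    The commands are illustrative; `runner` is injectable so the cycle
    can be driven against a chroot, a VM, or a stub in tests.
    """
    steps = [["apt-get", "install", "-y", pkg],
             ["dpkg", "-s", pkg],
             ["apt-get", "remove", "-y", pkg]]
    for cmd in steps:
        rc, hung = runner(cmd)
        if hung or rc != 0:
            return cmd
    return None
```

With a per-step timeout, a hanging maintainer script fails that one package's cycle instead of stalling the entire test run.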
Upgrade and Install Testing
===========================
 * upgrade tests are catching a fair number of bugs
 * upgrade testing should also test universe (it does not currently, because of scaling issues)
 * testing that the upgraded system "works" is fairly limited at the moment
 * a relatively small number of packages fail to install even on a clean system; these are easy to fix
 * does anyone still use the weather report? Steve and Leann probably still do
 * testing in the cloud?
   * the release upgrader has an EC2 mode
   * ... but kvm tends to be higher-value, at least for desktop-oriented tests
   * feasibility of EC2 testing for install?
   * use EC2 for the automatic install/remove test
 * try to create an autotest profile for ubiquity (via IS) - machine name is pommerac
 * ACTION: look at Hudson or other continuous-integration testing frameworks
   * use a single instance of Hudson to publish all of our tests
   * need to be able to easily plug new machines into existing test frameworks
   * good candidate for sitting between all of our systems and reporting back? possibly
   * can it store test output?
   * talk to Robert Collins
 * ACTION: look into handover of Soren's testing system to QA (Carlos)?
 * ACTION: mdz to work with QA to task someone with overseeing our test process
   * no one has been guiding this effort from a larger, architectural perspective, and this needs to be done
 * ACTION: figure out what to do about failed packages
 * list the systems for which we already have a central testing framework in place
 * python testing
   * pyflakes.vim is made of awesome, for those who don't already use it
   * pychecker has a lot of false positives; pyflakes is apparently much better, and is maintained by a Canonical employee
 * ACTION: james_w and cjwatson
   * Launchpad PPA with permissions matching Ubuntu archive permissions
   * a bot monitors this PPA, runs tests on packages in it, and copies them into the primary archive if they pass
 * piuparts is more configurable now; we should look into it again to see whether it can provide a good set of tests without too many false positives
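To illustrate the kind of static check pyflakes automates, here is a toy unused-import detector built on the standard library's `ast` module. This is only a sketch of one of pyflakes' checks — pyflakes itself is far more thorough — and the `unused_imports` name is invented for this example.

```python
import ast

def unused_imports(source):
    """Return a sorted list of names imported but never referenced.

    A toy version of one check pyflakes performs; real pyflakes also
    handles scopes, __all__, star imports, and much more.
    """
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import os.path" binds the name "os"
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
    # Any ast.Name node counts as a use of that name
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return sorted(imported - used)
```

For example, `unused_imports("import os\nimport sys\nprint(sys.path)\n")` flags `os` but not `sys`. Because checks like this parse rather than execute the code, they are cheap enough to run across every package in an archive.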
FoundationsTeam/Specs/SharingTestingInfrastructure (last edited 2010-06-10 07:28:35 by 84)