TestcasesWithUTAH

Dev Week -- Adding test cases with UTAH -- gema -- Thu, Aug 30th, 2012

   1 [16:01] <gema> Hello everyone!
   2 [16:01] <gema> I am Gema Gomez-Solano and I work in the Platform QA Team in Canonical
   3 [16:02] <gema> and I am going to talk about UTAH, or the Ubuntu Test Automation Harness,
   4 [16:02] <gema> it is a testing tool we are creating because we were lacking coherence in our testing
   5 [16:02] <gema> we’d like to share it with other groups in case anyone is having the same kind of problems
   6 [16:03] <gema> UTAH is a tool that is meant to help with end-to-end test automation.
   7 [16:03] <gema> you can find the project in launchpad: https://launchpad.net/utah
   8 [16:03] <gema> ok, so let's get started with what UTAH does
   9 [16:04] <gema> it takes care of provisioning a machine from any ISO (or reusing an existing provisioned machine), installing either a VM or a physical machine with it, and then it runs the tests and reports the results back
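To illustrate that provision-run-report flow, here is a minimal Python sketch. All the names in it (Machine, provision_from_iso, run_tests, report, the ISO filename) are invented for this example and are not part of UTAH's actual API:

{{{
# Hypothetical sketch of the provision -> run -> report flow described above.
# None of these names come from UTAH itself; they only illustrate the idea.

class Machine:
    """Stands in for a freshly provisioned VM or physical machine."""
    def run(self, command):
        print("running on target:", command)
        return 0  # pretend every command exits successfully


def provision_from_iso(iso_path):
    # In UTAH this step would install a VM or a physical machine from the ISO.
    print("provisioning machine from", iso_path)
    return Machine()


def run_tests(machine, runlist):
    return {test: machine.run(test) for test in runlist}


def report(results):
    for test, code in results.items():
        print(test, "PASS" if code == 0 else "FAIL")


if __name__ == "__main__":
    machine = provision_from_iso("ubuntu-desktop-amd64.iso")
    report(run_tests(machine, ["./test_boot", "./test_network"]))
}}}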
  10 [16:04] <gema> before this project we were doing a lot of development work that was tailored to solve a particular problem, we had to do installs for ISO testing, for Kernel SRU testing, for upgrades...
  11 [16:05] <gema> we are interested in using hardware as well as VMs, 32-bit and 64-bit, and ARM
  12 [16:05] <gema> so we had many scripts to solve all these problems separately; furthermore, this was happening not only in the QA Team, but also in the development teams.
  13 [16:06] <gema> there was a lot of redundant code being created everywhere
  14 [16:06] <gema> the creation of this code was getting in the way of the real development of automated tests, because at the end of the day, how you provision a machine may or may not be relevant to your testing
  15 [16:07] <gema> so we thought of this project to unify all the scattered work we were doing
  16 [16:07] <gema> the test cases can be written in any programming language and then be wrapped with UTAH to be able to run them in a fully automated fashion
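As a hedged example of such a wrappable test case, the script below is a standalone binary that writes to stdout/stderr and reports its verdict through its exit code; the particular check (looking for /etc/hostname) is only an illustration:

{{{
#!/usr/bin/env python
# Minimal standalone test case: it writes to stdout/stderr and reports its
# result through the exit code, which is all a wrapping harness needs.
import os
import sys


def main():
    if os.path.exists("/etc/hostname"):
        print("hostname file present")
        return 0      # success
    sys.stderr.write("missing /etc/hostname\n")
    return 1          # failure


if __name__ == "__main__":
    sys.exit(main())
}}}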
  17 === var is now known as Guest76298
  18 [16:07] <gema> you can also choose to schedule things with jenkins, which is the scheduler of choice of the QA Team, or you can choose to schedule runs with cron or just kick off a run manually whenever you need it
  19 [16:07] <gema> we are expecting UTAH to help also reproducing bugs,
  20 [16:08] <gema> whenever we find a bug with automated testing, we should be able to give a preseed and a runlist to the developer, who should be able to reproduce the problem at home,
  21 [16:08] <gema> without having to worry too much about installing a system
  22 [16:08] <gema> so we are trying to make testing repeatable
  23 [16:08] <gema> I will describe a bit how the testing is meant to be structured in UTAH
  24 [16:09] <gema> Testing structure
  25 [16:09] <gema> Test cases are the smallest execution unit in UTAH; we prefer them to be in separate files, so that we can choose to execute a particular binary, but if your binary takes parameters and executes different test cases based on the parameter, that also works well.
  26 [16:10] <gema> Test cases can have set up and tear down, so that you have control over your test environment
  27 [16:10] <gema> Test cases also need to leave the system in the same state they found it in; this is good practice and avoids problems down the line.
  28 [16:11] <gema> So do not make test cases dependent on each other such that they have to run in a specific order!
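A minimal sketch of such a self-contained test case, with its own set up and tear down and no dependency on other tests; the scratch directory and the check are invented for the example:

{{{
# Illustrative self-contained test case: it creates its own scratch area in
# set up, removes it in tear down, and therefore leaves the system exactly as
# it found it and can run in any order relative to other test cases.
import shutil
import sys
import tempfile


def setup():
    return tempfile.mkdtemp(prefix="utah-example-")


def teardown(workdir):
    shutil.rmtree(workdir)      # put the system back the way we found it


def test_can_write_file(workdir):
    path = workdir + "/probe.txt"
    with open(path, "w") as f:
        f.write("hello")
    with open(path) as f:
        return f.read() == "hello"


if __name__ == "__main__":
    workdir = setup()
    try:
        ok = test_can_write_file(workdir)
    finally:
        teardown(workdir)
    sys.exit(0 if ok else 1)
}}}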
  29 [16:11] <gema> Test cases can be grouped in test suites.
  30 === carlos is now known as Guest27302
  31 [16:11] <gema> Test suites are groups of test cases that target a particular piece of functionality and that it makes sense to *maintain together*
  32 [16:12] <gema> i.e. the code is going to be maintained by a particular developer who is the expert on that part
  33 [16:13] <gema> for instance, file system testing may make a nice test suite, and then we could choose to
  34 [16:13] <gema> execute only some of those for smoke testing and all of them if we wanted full regression testing
  35 [16:13] <gema> We are going to use test suites to group test cases that are functionally similar, or that are targeting a particular piece of functionality
  36 [16:14] <gema> We also have runlists: a runlist is a list of test cases and/or test suites that are run together
  37 [16:14] <gema> For instance, for smoke testing, which is the daily testing we do to validate Ubuntu images, we will have a runlist.
  38 [16:14] <gema> We may need more than one smoke testing runlist if we decide to run different test cases on different images for any reason, or on different architectures.
  39 [16:15] <gema> runlists are groups of tests that it makes sense to *run together*.
  40 [16:15] <gema> runlists may be different for quantal and precise, for instance. We may want to run different test cases in each case, and runlists help us achieve that
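The following sketch only illustrates how test cases, suites and runlists relate; the suite and test names are invented, and UTAH's real runlists live in their own files rather than in Python code:

{{{
# Illustration of how test cases, test suites and runlists relate.
# The suite and test names are invented.
filesystem_suite = ["test_mkfs_ext4", "test_mount", "test_fsck"]
network_suite = ["test_dhcp", "test_dns", "test_ping_gateway"]

# A runlist groups whatever it makes sense to *run together*; different
# releases or images can get different runlists built from the same suites.
smoke_runlist = [filesystem_suite[0], network_suite[0]]
full_regression_runlist = filesystem_suite + network_suite

print("smoke:", smoke_runlist)
print("full regression:", full_regression_runlist)
}}}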
  41 [16:15] <gema> do you guys have any questions/comments?
  42 [16:16] <gema> ok, you can interrupt me any time, I will continue with what UTAH provides to the tester
  43 [16:17] <gema> So we've spoken already about provisioning, and how UTAH can do that for you
  44 [16:17] <gema> Test cases can be written in any language, as long as they are binaries that report success or error in a traditional way.
  45 [16:17] <gema> UTAH captures and reports stderr and stdout, so any output really will do, as long as you are able to parse it afterwards
  46 [16:18] <gema> UTAH can handle reboots, so if for instance a test case needs to reboot in the middle of the execution, UTAH will reboot the machine and continue executing from there onwards
  47 [16:18] <gema> Test cases have timeouts, test suites have timeouts and you can even define a timeout for the overall execution
  48 [16:18] <gema> this is particularly important in full automation, because you don’t want your servers to be stuck on neverending jobs whenever something goes wrong with the testing
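A rough sketch of those behaviours (capturing stdout/stderr, checking the exit code, and enforcing a timeout), assuming a generic subprocess-based wrapper rather than UTAH's actual implementation:

{{{
# Generic wrapper illustrating the behaviours above: run a test binary,
# capture its stdout and stderr, check the exit code, and enforce a timeout
# so a hung test cannot block the whole run.  Not UTAH's implementation.
import subprocess


def run_test(command, timeout_seconds=60):
    try:
        proc = subprocess.run(command, capture_output=True, text=True,
                              timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        return {"result": "TIMEOUT", "stdout": "", "stderr": ""}
    return {"result": "PASS" if proc.returncode == 0 else "FAIL",
            "stdout": proc.stdout,
            "stderr": proc.stderr}


if __name__ == "__main__":
    print(run_test(["true"]))                           # quick pass
    print(run_test(["sleep", "5"], timeout_seconds=1))  # forced timeout
}}}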
  49 [16:19] <gema> Runlists can cherry-pick test cases from test suites: they can either include specific test cases or exclude some
  50 [16:19] <gema> we have added these options to help with the maintainability of runlists
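As an illustration of that include/exclude cherry-picking (names invented; UTAH expresses this in its runlist files rather than in Python):

{{{
# Illustrative include/exclude cherry-picking over a suite.
def select_tests(suite, include=None, exclude=None):
    tests = list(suite)
    if include is not None:
        tests = [t for t in tests if t in include]
    if exclude is not None:
        tests = [t for t in tests if t not in exclude]
    return tests


filesystem_suite = ["test_mkfs_ext4", "test_mount", "test_fsck"]
print(select_tests(filesystem_suite, include=["test_mount"]))
print(select_tests(filesystem_suite, exclude=["test_fsck"]))
}}}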
  51 [16:20] <gema> What is the status of the development?
  52 [16:20] <gema> We are using UTAH already for bootspeed testing
  53 [16:20] <gema> so any bootspeed testing results you see from now on, come from tests executed with the harness
  54 [16:20] <ClassBot> There are 10 minutes remaining in the current session.
  55 [16:20] <gema> we are also slowly migrating ISO testing, so that our smoke testing runs with the tool and we can concentrate on adding more coverage easily
  56 [16:21] <gema> before, the scripts were not so flexible and we needed to maintain several copies of the same test cases for different configurations, which is not ideal
  57 [16:21] <gema> we are trying to avoid that going forward
  58 [16:22] <gema> the most difficult part of migrating from our current system is deciding how to structure the test cases in the new world so that they grow in the right direction
  59 [16:22] <gema> we are also working on provisioning ARM, so that we can finally run ARM tests fully automated in the lab
  60 [16:22] <gema> this will be coming soon
  61 [16:23] <gema> any questions anyone?
  62 [16:24] <ClassBot> TheLordOfTime asked: How does one get UTAH so we can use it?
  63 [16:24] <gema> good question, TheLordOfTime !
  64 [16:25] <gema> the documentation on how to get and install UTAH is available online
  65 [16:25] <gema> http://utah.readthedocs.org/en/latest/index.html
  66 [16:25] <gema> and you can also join our mailing list
  67 [16:25] <gema> https://lists.canonical.com/mailman/listinfo/ubuntu-utah-devel
  68 [16:25] <gema> in case you run into problems
  69 [16:25] <ClassBot> There are 5 minutes remaining in the current session.
  70 [16:25] <gema> it is early days and we are getting it ready for everyone now
  71 [16:26] <gema> there don't seem to be any more questions
  72 [16:28] <ClassBot> jsjgruber-l85-p asked: Is it possible to have Utah test GUI applications?
  73 [16:28] <gema> good question, right now that is not possible, but we are working on enabling it
  74 [16:28] <gema> the reason it is not possible is that UTAH connects to the host to run the tests via ssh
  75 [16:29] <gema> and we haven't decided how to start the X session and test from there
  76 [16:29] <gema> if you have suggestions join the list and share them
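A small sketch of the constraint described in that answer: a command run over a plain ssh session has no X display attached, so GUI applications cannot start. The hostname below is a placeholder:

{{{
# Sketch of why a plain ssh-driven harness struggles with GUI tests.
import subprocess


def run_remote(host, command):
    # The remote command inherits no DISPLAY unless an X session is set up
    # for it first, which is why plain ssh-driven testing is console-only.
    return subprocess.run(["ssh", host, command],
                          capture_output=True, text=True)


if __name__ == "__main__":
    result = run_remote("testmachine.example.com", "echo DISPLAY=$DISPLAY")
    print(result.stdout)   # typically prints "DISPLAY=" over a plain session
}}}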
  77 [16:30] <gema> we are about to finish this session, but I am still here for the coming 30 mins
