AutomatedProblemReports

Differences between revisions 45 and 61 (spanning 16 versions)
Revision 45 as of 2006-01-23 13:00:21
Size: 11952
Editor: ip-217-204-123-1
Comment: Removed spam/uninformed rant
Revision 61 as of 2008-08-06 16:26:25
Size: 12879
Editor: localhost
Comment: converted to 1.6 markup
Deletions are marked like this. Additions are marked like this.
Line 4: Line 4:
 * '''Created''': [[Date(2005-10-27T20:51:43Z)]] by JaneWeideman
 * '''Contributors''': JaneWeideman, MartinPitt, MichaelVogt
 * '''Created''': <<Date(2005-10-27T20:51:43Z)>> by JaneWeideman
 * '''Contributors''': MartinPitt, MichaelVogt, RobertCollins, SimonLaw
Line 10: Line 10:
We need to streamline the process of collecting data for common end-user problems, so that they can be prioritized and addressed.

This would ideally mean that crashes of userspace applications and the kernel, as well as packaging-related failures, are detected automatically so the user gets an easy-to-use frontend for adding information to the problem report, and can send the report to our database.
Crashes of userspace applications should be detected automatically so the user gets an easy to use frontend for adding information to the problem report and is offered to send the report to our database.
Line 16: Line 14:
Currently, many classes of problems (esp. program crashes) remain unreported or unfixed because: Currently, many program crashes remain unreported or unfixed because:
Line 31: Line 29:
 * Martin wants to add a new TODO item in Evolution, which causes Evolution to crash. Instead of blaming everything to GTK bugs, he wants to provide Sebastien with as much information about the crash as possible, in order to help him fix the bug.
 * Stuart runs a PostgreSQL server in the data center. If the current postmaster process crashes, he wants to be notified about it and wants to get information about the crash.
 * Joe is a non-technically inclined Ubuntu user. His gaim application randomly crashes. He is willing to help us find the problem, but he does not have the skills and time to build a debug version, run it under gdb, and try to reproduce the crash.
 * Stuart runs a PostgreSQL server in the data center where no users usually are logged in. If the current postmaster process crashes, he wants to be notified about it and wants to get information about the crash.
Line 36: Line 34:
=== Debug symbol extraction ===

In order to produce good backtraces, we need to extract and store debug symbols from standard builds, and store them in a centralized repository for use in analyzing these reports.

We will use deb files as container for debug symbols. Compared to flat files, they offer the following advantages:

 * They can be arranged in a proper pool structure with a Packages file etc., so that existing tools to mirror, download, and ship debs can be reused. (However, we will not put them into the regular distribution. They should either live on a separate server (debug.ubuntu.com) or at least in a different suite (like "breezy-debug").
 * Users can actually install them if they want to.
Line 47: Line 36:
There are two ways how to detect a crash: There are three ways how to detect a crash:
Line 51: Line 40:
 * Change the default libc signal handler to call the crash handler.
Line 52: Line 42:
The library solution does not require any changes to the existing system, but is less robust than the kernel approach, since it requires to handle the crash in a corrupted environment. According to Ben Collins, the kernel hook is relatively easy to implement, so we should aim for this solution. If it should not work for some reason, we can always fall back to the library solution, which is already implemented and tested (and found to not produce stack trace reliably). The library solution does not require any changes to the existing system, but is less robust than the kernel approach, since it requires to handle the crash in a corrupted environment. Ben Collins already implemented the kernel hook, so we will use this solution and keep the others as fallback just for the case we encounter problems with the kernel hook approach. The preload library solution already implemented and tested.

=== Data collection ===

The process spawned from the crash signal handler collects all useful information and puts them into a report in `/var/crash/`. The file permissions will ensure that only the process owner can read the file, so that no sensitive data is made available to other user.

This process limits the number of crash reports for a particular executable to avoid filling up the disk with reports from repeatedly crashing respawning processes.
Line 56: Line 52:
There is no single way of presenting the collected debug information, so we have to try a list of possible actions after a crash: Depending on the environment, we can potentially provide different crash handler frontends. As a first small implementation for Gnome, a daemon in the desktop will watch `/var/crash/` with inotify; if it detects a crash report it can read, it creates a notification which points to the file and asks to file a bug.
Line 58: Line 54:
 0. If the owner of the crashed process is currently logged in and the process has `$DISPLAY` defined, a pygtk frontend will be invoked.
 0. If the owner of the crashed process is currently logged in and the process has no `$DISPLAY` defined, but the process has an attached terminal, a console frontend will be invoked.
 0. If `/usr/sbin/sendmail` exists, a mail is sent to the process owner, containing the info file and asking for forwarding it to an appropriate email address. Since Breezy does not install even a local MTA by default, we cannot rely on this, though.
 0. Dump the report into syslog with no further action.
=== Stack trace generation ===
Line 63: Line 56:
In the future we should consider automatic processing of the generated reports by Launchpad. For now, both the interactive interface and the automatically sent mails should just ask the user to file a bug and include the generated report. Debug symbols are very big and we want to avoid requiring to download them on the client machine. So we need a server which processes the incoming reports, generates a backtrace from the report data, stack frame, and debug symbols, and adds the stacktrace to the generated report. If the original report was retrieved from a bug report, the stack trace is added as an attachment to this bug report.
Line 66: Line 59:

=== Debug symbol extraction ===

dh_strip already offers to generate a debug package with the extracted symbols. However, it requires the debug package to be mentioned in debian/control, which we do not want to do permanently. Since modifying debhelper is considered bad and we just eliminated a similar modification to dh_builddeb, we will create a new package `pkgstripdebug`, which diverts `dh_strip` to change its behaviour. This package needs to be installed into the buildd chroots, similar to `pkgstriptranslations`. The diverted `dh_strip` does the following:

 0. Create a debug package in debian/ for all packages `dh_strip` is asked to act on.
  * The package name is the original one plus `-dbgsym` appended.
  * Packages which are `Architecture: all`, or end with `-dbg` are excluded.
  * Dependencies are `Depends: `''Original package name''` (= ${Source-Version})`.
  * If there already is a -dbg package, `Conflict:` and `Replaces:` on it.
  * Point out the purpose and the original package name in the package description.
 0. Find all ELF files and call `objcopy --only-keep-debug` on them, and put the symbols into `/usr/lib/debug/`''original path'' into the -dbgsym package. `dh_strip` has a similar feature, but has a different semantics in different compatibility levels, and generally interacts too much with the packaging to use it in a robust and generic way.
 0. Create a deb and register it with `dpkg-distaddfile` for `Section: raw-debug`, so that the launchpad installer can put them into a proper place.
 0. Call the original `dh_strip` with the same parameters.

Fedora uses a similar process and apparently they developed something better than `objcopy`, which produces much smaller debug info files. This should be investigated, see [http://bugzilla.ubuntu.com/8149 Ubuntu #8149] for some further information.
Line 87: Line 64:
 * Executable name
 * Signal name
 * proc information (`/proc/pid/{cmdline,environ,maps,status}`)
 * Package name and version
 * Stack trace.
 * Execution status (`/proc/$$`)
 * Packaging information (package, version, dependencies)
 * Crash information (executable path, signal, backtrace, memory status)
 * Environment information (OS version, time, `uname`, etc.)
Line 93: Line 69:
To get a human readable backtrace, the handler looks for available debug symbols in `/usr/lib/debug/`. If none are present, the graphical crash handler should offer to download the dbgsym deb from the Ubuntu server. All data is written into a file in RFC822 format and presented to the user (see below). For details about particular fields, see next section.

All data is written into a file in debcontrol format and put into `/var/crash/`''Executable``Path''`.txt`. (With slashes being converted to underscores).

A cronjob will regularly clean up reports which are older than a week.
Line 97: Line 77:
A rfc822 encoded file with the information about the problem. Three different problem exists, program crash, packaging problem and kernel crash. We only support the first type for now, but the file format should support future improvements. The file should contain enough information to make analyzing the problem possible. A possible list of fields includes: Three different problem types exist: program crash, packaging problem and kernel crash. We only support the first type for now, but the file format should support future improvements. The file should contain enough information to help developers analyzing the problem. A possible list of fields includes:
Line 100: Line 80:
 * `Date`
 * `Architecture`
 * `DistroRelease`
 * `Locale`
 * `RunningKernel`
 * `PackageAffected`
 * `Dependencies` (with Versions)
 * `Date` (localtime)
 * `DistroRelease` (`lsb_release -sir`)
 * `Uname`
 * `Package` (name and version)
 * `SourcePackage` (only the name)
 * `Dependencies` (with versions)
Line 108: Line 87:
 * `Backtrace` (Problem``Type: Kernel or Crash)  * `StackFrame` (base64 encoded, Problem``Type: Crash, optional)
 * `CoreDump` (bzip2'ed, base64 encoded, Problem``Type: Crash, optional)
 * `Stacktrace` (`bt full`, Problem``Type: Kernel or Crash)
 * `ThreadStacktrace` (`thread apply all bt full`, Problem``Type: Crash)
Line 110: Line 92:
 * `ExecutableName` (Problem``Type: Crash)
 * `SignalName` (Problem``Type: Crash)
 * `CmdArguments` (Problem``Type: Crash, from `/proc/$pid/cmdline`)
 * `Enviroment` (Problem``Type: Crash, from `/proc/$pid/environ`)
 * `ProcStatus` (Problem``Type: Crash, from `/proc/$pid/status)
 * `ExecutablePath` (Problem``Type: Crash)
 * `Signal` (Problem``Type: Crash)
 * `ProcCmdline` (Problem``Type: Crash, from `/proc/$pid/cmdline`)
 * `ProcEnviron` (Problem``Type: Crash, from `/proc/$pid/environ`)
 * `ProcStatus` (Problem``Type: Crash, from `/proc/$pid/status`)
 * `ProcMaps` (Problem``Type: Crash, from `/proc/$pid/maps`)

=== Enriching the stack trace with symbols ===

To get a human readable backtrace, gdb looks for available debug symbols in `/usr/lib/debug/` (which is where the -dbgsym packages put them). If they are not present, the graphical crash handler can offer to download the dbgsym deb from the Ubuntu server. Alternatively, a Launchpad service would construct the backtrace from the stack frame data and the debug symbols in the archive.
Line 120: Line 107:
== Outstanding Issues == == Future improvements ==
Line 122: Line 109:
=== Future improvements ===  * Improve the crash handler frontend:
  * offer to open the bug reporting tool (or just do it) in 'crash' mode
  * offer to download debug symbols and start gdb
  * provide mail frontend for server: mail is sent to the process owner, pointing to the report
  * provide Nagios frontend for server
Line 124: Line 115:
 * Handling of kernel crashes.
 * Handling of package installation/removal/upgrade errors.
Line 128: Line 117:
 * Add a power-user option to directly call ggdb or gdb-in-a-terminal
== Superseded discussion ==

This does not form part of the spec but is retained here for information and reference.
Line 132: Line 124:
The [http://www.cs.wisc.edu/cbi/ Cooperative bug isolation] project was mentioned in this BoF, and there was some ongoing discussion about whether to adopt it in Ubuntu. CBI focuses on compiling applications with a modified toolchain to enrich them with code augmentations and debug information. However, this enlarges packages considerably, which would affect the number of packages we could ship on a CD. On the other hand, the solution that is proposed here works for all packages, does not enlarge packages, and does not require a modified toolchain. On the downside, our solution requires network access to get usable backtraces, but this can be mitigated by caching downloaded debug symbol files.

=== Kernel crash detection ===

Many kernel oopses find their way through `klogd` into the kernel log file. At boot time, we should detect if there is a kernel oops log in `/var/log/kern.log`, use `ksymoops` to make the dump actually readable and write the trace into an RFC822 format file which is then presented to the user (see below).

There is the kernel crashdump project at http://lkcd.sourceforge.net/ that should be investigated.
The [[http://www.cs.wisc.edu/cbi/|Cooperative bug isolation]] project was mentioned in this BoF, and there was some ongoing discussion about whether to adopt it in Ubuntu. CBI focuses on compiling applications with a modified toolchain to enrich them with code augmentations and debug information. However, this enlarges packages considerably, which would affect the number of packages we could ship on a CD. On the other hand, the solution that is proposed here works for all packages, does not enlarge packages, and does not require a modified toolchain. On the downside, our solution requires network access to get usable backtraces, but this can be mitigated by caching downloaded debug symbol files.
Line 148: Line 134:
=== Turning addresses into functions later ===

Symbols in packages won't retrieve the names of static functions or inline functions; only full debugging data contains that information. Unfortunately this is a lot of extra cruft to add to a user's system, see the considerations in ''Stack trace generation'' above. We can generate a backtrace as a list of addresses on the client machine, and along with the maps file and library versions have enough information that we can get the function names out of the debugging data on our end; but this is not entirely straightforward either.

Because `gdb` doesn't take maps and a debugging file and associate addresses properly, we need to improvise a little. We can probably just load the program in the debugger and compare its maps to the collected one, then adjust our collected addresses according to the base change of the library they're in. This should work because by subtracting the base address of the library from the address, we get an offset of the address in the library; and then by adding the base address of the library on our end to that offset, we get the address of the same point in the library in our traced process, allowing us to ask the debugger what line of code is relevant here.

This method is pretty much applying a relocation to our collected address, it's the same thing the dynamic linker does when it loads a library.

=== Sensitive information ===
Sensitive information may be included in:

 * `Stack dumps`: The stack frame or a small stack dump may contain GPG keys or passwords.
 * `core dumps`: Contain the contents of memory, so anything can be here.
 * `/proc/$pid/cmdline`: Some dirty scripts do `mysql -u root -pmypasswd`, among other things.
 * `/proc/$pid/environ`: Leaks user names at the very least; we say ''invalid username/password combination'' instead of ''bad username'' or ''bad password'' for a reason. Other nasty stuff may be in the environment in rare cases.

A backtrace is fine; return addresses are not sensitive. The other stuff needs to be handled carefully and needs user intervention.

=== Signals ===
Besides `SIGSEGV`, `SIGBUS`, and `SIGFPE`, there are two more signals to trap.

 * `SIGILL`: Illegal instruction, indicating something like `mono` generated bad code, or someone trashed program memory, or someone got the program executing data (crack attack).
 * `SIGKILL` to self: Distinctly odd, but useful. Stack smashes are clean exits and not detected; but we can modify `__stack_chk_fail()` in `glibc` to `kill(getpid(),SIGKILL)` and create an easily detectable stack smash crash.

Stack smashes can possibly call the crash detector directly; see AutomatedSecurityVulnerabilityDetection for an explanation. This can also be used to report heap corruption, since `glibc` knows how to bail when `malloc()` or `free()` see something ugly.

Introduction

Crashes of userspace applications should be detected automatically, so that the user gets an easy-to-use frontend for adding information to the problem report and is offered the option of sending the report to our database.

Rationale

Currently, many program crashes remain unreported or unfixed because:

  • many crashes are not easily reproducible (e. g. after installing a debug version)
  • end users do not know how to prepare a report that is really useful for developers
  • and we have no easy frontend which allows users to submit detailed problem reports.

If data collection is automated and detailed information about a crash is captured at the very time it occurs, developers will be notified about problems and given much of the information they need to deal with them.

We hope that this will lead to a much better level of quality assurance in the future.

Scope

This specification deals with detecting crashes of processes running in the user's session. Crashes of system processes are covered to some degree. Kernel and package failures will be dealt with in separate specifications.

Use Cases

  • Joe is a non-technically inclined Ubuntu user. His gaim application randomly crashes. He is willing to help us find the problem, but he does not have the skills and time to build a debug version, run it under gdb, and try to reproduce the crash.
  • Stuart runs a PostgreSQL server in the data center where no users usually are logged in. If the current postmaster process crashes, he wants to be notified about it and wants to get information about the crash.

Design

Process crash detection

There are three ways to detect a crash:

  • Create a small library libcrashrep.so whose init function installs a signal handler for the most common types of crashes (segmentation violation, floating point error, and bus error). The handler will catch all signals that the application does not handle itself. When a crash is detected, the library calls an external program. The library is put into /etc/ld.so.preload.

  • Extend the kernel to call a userspace program when a process exits with one of the mentioned signals. The program should be configured in /proc/sys/proc/process_crash_handler (or a similar file).

  • Change the default libc signal handler to call the crash handler.

The library solution does not require any changes to the existing system, but it is less robust than the kernel approach, since it has to handle the crash in a corrupted environment. Ben Collins has already implemented the kernel hook, so we will use this solution and keep the others as fallbacks in case we encounter problems with the kernel hook approach. The preload library solution is already implemented and tested.
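
The preload-library variant can be pictured with a short sketch. This is only an illustration of the idea (the real library would be written in C and loaded via /etc/ld.so.preload), and the external handler path is a hypothetical name, not something defined by this specification:

{{{
import os
import signal
import sys

CRASH_HANDLER = "/usr/lib/crashrep/crash-handler"   # hypothetical helper
CRASH_SIGNALS = (signal.SIGSEGV, signal.SIGBUS, signal.SIGFPE)

def _report_crash(signum, frame):
    # Hand off to the external collector, then re-deliver the signal with the
    # default handler so the process still dies with the original signal.
    if os.path.exists(CRASH_HANDLER):
        os.spawnl(os.P_WAIT, CRASH_HANDLER, CRASH_HANDLER,
                  str(os.getpid()), str(signum), sys.argv[0])
    signal.signal(signum, signal.SIG_DFL)
    os.kill(os.getpid(), signum)

def install_crash_handlers():
    for sig in CRASH_SIGNALS:
        # Only install ourselves where the application has no handler of its own.
        if signal.getsignal(sig) == signal.SIG_DFL:
            signal.signal(sig, _report_crash)
}}}

The kernel-hook variant moves exactly this hand-off into the kernel, so the collector no longer has to start from inside the corrupted process.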

Data collection

The process spawned from the crash signal handler collects all useful information and puts it into a report in /var/crash/. The file permissions ensure that only the process owner can read the file, so that no sensitive data is made available to other users.

This process limits the number of crash reports for a particular executable to avoid filling up the disk with reports from repeatedly crashing respawning processes.
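
A minimal sketch of this step, assuming the naming scheme described in the implementation section below; the open flags give both properties at once (a private file, and no unbounded growth from a respawning crasher):

{{{
import os

CRASH_DIR = "/var/crash"

def open_report(executable_path):
    name = executable_path.replace("/", "_") + ".txt"
    path = os.path.join(CRASH_DIR, name)
    try:
        # O_EXCL refuses to overwrite an existing report; 0o600 keeps it private.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    except OSError:
        return None   # a report for this executable is already pending
    return os.fdopen(fd, "w")
}}}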

Presenting the information

Depending on the environment, we can potentially provide different crash handler frontends. As a first small implementation for Gnome, a daemon in the desktop session will watch /var/crash/ with inotify; if it detects a crash report it can read, it creates a notification which points to the file and asks the user to file a bug.
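
The following stand-in sketches the watcher logic; a real implementation would register an inotify watch and raise a desktop notification instead of polling and printing, and the message text is illustrative:

{{{
import os
import time

CRASH_DIR = "/var/crash"

def watch_crash_dir(interval=5):
    seen = set(os.listdir(CRASH_DIR))
    while True:
        time.sleep(interval)
        for report in sorted(set(os.listdir(CRASH_DIR)) - seen):
            path = os.path.join(CRASH_DIR, report)
            if os.access(path, os.R_OK):   # only reports we are allowed to read
                print("Crash report found in %s -- please file a bug and "
                      "attach this file." % path)
            seen.add(report)
}}}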

Stack trace generation

Debug symbols are very large, and we want to avoid requiring clients to download them. So we need a server which processes incoming reports, generates a backtrace from the report data, stack frame, and debug symbols, and adds the stack trace to the generated report. If the original report was retrieved from a bug report, the stack trace is added as an attachment to that bug report.
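
A rough sketch of that server-side step, assuming reports are parsed into a dictionary of the fields described in the file format section; the retrace_core() helper is the gdb-based sketch shown under "Enriching the stack trace with symbols" below:

{{{
import base64
import bz2
import tempfile

def add_stacktrace(report):
    """report is a dict of the fields described in the file format section."""
    if "CoreDump" not in report:
        return report
    core = bz2.decompress(base64.b64decode(report["CoreDump"]))
    with tempfile.NamedTemporaryFile(suffix=".core") as corefile:
        corefile.write(core)
        corefile.flush()
        report["Stacktrace"] = retrace_core(report["ExecutablePath"],
                                            corefile.name)
    return report
}}}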

Implementation

Process crash detection

The crash handler collects the following information about the crash:

  • Execution status (/proc/$$)

  • Packaging information (package, version, dependencies)
  • Crash information (executable path, signal, backtrace, memory status)
  • Environment information (OS version, time, uname, etc.)

For details about particular fields, see the next section.

All data is written into a file in debcontrol format and put into /var/crash/ExecutablePath.txt, with slashes in the executable path converted to underscores.

A cronjob will regularly clean up reports which are older than a week.
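
A sketch of the collection and serialisation steps, using the field names from the next section; error handling and the environment and dependency fields are omitted for brevity:

{{{
import os
import time

def collect_proc_info(pid):
    fields = {"ProblemType": "Crash", "Date": time.asctime()}
    proc = "/proc/%d" % pid
    fields["ExecutablePath"] = os.readlink(os.path.join(proc, "exe"))
    with open(os.path.join(proc, "cmdline"), "rb") as f:
        fields["ProcCmdline"] = f.read().replace(b"\0", b" ").decode(
            errors="replace").strip()
    for name, key in (("status", "ProcStatus"), ("maps", "ProcMaps")):
        with open(os.path.join(proc, name)) as f:
            fields[key] = f.read()
    return fields

def write_report(fields, out):
    # Continuation lines of multi-line values are indented by one space,
    # in the usual debcontrol style.
    for key, value in fields.items():
        out.write("%s: %s\n" % (key, value.replace("\n", "\n ").rstrip()))
}}}

write_report() would be handed the file object returned by the report-creation sketch in the data collection section above.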

Problem information file format

Three different problem types exist: program crash, packaging problem and kernel crash. We only support the first type for now, but the file format should support future improvements. The file should contain enough information to help developers analyze the problem. A possible list of fields includes:

  • ProblemType: [Crash|Packaging|Kernel] 

  • Date (localtime)

  • DistroRelease (lsb_release -sir)

  • Uname

  • Package (name and version)

  • SourcePackage (only the name)

  • Dependencies (with versions)

  • UserNotes

  • StackFrame (base64 encoded, ProblemType: Crash, optional)

  • CoreDump (bzip2'ed, base64 encoded, ProblemType: Crash, optional)

  • Stacktrace (bt full, ProblemType: Kernel or Crash)

  • ThreadStacktrace (thread apply all bt full, ProblemType: Crash)

  • PackageError (ProblemType: Packaging, dependency problem or dpkg output)

  • ExecutablePath (ProblemType: Crash)

  • Signal (ProblemType: Crash)

  • ProcCmdline (ProblemType: Crash, from /proc/$pid/cmdline)

  • ProcEnviron (ProblemType: Crash, from /proc/$pid/environ)

  • ProcStatus (ProblemType: Crash, from /proc/$pid/status)

  • ProcMaps (ProblemType: Crash, from /proc/$pid/maps)
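
For illustration, a crash report in this format might look like the following; every value below is invented and only shows the intended shape of the file:

{{{
ProblemType: Crash
Date: Thu Oct 27 20:51:43 2005
DistroRelease: Ubuntu 5.10
Uname: Linux 2.6.12-10-386 i686
Package: gaim 1.5.0-1ubuntu1
SourcePackage: gaim
ExecutablePath: /usr/bin/gaim
Signal: 11
ProcCmdline: gaim
Stacktrace:
 #0  0x0805f4e2 in ?? ()
 #1  0xb7e4a123 in ?? () from /usr/lib/libgtk-x11-2.0.so.0
}}}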

Enriching the stack trace with symbols

To get a human-readable backtrace, gdb looks for available debug symbols in /usr/lib/debug/ (which is where the -dbgsym packages put them). If they are not present, the graphical crash handler can offer to download the dbgsym deb from the Ubuntu server. Alternatively, a Launchpad service could construct the backtrace from the stack frame data and the debug symbols in the archive.
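
A sketch of the underlying gdb invocation; it points gdb explicitly at /usr/lib/debug and collects the two stack traces named in the file format section. The same call works on the client against a local core file or on the retracing server:

{{{
import subprocess

def retrace_core(executable_path, core_path):
    gdb_cmd = [
        "gdb", "--batch",
        "--ex", "set debug-file-directory /usr/lib/debug",
        "--ex", "bt full",
        "--ex", "thread apply all bt full",
        executable_path, core_path,
    ]
    return subprocess.run(gdb_cmd, capture_output=True, text=True).stdout
}}}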

Data Preservation and Migration

Those processes will not alter the user's data in any way.

Future improvements

  • Improve the crash handler frontend:
    • offer to open the bug reporting tool (or just do it) in 'crash' mode
    • offer to download debug symbols and start gdb
    • provide mail frontend for server: mail is sent to the process owner, pointing to the report
    • provide Nagios frontend for server
  • Automated crash reporting to Launchpad (taking privacy issues into account).
  • Duplicate recognition based on the package and backtrace.
  • Offer to save the core file somewhere, so that the user can further assist the people who try to fix the bug.

Superseded discussion

This does not form part of the spec but is retained here for information and reference.

CBI

The Cooperative bug isolation project was mentioned in this BoF, and there was some ongoing discussion about whether to adopt it in Ubuntu. CBI focuses on compiling applications with a modified toolchain to enrich them with code augmentations and debug information. However, this enlarges packages considerably, which would affect the number of packages we could ship on a CD. On the other hand, the solution that is proposed here works for all packages, does not enlarge packages, and does not require a modified toolchain. On the downside, our solution requires network access to get usable backtraces, but this can be mitigated by caching downloaded debug symbol files.

Package installation failures

For package system failures, code needs to be written so that apt can report dependency problems (apt-get install $foo fails) and package installation/removal/upgrade failures to an external application. Before reporting a problem, apt needs to check that the installed dependencies on the system are consistent (apt-get install -f runs successfully). An option in apt should control whether apt reports problems or not (so that users and developers running an unstable distribution can turn it off). The report should include the user's sources.list to identify problems with third-party repositories. In some cases the output of apt-get install -o Debug::pkgProblemResolver=true is useful as well. The list of installed packages is sometimes useful too, but it can easily get huge, so it is probably not feasible to include it in a report.
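
A sketch of how such a check could look, under the assumptions named in the comments (the SourcesList field name and the shape of the returned report are not part of this specification):

{{{
import subprocess

def report_packaging_problem(dpkg_output):
    # (A configuration option would be consulted here, so that users running
    # an unstable distribution can switch reporting off.)
    # "apt-get -s -f install" simulates a repair run; a non-zero exit status
    # means the system itself is inconsistent and not worth reporting yet.
    check = subprocess.run(["apt-get", "-s", "-f", "install"],
                           capture_output=True, text=True)
    if check.returncode != 0:
        return None
    with open("/etc/apt/sources.list") as f:
        sources = f.read()
    return {"ProblemType": "Packaging",
            "PackageError": dpkg_output,
            "SourcesList": sources}
}}}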

Providing minimal symbols in binaries

A possible alternative to creating separate debug packages for everything is to include some symbols in binary packages. The primary problem for upstream developers receiving backtraces is functions listed as (???) instead of the function name. Additional information such as source code file and line number, although interesting, is less important. Including symbols for every function directly in the binary file would provide the former without increasing the binary size as much as including full debugging information. This can be implemented by using the -g option to strip instead of the invocation that is currently used. Some discussion is necessary to determine the optimal strip flags.

Turning addresses into functions later

Symbols in packages won't retrieve the names of static functions or inline functions; only full debugging data contains that information. Unfortunately, this is a lot of extra cruft to add to a user's system; see the considerations in Stack trace generation above. We can generate a backtrace as a list of addresses on the client machine, and along with the maps file and library versions we have enough information to get the function names out of the debugging data on our end; but this is not entirely straightforward either.

Because gdb doesn't take maps and a debugging file and associate addresses properly, we need to improvise a little. We can probably just load the program in the debugger and compare its maps to the collected one, then adjust our collected addresses according to the base change of the library they're in. This should work because by subtracting the base address of the library from the address, we get an offset of the address in the library; and then by adding the base address of the library on our end to that offset, we get the address of the same point in the library in our traced process, allowing us to ask the debugger what line of code is relevant here.

This method is pretty much applying a relocation to our collected address; it is the same thing the dynamic linker does when it loads a library.
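
In code the adjustment is a one-liner; the example numbers are made up but show the arithmetic:

{{{
def relocate(address, base_in_crashed_process, base_in_local_process):
    # Subtract the base address the library had in the crashed process,
    # then add the base address it has in our local debugging process.
    offset = address - base_in_crashed_process
    return base_in_local_process + offset

# Example: a return address 0xb7e4a123 in a library mapped at 0xb7e20000 on
# the user's machine corresponds to 0xb7f3a123 if we map the same library at
# 0xb7f10000 locally.
assert relocate(0xb7e4a123, 0xb7e20000, 0xb7f10000) == 0xb7f3a123
}}}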

Sensitive information

Sensitive information may be included in:

  • Stack dumps: The stack frame or a small stack dump may contain GPG keys or passwords.

  • core dumps: Contain the contents of memory, so anything can be here.

  • /proc/$pid/cmdline: Some dirty scripts do mysql -u root -pmypasswd, among other things.

  • /proc/$pid/environ: Leaks user names at the very least; we say invalid username/password combination instead of bad username or bad password for a reason. Other nasty stuff may be in the environment in rare cases.

A backtrace is fine; return addresses are not sensitive. The other stuff needs to be handled carefully and needs user intervention.
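
One possible way to treat these fields, purely as an assumption of this sketch and not something the specification prescribes, is to whitelist a few harmless environment variables and leave everything else to explicit user review:

{{{
SAFE_ENVIRONMENT_KEYS = {"LANG", "LC_ALL", "PATH", "SHELL", "TERM"}

def scrub_environ(proc_environ_text):
    # /proc/$pid/environ entries are NUL-separated KEY=value strings.
    kept = []
    for entry in proc_environ_text.split("\0"):
        key = entry.split("=", 1)[0]
        if key in SAFE_ENVIRONMENT_KEYS:
            kept.append(entry)
    return "\n".join(kept)
}}}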

Signals

Besides SIGSEGV, SIGBUS, and SIGFPE, there are two more signals to trap.

  • SIGILL: Illegal instruction, indicating something like mono generated bad code, or someone trashed program memory, or someone got the program executing data (crack attack).

  • SIGKILL to self: Distinctly odd, but useful. Stack smashes are clean exits and not detected; but we can modify __stack_chk_fail() in glibc to kill(getpid(),SIGKILL) and create an easily detectable stack smash crash.

Stack smashes can possibly call the crash detector directly; see AutomatedSecurityVulnerabilityDetection for an explanation. This can also be used to report heap corruption, since glibc knows how to bail when malloc() or free() see something ugly.


CategorySpec
