Crashes of userspace applications should be detected automatically, so that the user gets an easy-to-use frontend for adding information to the problem report and is offered the option of sending the report to our database.


Currently, many program crashes remain unreported or unfixed because:

If the process of data collection is automated and detailed information about a crash can be collected at the very moment the crash occurs, developers will be notified about problems and will receive much of the information they need to deal with them.

We hope that this will lead to a much better level of quality assurance in the future.


This specification deals with detecting crashes of processes running in the user's session. Crashes of system processes are covered to some degree. Kernel and package failures will be dealt with in separate specifications.

Use Cases


Process crash detection

There are three ways to detect a crash:

The library solution does not require any changes to the existing system, but it is less robust than the kernel approach, since it must handle the crash in a corrupted environment. Ben Collins has already implemented the kernel hook, so we will use this solution and keep the others as fallbacks in case we encounter problems with the kernel hook approach. The preload library solution has already been implemented and tested.

Data collection

The process spawned from the crash signal handler collects all useful information and puts it into a report in /var/crash/. The file permissions will ensure that only the process owner can read the file, so that no sensitive data is made available to other users.
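The owner-only permission requirement can be sketched as follows; this is a minimal illustration, not the actual implementation, and the function name is hypothetical:

```python
import os

def write_report(path, data):
    """Create a crash report readable only by its owner (mode 0600)."""
    # O_EXCL refuses to overwrite an existing report or follow a symlink,
    # which matters for a file created in a world-writable directory.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)
```

Because the mode is passed directly to os.open() at creation time, there is no window in which the file exists with wider permissions.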

This process limits the number of crash reports for a particular executable to avoid filling up the disk with reports from repeatedly crashing respawning processes.
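One simple form of this limit is to skip data collection while an unprocessed report for the same executable is still present; this is a hypothetical sketch, since the spec does not fix the exact policy (a counter or timestamp check would also work):

```python
import os

def should_report(report_path):
    """Return True if a new crash report should be written.

    Skips collection while an earlier report for the same executable
    is still sitting in /var/crash/, so a respawning process that
    crashes in a loop produces at most one pending report.
    """
    return not os.path.exists(report_path)
```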

Presenting the information

Depending on the environment, we can potentially provide different crash handler frontends. As a first small implementation for GNOME, a daemon in the desktop session will watch /var/crash/ with inotify; if it detects a crash report it can read, it creates a notification which points to the file and asks the user to file a bug.
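The watching logic can be sketched as below. This is an illustration only: it detects new readable files by comparing directory listings on each check, whereas the real daemon would have check() driven by inotify events to avoid polling; the class and method names are made up:

```python
import os

class CrashDirWatcher:
    """Reports each new readable file appearing in a directory."""

    def __init__(self, directory, on_report):
        self.directory = directory
        self.on_report = on_report
        self.seen = set(os.listdir(directory))

    def check(self):
        """Call on_report(path) for files that appeared since last check."""
        current = set(os.listdir(self.directory))
        for name in sorted(current - self.seen):
            path = os.path.join(self.directory, name)
            # Only notify about reports this user is allowed to read;
            # reports of other users are mode 0600 and thus skipped.
            if os.access(path, os.R_OK):
                self.on_report(path)
        self.seen = current
```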

Stack trace generation

Debug symbols are very big, and we want to avoid requiring the client machine to download them. So we need a server which processes the incoming reports, generates a backtrace from the report data, stack frame, and debug symbols, and adds the stack trace to the generated report. If the original report was retrieved from a bug report, the stack trace is added as an attachment to that bug report.


Process crash detection

The crash handler collects the following information about the crash:

For details about particular fields, see the next section.

All data is written in debcontrol format to a file /var/crash/ExecutablePath.txt, with the slashes in the executable path converted to underscores.
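The slash-to-underscore mapping can be sketched as follows (the helper name is hypothetical; the naming scheme itself is taken from the text above):

```python
def report_path(executable_path, crash_dir="/var/crash"):
    """Map an executable path to its crash report file name,
    e.g. /usr/bin/gedit -> /var/crash/_usr_bin_gedit.txt"""
    return "%s/%s.txt" % (crash_dir, executable_path.replace("/", "_"))
```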

A cronjob will regularly clean up reports which are older than a week.
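The cleanup job could look like this minimal sketch, run daily from cron; the function name and one-week threshold parameterization are illustrative:

```python
import os
import time

WEEK = 7 * 24 * 3600  # one week, in seconds

def clean_old_reports(crash_dir, max_age=WEEK, now=None):
    """Delete crash reports whose mtime is older than max_age seconds."""
    if now is None:
        now = time.time()
    for name in os.listdir(crash_dir):
        path = os.path.join(crash_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age:
            os.remove(path)
```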

Problem information file format

Three different problem types exist: program crash, packaging problem, and kernel crash. We only support the first type for now, but the file format should allow for future improvements. The file should contain enough information to help developers analyze the problem. A possible list of fields includes:
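Debcontrol is the RFC 822-style "Field: value" format used by Debian control files, with continuation lines indented by one space. A minimal serializer could look like this; the field names in the usage example are placeholders, not the spec's final field list:

```python
def format_report(fields):
    """Serialize report fields in debcontrol (RFC 822-style) format.

    Each embedded newline in a value starts a continuation line,
    which debcontrol marks with a single leading space.
    """
    lines = []
    for key, value in fields.items():
        lines.append("%s: %s" % (key, str(value).replace("\n", "\n ")))
    return "\n".join(lines) + "\n"

# Hypothetical usage:
# format_report({"ProblemType": "Crash",
#                "ExecutablePath": "/usr/bin/gedit"})
```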

Enriching the stack trace with symbols

To get a human-readable backtrace, gdb looks for available debug symbols in /usr/lib/debug/ (which is where the -dbgsym packages put them). If they are not present, the graphical crash handler can offer to download the dbgsym deb from the Ubuntu server. Alternatively, a Launchpad service could construct the backtrace from the stack frame data and the debug symbols in the archive.

Data Preservation and Migration

Those processes will not alter the user's data in any way.

Future improvements

Superseded discussion

This does not form part of the spec but is retained here for information and reference.


The Cooperative Bug Isolation (CBI) project was mentioned in this BoF, and there was some ongoing discussion about whether to adopt it in Ubuntu. CBI focuses on compiling applications with a modified toolchain to enrich them with code augmentations and debug information. However, this enlarges packages considerably, which would affect the number of packages we could ship on a CD. On the other hand, the solution that is proposed here works for all packages, does not enlarge packages, and does not require a modified toolchain. On the downside, our solution requires network access to get usable backtraces, but this can be mitigated by caching downloaded debug symbol files.

Package installation failures

For package system failures, code needs to be written so that apt can report dependency problems (apt-get install $foo fails) and package installation/removal/upgrade failures to an external application. Before reporting a problem, apt needs to check that the dependencies of the installed packages are consistent (apt-get install -f runs successfully). An option in apt should control whether apt reports the problems (so that users/developers running an unstable distribution can turn it off). The report should include the user's sources.list to identify problems with third-party repositories. In some cases the output of apt-get install -o Debug::pkgProblemResolver=true is useful as well. The list of installed packages is sometimes useful too, but it can easily get huge, so it is probably not feasible to include it in a report.

Providing minimal symbols in binaries

A possible alternative to creating separate debug packages for everything is to include some symbols in binary packages. The primary problem for upstream developers receiving backtraces is functions listed as (???) instead of by name. Additional information such as the source file and line number, although interesting, is less important. Including symbols for every function directly in the binary file would provide the former without increasing the binary size as much as full debugging information would. This can be implemented by passing the -g option to strip instead of the flags currently used. Some discussion is necessary to determine the optimal strip flags.

Turning addresses into functions later

Symbols in packages will not recover the names of static or inlined functions; only full debugging data contains that information. Unfortunately, that is a lot of extra cruft to add to a user's system; see the considerations in Stack trace generation above. We can instead generate a backtrace as a list of addresses on the client machine, and along with the maps file and library versions have enough information to get the function names out of the debugging data on our end; but this is not entirely straightforward either.

Because gdb cannot take a maps file and a debugging file and associate addresses on its own, we need to improvise a little. We can probably just load the program in the debugger, compare its maps to the collected one, and then adjust our collected addresses according to the base-address change of the library they are in. This works because subtracting the library's base address in the crashed process from a collected address gives the offset of that address within the library; adding the library's base address on our end to that offset then gives the address of the same point in our traced process, allowing us to ask the debugger which line of code is relevant there.

This method essentially applies a relocation to our collected address; it is the same thing the dynamic linker does when it loads a library.
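The address arithmetic described above can be sketched as follows; the base addresses in the comments are made up for illustration:

```python
def rebase_address(addr, old_base, new_base):
    """Translate an address collected in the crashed process into the
    equivalent address in our local debugger's mapping of the same
    library.
    """
    offset = addr - old_base   # position of the address within the library
    return new_base + offset   # same position under our local base address

# Hypothetical example: a library mapped at 0xb7e00000 in the crashed
# process and at 0xb7f00000 in our local debugging session.
local_addr = rebase_address(0xb7e01234, 0xb7e00000, 0xb7f00000)
```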

Sensitive information

Sensitive information may be included in:

A backtrace is fine; return addresses are not sensitive. The other items need to be handled carefully and require user intervention.


Besides SIGSEGV, SIGBUS, and SIGFPE, there are two more signals to trap.

Stack smashes can possibly call the crash detector directly; see AutomatedSecurityVulnerabilityDetection for an explanation. This can also be used to report heap corruption, since glibc knows how to bail out when malloc() or free() detect something ugly.


AutomatedProblemReports (last edited 2008-08-06 16:26:25 by localhost)