This document describes the main proposals for automating the builds and tests of Tor projects.
- Build tool
-
This proposal describes a new build process for TBB, PTTBB and packages. An advanced prototype of a TBB / PTTBB build process is available to show how that would work. This new process would make it possible to automate the build of the bundles in Jenkins.
- VM / Containers tools
-
This proposal describes a tool that would help us create and manage virtual machines. It would be used during the build process and during automated testing.
- Builds Database
-
This proposal describes a signed builds database. This allows us to automate the upload, download and verification of build signatures. We can then integrate it into the build process so that the parts that have already been built by enough trusted people are not built again.
- Test Helpers
-
This proposal describes some tools that we should create to be able to automate the testing of some of the Tor components.
Build tool
Goals
It would be useful to have a tool that can be used for building the Tor Browser Bundle, and packages for any of the Tor components.
Tor Browser Bundle
In the current build process for the Tor Browser Bundle, gitian is used to:
-
create and start an Ubuntu VM in KVM or LXC
-
install build dependencies in the VM
-
copy the source files into the VM and start the build
The following parts are not done by gitian, but by shell script wrappers around gitian:
-
initial clone of the git repository containing the sources (fetch-inputs.sh)
-
git tag gpg signature check (verify-tags.sh)
-
download tarballs from mirrors, and verify gpg signature or sha256sum (fetch-inputs.sh)
Improvements that can be made to the current process:
-
some information is duplicated in different places, such as git URLs (both in the gitian descriptors and in fetch-inputs.sh)
-
the Linux, Mac and Windows gitian descriptors are separate, so parts of them duplicate the same information. It would be better to be able to share the parts that are common to all OSs.
-
the build process is not very modular. It is not possible, or not simple, to build only some of the components.
-
the build process can currently only be run from Ubuntu
-
the current process requires using git tags if gpg signature verification is wanted. It could be useful to be able to use gpg signed commits instead.
To solve this, we can have a tool that handles both the tasks currently done by gitian and the tasks currently done by the shell script wrappers:
-
store in a single descriptor the build instructions, build dependencies, git URL, tarball URLs, gpg keyring file, sha256sum, etc., and have all the operations done by the tool. This should remove the need for wrappers.
-
create a separate descriptor for each software that we build, so that they can be built separately. The tool should be able to manage dependencies between separate builds to create a bundle of different software.
-
a templating system allows us to define variables and use them in different places. This lets us have a common build descriptor for all OSs, and use conditional templating instructions where the process differs between OSs.
A prototype to show a possible build process is available, with more details below.
Packages
In addition to building the Tor Browser Bundle, we also need to build some rpm or Debian packages for different components. Some of them are also included in the Tor Browser Bundle.
To be able to do continuous packaging, we need a tool that would allow us to produce an rpm or Debian package from a git commit. In the jenkins/tools.git repository, some shell scripts are used to produce Tor Debian packages from the latest commit.
To avoid duplicating build instructions, URLs, gpg keyrings and other information, it would be useful if a single tool were able to produce the Tor Browser Bundle and also individual packages for some of the components.
Prototype and current status
I have recently created a tool that could be used to build the bundle, as well as Debian and rpm packages: Reproducible Build Manager (rbm).
It currently has the following features:
-
build software from a git repository, with tag signature verification and/or commit signature verification
-
download files from a URL, and check their gpg signature or sha256sum
-
use the build result of another component
-
build on a remote host, VM or chroot
-
build rpm or Debian packages, or tarballs
-
define variables in the descriptors, and use them anywhere using template instructions
-
override some variables on the command line by selecting build targets.
It is currently missing the following features from gitian:
-
creation and start/stop of the Ubuntu build VMs. We can keep the gitian scripts for that, and improve them later.
-
definition of the build dependencies in the descriptor, and their installation. Adding this feature should not be a lot of work.
Prototype
A prototype for a new build process for the Tor Browser Bundle using this tool is available at the following URL: https://github.com/boklm/rbm-tor
It currently includes the following components: libevent, tor, tor-browser, torbutton, tor-launcher, https-everywhere, all built separately and combined in the bundle.
It does not produce a functional build yet, but is enough to show what the build process would look like.
Installation
Debian packages for rbm should be available soon. Until they are available you can do a manual install:
-
clone the git repository in some directory
-
install the following perl modules: YAML::XS, Getopt::Long, Template, IO::CaptureOutput, File::Slurp, String::ShellQuote, Sort::Versions, Data::Dump
On Debian this can be done with:
$ apt-get install libyaml-libyaml-perl libtemplate-perl \
libio-captureoutput-perl libfile-slurp-perl \
libstring-shellquote-perl libsort-versions-perl \
libdata-dump-perl
$ git clone https://github.com/boklm/rbm ~/rbm
In the commands from the following examples, replace rbm with the path to the rbm executable (for instance ~/rbm/rbm).
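To check that the installation works, you can run a showconf command from a clone of the rbm-tor repository described below (assuming it was cloned in ~/rbm-tor; the exact paths depend on where you cloned the repositories). It should print something like the libevent git URL from its config file:
$ cd ~/rbm-tor
$ ~/rbm/rbm showconf libevent git_url
https://github.com/libevent/libevent.git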
How it works
To understand how it works, we can look at different files from the prototype and see what they do.
Files and directories list
The following files and directories are present in the repository:
.
├── rbm.conf           - The main configuration file
├── git_clones         - directory containing git clones
├── keyring
│   ├── libevent.gpg   - The gpg keyring for the libevent project
│   ├── tor.gpg        - The gpg keyring for the tor project
│   └── ...            - other keyring files
├── out                - Output directory for build files
└── projects
    ├── libevent
    │   ├── build      - build instructions for libevent
    │   └── config     - configuration for libevent
    ├── tbb
    │   ├── build      - build instructions for the bundle
    │   └── config     - configuration for the bundle
    ├── tor
    │   ├── build      - build instructions for tor
    │   └── config     - configuration for tor
    └── ...            - other projects directories
rbm.conf
The rbm.conf file is used to indicate the root of the rbm repository. It also contains the configuration options shared by all projects. All those options can be overridden by each project's configuration.
libevent
The file projects/libevent/config contains the configuration for the libevent build:
git_url: https://github.com/libevent/libevent.git
version: '[% c("abbrev") %]'
var:
  dest_filename: 'libevent-[% c("version") %]-[% c("lsb_release/id") %]-[% c("lsb_release/release") %]-[% c("arch") %].tar'
targets:
  notarget: dev
  stable:
    git_hash: release-2.0.21-stable
    tag_gpg_id: 1
  dev:
    git_hash: master
The following options are defined:
- git_url
-
This option contains the URL of the git repository that should be cloned to create the sources tarball.
- version
-
This option contains the string that is used as version number in the sources tarball. In this case, we’re saying that we want to use abbrev as version string, which is the abbreviated hash of the commit that has been selected.
- var/dest_filename
-
This option is used in the build file, and defines the name of the output file. The name of this option is arbitrary, so to avoid a conflict with an option used by a future version of rbm, we add the var prefix. The reason for using a variable to define this file name is to allow other projects which need this file (in this case, this libevent build is used by tor) to know the name of the file that is produced. We can see that the filename contains the following information: the abbreviated commit hash, the distribution name and release, and the architecture.
- targets
-
We can see that different targets are defined. Targets allow us to set different options depending on the target selected on the command line. In this case, we define the stable and dev targets. This page has more details about how targets work.
- git_hash
-
This option is used to select the git commit from which to create the sources tarball. This can be anything that git calls a tree-ish (hashes, tags, branch names).
- tag_gpg_id
-
This option indicates that the gpg signature of the tag pointed to by git_hash should be checked, using the keyring file libevent.gpg (as defined by the keyring option in rbm.conf). We can only set this option for the stable target, because the dev target does not use a tag. The libevent commits are not currently signed, but if they were, we could set the commit_gpg_id option to check the signature of the commit on the master branch.
The file projects/libevent/build contains the build instructions for libevent:
[%- USE date -%]
#!/bin/sh
set -e
[% c('var/reproducible_init') %]
tar xvf [% project %]-[% c('version') %].tar.[% c('compress_tar') %]
cd [% project %]-[% c('version') %]
./autogen.sh
[% c('var/touch_directory', { directory => '.' }) %]
mkdir -p inst/libevent
./configure --disable-static --prefix="$(pwd)/inst/libevent"
make -j$(nproc)
make install
cd inst
[% c('var/touch_directory', { directory => '.' }) %]
[% c('tar', { tar_src => 'libevent', tar_args => '-cf ' _ dest_dir _ '/' _ c('var/dest_filename') }) %]
We can see that it uses the following options:
- var/reproducible_init
-
This option is defined in rbm.conf, and exports the environment variables needed to enable libfaketime. The time used by libfaketime is defined by the timestamp option, which by default is the time of the selected git commit.
- var/touch_directory
-
This option sets the modification time of all files in the selected directory, using the time from timestamp option.
- tar
-
This option is defined in rbm (you can run rbm showconf to see it), and is used to create a deterministic tar file. It does something similar to the gitian/build-helpers/dtar.sh script from the current tbb build process; a rough sketch of such an invocation is shown below.
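This is an illustration of the idea only, not the exact template used by rbm; TIMESTAMP, dest_dir and dest_filename stand for values that rbm provides:
# sort the file list and force the owner, group and modification time so
# that the resulting archive does not depend on the build environment
find libevent -print0 | LC_ALL=C sort -z | \
    tar --null --no-recursion -T - \
        --owner=root --group=root --mtime="@$TIMESTAMP" \
        -cf "$dest_dir/$dest_filename"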
Following are some examples of commands that we can use to build libevent.
Creating a sources tarball for the stable target. In this example we can see that the gpg signature of the git tag is checked:
$ rbm tar libevent --target stable
Tag release-2.0.21-stable is signed with key B35BF85BF19489D04E28C33C21194EBB165733EA
Created /home/boklm/work/rbm-tor/out/libevent-64177777165d.tar.gz
We can do the same for the devel target, and we will get the sources from the master branch instead:
$ rbm tar libevent --target dev
Created /home/boklm/work/rbm-tor/out/libevent-4c8ebcd3598a.tar.gz
If what we want is a build, we use the build command instead of tar.
$ ~/work/mkpkg/rbm build libevent --target dev
Created /home/boklm/tmp/rbm-keaW4/libevent-4c8ebcd3598a.tar.gz
[some build logs]
$ ls -l out/libevent-4c8ebcd3598a-Mageia-3-x86_64.tar
-rw-r--r-- 1 boklm boklm 3430400 Jan 2 14:43 out/libevent-4c8ebcd3598a-Mageia-3-x86_64.tar
$ tar tvf out/libevent-4c8ebcd3598a-Mageia-3-x86_64.tar
drwx------ root/root 0 2013-12-24 21:02 libevent/
drwx------ root/root 0 2013-12-24 21:02 libevent/lib/
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_pthreads.so -> libevent_pthreads-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_openssl-2.1.so.3 -> libevent_openssl-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_openssl.so -> libevent_openssl-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent-2.1.so.3 -> libevent-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_core.so -> libevent_core-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_core-2.1.so.3 -> libevent_core-2.1.so.3.0.0
-rwx------ root/root 117534 2013-12-24 21:02 libevent/lib/libevent_openssl-2.1.so.3.0.0
-rwx------ root/root 1018 2013-12-24 21:02 libevent/lib/libevent_core.la
-rwx------ root/root 1042 2013-12-24 21:02 libevent/lib/libevent_pthreads.la
-rwx------ root/root 25459 2013-12-24 21:02 libevent/lib/libevent_pthreads-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_pthreads-2.1.so.3 -> libevent_pthreads-2.1.so.3.0.0
-rwx------ root/root 861887 2013-12-24 21:02 libevent/lib/libevent_core-2.1.so.3.0.0
lrwxrwxrwx root/root 0 2013-12-24 21:02 libevent/lib/libevent_extra.so -> libevent_extra-2.1.so.3.0.0
-rwx------ root/root 1024 2013-12-24 21:02 libevent/lib/libevent_extra.la
-rwx------ root/root 1364542 2013-12-24 21:02 libevent/lib/libevent-2.1.so.3.0.0
[...]
We can do the same on the stable target:
$ ~/work/mkpkg/rbm build libevent --target stable
Tag release-2.0.21-stable is signed with key B35BF85BF19489D04E28C33C21194EBB165733EA
Created /home/boklm/tmp/rbm-0d1Cy/libevent-64177777165d.tar.gz
[...]
autoreconf: running: aclocal --force -I m4
aclocal: warning: autoconf input should be named 'configure.ac', not 'configure.in'
configure.in:15: error: 'AM_CONFIG_HEADER': this macro is obsolete.
You should use the 'AC_CONFIG_HEADERS' macro instead.
/usr/share/aclocal-1.13/obsolete-err.m4:12: AM_CONFIG_HEADER is expanded from...
configure.in:15: the top level
autom4te: /usr/bin/m4 failed with exit status: 1
aclocal: error: echo failed with exit status: 1
autoreconf: aclocal failed with exit status: 1
Error: Error running build
In this example I am using Mageia, and the version of autoconf available there doesn’t work with this version of libevent, so the build fails. However we can run the build inside an Ubuntu Lucid chroot (more details about running builds in a chroot or VM below):
$ rbm build libevent --target chroot-ubuntu-lucid --target stable
Tag release-2.0.21-stable is signed with key B35BF85BF19489D04E28C33C21194EBB165733EA
Created /home/boklm/tmp/rbm-6PNXA/libevent-64177777165d.tar.gz
[...]
$ ls out/libevent-64177777165d-Ubuntu-10.04-x86_64.tar
out/libevent-64177777165d-Ubuntu-10.04-x86_64.tar
$ tar tvf out/libevent-64177777165d-Ubuntu-10.04-x86_64.tar
drwx------ root/root 0 2012-11-18 07:39 libevent/
drwx------ root/root 0 2012-11-18 07:39 libevent/lib/
lrwxrwxrwx root/root 0 2012-11-18 07:39 libevent/lib/libevent_pthreads.so -> libevent_pthreads-2.0.so.5.1.9
lrwxrwxrwx root/root 0 2012-11-18 07:39 libevent/lib/libevent_openssl.so -> libevent_openssl-2.0.so.5.1.9
lrwxrwxrwx root/root 0 2012-11-18 07:39 libevent/lib/libevent_core.so -> libevent_core-2.0.so.5.1.9
-rwx------ root/root 942988 2012-11-18 07:39 libevent/lib/libevent-2.0.so.5.1.9
-rwx------ root/root 398207 2012-11-18 07:39 libevent/lib/libevent_extra-2.0.so.5.1.9
-rwx------ root/root 1032 2012-11-18 07:39 libevent/lib/libevent_core.la
-rwx------ root/root 1056 2012-11-18 07:39 libevent/lib/libevent_pthreads.la
-rwx------ root/root 569248 2012-11-18 07:39 libevent/lib/libevent_core-2.0.so.5.1.9
-rwx------ root/root 21158 2012-11-18 07:39 libevent/lib/libevent_pthreads-2.0.so.5.1.9
-rwx------ root/root 87749 2012-11-18 07:39 libevent/lib/libevent_openssl-2.0.so.5.1.9
lrwxrwxrwx root/root 0 2012-11-18 07:39 libevent/lib/libevent_extra.so -> libevent_extra-2.0.so.5.1.9
-rwx------ root/root 1038 2012-11-18 07:39 libevent/lib/libevent_extra.la
[...]
We can notice that the modification time of all files in the archive is 2012-11-18, which is the time of the commit tagged release-2.0.21-stable.
The libevent build is then used in the build of tor. We can see it in the input_files section of the tor config:
input_files:
  - filename: '[% pc("libevent", "var/dest_filename") %]'
    name: libevent
    project: libevent
    pkg_type: build
The libevent build file name contains the abbreviated git hash, distribution name, release and architecture, so if any of those change, libevent will be rebuilt automatically. Otherwise, the existing build will be used.
Python
Python is an example of a tool that we build using a tarball downloaded from a URL.
The config file for python contains this:
version: 2.7.5
timestamp: 1
var:
  dest_filename: 'python-[% c("version") %]-[% c("lsb_release/id") %]-[% c("lsb_release/release") %]-[% c("arch") %].tar.xz'
input_files:
  - name: python
    filename: 'Python-[% c("version") %].tar.bz2'
    URL: 'http://www.python.org/ftp/python/[% c("version") %]/[% c("filename") %]'
    sha256sum: 3b477554864e616a041ee4d7cef9849751770bc7c39adaf78a94ea145c488059
We can see that the filename includes the python version. If we update the version number, then the new tarball will be downloaded automatically. If the version number doesn’t change, then the tarball will be downloaded only once, and reused each time.
This Python build is used during the tor-browser build, but only if we are building on a distribution that provides an older version of Python (Ubuntu Lucid). We can see it in the tor-browser config file:
git_url: https://git.torproject.org/tor-browser.git
version: '[% c("abbrev") %]'
var:
  dest_filename: 'tor-browser-[% c("abbrev") %]-[% c("lsb_release/id") %]-[% c("lsb_release/release") %]-[% c("arch") %].tar'
  dest_filename_debug: 'tor-browser-[% c("abbrev") %]-debug-[% c("lsb_release/id") %]-[% c("lsb_release/release") %]-[% c("arch") %].tar'
targets:
  dev:
    git_hash: tor-browser-24.2.0esr-3.5.1-build1
    tag_gpg_id: 1
  stable:
    git_hash: tor-browser-24.2.0esr-3.5.1-build1
    tag_gpg_id: 1
input_files:
  - filename: '[% pc("python", "var/dest_filename") %]'
    name: python
    project: python
    pkg_type: build
    enable: '[% c("var/build_python") %]'
distributions:
  - lsb_release:
      id: Ubuntu
      codename: lucid
    var:
      build_python: 1
      link_yasm: 1
In this file, we can see that the options var/build_python and var/link_yasm are set to 1 if the selected distribution is Ubuntu with codename lucid (more details about how to set distribution specific options can be found on this page). The option var/build_python is used in the input_files to enable or disable the python build. It is also used in the tor-browser build file, to decide if a python tarball should be extracted:
[% IF c('var/build_python') %]
tar xvf [% c('input_files_by_name/python') %]
export PATH="$rootdir/python/bin:$PATH"
[% END %]
Pluggable transports
This prototype does not include the pluggable transports, except pyptlib, to show how they can be included optionally.
The rbm.conf file contains an include_pt target:
targets:
  include_pt:
    var:
      include_pt: 1
The option var/include_pt is then used in the tbb input_files config, to optionally include pyptlib as an input:
- filename: '[% pc("pyptlib", "var/dest_filename") %]'
  name: pyptlib
  project: pyptlib
  pkg_type: build
  enable: '[% c("var/include_pt") %]'
In the tbb build script, it is used to optionally extract pyptlib into the Tor directory:
[% IF c('var/include_pt') %]
pushd "$instdir/Tor"
tar xvf "$rootdir"/[% c('input_files_by_name/pyptlib') %]
popd
[% END %]
We can now use the include_pt target to include pyptlib in the bundle:
$ rbm build tbb --target dev --target include_pt
Created /home/boklm/tmp/rbm-uLDRA/tbb-x.dev.tar.gz
Using file /home/boklm/work/rbm-tor/projects/tbb/tor-573ee36eae63-Mageia-3-x86_64.tar
Using file /home/boklm/work/rbm-tor/out/tor-browser-a451d459fa5e-Mageia-3-x86_64.tar
Using file /home/boklm/work/rbm-tor/out/tor-launcher-1e34dd95dc7f.xpi
Using file /home/boklm/work/rbm-tor/out/https-everywhere-4be8003e7a6a.xpi
Using file /home/boklm/work/rbm-tor/projects/tbb/torbutton-2c825593cbcb.xpi
Using file /home/boklm/work/rbm-tor/out/pyptlib-97be8ad7c74e.tar
[...]
$ tar tvf out/tbb-x.dev-Mageia-3-x86_64.tar.xz | grep pypt | wc -l
45
If we’re not using the include_pt target, pyptlib is not included:
$ ~/work/mkpkg/rbm build tbb --target dev
Created /home/boklm/tmp/rbm-Zvgm4/tbb-x.dev.tar.gz
Using file /home/boklm/work/rbm-tor/projects/tbb/tor-573ee36eae63-Mageia-3-x86_64.tar
Using file /home/boklm/work/rbm-tor/out/tor-browser-a451d459fa5e-Mageia-3-x86_64.tar
Using file /home/boklm/work/rbm-tor/out/tor-launcher-1e34dd95dc7f.xpi
Using file /home/boklm/work/rbm-tor/out/https-everywhere-4be8003e7a6a.xpi
Using file /home/boklm/work/rbm-tor/projects/tbb/torbutton-2c825593cbcb.xpi
[...]
$ tar tvf out/tbb-x.dev-Mageia-3-x86_64.tar.xz | grep pypt | wc -l
0
Build in a chroot or VM
In order to produce reproducible builds, we need to build all software in a fixed environment. To do that, the current build process is using Ubuntu Lucid (for Linux builds) and Ubuntu Precise (for Windows and Mac OS X builds) inside a VM or LXC container.
The creation of a VM image or chroot directory is not yet automated in this prototype. However, there is support for building in a chroot or in a VM, if you create it manually.
This page has details about how to do that.
If you want to build a bundle in an Ubuntu Lucid chroot, first create the chroot in some directory using debootstrap, and create in this chroot the user that will be used to build the software.
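For example, on a Debian-based host this could look like the following (the chroot path and user name match the configuration shown below; the mirror can be any Ubuntu mirror that still carries lucid):
$ sudo debootstrap --arch=amd64 lucid /home/chroots/ubuntu-lucid \
      http://archive.ubuntu.com/ubuntu/
$ sudo chroot /home/chroots/ubuntu-lucid useradd -m build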
You can then add the following configuration in your main configuration file /etc/rbm.conf (this would also work if it was added to the repository configuration file, but as it contains configuration specific to your local system, it’s better to have it in your system’s configuration file):
targets:
  chroot-ubuntu-lucid:
    lsb_release:
      id: Ubuntu
      release: 10.04
      codename: lucid
    chroot_path: /home/chroots/ubuntu-lucid
    chroot_user: build
    remote:
      build:
        exec: '[% c("remote_chroot/exec") %]'
We are setting the following options:
- lsb_release
-
we are saying that the distribution on which we are building is Ubuntu Lucid. If you do not set this, the output of the lsb_release command is used.
- chroot_path
-
The path to the chroot you want to use. This is used by the remote_chroot/exec option.
- chroot_user
-
The user inside the chroot that should be used to build. This is used by the remote_chroot/exec option.
- remote/build/exec
-
This sets the command that should be used to execute something inside the chroot. We are using the remote_chroot option (you can see the content of this option with rbm showconf). This command uses sudo chroot, so you need sudo access for that; a rough sketch of what such a command looks like is shown below.
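For illustration only, the executed command is conceptually something like the following sketch (this is not the exact template used by rbm; the temporary build directory is a placeholder for a directory that rbm creates):
sudo chroot /home/chroots/ubuntu-lucid \
    su - build -c 'cd /home/build/rbm-XXXXXX && ./build'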
After adding those changes to the configuration, you should be able to build any software inside this chroot by adding the arguments --target chroot-ubuntu-lucid to your rbm commands. For instance if you want to build a bundle in the dev target, you can run:
$ rbm build tbb --target chroot-ubuntu-lucid --target dev
Debugging tips
When building a project, rbm creates a temporary directory, copies all the input files and a build script into that directory, then runs the build script. After the build finishes (or fails), the temporary directory is removed, which can be annoying if you want to understand why a build failed.
To avoid this, you can enable the debug option. Add the following line to /etc/rbm.conf:
debug: 1
Or use the --debug command line argument. If the debug option is enabled and a build fails, rbm will open an interactive shell in the temporary build directory, so you can inspect the build script and other files, and maybe continue the build. When you exit that shell, the temporary directory will be removed.
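For instance, to retry the libevent build that failed in the earlier Mageia example with the debug shell enabled:
$ rbm build libevent --target stable --debug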
When you are writing rbm project descriptors, you can use template directives, target specific options, distribution specific options, etc. It can sometimes be difficult to know what the value of an option, or the content of a build script, will be. To check the value of an option, you can use the rbm showconf command. It takes a project name, an option name, and the same options as the other rbm commands.
For instance if you want to check the abbreviated git commit in dev and stable targets for libevent, you could run the following commands:
$ rbm showconf libevent abbrev --target dev
4c8ebcd3598a
$ rbm showconf libevent abbrev --target stable
64177777165d
If you want to compare the build script of tbb with and without the include_pt target:
$ rbm showconf tbb build --target dev --target include_pt
[...]
$ rbm showconf tbb build --target dev
[...]
Publishing builds
rbm can be used to publish the builds. The publish command will build the selected project and, if the build succeeds, run the script from the publish option to publish the build result. The prototype has an example publish script, in the file projects/common/publish:
#!/bin/bash
set -e
pubdir='[% c('var/publish_dir', {error_if_undef => 1}) %]/[% project %]/[% c('version') %]'
mkdir -p "$pubdir"
mv * "$pubdir"
rm -f "$pubdir/build"
We can see that it uses the option var/publish_dir to select the directory where the files should be put, so we can define it in /etc/rbm.conf like this:
var:
  publish_dir: /tmp/publish
We can now run the following command if we want to publish a build of libevent:
$ rbm publish libevent --target dev
[...]
$ find /tmp/publish/libevent/
/tmp/publish/libevent/
/tmp/publish/libevent/4c8ebcd3598a
/tmp/publish/libevent/4c8ebcd3598a/libevent-4c8ebcd3598a-Mageia-3-x86_64.tar
rbm publish can be the command that Jenkins uses to build and publish new builds.
This publish script could be extended to compress tarballs, send emails or update the builds database.
Testing builds
rbm does not yet have a test command, but one could be added, with a script to be run after the build successfully ends, and before the publish script is run.
Using the VM tools described below, those test scripts could create and start some VMs or containers in which the project that was built and some useful test helpers are installed, and in which the tests are run.
This would be used by Jenkins to run the tests, but would also be available to developers who want to run the tests on their own computer.
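A very rough sketch of what such a test script could look like, assuming the vmtool command described in the next section exists (the VM name, options and the place where the actual tests run are placeholders):
#!/bin/sh
set -e
# create and start a test VM, run the tests in it, then remove it
vmtool create test-bridge --type kvm --os debian-wheezy --target tor-bridge
vmtool start test-bridge
# ... copy the build result into the VM and run the test helpers here ...
vmtool stop test-bridge
vmtool remove test-bridge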
Summary of improvements
The main improvements in this prototype from the current build process are:
-
all components are built separately, and include in their output file name the commit hash or version, the architecture and the OS (if architecture dependent). This allows us to keep previous builds when the commit/version/architecture/OS didn’t change, so we can rebuild a bundle very quickly when the browser didn’t change.
-
the gitian replacement has features to download tarballs and verify them with a sha256sum or gpg signature, so it can replace the fetch-inputs.sh script.
-
we can remove the duplication between the linux/windows/macosx descriptors, and instead use template directives for the parts that differ between those builds (it’s still possible to use separate files if they differ completely).
-
we can define variables based on the selected OS. This allows us, for instance, to build Python 2.7 when building on Ubuntu Lucid, but to avoid building it on other distributions which already provide Python 2.7.
-
we can define variables based on "targets" that are set on the command line. For instance, in the prototype, using "--target include_pt" instructs rbm to include the pluggable transports (only pyptlib in this prototype) in the bundle.
-
we can easily switch from building in a VM to building locally
-
build descriptors can depend on the result of other build descriptors. This removes the need for scripts like mkbundle-*.sh.
And I think those improvements should make it easier to rebuild a new bundle automatically when any of the components of the bundle receives a new commit, and then run tests on this bundle.
What is currently missing
The following features are not yet implemented in this prototype but will be needed:
- package dependencies installation
-
We should be able to list the required build dependencies, so that they are installed automatically (and optionally removed at the end of the build). I have some ideas about how to do that so that it can work on multiple distributions, but have not implemented it yet.
- VM or chroot creation
-
While rbm can be used to build inside a chroot, or a VM with ssh, the current prototype does not handle the creation of a VM image or chroot. So we need a tool to automate that part, similar to the gitian/make-vms.sh script.
Future work
Here are some ideas for future work to be done on this tool.
- Integration of tests
-
We can make it possible to run some tests from this tool after a build. Those tests could instantiate some VMs and run the tests inside them.
- Integration with builds database
-
See the proposal about a builds database. We can integrate rbm so that builds can be automatically uploaded to this database. We can also make it possible to download some input files instead of building them, if the files are available in the builds database with enough signatures from trusted people.
- Reproducible build testing
-
We can add an option to build software using a library that randomizes readdir(3) and Python's dict.iterkeys() to make non-reproducibility issues easier to reproduce, as discussed on tor-dev.
VM / Containers tools
A tool that allows us to automatically create a VM image for a selected distribution, configure it to run some tor software and start it would be useful.
It could be used in the following situations:
- Reproducible Builds
-
In order to make reproducible builds of Tor Browser Bundle, we need to build it in a reproducible environment. To do that, we run the build using a virtual machine or container of a Linux distribution (currently Ubuntu Lucid for Linux builds, and Ubuntu Precise for Windows and Mac OS X cross compiled builds). So we need to be able to create VM images, and start / stop them.
- Automated Testing
-
We can do some unit testing on the host where we are building the software. However, more advanced testing requires some virtual machines or containers, especially if we need to test how different instances interact.
- Tor Cloud images
-
Tor Cloud is a tool that helps in the creation of an Amazon EC2 image to host a tor bridge. If we have a tool that allows us to easily create and configure VM images which include tor software, then we could use that tool to create preconfigured VM images for various cloud providers.
Description of the tool
It would be useful if we had a tool where we can use a command similar to this:
$ vmtool create server1 --type kvm --os debian-wheezy --target tor-bridge \
--tor-control-port 9151
This creates a kvm image based on debian-wheezy, with a tor daemon configured as a bridge and 9151 as the control port.
The VM could then be started, stopped, removed with:
$ vmtool start server1
$ vmtool stop server1
$ vmtool remove server1
How we can implement it
Most of those features are already implemented in various other tools. We just need a small wrapper around all those tools to provide all those features from a single command.
The following features can be implemented with those tools:
- Image Creation
-
We can create a chroot directory of a minimal install using debootstrap, yum or urpmi, depending on the selected distribution. For the distributions that don’t have a similar tool, we can store an archive of a minimal chroot. We then need to convert this chroot directory to an image if we want to use something other than LXC. A rough sketch of these steps is shown after this list.
- Image Configuration
-
We need to take the minimal install that we have, install some software and configure it according to the type of image that was requested. To do this we can use a configuration management tool such as Ansible. We’ll need to implement the deployment of all tor projects in an Ansible repository. This repository will be useful to configure the VM images, but can also be used by people who want to deploy some Tor software on their machines.
- VM Management
-
Most of this can be handled using a tool such as libvirt.
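For example, the image creation and configuration steps could look roughly like this (the chroot path, the tor-bridge.yml playbook name and the use of Ansible's chroot connection are assumptions, not an existing setup):
$ sudo debootstrap --arch=amd64 wheezy /var/lib/vmtool/debian-wheezy \
      http://ftp.debian.org/debian/
$ ansible-playbook -c chroot -i '/var/lib/vmtool/debian-wheezy,' \
      tor-bridge.yml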
Builds Database
Having reproducible builds allows us to have more than one person building the software and producing identical files. The people who built the software gpg sign the resulting hashes and upload them somewhere. The users can then download those signed files and check that a build has been signed by different people.
The process of checking that a build has been signed by different people is currently manual.
We can automate this process by creating a build signatures database, and some tools to interact with this database.
How it can work
The database will be a web service that receives a gpg signed json file containing:
-
a filename
-
sha256sum of the file
-
optionally one or more URLs where the file can be downloaded
The database server will index those entries and allow them to be queried by key id, filename, sha256sum or URL.
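As an illustration, an entry for a Tor Browser build could look like the following (the json field names and the use of a clearsigned file are only one possible format, nothing is defined yet):
$ cat tor-browser-linux64-3.0-rc-1_en-US.json
{
  "filename": "tor-browser-linux64-3.0-rc-1_en-US.tar.xz",
  "sha256sum": "6badd587f5c70e34808a98bd9b0b1627e4d9d097e0ed25ab3122ba9c613e73a1",
  "urls": ["https://people.torproject.org/~boklm/builds/tor-browser-linux64-3.0-rc-1_en-US.tar.xz"]
}
$ gpg --clearsign tor-browser-linux64-3.0-rc-1_en-US.json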
The Command line tool
The builddb command line tool will be a tool that can:
- with the upload command
-
create the json file containing the sha256sum and url, sign it, and upload it to the builds database.
- with the list command
-
list the build signatures for a selected filename, sha256sum or URL. The type of field being queried is specified as part of the argument: filename:some-filename.tar, sha256sum:5ab3122ba9c613e73a1, url:http://something/filename.tar. If the type is not specified, it is assumed to be a filename.
- with the verify command
-
do the same as the list command but also verify the signatures, using a selected gpg keyring.
- with the download command
-
do the same as the verify command, but also download the file, and check that it matches the expected sha256sum.
It will take the following options either in a configuration file or as command line arguments to override the configuration file:
- Upload GPG Key ID
-
The GPG Key that should be used to sign the upload.
- Upload URL
-
The URL where the file can be downloaded. A %f in this string will be replaced by the filename. This option can be specified multiple times.
- Build Database Server
-
The URL of the builds database where it should be uploaded.
- Verify GPG Keyring
-
A gpg keyring file containing the keys used to check the build signatures. Separated by a ':', it takes the minimal number of keys from this keyring that should have signed the build.
Usage examples
This tool does not exist yet, but this section gives usage examples to show how it could be used if it existed.
Upload a build in the database:
$ builddb upload --gpg-key-id 1B678A63 \
--url https://people.torproject.org/~boklm/builds/%f \
tor-browser-linux64-3.0-rc-1_en-US.tar.xz
Download a file, and require that it has been signed by at least two people whose key is in the trusted-builders.gpg keyring file:
$ builddb download --gpg-keyring trusted-builders.gpg:2 \
tor-browser-linux64-3.0-rc-1_en-US.tar.xz
Good signature from: 01234567
Good signature from: ABCDEF00
Downloading file: OK
Checking sha256sum: OK
Verify that a file you already downloaded has been signed by at least two people from the trusted-builders.gpg keyring file:
$ sha256sum tor-browser-linux64-3.0-rc-1_en-US.tar.xz
6badd587f5c70e34808a98bd9b0b1627e4d9d097e0ed25ab3122ba9c613e73a1 tor-browser-linux64-3.0-rc-1_en-US.tar.xz
$ builddb verify --gpg-keyring trusted-builders.gpg:2 \
sha256sum:6badd587f5c70e34808a98bd9b0b1627e4d9d097e0ed25ab3122ba9c613e73a1
Good signature from: 01234567
Good signature from: ABCDEF00
If we want to verify that it is signed by three people, but it is only signed by two:
$ builddb verify --gpg-keyring trusted-builders.gpg:3 \
sha256sum:6badd587f5c70e34808a98bd9b0b1627e4d9d097e0ed25ab3122ba9c613e73a1
Error: only 2 signatures from trusted-builders.gpg but 3 required
$ echo $?
1
Tor Downloader
A tor downloader tool can be created. It will include a gpg keyring of trusted tor builders, and a wrapper around builddb download that calls it with the right options or configuration file.
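A minimal sketch of such a wrapper, assuming the builddb options proposed above and a hypothetical path for the bundled keyring:
#!/bin/sh
# hypothetical tor-download wrapper: require at least 2 signatures from the
# bundled keyring of trusted tor builders, then download and verify the file
exec builddb download \
    --gpg-keyring /usr/share/tor-download/trusted-builders.gpg:2 "$@"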
Test Helpers
In order to write some tests, we will need some test helpers to check specific functionality. We list here the tools that need to be created.
Fingerprint tool
In order to make sure that tor browser is not leaking information that can be used for fingerprinting, testers currently visit https://panopticlick.eff.org/.
We need a tool that would be similar to panopticlick, but that outputs the results in YAML (or another machine readable format), so that it can be used in automated tests.
The testing would work like this: you load in tor browser a URL that looks like this:
http://fingerprint.tor-qa/new/[ID]
with ID replaced by a UUID or other random ID. The web application would save all the information it can get from the web browser's HTTP headers. Some javascript on the page would gather more information, and load URLs like the following to transmit it to the web server:
http://fingerprint.tor-qa/save/[ID]?info_name=some_value
When the javascript on the page has finished saving what it wanted to save, it loads the following URL:
http://fingerprint.tor-qa/finish/[ID]
The web server then generates a report in YAML containing all the information it received, at the following URL:
http://fingerprint.tor-qa/results/[ID]
A diff tool also needs to be created. It would take as input two YAML files containing fingerprint information, and show the differences for the fields that are not expected to differ (ignoring things like screen resolution which are expected to change).
This should make it possible to write some automated tests that compare the fingerprint of a new tor browser with the fingerprint from a previous version, and produce an error if it differs.
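Such a test could then look roughly like this (fingerprint.tor-qa is the hypothetical service described above, and fingerprint-diff stands for the diff tool that needs to be created):
# fetch the YAML report for this browsing session and compare it with the
# report saved for the previous tor browser version
$ curl -s "http://fingerprint.tor-qa/results/$ID" > new-fingerprint.yml
$ fingerprint-diff old-fingerprint.yml new-fingerprint.yml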
Identifier tool
We need to check that tor browser cookie protection is working as expected. To help writing those tests, we need a website that stores cookies and tells us whether previous cookies are available, so we can load this page during the tests.
This web site will:
-
check if the cookie already exists
-
try to save the cookie if it does not exist yet
-
display on the page whether the cookie already exists and its value
The initial version would work with cookies only. Later versions could be extended to also use other techniques for storing information in the browser: http://lucb1e.com/rp/cookielesscookies/
Network logging tool
Something that we need to check is that tor browser, TorBirdy and other clients using tor do not leak network information by making direct connections when they should be using tor.
To do that we need a tool to monitor the network activity from a process and log all non-tor activity.
There are different alternatives for implementing this tool:
TODO: list the alternatives