Test a package with a custom test pipeline

There is no single convention for how a package should be tested. Upstream projects ship everything from a single shell script to layered test harnesses across multiple repositories, and the framework you encounter varies: make check, ctest, pytest, cargo test, go test, and many more. This tutorial describes a workflow generic enough to accommodate any of them: a disposable VM, your profile loaded in complain mode, a log collector, and two variables you fill with the project's install and test commands.

If the project already ships an autopkgtest/DEP-8 suite, the dedicated Test AppArmor profiles with autopkgtest tutorial is easier to follow. Use this tutorial when the project's testing surface sits outside DEP-8, or when you want to run upstream tests the distro packager did not wrap.

A custom test pipeline works on any distribution and any environment. Exercising the same profile on Ubuntu, Debian, Fedora, openSUSE, and so on gives you confidence the profiles you write behave identically across distros. Running the pipeline on a schedule catches regressions too: when a new upstream version of the package introduces behavior the existing AppArmor rules do not cover, you can detect and fix the profile early, in most cases before the package even reaches your distribution.

In this tutorial you will learn how to:

  • Generate a disposable test VM
  • Run any project's upstream test suite
  • Retrieve AppArmor events on your host in real time
  • Modify the profile and iterate on the tests

Prerequisites

On the host:

sudo apt-get install -y qemu-system-x86 genisoimage sshpass

In this tutorial, we will use dedicated directories to communicate logs and profiles to the VM. Create a working directory with profiles/ and logs/ subdirectories:

export HOST_DIR="$HOME/apparmor-vm" # or anywhere else
mkdir -p ${HOST_DIR}/{profiles,logs}

Prepare the profile

Start from an existing profile

In most cases a profile for your application already exists. It might not be perfect in every case, but it is a good starting point for these tests.

  • The distro package. Distributions relying on AppArmor generally ship profiles in /etc/apparmor.d/. Look there first.
  • roddhjav/apparmor.d. A community-maintained set covering several hundred applications, including many that no distro ships. Worth checking when the distro package comes up empty.

If nothing matches, you can start by writing a minimal profile for your application by hand, allowing only the accesses you already know it will need. You can follow this tutorial.

Where to attach the profile

Three situations come up in practice, depending on whether the distro ships a binary the test pipeline can drive:

  1. Distro binary, tests overridden to use it. The profile attaches to /usr/sbin/<name>. The test suite normally runs against a freshly-built binary, but a variable it respects (for example openvpn=/usr/sbin/openvpn for OpenVPN) redirects it to the installed one. Prefer this when the distro package matches upstream closely, because the profile ends up describing the binary users actually run.
  2. Freshly-built binary, tests default to it. The profile attaches to the build-tree path (for example /tmp/<project>/run/<binary>). Many test suites look for the binary inside the source tree by default, so no override is needed. Use this when the distro ships a binary the test suite cannot drive, for instance a different upstream fork.
  3. Freshly-built binary, tests point at it via env var. Same as option 2, but the harness requires an explicit environment variable (for example TEST_NGINX_BINARY=/tmp/nginx/objs/nginx). Same reasoning: distro binary unusable.

Pick option 1 when the distro binary works, and fall back to 2 or 3 only when it does not. The three examples below show one of each.

If the test suite hardcodes a build-tree path that does not match where your profile should attach, two fixes are available:

  • Put the binary where the profile expects it. Move the built binary to the production path, or symlink it (sudo ln -sf /tmp/foo/build/foo /usr/sbin/foo). The tests then exercise the same file the profile already covers.
  • Extend the profile's attachment expression. Use a brace expansion in the profile header to cover both paths at once: profile foo /{usr/sbin,tmp/foo/build}/foo flags=(complain) { ... }.
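
As an illustration of the second fix, here is a minimal profile skeleton using a hypothetical foo project and build path (both names are placeholders, not taken from any shipped profile); the complain flag discussed in the next section is already in the header:

include <tunables/global>

profile foo /{usr/sbin,tmp/foo/build}/foo flags=(complain) {
  include <abstractions/base>

  # rules gathered from the log collector go here
}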

Place the profile in the shared directory

Place whichever profile you are iterating on in ${HOST_DIR}/profiles/. We recommend putting the tested profile(s) in complain mode by adding flags=(complain) to the profile header: this lets the test suite run to completion even with an incomplete profile, so the collector captures every access instead of stopping at the first denial.

Prepare the VM image

Any distribution whose kernel has AppArmor built in works. This tutorial uses Ubuntu 26.04 (Resolute Raccoon) as a concrete starting point, but the same steps apply to any distribution that uses AppArmor (e.g. Debian, Fedora, openSUSE, Arch). Running the same pipeline across several distros surfaces quirks that a single-distro run would miss: a file path that only exists on Fedora, a library loaded only on openSUSE, a systemd unit that drops a capability only on Debian, and so on.

Download the Ubuntu cloud image:

cd ${HOST_DIR}
wget https://cloud-images.ubuntu.com/resolute/current/resolute-server-cloudimg-amd64.img

Make a working copy and grow it. The shipped image is under 1 GB, which is not enough room for the build artefacts of any non-trivial project:

cp resolute-server-cloudimg-amd64.img vm.img
qemu-img resize vm.img 20G

Reset between runs

The working copy is disposable. If the VM gets into a bad state, delete vm.img and re-run cp + qemu-img resize to start over from the pristine image.
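
In practice the reset is just the same two commands run again:

cd ${HOST_DIR}
cp resolute-server-cloudimg-amd64.img vm.img
qemu-img resize vm.img 20G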

Write the cloud-init seed

Cloud-init runs on first boot. It creates the ubuntu user, installs packages, writes config files, and grows the root partition to fill the disk. Two files plus an ISO wrapping them:

mkdir -p ${HOST_DIR}/cloud-init

cat > ${HOST_DIR}/cloud-init/user-data << 'EOF'
#cloud-config
users:
  - name: ubuntu
    plain_text_passwd: ubuntu
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
ssh_pwauth: true
package_update: true
packages:
  - apparmor
  - apparmor-utils
write_files:
  - path: /etc/sysctl.d/99-apparmor-testing.conf
    content: |
      kernel.printk_ratelimit=0
      kernel.printk_ratelimit_burst=1000
growpart:
  mode: auto
  devices: ['/']
EOF

cat > ${HOST_DIR}/cloud-init/meta-data << 'EOF'
instance-id: apparmor-test-vm
local-hostname: apparmor-test
EOF

genisoimage -quiet -output ${HOST_DIR}/seed.iso \
  -volid cidata -joliet -rock \
  ${HOST_DIR}/cloud-init/user-data \
  ${HOST_DIR}/cloud-init/meta-data

What each section does:

  • plain_text_passwd and ssh_pwauth: true enable password login as ubuntu/ubuntu. Fine for a throwaway local VM; for anything else, switch to ssh_authorized_keys.
  • packages lists apparmor and apparmor-utils, which are not in the minimal cloud image. Extend this list with each project's build dependencies so everything is pre-installed on first boot.
  • write_files disables kernel rate limiting for printk, which would otherwise silently drop AppArmor audit entries under load (a quick check inside the VM is shown after this list).
  • growpart expands the root partition to the full 20 GB disk.
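
To confirm the rate-limit settings are active inside the VM, or to re-apply them without a reboot, you can run:

sudo sysctl kernel.printk_ratelimit kernel.printk_ratelimit_burst
sudo sysctl -p /etc/sysctl.d/99-apparmor-testing.conf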

Cloud-init ships on the official cloud images for Debian, Fedora, openSUSE, and Arch too. The user-data schema shown above is portable; only the package names and default username change across distros.

Boot the VM

qemu-system-x86_64 \
  -m 4096 -smp 4 \
  -drive file=${HOST_DIR}/vm.img,if=virtio \
  -drive file=${HOST_DIR}/seed.iso,if=virtio,media=cdrom \
  -netdev user,id=net0,hostfwd=tcp::10022-:22 \
  -device virtio-net-pci,netdev=net0 \
  -virtfs local,path=${HOST_DIR},security_model=none,mount_tag=hostshare \
  -enable-kvm -nographic &

The non-obvious options:

  • -drive …seed.iso,media=cdrom attaches the cloud-init seed. Safe to leave it attached on subsequent boots.
  • -netdev …hostfwd=tcp::10022-:22 forwards host port 10022 to guest port 22 for SSH.
  • -virtfs …mount_tag=hostshare exposes ${HOST_DIR} to the VM over 9p (virtio).

A few choices worth knowing about:

  • Port forwarding instead of direct connection. User-mode networking puts the guest behind a NAT, and hostfwd is how the host reaches the guest's SSH port without elevated privileges. Connecting directly to the guest's port 22 would require bridged networking, a TAP device, and root on the host, none of which buys anything for a throwaway VM.
  • 9p for the share. 9p is built into QEMU and works without any daemon on the host, which makes it the simplest way to share files. virtio-fs (via virtiofsd) is faster under heavy I/O but needs an additional daemon; worth considering if build times over the share become a bottleneck.

First boot runs cloud-init, which takes one to three minutes depending on how many packages you listed. Wait until it finishes before continuing:

until sshpass -p ubuntu ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -p 10022 ubuntu@localhost 'cloud-init status --wait' 2>/dev/null | grep -q done; do
  sleep 5
done

Set up the AppArmor environment

SSH into the VM:

sshpass -p ubuntu ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -p 10022 ubuntu@localhost

Once inside, mount the 9p share, load the profile, and start the event collector. One shot per VM boot:

profile=<profile-name>

sudo mkdir -p /mnt/host
sudo mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host
sudo cp /mnt/host/profiles/* /etc/apparmor.d/
sudo apparmor_parser -r /etc/apparmor.d/${profile}

nohup sudo sh -c "journalctl -k -f | grep --line-buffered 'audit.*apparmor=' >> /mnt/host/logs/denials.log" > /dev/null 2>&1 &
disown

The collector is journalctl -k -f piped through grep to filter AppArmor kernel audit events, appending to /mnt/host/logs/denials.log (which is ${HOST_DIR}/logs/denials.log on your host). nohup + disown detach it so it survives the SSH session closing.
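
On the host, you can watch the same file to see events arrive in real time, which is also a quick way to confirm the collector survived the session:

tail -f ${HOST_DIR}/logs/denials.log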

Scripted invocation

If you expect to rerun the pipeline often, or you want to drive it unattended (on a schedule, in CI), all the steps below can be executed from the host without an interactive SSH session. Define $SSH as a shortcut and heredoc commands into it:

SSH="sshpass -p ubuntu ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 10022 ubuntu@localhost"

$SSH 'bash -s' << ENDSSH
set -e
sudo mkdir -p /mnt/host
sudo mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host
sudo cp /mnt/host/profiles/* /etc/apparmor.d/
sudo apparmor_parser -r /etc/apparmor.d/${profile}
...
ENDSSH

Because the ENDSSH delimiter is unquoted, variables such as ${profile} expand on the host, so set them there before running the heredoc. The rest of this page shows interactive commands; translate them to $SSH "..." or heredocs when automating.

Run the test pipeline

The commands to install and run a project's test suite are project-specific. Check the project's README or INSTALL file. The example sections below show concrete invocations for OpenVPN, John the Ripper, and nginx.

Inside the VM, run the install step first and the test step second. Running them as two separate commands keeps the install output on your terminal before the tests start, which makes build failures easy to spot. For automation, define two variables on the host and drive them through $SSH:

INSTALL_TESTS="<commands to fetch, build, and prepare the test suite>"
RUN_TESTS="<command to execute the tests>"

$SSH "${INSTALL_TESTS}"
$SSH "${RUN_TESTS}"

Review the results

${HOST_DIR}/logs/denials.log now holds all AppArmor events the kernel emitted during the run. Two ways to process it:

With aa-logprof. Generates rule suggestions interactively, one per denial:

sudo aa-logprof -f ${HOST_DIR}/logs/denials.log

Reviewing the proposals, accepting or narrowing them, and saving the updated profile is identical to the standard case, covered in Test AppArmor profiles with autopkgtest.

By hand. For quick spot-checks or when tracking a single access, grep the log directly:

grep 'apparmor="ALLOWED"' ${HOST_DIR}/logs/denials.log
grep 'apparmor="DENIED"'  ${HOST_DIR}/logs/denials.log

Each line includes the profile name, the operation, the target path, and the requested permission mask. Faster than aa-logprof for debugging a specific denial, and sometimes easier for getting a feel for the overall access pattern.
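
For a quick feel of that overall pattern, aggregating the audit fields works well; for example, counting the paths touched (the name="..." field):

grep -o 'name="[^"]*"' ${HOST_DIR}/logs/denials.log | sort | uniq -c | sort -rn | head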

Iterate on the profile

Once the profile is updated on the host (either via aa-logprof or by hand), copy it back into the shared profiles/ directory:

cp /etc/apparmor.d/${profile} ${HOST_DIR}/profiles/

Then, inside your SSH session on the VM, hot-reload it. No reboot, no rebuild:

sudo cp /mnt/host/profiles/${profile} /etc/apparmor.d/
sudo apparmor_parser -r /etc/apparmor.d/${profile}

Back on the host, truncate the log so the next run's events are not mixed with the previous ones:

: > ${HOST_DIR}/logs/denials.log

Re-run just the test command inside the VM. The build artefacts and dependencies from the first run are still there, so INSTALL_TESTS does not need to run again.
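
When driving the pipeline from the host, one full iteration is only a handful of lines, reusing $SSH, ${profile}, and ${RUN_TESTS} from the earlier sections:

cp /etc/apparmor.d/${profile} ${HOST_DIR}/profiles/     # pick up the updated profile
$SSH "sudo cp /mnt/host/profiles/${profile} /etc/apparmor.d/ && sudo apparmor_parser -r /etc/apparmor.d/${profile}"
: > ${HOST_DIR}/logs/denials.log                        # start from a clean log
$SSH "${RUN_TESTS}"
sudo aa-logprof -f ${HOST_DIR}/logs/denials.log         # review the new events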

If denials.log comes back empty (or contains only STATUS entries), the profile covers everything the test suite exercised. Switch the profile to enforce mode (remove flags=(complain) from the header) and run one final pass to confirm the suite still passes cleanly with the profile blocking everything it does not explicitly allow.
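
One way to make that switch from the host, assuming the flag appears in the header exactly as flags=(complain):

sed -i 's/ flags=(complain)//' ${HOST_DIR}/profiles/${profile}
$SSH "sudo cp /mnt/host/profiles/${profile} /etc/apparmor.d/ && sudo apparmor_parser -r /etc/apparmor.d/${profile}"
$SSH "${RUN_TESTS}"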

When the VM is no longer needed, shut it down from inside the session:

sudo poweroff

The pristine image stays untouched, so you can delete vm.img and start over from the cp + qemu-img resize step at any time.

Example: OpenVPN upstream tests

OpenVPN's upstream repository ships a test suite that builds against the source tree and runs via make check: cipher loopback (t_lpback.sh), client/server protocol handshake (t_cltsrv.sh), and SITNL network tests (t_net.sh). The Debian/Ubuntu openvpn package's DEP-8 suite does not cover these.

Profile target: option 1 above, /usr/sbin/openvpn (the distro binary). The openvpn=/usr/sbin/openvpn variable that make check respects redirects the tests to the installed binary instead of the freshly-built src/openvpn/openvpn.

Package additions to the cloud-init user-data:

  - openvpn
  - git
  - build-essential
  - autoconf
  - automake
  - libtool
  - pkg-config
  - libssl-dev
  - liblzo2-dev
  - liblz4-dev
  - libpam0g-dev
  - libcap-ng-dev
  - libnl-genl-3-dev
  - libcmocka-dev

Pipeline:

profile=openvpn

INSTALL_TESTS='
  set -e
  git clone --depth=1 https://github.com/OpenVPN/openvpn /tmp/openvpn
  cd /tmp/openvpn
  autoreconf -i -v -f
  ./configure
  make -j$(nproc)
'

RUN_TESTS='
  cd /tmp/openvpn
  openvpn=/usr/sbin/openvpn make check
'

Expected output:

PASS: t_lpback.sh
PASS: t_cltsrv.sh
PASS: t_net.sh
SKIP: t_server_null.sh     (requires a t_server_null.rc config)
SKIP: t_client.sh          (requires a reachable OpenVPN server)

# TOTAL: 5  PASS: 3  SKIP: 2  FAIL: 0

For broader coverage, supply a t_server_null.rc and t_client.rc following the samples in the tests/ directory.

Example: John the Ripper upstream tests

John the Ripper ships two complementary test entry points: the built-in --test=0 self-test that exercises every supported hash format against known vectors, and the shell scripts in src/tests/ that drive the binary end-to-end (external modes, UTF-8 handling, wordlist rules). Neither is covered by the john package's DEP-8 tests.

Profile target: option 2 above, /tmp/john/run/john (the freshly-built binary). The distro john package is the classic fork, but src/tests/ and many --format= options assume jumbo, so the distro binary is not usable. The default JOHN=../../run/john in test_externals.sh already resolves to the same path the profile attaches to, so no override is needed.

Package additions to the cloud-init user-data:

  - git
  - build-essential
  - libssl-dev
  - zlib1g-dev
  - yasm
  - libgmp-dev
  - libpcap-dev
  - libbz2-dev
  - pkg-config

Pipeline:

profile=john

INSTALL_TESTS='
  set -e
  git clone --depth=1 -b bleeding-jumbo https://github.com/openwall/john /tmp/john
  cd /tmp/john/src
  ./configure
  make -sj$(nproc)
'

RUN_TESTS='
  set -e
  /tmp/john/run/john --test=0
  cd /tmp/john/src/tests
  JOHN=/tmp/john/run/john bash test_externals.sh
'

Expected output. The --test=0 run ends with:

All 436 formats passed self-tests

and test_externals.sh produces a long stream of candidate passwords from each external mode (DumbForce, KnownForce, and so on) with exit code 0.

Example: nginx upstream tests

nginx keeps its test suite in a separate repository, a Perl/prove-driven harness that boots real nginx instances on localhost ports 8000-8999, issues HTTP requests, and checks the responses. Pointing it at a binary is a one-env-var affair: TEST_NGINX_BINARY=/path/to/nginx prove .

Profile target: option 3 above, /tmp/nginx/objs/nginx (the freshly-built binary). Ubuntu's nginx -V advertises dynamic modules (http_geoip and others) whose .so files ship in optional libnginx-mod-* packages, some of which no longer exist. The test harness probes nginx -V and emits load_module directives that then fail on the distro binary. Building from source sidesteps this; TEST_NGINX_BINARY points the harness at the built binary.

Package additions to the cloud-init user-data:

  - git
  - build-essential
  - libpcre2-dev
  - zlib1g-dev
  - libssl-dev
  - perl
  - libio-socket-ssl-perl
  - libnet-ssleay-perl
  - libgd-perl
  - libcryptx-perl
  - libprotocol-websocket-perl

Pipeline:

profile=nginx

INSTALL_TESTS='
  set -e
  git clone --depth=1 https://github.com/nginx/nginx /tmp/nginx
  git clone --depth=1 https://github.com/nginx/nginx-tests /tmp/nginx-tests
  cd /tmp/nginx
  ./auto/configure --with-http_ssl_module
  make -j$(nproc)
'

RUN_TESTS='
  cd /tmp/nginx-tests
  TEST_NGINX_BINARY=/tmp/nginx/objs/nginx prove -j4 .
'

prove -j4 runs four test files in parallel. Each spins up its own nginx on a different port, which the framework manages automatically.

Expected output:

./upstream_resolve.t ....................... ok
./upstream_service.t ....................... ok
Files=484, Tests=3088, 101 wallclock secs
Result: PASS

Many tests are skipped if features were not compiled in (no http_v2 available, no mail available, and so on). Enable more at ./auto/configure time (--with-http_v2_module, --with-mail, --with-stream, --with-http_xslt_module=dynamic, and others) for broader coverage.

The AppArmor footprint this exercises is large. In one run the collector captured over 70000 events across 19 operations (accept, bind, connect, listen, sendmsg, recvmsg, mkdir, chmod, rename_src, rename_dest, unlink, and more) touching the worker's temp directories, config files, and nameservice files. This is exactly the kind of coverage manual testing does not reach.
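
A breakdown like that can be pulled straight from the collected log, for example by counting the operation="..." fields:

grep -o 'operation="[^"]*"' ${HOST_DIR}/logs/denials.log | sort | uniq -c | sort -rn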