Containerized Server


Containers are a strong form of encapsulation; they come with precisely the libraries the server needs to run, which makes them robust. The base filesystem is immutable, making them resist accidental permanent changes. They can only see the parts of your computer they need to see to do their job, which makes them more secure both against errors and attacks. Your server may go down, but it won’t leave permanent damage and is less likely to leak your ssh keys to the net.

What you need

We recommend podman; it causes the fewest problems with mapping the variable data to a host directory in a way that lets you easily inspect and modify it, without requiring anything to run as root.

If you can’t or don’t want to install that, there is still docker, in both the regular variant and rootless mode. Rootless is preferable, but you’ll have to choose between two not-so-optimal settings below.

We’ll be using podman as the command. Replace it with docker or sudo docker as needed.

How to get it

We supply pre-built docker images in our container registry. All images whose names start with armagetronad- are server images. They are set up so they

  • look for data and configuration in /usr/share/armagetronad
  • store variable data in /var/armagetronad, the resource cache is in /var/armagetronad/resource
  • by default, use a nameless volume for the variable data
  • by default, run everything as user ‘nobody’
  • if started as root, chown the variable data to ‘nobody’, then run the server as ‘nobody’

They use the lean Alpine Linux as a base and come with only the runtime requirements in the finished image.

If you don’t want to rely on prebuilt images or want to make modifications to the base of the system, you can build your own: the Dockerfile is in the root source directory. Build it with

podman build . -t armagetronad

You can give it a base image to use as a build parameter; should you need anything extra in your server, you can put it into the base image you use. Alternatively, you can use the resulting or supplied image as a base image in your own dockerfile and add your extra requirements on top of it.
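For example, assuming the Dockerfile exposes its base image through a build argument (the argument name BASEIMAGE here is a guess; check the ARG line at the top of the Dockerfile for the real name), a custom build could look like this:

```shell
# Hypothetical: build on top of a custom base image.
# BASEIMAGE is an assumed build-argument name; verify it against the Dockerfile.
podman build . -t armagetronad-custom --build-arg BASEIMAGE=alpine:3.19
```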

There are no additional requirements on the host for building the image; however, the intermediate images used to do the actual build are significantly larger than the resulting image, around 300-500 MB.

How to run


Say you have a directory set up with a var subdirectory for the variable data and a data subdirectory for configuration and resources:

[manuel@Kermit test]$ du
4	./var
4	./data/resource
8	./data/config
16	./data
24	.
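If you don’t have such a layout yet, it can be created like this (directory names as in the du output above):

```shell
# Create the directory layout the following examples expect
mkdir -p test/var test/data/config test/data/resource
cd test
```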

You want the server to use those, if possible. Docker and podman accept only absolute paths to directories, so we have to determine those, and we need to decide which image to use. A network port and a volume name are also picked for later. This block and the following ones all belong in a single shell script; the final command line will be built from the variables we set up:

# image to run; the local build tag from above, or an image from the registry
IMAGE=armagetronad
# absolute path to var directory
VAR="`readlink -f ./var`"
# absolute path to data directory
DATA="`readlink -f ./data`"
# volume name; pick anything you like
VOL=armagetronad_var
# port the server runs under
PORT=4534


The server needs to be reachable from the network; for that, we need to forward the port. The easy way is to just attach the container to the host network:

NET="--network=host"

The advantage of this approach is that if the desired port is already taken by another server, this one will just work like an uncontainerized server and pick the next free port. However, it exposes more than we need, so it’s not so great at encapsulation. You should probably forward just the game port instead:

NET="-p=${PORT}:${PORT}/udp"

To make the var directory visible, we’ll use a bind mount:

VARVOL="-v=${VAR}:/var/armagetronad"

This makes the var directory on the host visible as /var/armagetronad inside the container. Just ideal!

Alternatively, you can use a named volume; this will have podman create a var folder for you somewhere where it has full control (check out where with podman volume inspect ${VOL}):

VARVOL="-v=${VOL}:/var/armagetronad"

Or you can give nothing; this will use an unnamed volume that does not persist between runs. A plain “” will not work with the command line given later, so we just pick a NOP option (in our setup it means ‘do not detach’, which is the default anyway):

VARVOL="-d=false"

For the mapping above to work as expected, running the server as user ‘nobody’ inside the container won’t do. We need to make the UID inside the container match the UID outside:

U="`id -u`"

Sadly, that’s not always enough. The following combinations may require entering the container as root (sometimes not! Try the above first):

  • podman with a nameless volume
  • rootless docker with a bind mount
  • rootful docker with any kind of volume

In that case, use:

U=0

If you do that, the entry point script will make the required adjustments, then drop root privileges to launch the server process as user ‘nobody’. So it should be reasonably safe.

Download the image

This only needs to be done every once in a while:

podman pull ${IMAGE}

Run the image

Finally, the run line:

podman run --userns=keep-id -it --rm -u ${U}:${U} \
    ${NET} \
    "${VARVOL}" \
    -v "${DATA}:/usr/share/armagetronad:ro" \
    ${IMAGE}

Docker users omit the --userns=keep-id argument; that is special podman sauce that helps with the user ID mapping between container and host. It makes the UIDs match AND makes bind mounts inside the container owned by the same user that owns them on the host, so just what we need for the var folder.

-it makes the server run with an interactive terminal, making it possible to enter commands. If you prefer, you can background the container with -d and control it over the network.
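As a sketch, using the variables from the setup, a detached run and the usual follow-up commands could look like this (the &lt;container&gt; placeholder stands for the ID that podman prints when starting):

```shell
# Start the server detached (-d) instead of with an interactive terminal (-it):
podman run --userns=keep-id -d --rm -u ${U}:${U} \
    ${NET} \
    "${VARVOL}" \
    -v "${DATA}:/usr/share/armagetronad:ro" \
    ${IMAGE}
# Then follow its output, or stop it, via the printed container ID:
podman logs -f <container>
podman stop <container>
```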

--rm makes the unnamed container itself, and any unnamed volume, be deleted when the server quits.

The extra -v line mounts the data directory where the container expects it; the :ro option makes it read only inside the container.

For extra safety, you can add the --read-only flag. This makes write attempts to the container filesystem fail, protecting even the current run against data mutations. Your mileage may vary, though: it failed for podman on Manjaro Linux, but worked fine for docker and podman on Ubuntu.

Another optional, but recommended, extra argument is --init. This boots a tiny init process inside the container and launches the server from that. Observable advantage: CTRL-C works in all configurations. The init process used needs to be installed on the host system; docker takes care that it exists. Podman uses catatonit, which you may need to install separately.
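Package names vary by distribution; on the systems below the package appears to be simply called catatonit, but verify against your distribution’s repositories:

```shell
# Install catatonit for podman's --init support
# (package names are assumptions; check your distribution)
sudo apt install catatonit    # Debian/Ubuntu
# sudo dnf install catatonit  # Fedora
# sudo pacman -S catatonit    # Arch/Manjaro
```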

And that’s it!

Put it together

So, with the definitions from the setup, good command lines to run armagetronad are:

Podman

Podman works rootless all the way through and supports bind mounts to your var directory without access trouble. So you use this:

podman run --userns=keep-id -it --rm --read-only --init \
    -u ${UID}:${UID} -p=${PORT}:${PORT}/udp \
    -v "${VAR}:/var/armagetronad" \
    -v "${DATA}:/usr/share/armagetronad:ro" \
    ${IMAGE}

Rootless docker

Rootless docker has access-right trouble with bind mounts unless you give root access inside the container; it’s probably better to use a named volume:

docker run -it --rm --read-only --init \
    -u ${UID}:${UID} -p=${PORT}:${PORT}/udp \
    -v "${VOL}:/var/armagetronad" \
    -v "${DATA}:/usr/share/armagetronad:ro" \
    ${IMAGE}

If you do want to use a bind mount for var, start as root inside the container (the server will run as ‘nobody’):

docker run -it --rm --read-only --init \
    -u 0:0 -p=${PORT}:${PORT}/udp \
    -v "${VAR}:/var/armagetronad" \
    -v "${DATA}:/usr/share/armagetronad:ro" \
    ${IMAGE}

Rootful docker

No problem again with bind mounts:

sudo docker run -it --rm --read-only --init \
    -u ${UID}:${UID} -p=${PORT}:${PORT}/udp \
    -v "${VAR}:/var/armagetronad" \
    -v "${DATA}:/usr/share/armagetronad:ro" \
    ${IMAGE}

Orchestration

We don’t have much expertise there, but you should be able to translate the command lines into orchestration configuration items. Drop everything that is only there to allow terminal-like interaction, so no -it. --init is still recommended, as it also takes care of proper cleanup when the server exits. Depending on how the orchestration handles containers, --rm may not be required, since containers can be tracked and recycled in principle. And mind that the container exits when the server exits, which you ideally enforce with DEDICATED_IDLE once a day or so; the orchestrator then needs to restart the server.
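Outside a full orchestrator, that restart behaviour can be sketched with a plain shell loop, using the variables from the setup block (DEDICATED_IDLE itself must be set in the server configuration):

```shell
# Minimal stand-in for an orchestrator: restart the server whenever it exits,
# e.g. after DEDICATED_IDLE shuts it down. Uses the variables from the setup.
while true; do
    podman run --rm --init \
        -u "${UID}:${UID}" -p="${PORT}:${PORT}/udp" \
        -v "${VAR}:/var/armagetronad" \
        -v "${DATA}:/usr/share/armagetronad:ro" \
        "${IMAGE}"
    sleep 5  # small back-off before restarting
done
```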

Documentation patches very welcome :)