Turning your infrastructure into a dependency graph
Deploying a web server is fraught. You need error handling, database migrations, and rollback mechanisms. If all goes well, nobody notices. But if a deploy goes badly, you might suffer downtime at best or permanent data loss at worst. Even worse, when things go wrong, you’re necessarily working within a broken setup: you can no longer trust your tools.
The industry advice is to adopt heavy tools like Ansible, Chef, or Terraform. These are powerful, but they come with a tax. You have to install Python or Ruby agents, manage remote state files, learn a complex DSL, and maintain a control plane.
For single-node services, I believe Make offers a simpler and often-overlooked path. It is installed on virtually every Unix server in existence, has zero dependencies, and provides a robust “standard library” for dependency management.
The Core Philosophy: Files are State
You might associate make with C projects, where it’s commonly used as a build tool. The standard pattern is to recompile any .o object file that is older than its corresponding .c source file, using something like this:
SOURCES=$(wildcard *.c)
OBJECTS=$(SOURCES:.c=.o)
MACROFILES=defs.h constants.h
%.o: %.c $(MACROFILES)
	$(CC) $(CFLAGS) -c $< -o $@

If you stop to think about it, this is incredibly powerful. This short script:
- Finds every .c source file in the directory.
- Uses Make’s substitution system to build a list of corresponding object files.
- Constructs a Directed Acyclic Graph (DAG) of all dependencies.
- Considers an object file outdated if it is older than its source file or any of the macro files.
- Rebuilds outdated object files using the provided recipe.
The syntax seems arcane if you’re not used to it, but once you learn it, it lets you write clear, concise dependency rules.
It is a category error to think of make merely as a build tool; it is actually a very sophisticated state engine.
An Aside: Poorly-Recalled Oral History
In my sophomore year, I worked at SLAC. Since I knew nothing about computing, and was therefore useless, a senior researcher gave me a half-day introduction to Unix. He had worked at Bell Labs before coming to Stanford and knew the creators of Make.
I wish I could remember his name to thank him for his advice; he was very kind to me, and I’m sure I was a brat. And I had no idea my first computing lesson came from an industry titan!
He mentioned to me that the ideas which went into Make – specifically its dependency resolution – originated with backward-chaining inference techniques from early AI research. I didn’t appreciate his anecdote at the time, but I am fascinated by this history now. Regardless of the precise history (or whether I remember it correctly), he taught me to use Make early and often for automation. That lesson did stick.
Ops as State Management
I understand Ops to be fundamentally about state management:
- Is the systemd unit file (/etc/.../app.service) older than my configuration template?
- Is the installed JAR file older than the latest release?
- Does the GPG keyring exist?
- Has the signature on the JAR file been verified?
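Each of those questions maps directly onto a Make rule whose target is a file on disk. As a sketch (the paths and template name here are illustrative, not from an actual deployment):

```make
# The installed unit file is "out of date" whenever the template is newer.
/etc/systemd/system/app.service: templates/app.service
	sudo install -m 0644 $< $@
	sudo systemctl daemon-reload
```

Running make /etc/systemd/system/app.service does nothing when the installed unit is already newer than the template; the timestamp comparison is the state check.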
Crucially, when you model your deployment using Make, you get idempotency for free.
A Bash script usually runs every command every time unless you litter it with if [ ! -f ... ] checks. Make solves this natively. If the target exists and is up to date, Make does nothing.
For example, installing a GPG key for signature verification:
$(KEYRING_DIR):
	sudo install -d -m 0755 "$(KEYRING_DIR)"

$(KEYRING_FILE): | $(KEYRING_DIR)
	# make a tempdir for the download
	tmpdir="$$(mktemp -d)"
	trap 'rm -rf "$$tmpdir"' EXIT
	aws s3 cp "$(KEY_URL)" "$$tmpdir/key.asc" --no-progress
	# check fingerprint...
	gpg --dearmor < "$$tmpdir/key.asc" > "$$tmpdir/key.gpg"
	sudo install -m 0644 "$$tmpdir/key.gpg" "$(SIGNING_KEY)"

Here, $(KEYRING_FILE) requires $(KEYRING_DIR) to exist, but because the directory is an order-only prerequisite (the | syntax), changes to the directory’s timestamp won’t trigger a rebuild of the file. If $(KEYRING_FILE) exists, Make skips this entire block. Re-runs become instant.
Make like it’s 1999! (Or, It’s not 1970 anymore)
Standard Make has defaults that are hostile to scripting. It executes every line in a separate sub-shell (meaning variables don’t persist), and it doesn’t fail fast by default.
We can fix that with a few lines of “Modern Make” configuration:
.ONESHELL:
.DELETE_ON_ERROR:
.SHELLFLAGS := -euo pipefail -c
SHELL := /bin/bash

.ONESHELL: Tells Make to run the entire recipe in a single shell instance. You can write standard, multi-line Bash scripts with loops and variables right inside your Makefile.

.SHELLFLAGS := -euo pipefail -c: This is the “strict mode” of Bash.
- -e: Exit immediately if any command fails.
- -u: Treat unset variables as errors.
- -o pipefail: If curl | tar fails, the whole command fails (not just the last part).

.DELETE_ON_ERROR: If a rule fails halfway through (e.g., a download gets interrupted), Make deletes the partial target file. This prevents you from deploying a corrupt artifact on the next run.
(N.B.: the make that ships with macOS is very old and does not support these features; if you develop on a Mac and would like to follow along, you will need to install a modern GNU Make, e.g. via Homebrew.)
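To see what .DELETE_ON_ERROR buys you, consider a naive download rule (the URL is illustrative):

```make
# If curl is interrupted partway through, a truncated app.jar is left
# behind. Without .DELETE_ON_ERROR, that partial file exists and looks
# up to date, so the next run would skip the download and deploy the
# corrupt artifact. With it, Make removes the partial file on failure.
app.jar:
	curl -fSL "https://releases.example.com/app.jar" -o $@
```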
Key Patterns for Server Orchestration
Once you have this foundation, you can implement patterns that rival complex orchestration tools.
The Sudo Sandwich
Running make as root is dangerous. Running as a user often lacks the permissions required for Ops work. We need a mix.
The solution is the SUBMAKE pattern. Your entry point is unprivileged, but specific targets call sudo make recursively.
SUBMAKE := $(MAKE) --no-print-directory -f $(SELF) $(SUBMAKE_VARS)

.PHONY: stage-jar stage-jar-as-app-user

# install the jar file with tight perms if it has changed
stage-jar: $(LOCAL_JAR)
	@echo ">> Staging latest uberjar into $(RELEASES_DIR)"
	sudo install -C -m 0400 -o "$(APP_USER)" -g "$(APP_GROUP)" "$(JAR_FILE)" "$(JAR_DEST)"
	sudo -u $(APP_USER) $(SUBMAKE) stage-jar-as-app-user

# update the STABLE_JAR_LINK symlink; needs to run as APP_USER
stage-jar-as-app-user:
	@echo ">> Updating stable symlink"
	CUR=$$(readlink "$(abspath $(STABLE_JAR_LINK))" 2>/dev/null || echo "")
	if [ "$$CUR" = "$(abspath $(JAR_DEST))" ]; then
		echo "-- Symlink already points to latest."
	else
		ln -sfnv "$(abspath $(JAR_DEST))" "$(abspath $(STABLE_JAR_LINK))"
		echo "-- Updated symlink $(STABLE_JAR_LINK) -> $(JAR_DEST)"
	fi

This pattern limits the number of sudo commands, lets you cleanly execute a script as another user when needed, and makes it clear what runs as whom.
Templating
You don’t need a templating engine like Jinja2; you have envsubst:
nginx-https:
	tmp="$$(mktemp)"
	trap 'rm -f "$$tmp"' EXIT
	export DOMAIN="$(DOMAIN)" SERVICE_NAME="$(SERVICE_NAME)"
	envsubst '$${DOMAIN} $${SERVICE_NAME}' < "$(CONF_FULL_HTTPS_SRC)" > "$$tmp"
	sudo install -m 0644 "$$tmp" -- "$(CONF_SITE_DEST)"
	$(SUBMAKE) nginx-test

Dynamic Configuration
If you’re a lunatic like me, you can have Make build Make fragments, which it then includes. This lets your script adapt dynamically to the state of your system. (And in this declarative control loop, you can perhaps see a hint of Make’s distant AI ancestry.)
I typically use this to determine the “current” release version. I define a target to build a latest.mk file that defines a variable LATEST_VERSION fetched from S3. Make attempts to build this included file before it runs the rest of the graph, ensuring my deployment logic always knows the state of the world.
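Here is a sketch of that pattern; the bucket name and key are illustrative, and the cmp guard is one way to keep Make from restarting itself in a loop (GNU Make re-executes only when an included file has actually changed):

```make
# latest.mk is remade on every run, but only *touched* when the version
# string changes, so Make restarts itself at most once per invocation.
latest.mk: FORCE
	echo "LATEST_VERSION := $$(aws s3 cp "s3://releases-bucket/LATEST" -)" > $@.tmp
	cmp -s $@.tmp $@ || mv -f $@.tmp $@
	rm -f $@.tmp

.PHONY: FORCE
FORCE:

-include latest.mk
```

On the first run latest.mk doesn’t exist; -include tolerates that, Make builds the file, notices it changed, and re-executes itself with LATEST_VERSION defined before the rest of the graph runs.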
The “Human-Driver” UX
My favorite part of using Make is the user experience. You can chain targets to create a complete workflow.
deploy: install-jar server-restart tail-server-logs

When I run make deploy, the system:
- Downloads the artifact (if needed).
- Installs it (if it changed).
- Restarts the service (if a new artifact was installed).
- Immediately tails the logs.
I get a single entry point that handles the complexity but leaves me staring at the logs to confirm success. All from a 50-line, self-contained file that’s easy to audit or adapt to my needs.
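The “restarts only if a new artifact was installed” step falls out of the DAG if the restart is modeled as a stamp file that depends on the installed JAR. A sketch, reusing $(JAR_DEST), $(RELEASES_DIR), and $(SERVICE_NAME) from the earlier examples:

```make
# The stamp is older than the jar exactly when a new jar has been
# installed since the last restart; that staleness triggers the recipe.
server-restart: $(RELEASES_DIR)/.restart-stamp

$(RELEASES_DIR)/.restart-stamp: $(JAR_DEST)
	sudo systemctl restart "$(SERVICE_NAME)"
	touch "$@"
```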
The Secure Supply Chain: Signed Artifacts
The strongest argument for this approach is supply-chain integrity. How do you ensure the code running on the server is exactly what you built in CI, without giving the server dangerous permissions?
We use GPG-signed Uberjars.
In our CI pipeline, we sign the artifact (app.jar.asc). The Makefile on the server acts as a gatekeeper.
deploy: install-jar ...
install-jar: ... verify-signature
verify-signature: $(JAR_FILE) $(ASC_FILE)
	gpg --verify $(ASC_FILE) $(JAR_FILE)

The Make DAG enforces that signature verification must succeed before installation can occur. If verification fails, Make returns a non-zero exit code and the install-jar step never happens. Consequently, the deploy never happens either.
This allows for Credential-less Deploys. The server doesn’t need secrets to pull code. It doesn’t need to authenticate with a container registry or a git repo. It only needs an AWS IAM role with s3:GetObject permissions on the release bucket.
We don’t even grant s3:ListBucket permissions. The Makefile constructs the exact path based on the version ID (constructed using the latest.mk trick described above). This separates the signal (what version should be live) from the payload (the secure artifact).
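Concretely, the fetch rule can address the artifact by an exact key derived from $(LATEST_VERSION), so s3:GetObject is the only permission it exercises. A sketch with an illustrative bucket layout:

```make
JAR_S3_KEY := s3://releases-bucket/releases/$(LATEST_VERSION)/app.jar

# No listing, no discovery: the server asks for one known object.
$(LOCAL_JAR):
	aws s3 cp "$(JAR_S3_KEY)" "$@" --no-progress
```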
Conclusion
Make is often overlooked, but its primitives—DAGs, file timestamps, and shell integration—are very useful for deployment orchestration.
Of course, Make is not a cloud provisioning tool. It will not manage hundreds of nodes or reconcile distributed state. But for single-node services, controlled clusters, or high-security environments, its explicitness is an advantage.
Make lets you establish clear dependency rules, and it respects the “Principle of Least Surprise.” It works on your laptop, your CI runner, and your production server with zero setup. It turns your infrastructure into a clean, dependency-driven graph. We find that it’s unreasonably effective.