Running tight
Twenty-five years ago I first reached for OpenBSD when I needed a firewall host I could actually trust. I am still reaching for it now — except these days I am reading the kernel source, and occasionally finding something worth sending to the team.
The pufferfish is not an obvious mascot for a security-focused operating system. It is small, moves slowly, and its primary defence is to inflate itself into something that is mostly still harmless. But it has very good defences, it has been doing the same thing reliably for a very long time, and it tends to be underestimated by larger, faster things that should know better. Puffy, OpenBSD's mascot since the project adopted it in the late 1990s, is a better metaphor than it first appears.
I have been running OpenBSD since the late 1990s. This is a post about why, and about what I found when I finally sat down and read the source carefully.
CityReach and the firewall room
My first serious encounter with OpenBSD was operational rather than research. I was working at CityReach, an internet company in the years when "internet company" still felt like a specific, somewhat audacious thing to be. We needed firewall hosts we could trust: machines that would sit at the edge of the network, accept very little, do one job, and not surprise us at 2am with something unexpected.
OpenBSD was the answer almost by consensus. The project had been publishing security advisories with actual fixes attached since its founding. The default install was, and remains, deliberately minimal — you start with almost nothing and add only what you need, rather than starting with everything and trying to remove what you do not. The kernel was small enough that reading it felt possible rather than aspirational. And the team had a culture of correctness that was legible in the code itself: comments that explained not just what something did but why it was written that way, and what the alternatives were.
We ran Snort on those OpenBSD hosts for intrusion detection. This was, by modern standards, rudimentary: signature-based detection, carefully tuned rule sets, a great deal of manual log review. But it worked, and the combination of a minimal host OS with a focused IDS tool on top of it was effective in a way that mattered. The attack surface was narrow enough that when something unusual showed up in the Snort logs, it was usually worth looking at.
The insurance company and the CVS-managed firewall estate
A few years later I was working for a large insurer — the kind of organisation with offices in several countries, a complex network estate, and a genuine need to manage firewall policy consistently across all of it. The security team there had done something I thought was elegant: they were using CVS to manage their pf rulesets.
pf, OpenBSD's packet filter, has been the project's firewall since 2001. Its configuration syntax is clean enough to read as documentation, which means it is also clean enough to version-control meaningfully. Every firewall change was a CVS commit: who made it, when, why, and what the diff looked like. Rollback was cvs update -r. Audit trail was cvs log. The entire global firewall policy lived in a repository that any authorised engineer could check out, review, and understand.
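To show what "clean enough to read as documentation" means in practice, here is an illustrative pf.conf fragment of my own devising — not one of the insurer's actual rules, and the interface name and addresses are invented:

```pf
# Illustrative pf.conf fragment, not a production ruleset.
ext_if = "em0"
table <mgmt> { 10.0.1.0/24 }    # management network

set skip on lo

block in log all                # default deny inbound
pass out on $ext_if keep state  # stateful outbound

# Management hosts may reach SSH and HTTPS on the firewall itself.
pass in on $ext_if proto tcp from <mgmt> \
    to ($ext_if) port { 22, 443 } keep state
```

A diff of a file like this is a complete, reviewable description of a policy change, which is precisely what made it a natural fit for version control.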
It sounds straightforward written out like that. In practice, most organisations of that size were managing firewall changes through spreadsheets, change-control tickets, and the institutional memory of whoever had been there longest. The CVS approach was not glamorous, but it was correct in the way that OpenBSD things tend to be correct: it treated the firewall configuration as what it was — code — and applied the same discipline to it that you would apply to any other code that mattered.
What makes OpenBSD different
I have used a great many operating systems across thirty-plus years of this work. OpenBSD occupies a particular position that I have not found replicated elsewhere, and it is worth being specific about what that position is.
It is not primarily about the default security settings, though those are good. It is not primarily about pledge() and unveil(), though those are excellent additions. It is about the project's relationship with correctness as a discipline. The OpenBSD developers audit their own code. They write down what they find. They fix it, credit it, and publish it. The security advisories are honest: they say what was wrong, where, and why it matters. They do not reach for "defence-in-depth" as a way of minimising a real problem.
That culture produces code that is worth reading. Which is eventually what I found myself doing.
The Mac Mini audit
Earlier this year I set up a Mac Mini as an OpenBSD research platform and spent several weeks applying the cross-reference technique from my previous post: reading source, checking it against the project's own documentation, and looking for places where the implementation and the specification had quietly diverged.
I found two things worth sending to the team. Both were accepted, both were fixed, and I received a commit credit on the CVS tree for each. I am not writing this to take a bow — in the context of what the OpenBSD developers find and fix themselves, these are modest contributions. But they illustrate something about how the work goes.
kern_unveil.c — escalation guard disabled since the initial commit
unveil_setflags() has contained a permission escalation guard since it was first written in 2018. The guard is correct: it checks whether a new permission set would be wider than the existing one and returns early if so. The problem is that the guard has been inside a #if 0 block since the initial commit, which means it has never executed.
The practical effect: before unveil(NULL, NULL) locks the sandbox, any process with PLEDGE_UNVEIL can widen its own permissions on an already-unveiled path. The unveil(2) man page states that attempting to increase permissions returns EPERM. The code does not enforce this. The fix is simply removing the #if 0 / #endif — the guard body has been correct for eight years, waiting to be switched on.
exec_elf.c — type truncation in elf_adjustpins
A subtler one. elf_adjustpins() receives a vaddr_t value — a 64-bit address type on amd64 — via a parameter typed as u_int, which is 32-bit. When no executable segment exists in an ELF binary, text_start is set to the sentinel value -1 (all bits set). Passed through the u_int parameter, this truncates to 0xFFFFFFFF, which then silently corrupts ps_pin.pn_start and pn_end in the pinsyscall structure.
Standard ELF binaries produced by the OpenBSD toolchain will always have a PF_X segment, so this is not a straightforward trigger in normal use. But it is a type mismatch in a security-critical path — pinsyscall is part of the mechanism that restricts which addresses may make system calls — and type mismatches in security-critical paths are worth fixing cleanly regardless of how contrived the trigger currently is.
Why this matters, modestly
I want to be careful not to overstate what these findings represent. OpenBSD is a very well-audited operating system. The team finds far more than external researchers do, and does so continuously. The unveil escalation bug sat in the codebase for eight years not because nobody was looking, but because the #if 0 looked like intentional scaffolding rather than a mistake — the kind of thing that only stands out when you are reading with the specific question "does this code do what the man page says it does?"
What the audit process confirmed, for me, is that the cross-reference technique works in the other direction too: reading the man pages and the code together, looking for places where the documented behaviour and the implemented behaviour diverge, is a useful approach on any well-documented codebase. OpenBSD is particularly good for this because its documentation is unusually honest about what the code is supposed to do.
Active research on the OpenBSD network stack continues. I will write further posts as findings reach an appropriate point for public discussion.
Puffy has been doing the same thing reliably for thirty years: minimal surface, clear rules, honest documentation, fix what is wrong and say so plainly. That is not a marketing position. It is a practice, sustained by a small team across a very long time.
The firewall hosts at CityReach are long gone. The pf rulesets from the insurance company are in some archive somewhere, or they are not. But the discipline that made OpenBSD the right answer then is the same discipline that makes its source worth reading now. The pufferfish is still there. I am still pinging it.