<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
      <title>Raskell</title>
      <link>https://raskell.io</link>
      <description>Writing about platform automation, edge systems, applied security, and open standards. Building automation-first platforms that survive production reality.</description>
      <generator>Zola</generator>
      <language>en</language>
      <atom:link href="https://raskell.io/rss.xml" rel="self" type="application/rss+xml"/>
      <lastBuildDate>Mon, 13 Apr 2026 00:00:00 +0000</lastBuildDate>
      <item>
          <title>Why We Built a Haskell Package Manager in Rust</title>
          <pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate>
          <link>https://raskell.io/articles/why-we-built-a-haskell-package-manager-in-rust/</link>
          <guid>https://raskell.io/articles/why-we-built-a-haskell-package-manager-in-rust/</guid>
          <description xml:base="https://raskell.io/articles/why-we-built-a-haskell-package-manager-in-rust/">&lt;p&gt;Why would anyone invest serious engineering effort into Haskell tooling in 2026? Haskell is a niche language. It has been a niche language for thirty years. Most companies do not use it. Most developers have never written a line of it. If you are going to pour months of work into building a package manager and toolchain from scratch, in Rust no less, the obvious question is: why not just use Rust?&lt;&#x2F;p&gt;
&lt;p&gt;Here is the answer, and it is the same answer I gave in &lt;a href=&quot;&#x2F;articles&#x2F;what-programming-languages-become-when-ai-writes-the-code&#x2F;&quot;&gt;The Last Programming Language Might Not Be for Humans&lt;&#x2F;a&gt;: the way we write software is changing. AI is becoming the primary author of code, and the languages that will matter most in that future are not the ones optimized for human typing speed. They are the ones optimized for formal correctness, composability, and provability. Haskell is not niche in that framing. It is early.&lt;&#x2F;p&gt;
&lt;p&gt;I have &lt;a href=&quot;&#x2F;articles&#x2F;all-beginning-is-haskell&#x2F;&quot;&gt;written before&lt;&#x2F;a&gt; about why Haskell shaped the way I think. The short version: Haskell teaches you to think about programs as compositions of well-typed transformations, and that discipline makes you better at everything else. I still believe this. I write most of my production software in Rust, but I think in Haskell.&lt;&#x2F;p&gt;
&lt;p&gt;The problem was never the language. The problem was everything around it.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-state-of-haskell-tooling&quot;&gt;The state of Haskell tooling&lt;&#x2F;h2&gt;
&lt;p&gt;If you want to start a Haskell project today, here is what you do. First you install ghcup, which manages GHC (the compiler), Cabal (the build tool), Stack (a different build tool), and HLS (the language server). Then you decide whether to use Cabal or Stack, which is a decision that has split the Haskell community for over a decade and which nobody has fully resolved. Then you configure your project, using either a &lt;code&gt;.cabal&lt;&#x2F;code&gt; file (a custom format that predates TOML, YAML, and JSON as configuration languages) or a &lt;code&gt;stack.yaml&lt;&#x2F;code&gt; plus a &lt;code&gt;.cabal&lt;&#x2F;code&gt; file (because Stack still needs Cabal files underneath). Then you wait for GHC to compile your dependencies, which takes long enough that you start questioning your life choices.&lt;&#x2F;p&gt;
&lt;p&gt;I am not exaggerating for effect. This is the actual experience. I have introduced Haskell to teams and watched the enthusiasm drain from people’s faces during the toolchain setup. Not because the language was hard. Because the first thirty minutes were spent fighting &lt;code&gt;ghcup&lt;&#x2F;code&gt;, &lt;code&gt;cabal update&lt;&#x2F;code&gt;, resolver mismatches, and cryptic build errors that had nothing to do with the code they wanted to write.&lt;&#x2F;p&gt;
&lt;p&gt;Here is what a typical first encounter looks like. You want to write a small HTTP server in Haskell. You install ghcup. You install GHC 9.8.2. You run &lt;code&gt;cabal init&lt;&#x2F;code&gt;. You get a &lt;code&gt;.cabal&lt;&#x2F;code&gt; file with a dozen fields, most of which you do not understand yet. You add &lt;code&gt;warp&lt;&#x2F;code&gt; as a dependency. You run &lt;code&gt;cabal build&lt;&#x2F;code&gt;. GHC starts compiling &lt;code&gt;warp&lt;&#x2F;code&gt; and its transitive dependencies: &lt;code&gt;http-types&lt;&#x2F;code&gt;, &lt;code&gt;bytestring&lt;&#x2F;code&gt;, &lt;code&gt;text&lt;&#x2F;code&gt;, &lt;code&gt;network&lt;&#x2F;code&gt;, &lt;code&gt;streaming-commons&lt;&#x2F;code&gt;, &lt;code&gt;vault&lt;&#x2F;code&gt;, &lt;code&gt;wai&lt;&#x2F;code&gt;, and about forty others. This takes four to six minutes on a modern machine. The first time. Every time you switch GHC versions or clean your cache, you pay that cost again.&lt;&#x2F;p&gt;
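&lt;p&gt;That &lt;code&gt;.cabal&lt;&#x2F;code&gt; file, for reference, looks roughly like this. A trimmed sketch with real field names and illustrative project details:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;cabal-version:      3.0
name:               my-server
version:            0.1.0.0
build-type:         Simple

executable my-server
    main-is:          Main.hs
    hs-source-dirs:   app
    build-depends:    base ^&amp;gt;=4.19, warp
    default-language: Haskell2010
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Readable once you learn it, but it is a bespoke syntax with its own parser and its own version-range operators like &lt;code&gt;^&amp;gt;=&lt;&#x2F;code&gt;, which is exactly the kind of friction a newcomer hits in minute five.&lt;&#x2F;p&gt;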
&lt;p&gt;Now compare this with Rust. You run &lt;code&gt;cargo new my-server&lt;&#x2F;code&gt;. You add &lt;code&gt;axum&lt;&#x2F;code&gt; to &lt;code&gt;Cargo.toml&lt;&#x2F;code&gt;. You run &lt;code&gt;cargo build&lt;&#x2F;code&gt;. It compiles. The first build is not instant either, but &lt;code&gt;cargo&lt;&#x2F;code&gt; does not ask you which of two incompatible build tools you prefer, does not require a separate tool to manage the compiler, and does not present you with a configuration format from 2005.&lt;&#x2F;p&gt;
&lt;p&gt;Or Python. &lt;code&gt;uv init my-server&lt;&#x2F;code&gt;. &lt;code&gt;uv add fastapi&lt;&#x2F;code&gt;. &lt;code&gt;uv run&lt;&#x2F;code&gt;. Done. The entire dependency resolution and installation takes less than a second because &lt;code&gt;uv&lt;&#x2F;code&gt; resolves and installs in parallel, in Rust, without spawning Python.&lt;&#x2F;p&gt;
&lt;p&gt;Every major language ecosystem has converged on the same answer: one tool that handles project creation, dependency management, building, testing, and publishing. Haskell has three tools that each do part of the job, disagree about how dependencies should work, and require a fourth tool to manage the compiler itself.&lt;&#x2F;p&gt;
&lt;p&gt;This is not a new complaint. People have been talking about Haskell’s tooling problem for years. The difference is that someone finally decided to do something about it the way &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;astral.sh&quot;&gt;astral.sh&lt;&#x2F;a&gt; did for Python: rewrite the developer experience from scratch, in Rust, and make everything dramatically faster.&lt;&#x2F;p&gt;
&lt;p&gt;That someone was me.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-astral-sh-playbook&quot;&gt;The astral.sh playbook&lt;&#x2F;h2&gt;
&lt;p&gt;When Astral released &lt;code&gt;uv&lt;&#x2F;code&gt; and &lt;code&gt;ruff&lt;&#x2F;code&gt;, it proved something important. You can take a mature ecosystem with deeply entrenched tooling, rebuild the developer experience in Rust, and people will switch. Not because the old tools were broken. Because the new ones were fast enough and coherent enough that the switching cost paid for itself immediately.&lt;&#x2F;p&gt;
&lt;p&gt;Python’s tooling situation before &lt;code&gt;uv&lt;&#x2F;code&gt; was remarkably similar to Haskell’s. You had &lt;code&gt;pip&lt;&#x2F;code&gt;, &lt;code&gt;pip-tools&lt;&#x2F;code&gt;, &lt;code&gt;pipenv&lt;&#x2F;code&gt;, &lt;code&gt;poetry&lt;&#x2F;code&gt;, &lt;code&gt;conda&lt;&#x2F;code&gt;, &lt;code&gt;virtualenv&lt;&#x2F;code&gt;, &lt;code&gt;venv&lt;&#x2F;code&gt;, &lt;code&gt;pyenv&lt;&#x2F;code&gt;. Each solved part of the problem. Each had opinions that conflicted with the others. Setting up a Python project from scratch meant choosing a stack of tools, hoping they worked together, and accepting that your lockfile format depended on which combination you picked. Sound familiar?&lt;&#x2F;p&gt;
&lt;p&gt;Astral looked at that landscape and did not try to fix any single tool. They rewrote the experience. &lt;code&gt;uv&lt;&#x2F;code&gt; is a single Rust binary that does what &lt;code&gt;pip&lt;&#x2F;code&gt;, &lt;code&gt;pip-tools&lt;&#x2F;code&gt;, &lt;code&gt;virtualenv&lt;&#x2F;code&gt;, and &lt;code&gt;pyenv&lt;&#x2F;code&gt; did, but 10-100x faster and with a coherent interface. &lt;code&gt;ruff&lt;&#x2F;code&gt; is a single Rust binary that does what &lt;code&gt;flake8&lt;&#x2F;code&gt;, &lt;code&gt;isort&lt;&#x2F;code&gt;, &lt;code&gt;pycodestyle&lt;&#x2F;code&gt;, and &lt;code&gt;pyflakes&lt;&#x2F;code&gt; did, but 100x faster. The Python community did not switch because they were told to. They switched because the tools were obviously better the first time they used them.&lt;&#x2F;p&gt;
&lt;p&gt;The playbook has three steps:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Wrap first.&lt;&#x2F;strong&gt; Build on the existing ecosystem rather than reimplementing everything. &lt;code&gt;uv&lt;&#x2F;code&gt; speaks to the same PyPI index and honors the same package metadata as &lt;code&gt;pip&lt;&#x2F;code&gt;. hx wraps GHC and Cabal.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Tame second.&lt;&#x2F;strong&gt; Add better error messages, faster startup, unified configuration, and workflows that make sense. This is where most of the user-facing value lives.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Replace last.&lt;&#x2F;strong&gt; Only replace underlying components when you have to. For hx, that meant building a native build mode that bypasses Cabal entirely for simple projects, and a native dependency resolver in Rust that is 24x faster than Cabal’s constraint solver.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;This approach is pragmatic in a way that matters. You do not need to rebuild the world to improve the experience. You need to rebuild the surface. The parts that people touch every day.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-rust&quot;&gt;Why Rust&lt;&#x2F;h2&gt;
&lt;p&gt;The choice to build hx in Rust is not tribalism. It is a direct response to a structural problem.&lt;&#x2F;p&gt;
&lt;p&gt;Haskell’s existing tooling is written in Haskell. This creates a bootstrap problem. To build the build tool, you need the compiler. To install the compiler, you need the compiler manager. To build the compiler manager, you need a compiler. The dependency chain is circular, and every link in it is slow to compile.&lt;&#x2F;p&gt;
&lt;p&gt;Think about what this means in practice. You are a new developer. You want to try Haskell. You run ghcup’s install script, which fetches a pre-built ghcup binary, itself a Haskell program compiled with GHC. ghcup then installs GHC and Cabal, and Cabal is also a Haskell binary compiled with GHC. If a pre-built binary does not exist for your platform, you need GHC to build Cabal, but you need Cabal to set up GHC. The bootstrap documentation exists because the bootstrap problem exists, and it exists because the tools are written in the language they manage.&lt;&#x2F;p&gt;
&lt;p&gt;GHC’s runtime system adds initialization overhead to every invocation. When you type &lt;code&gt;cabal build&lt;&#x2F;code&gt;, the first 45 milliseconds are spent starting the GHC runtime before Cabal even begins to think about your project. Stack is worse at 89 milliseconds. These numbers sound small until you are running commands in a tight development loop, hitting save and expecting the build to start instantly. Or in CI, where the build tool is invoked hundreds of times across a pipeline and those milliseconds compound into minutes.&lt;&#x2F;p&gt;
&lt;p&gt;hx starts in 12 milliseconds. Not because Rust is magic. Because a native binary without a garbage-collected runtime does not need to initialize one. The tool should not have the same dependencies as the thing it manages.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;hx build    # 12ms startup + build time
cabal build # 45ms startup + build time
stack build # 89ms startup + build time
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Memory tells the same story:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Tool&lt;&#x2F;th&gt;&lt;th&gt;Startup memory&lt;&#x2F;th&gt;&lt;th&gt;Build memory (simple project)&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;hx&lt;&#x2F;td&gt;&lt;td&gt;8 MB&lt;&#x2F;td&gt;&lt;td&gt;45 MB&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;cabal&lt;&#x2F;td&gt;&lt;td&gt;45 MB&lt;&#x2F;td&gt;&lt;td&gt;250 MB&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;stack&lt;&#x2F;td&gt;&lt;td&gt;85 MB&lt;&#x2F;td&gt;&lt;td&gt;320 MB&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;For a tool you invoke constantly, this matters. Especially on CI runners with constrained memory, or on a laptop where you have four terminal panes open with different projects.&lt;&#x2F;p&gt;
&lt;p&gt;The Rust decision also solves the distribution problem. A Rust binary is a single static executable that cross-compiles trivially. No runtime dependencies. No “install GHC first so you can install the tool that installs GHC.” &lt;code&gt;curl | sh&lt;&#x2F;code&gt; and you are running. hx is available via the install script, Cargo, aqua, winget on Windows, and Homebrew. Every distribution channel ships a self-contained binary.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-hx-actually-does&quot;&gt;What hx actually does&lt;&#x2F;h2&gt;
&lt;p&gt;hx replaces the &lt;code&gt;cabal + stack + ghcup + fourmolu + hlint&lt;&#x2F;code&gt; workflow with a single binary:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;curl -fsSL https:&amp;#x2F;&amp;#x2F;arcanist.sh&amp;#x2F;hx&amp;#x2F;install.sh | sh
hx new my-app &amp;amp;&amp;amp; cd my-app
hx run
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;No ghcup. No stack. No cabal-install. One tool, one configuration file, one lockfile format.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;configuration&quot;&gt;Configuration&lt;&#x2F;h3&gt;
&lt;p&gt;The configuration is &lt;code&gt;hx.toml&lt;&#x2F;code&gt;. Not a &lt;code&gt;.cabal&lt;&#x2F;code&gt; file with its custom syntax that nobody can parse without a library. Not a &lt;code&gt;stack.yaml&lt;&#x2F;code&gt; with YAML indentation traps. TOML, the same format that Rust (&lt;code&gt;Cargo.toml&lt;&#x2F;code&gt;), Python (&lt;code&gt;pyproject.toml&lt;&#x2F;code&gt;), and most modern tools have converged on.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[project]
name = &amp;quot;my-app&amp;quot;
kind = &amp;quot;bin&amp;quot;

[toolchain]
ghc = &amp;quot;9.8.2&amp;quot;

[build]
optimization = 2
warnings = true

[format]
formatter = &amp;quot;fourmolu&amp;quot;

[lint]
hlint = true

[hooks]
pre-build = &amp;quot;scripts&amp;#x2F;generate-version.sh&amp;quot;
post-test = &amp;quot;scripts&amp;#x2F;notify.sh&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Everything in one file. The toolchain version is pinned per-project, so different projects can use different GHC versions without conflict. When you run &lt;code&gt;hx build&lt;&#x2F;code&gt; in a project pinned to GHC 9.8.2 and another pinned to 9.6.4, hx switches automatically. No &lt;code&gt;ghcup set&lt;&#x2F;code&gt; commands. No global state.&lt;&#x2F;p&gt;
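&lt;p&gt;Day to day, that behavior looks something like this, with two hypothetical projects side by side:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;cd service-a &amp;amp;&amp;amp; hx build      # hx.toml pins ghc = &amp;quot;9.8.2&amp;quot;, hx uses 9.8.2
cd ..&amp;#x2F;service-b &amp;amp;&amp;amp; hx build   # hx.toml pins ghc = &amp;quot;9.6.4&amp;quot;, hx switches
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;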
&lt;h3 id=&quot;lockfiles&quot;&gt;Lockfiles&lt;&#x2F;h3&gt;
&lt;p&gt;The lockfile is also TOML. Every dependency is pinned with a sha256 fingerprint:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;version = 1
ghc = &amp;quot;9.8.2&amp;quot;
created_at = &amp;quot;2026-01-16T00:00:00Z&amp;quot;

[[package]]
name = &amp;quot;aeson&amp;quot;
version = &amp;quot;2.2.1.0&amp;quot;
sha256 = &amp;quot;a5a5b8a...&amp;quot;
deps = [&amp;quot;base&amp;quot;, &amp;quot;text&amp;quot;, &amp;quot;bytestring&amp;quot;]
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;code&gt;hx lock --check&lt;&#x2F;code&gt; in CI fails if the lockfile is stale. This is deterministic by default. Not “deterministic if you remember to run &lt;code&gt;cabal freeze&lt;&#x2F;code&gt; and commit the freeze file and hope nobody ran &lt;code&gt;cabal update&lt;&#x2F;code&gt; on a different machine.” Deterministic the way &lt;code&gt;cargo&lt;&#x2F;code&gt; and &lt;code&gt;uv&lt;&#x2F;code&gt; are deterministic. Automatically. Every time.&lt;&#x2F;p&gt;
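&lt;p&gt;In a pipeline, that check is a single gate before the build. A minimal sketch of the CI steps, assuming the usual convention that a failed check exits non-zero:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;hx lock --check   # fails the job if the lockfile is stale
hx build
hx test
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;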
&lt;p&gt;If you are coming from Stack, you might say “Stack already has lockfiles.” It does. Stack’s approach is to pin to a Stackage snapshot, which gives you a curated set of packages known to work together. This is a valid approach, but it means your dependency versions are dictated by what the Stackage maintainers decided to include in that snapshot. If you need a newer version of a package that is not in the current LTS, you start adding &lt;code&gt;extra-deps&lt;&#x2F;code&gt;, and your reproducibility story gets harder to reason about with every override. hx resolves from Hackage directly, pins every version, and verifies checksums. You control exactly what you get.&lt;&#x2F;p&gt;
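&lt;p&gt;For contrast, the Stack escape hatch looks something like this. An illustrative &lt;code&gt;stack.yaml&lt;&#x2F;code&gt; (snapshot name chosen for the example, version borrowed from the lockfile above) pulling one package from outside the snapshot:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;resolver: lts-22.20   # the snapshot decides most of your versions
extra-deps:
- aeson-2.2.1.0       # hand-pinned outside the snapshot
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;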
&lt;h3 id=&quot;native-builds&quot;&gt;Native builds&lt;&#x2F;h3&gt;
&lt;p&gt;For simple projects with only &lt;code&gt;base&lt;&#x2F;code&gt; dependencies, hx has a native build mode that bypasses Cabal entirely. It constructs the module graph itself and invokes GHC directly:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Operation&lt;&#x2F;th&gt;&lt;th&gt;hx native&lt;&#x2F;th&gt;&lt;th&gt;hx (cabal backend)&lt;&#x2F;th&gt;&lt;th&gt;cabal&lt;&#x2F;th&gt;&lt;th&gt;stack&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Cold build&lt;&#x2F;td&gt;&lt;td&gt;0.48s&lt;&#x2F;td&gt;&lt;td&gt;2.52s&lt;&#x2F;td&gt;&lt;td&gt;2.68s&lt;&#x2F;td&gt;&lt;td&gt;3.2s&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Incremental&lt;&#x2F;td&gt;&lt;td&gt;0.05s&lt;&#x2F;td&gt;&lt;td&gt;0.35s&lt;&#x2F;td&gt;&lt;td&gt;0.39s&lt;&#x2F;td&gt;&lt;td&gt;0.52s&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Single file change&lt;&#x2F;td&gt;&lt;td&gt;0.31s&lt;&#x2F;td&gt;&lt;td&gt;1.42s&lt;&#x2F;td&gt;&lt;td&gt;1.42s&lt;&#x2F;td&gt;&lt;td&gt;1.8s&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;5.6x faster cold builds. 7.8x faster incremental builds. The difference comes from eliminating Cabal’s package database queries, build plan calculation, and job scheduling overhead.&lt;&#x2F;p&gt;
&lt;p&gt;Where does the time go in a normal Cabal build? Roughly: runtime initialization (45ms), reading the package database (80-120ms), computing the build plan (200-400ms depending on dependency count), checking file timestamps through the Cabal build system (100-200ms), and only then invoking GHC. hx native mode skips all of that. It reads file timestamps directly, constructs a minimal module graph, and calls GHC with exactly the flags needed. For projects with external dependencies, hx falls back to the Cabal backend transparently. You do not have to think about it.&lt;&#x2F;p&gt;
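&lt;p&gt;The end state of native mode is close to what you would type if you were driving GHC by hand. Roughly this, with illustrative paths, and &lt;code&gt;-O2&lt;&#x2F;code&gt; standing in for the &lt;code&gt;optimization = 2&lt;&#x2F;code&gt; setting from &lt;code&gt;hx.toml&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# Approximately the call hx native mode makes after its own timestamp check
ghc --make app&amp;#x2F;Main.hs -O2 -outputdir .hx&amp;#x2F;build -o .hx&amp;#x2F;bin&amp;#x2F;my-app
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;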
&lt;h3 id=&quot;dependency-resolution&quot;&gt;Dependency resolution&lt;&#x2F;h3&gt;
&lt;p&gt;hx includes a native dependency resolver written in Rust. The &lt;code&gt;hx-solver&lt;&#x2F;code&gt; crate implements constraint resolution using the same algorithm as Cabal’s solver, but without the overhead of GHC’s runtime:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Direct dependencies&lt;&#x2F;th&gt;&lt;th&gt;hx&lt;&#x2F;th&gt;&lt;th&gt;cabal&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;10 packages&lt;&#x2F;td&gt;&lt;td&gt;5ms&lt;&#x2F;td&gt;&lt;td&gt;120ms&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;20 packages&lt;&#x2F;td&gt;&lt;td&gt;18ms&lt;&#x2F;td&gt;&lt;td&gt;450ms&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;50 packages&lt;&#x2F;td&gt;&lt;td&gt;85ms&lt;&#x2F;td&gt;&lt;td&gt;2.8s&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;100 packages&lt;&#x2F;td&gt;&lt;td&gt;320ms&lt;&#x2F;td&gt;&lt;td&gt;12.5s&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;At 100 dependencies, hx resolves in 320 milliseconds. Cabal takes 12.5 seconds. In a real-world test with 20 direct dependencies and their transitive closure, hx resolved in 1.2 seconds versus 8.5 seconds for &lt;code&gt;cabal freeze&lt;&#x2F;code&gt;. Stack’s resolver is faster at 0.8 seconds because Stackage snapshots are pre-computed, but you trade resolution speed for version flexibility.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;error-messages&quot;&gt;Error messages&lt;&#x2F;h3&gt;
&lt;p&gt;Haskell’s reputation for cryptic error messages is partly deserved and partly a tooling problem. GHC type errors can be daunting, but build tool errors are often worse because they mix configuration issues with compilation issues in unhelpful ways. “Could not resolve dependencies” from Cabal tells you almost nothing about which constraint is blocking resolution or what you could change to fix it.&lt;&#x2F;p&gt;
&lt;p&gt;hx uses structured error codes with actionable suggestions:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;E0012: Package &amp;#x27;aeson&amp;#x27; not found in local index

  The package index may be outdated.
  Run: hx index update

  Or add the package explicitly:
  Run: hx add aeson
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;pre&gt;&lt;code&gt;E0020: GHC version mismatch

  Project requires GHC 9.8.2 but 9.6.4 is active.
  Run: hx toolchain install 9.8.2
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Every error has a code, a human-readable explanation, and a concrete command to fix it. &lt;code&gt;hx doctor&lt;&#x2F;code&gt; runs a comprehensive diagnostic of your entire environment, checking GHC, Cabal, HLS, PATH configuration, and project setup, reporting exactly what is wrong and how to fix each issue.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;everything-else&quot;&gt;Everything else&lt;&#x2F;h3&gt;
&lt;p&gt;hx bundles the rest of the development workflow too. &lt;code&gt;hx fmt&lt;&#x2F;code&gt; wraps fourmolu for formatting. &lt;code&gt;hx lint&lt;&#x2F;code&gt; wraps hlint. &lt;code&gt;hx coverage --html --open&lt;&#x2F;code&gt; generates an HTML coverage report and opens it in your browser. &lt;code&gt;hx doc --open&lt;&#x2F;code&gt; builds Haddock documentation and serves it locally. &lt;code&gt;hx watch&lt;&#x2F;code&gt; detects file changes in 15 milliseconds (versus 180ms for &lt;code&gt;stack --file-watch&lt;&#x2F;code&gt;) and triggers rebuilds or test runs. &lt;code&gt;hx profile --heap&lt;&#x2F;code&gt; generates heap profiles for memory analysis.&lt;&#x2F;p&gt;
&lt;p&gt;The goal is that you should never need to leave hx to do something with your Haskell project. Not because hx reimplements everything, but because it wraps the best existing tools with a consistent interface and fast orchestration.&lt;&#x2F;p&gt;
&lt;p&gt;There is also a plugin system using Steel, a Scheme dialect, for custom build lifecycle hooks:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;scheme&quot; class=&quot;language-scheme &quot;&gt;&lt;code class=&quot;language-scheme&quot; data-lang=&quot;scheme&quot;&gt;;; .hx&amp;#x2F;plugins&amp;#x2F;check-todos.scm
(define (on-build-success project)
  (when (file-exists? &amp;quot;TODO.md&amp;quot;)
    (warn &amp;quot;Do not forget to update TODO.md&amp;quot;)))

(register-hook &amp;#x27;post-build on-build-success)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Plugins live in &lt;code&gt;.hx&#x2F;plugins&#x2F;&lt;&#x2F;code&gt; and time out after a configurable interval so a misbehaving script cannot stall your build. They hook into pre-build, post-build, pre-test, post-test, and other lifecycle events. Lightweight enough that you can add project-specific automation without maintaining a separate build script.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;migration&quot;&gt;Migration&lt;&#x2F;h3&gt;
&lt;p&gt;If you have an existing project, hx can import it:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;hx init --from-cabal   # Import from an existing .cabal project
hx init --from-stack   # Import from a Stack project
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;It reads your existing configuration, generates &lt;code&gt;hx.toml&lt;&#x2F;code&gt;, creates a lockfile, and you are running. The &lt;code&gt;.cabal&lt;&#x2F;code&gt; file is preserved for compatibility. hx reads it for package metadata and dependency specifications, but the build configuration and toolchain management move to &lt;code&gt;hx.toml&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-architecture&quot;&gt;The architecture&lt;&#x2F;h2&gt;
&lt;p&gt;hx is structured as a Rust workspace with 14 crates:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;hx workspace architecture&quot;&gt;hx-cli
                            |
              +-------------+-------------+
              |             |             |
          hx-core       hx-config      hx-ui
              |             |
    +---------+---------+   |
    |         |         |   |
hx-cabal  hx-solver  hx-lock
    |         |
hx-cache  hx-toolchain
    |
hx-doctor

Separate concerns: hx-plugins, hx-lsp, hx-warnings, hx-telemetry&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Each crate has a single responsibility. &lt;code&gt;hx-solver&lt;&#x2F;code&gt; knows how to resolve dependencies but nothing about building. &lt;code&gt;hx-cabal&lt;&#x2F;code&gt; knows how to invoke Cabal but nothing about configuration. &lt;code&gt;hx-toolchain&lt;&#x2F;code&gt; manages GHC installations but nothing about lockfiles. This separation means you can test the resolver without setting up a build environment, and you can change the build backend without touching the resolver.&lt;&#x2F;p&gt;
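&lt;p&gt;To make the boundary concrete, here is a hypothetical sketch of the resolver’s seam. Not the actual &lt;code&gt;hx-solver&lt;&#x2F;code&gt; API, just the shape a single-responsibility crate boundary takes in Rust:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Hypothetical sketch, not the actual hx-solver API.
pub struct Constraint { pub name: String, pub range: String }
pub struct Pin { pub name: String, pub version: String, pub sha256: String }
pub struct PackageIndex; &amp;#x2F;&amp;#x2F; stand-in for a Hackage index snapshot
pub enum SolveError { NoVersionSatisfies(String) }

&amp;#x2F;&amp;#x2F; The resolver maps constraints to pins and nothing else: no building,
&amp;#x2F;&amp;#x2F; no toolchain management, no lockfile serialization.
pub fn resolve(_index: &amp;amp;PackageIndex, roots: &amp;amp;[Constraint]) -&amp;gt; Result&amp;lt;Vec&amp;lt;Pin&amp;gt;, SolveError&amp;gt; {
    &amp;#x2F;&amp;#x2F; Real solving elided; the point is the signature and the boundary.
    Ok(roots.iter().map(|c| Pin {
        name: c.name.clone(),
        version: String::from(&amp;quot;0.0.0&amp;quot;),
        sha256: String::new(),
    }).collect())
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;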
&lt;p&gt;The &lt;code&gt;hx-lsp&lt;&#x2F;code&gt; crate is worth calling out. It provides language server protocol support, which means hx can manage HLS (Haskell Language Server) versions matched to your project’s GHC version. When your project uses GHC 9.8.2, hx ensures HLS is compatible. No more “HLS crashed because it was compiled with a different GHC than your project uses.” This is a problem that has frustrated Haskell developers for years, and it is entirely a tooling coordination problem.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-bigger-picture&quot;&gt;The bigger picture&lt;&#x2F;h2&gt;
&lt;p&gt;I built hx because I needed it. But the timing is not accidental.&lt;&#x2F;p&gt;
&lt;p&gt;In &lt;a href=&quot;&#x2F;articles&#x2F;what-programming-languages-become-when-ai-writes-the-code&#x2F;&quot;&gt;The Last Programming Language Might Not Be for Humans&lt;&#x2F;a&gt;, I laid out three futures for programming languages as AI becomes the primary author of code. The first future is explicit languages designed to minimize LLM errors through tight feedback loops. The second is declarative languages where code describes what something is rather than how to compute it, and the type system acts as a proof checker. The third is no language at all, where AI generates machine code directly.&lt;&#x2F;p&gt;
&lt;p&gt;I bet on the second future. Here is why.&lt;&#x2F;p&gt;
&lt;p&gt;When an LLM writes imperative code, it has to track mutable state across dozens of lines, reason about the order of side effects, and hold implicit language behaviors in context. When it writes Haskell, it expresses a relationship between inputs and outputs, and the compiler verifies that the relationship is consistent. The model does not need to simulate execution step by step. It needs to generate an expression that satisfies type constraints. This is what LLMs are good at. Pattern recognition. Constraint satisfaction. Formal structure.&lt;&#x2F;p&gt;
&lt;p&gt;Consider what happens when an AI generates a Haskell function with a wrong type. The compiler does not produce a vague runtime error three layers deep in a call stack. It produces a precise, localized type error at compile time: “Expected &lt;code&gt;[LogEntry] -&amp;gt; [ErrorSummary]&lt;&#x2F;code&gt;, got &lt;code&gt;[LogEntry] -&amp;gt; [LogEntry]&lt;&#x2F;code&gt;.” The model reads this, adjusts, and re-generates. The feedback loop is tight, but unlike the explicit-language approach, the tightness comes from the type system itself, not from bolted-on contracts. The correctness guarantees are structural, not ceremonial.&lt;&#x2F;p&gt;
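&lt;p&gt;A minimal sketch of that loop, with hypothetical types standing in for a real log pipeline:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;data LogEntry     = LogEntry     { status :: Int }
data ErrorSummary = ErrorSummary { code :: Int }

summarize :: [LogEntry] -&amp;gt; [ErrorSummary]
-- A model&amp;#x27;s first attempt filters but never transforms, so GHC rejects
-- it at compile time: the body is [LogEntry] -&amp;gt; [LogEntry].
-- summarize = filter ((&amp;gt;= 500) . status)
-- The regenerated version satisfies the signature:
summarize = map (ErrorSummary . status) . filter ((&amp;gt;= 500) . status)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;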
&lt;p&gt;This matters even more when you think about code that has to survive time. Procedural code decays. Three years from now, nobody remembers why a function mutates a global variable on line 47. The variable name made sense to whoever wrote it. The mutation order made sense in the context of the original design. But context evaporates. Types do not. A function signature that says &lt;code&gt;Request -&amp;gt; Policy -&amp;gt; Decision&lt;&#x2F;code&gt; is self-documenting in a way that no amount of comments on imperative code can match. The proof is in the types, and the types are checked by the compiler, not by human memory.&lt;&#x2F;p&gt;
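&lt;p&gt;To make that concrete, here is a toy version of such a signature, with hypothetical types:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;data Request = Request { principal :: String, resource :: String }
newtype Policy = Policy { allows :: Request -&amp;gt; Bool }
data Decision = Allow | Deny deriving Show

-- The signature alone tells the next reader what flows in and out.
decide :: Request -&amp;gt; Policy -&amp;gt; Decision
decide req policy = if allows policy req then Allow else Deny
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;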
&lt;p&gt;But none of that matters if nobody can set up a Haskell project without losing thirty minutes to toolchain configuration. The language’s virtues are locked behind a tooling wall. You can have the most expressive type system in production use, the most rigorous correctness guarantees, the best theoretical fit for agent-assisted development, and it means nothing if a developer’s first experience is fighting &lt;code&gt;ghcup&lt;&#x2F;code&gt; for half an hour. First impressions are permanent, and Haskell’s first impression has been “powerful but painful” for too long.&lt;&#x2F;p&gt;
&lt;p&gt;If Haskell is going to be relevant in a world where AI writes most of the code, the experience of using Haskell has to be as fast and frictionless as the experience of using Rust or Python. Not comparable. Equal. That is what hx is for. Not to make Haskell slightly more convenient. To remove the tooling objection entirely, so the conversation can be about the language’s actual strengths instead of its ecosystem’s historical baggage.&lt;&#x2F;p&gt;
&lt;p&gt;And hx is just the first step. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;arcanist.sh&#x2F;bhc&#x2F;&quot;&gt;BHC&lt;&#x2F;a&gt;, the Basel Haskell Compiler, goes further. GHC is a remarkable piece of engineering, but it was designed for a world where Haskell ran on desktops and servers with one performance profile. BHC is a clean-slate Haskell compiler, also written in Rust, offering six runtime profiles for different deployment targets:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Default&lt;&#x2F;strong&gt;: lazy evaluation, GC-managed, GHC-compatible semantics for general applications&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Server&lt;&#x2F;strong&gt;: structured concurrency with automatic cancellation, observability hooks, deadline-aware scheduling&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Numeric&lt;&#x2F;strong&gt;: strict-by-default in hot paths, tensor lowering, SIMD auto-vectorization, GPU backends for CUDA and ROCm&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Edge&lt;&#x2F;strong&gt;: minimal runtime footprint, direct WASM emission, designed for Cloudflare Workers and Fastly Compute&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Realtime&lt;&#x2F;strong&gt;: bounded GC pauses under 1 millisecond, arena allocation, designed for games and audio processing&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Embedded&lt;&#x2F;strong&gt;: no GC at all, static allocation, bare-metal targets like ARM Cortex-M&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Same language. Same type safety. Different performance contracts depending on what you are building. Your security policy engine compiles with the server profile. Your tensor pipeline compiles with the numeric profile and runs on a GPU. Your edge function compiles to WASM. You do not change your source code. You change the compiler flag.&lt;&#x2F;p&gt;
&lt;p&gt;hx already supports BHC as an alternative backend:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;hx build --compiler=bhc --profile=server
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;One flag. Same project. Different runtime.&lt;&#x2F;p&gt;
&lt;p&gt;The vision behind &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;arcanist.sh&quot;&gt;arcanist.sh&lt;&#x2F;a&gt; is that Haskell’s ideas deserve infrastructure that matches their ambition. The language has always been decades ahead of its tooling. hx closes the gap on the developer experience side. BHC closes it on the runtime side. Together, they make the case that Haskell is not a language for academics and hobbyists. It is a language for the era we are entering, where correctness is not a luxury, it is the load-bearing structure of software that AI writes and humans verify.&lt;&#x2F;p&gt;
&lt;p&gt;The tooling is not separate from the thesis. The tooling IS the thesis.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-bet-what-if-i-am-wrong&quot;&gt;The bet: what if I am wrong?&lt;&#x2F;h2&gt;
&lt;p&gt;I want to be direct about something. This is a gamble.&lt;&#x2F;p&gt;
&lt;p&gt;I do not know whether Haskell will go through a revival. Nobody does. Nobody knows how AI-assisted development will actually evolve, which languages will matter in five years, or whether the thesis I outlined in the previous post will hold up against what reality delivers. I have a conviction, not a crystal ball.&lt;&#x2F;p&gt;
&lt;p&gt;I spent months building hx and BHC. Months of my own time, and to be perfectly blunt, a significant number of Anthropic’s Claude tokens. I pair-programmed most of this with Claude Code on my Max subscription, and that is not a footnote. It is part of the story. The tools I am building for AI-assisted Haskell development were themselves built using AI-assisted development. If that sounds circular, it is. The thesis tested itself during its own construction.&lt;&#x2F;p&gt;
&lt;p&gt;But I could be wrong. Haskell could remain niche forever. The AI era could favor a language nobody has thought of yet. The intermediate layer might not evolve the way I expect. The industry might double down on Python and TypeScript for agent-assisted workflows and never look back. These are all plausible outcomes.&lt;&#x2F;p&gt;
&lt;p&gt;What I can do is build toward what I believe in and put the work out in the open. If I am right, Haskell gets the tooling it always deserved, and the language is ready when the moment arrives. If I am wrong, the ideas in hx and BHC (fast Rust-based tooling, deterministic lockfiles, multiple runtime profiles, structured error messages) are valuable regardless. Good infrastructure design does not expire just because the language it serves does not win the popularity contest.&lt;&#x2F;p&gt;
&lt;p&gt;And honestly, even if the bet does not pay off, I would rather have tried and been wrong than watched from the sidelines while the most elegant language I have ever used slowly faded because nobody bothered to fix the parts that were not the language.&lt;&#x2F;p&gt;
&lt;p&gt;At least I have tried.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;try-it&quot;&gt;Try it&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;curl -fsSL https:&amp;#x2F;&amp;#x2F;arcanist.sh&amp;#x2F;hx&amp;#x2F;install.sh | sh
hx new my-app &amp;amp;&amp;amp; cd my-app
hx run
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Then try &lt;code&gt;hx doctor&lt;&#x2F;code&gt;, &lt;code&gt;hx fmt&lt;&#x2F;code&gt;, &lt;code&gt;hx test --watch&lt;&#x2F;code&gt;. See how it feels when the tooling gets out of your way.&lt;&#x2F;p&gt;
&lt;p&gt;hx is MIT-licensed and open source. If you have opinions about Haskell tooling, I want to hear them.&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>The Last Programming Language Might Not Be for Humans</title>
          <pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate>
          <link>https://raskell.io/articles/what-programming-languages-become-when-ai-writes-the-code/</link>
          <guid>https://raskell.io/articles/what-programming-languages-become-when-ai-writes-the-code/</guid>
          <description xml:base="https://raskell.io/articles/what-programming-languages-become-when-ai-writes-the-code/">&lt;p&gt;This morning I was standing at my desk, drinking watered-down instant coffee, doing what I do every morning after triaging the high-alert emails and notifications: thirty minutes of HackerNews. It is a ritual I time-box and never skip. I go to the office every day, and whether I am at that desk or at my home desk, the morning is the same. Coffee, posture, front page.&lt;&#x2F;p&gt;
&lt;p&gt;HackerNews remains one of the best ways to keep a finger on the pulse of the Bay Area, of tech, of science, of whatever intellectually stimulating thought surfaced overnight. I follow a handful of curated newsletters too, but I have noticed over the years that HN covers most of their content anyway if you know how to filter high signal from low signal. I could write something about a different link every single day. Most mornings I resist. This morning I did not.&lt;&#x2F;p&gt;
&lt;p&gt;A link caught my eye. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;veralang.dev&#x2F;&quot;&gt;Vera&lt;&#x2F;a&gt;, a new programming language “designed for machines to write, not humans.” Statically typed, purely functional, compiles to WebAssembly, uses Microsoft’s Z3 solver for contract verification. It has a ferret mascot. I like animal mascots for tech projects. Ferris the crab for Rust, the gopher for Go, the Shisa guardian dog for &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;&quot;&gt;Zentinel&lt;&#x2F;a&gt;. The ferret is a good choice.&lt;&#x2F;p&gt;
&lt;p&gt;But the mascot is not why I stopped scrolling. I stopped because somebody else had arrived at the same conclusion I had reached back in December: that conventional programming languages are not adapted to the capabilities of this new technology, and that something has to change.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-christmas-realization&quot;&gt;The Christmas realization&lt;&#x2F;h2&gt;
&lt;p&gt;I had been a paying Claude Code subscriber since May 2025, when Anthropic first launched it. The CLI orientation made sense to me immediately, even though the early rate limits and model quality left me wanting more. By December 2025, I had upgraded to the Max subscription, I was on vacation, and Anthropic had made the daily limits generous that month. I was burning through every idea I had accumulated over the years. Some were good. Some were terrible. All of them were finally testable in a way they had not been before, because I could pair-program with a model that kept up. I wrote about that shift more fully in &lt;a href=&quot;&#x2F;articles&#x2F;how-i-work-these-days&#x2F;&quot;&gt;How I Work These Days&lt;&#x2F;a&gt;, the short version being that late 2025 was when the relationship between ambition and execution fundamentally changed for me. The dam broke. Ideas that had been sitting in notebooks for years started becoming real software in days.&lt;&#x2F;p&gt;
&lt;p&gt;It was during one of those late-night sessions, deep in a Claude Code conversation about compiler design, that a thought crystallized. I had been building &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;arcanist.sh&#x2F;hx&#x2F;&quot;&gt;hx&lt;&#x2F;a&gt; and thinking about how AI would change the way people write Haskell, when I realized the question was bigger than Haskell. Agent-assisted software development is approaching a point where the output language itself, the intermediate layer we use to express how information should be processed, is going to change fundamentally.&lt;&#x2F;p&gt;
&lt;p&gt;This is not an abstract observation. The history of software is a history of abstraction layers accumulating, each one letting the next generation of programmers ignore what the previous generation had to master.&lt;&#x2F;p&gt;
&lt;p&gt;It started with machine code. Raw opcodes, different for every CPU architecture. If you wanted to write software, you memorized instruction sets and wrote them by hand. People published thick reference manuals, and there were engineers who could hold entire instruction set architectures in their heads. Some of them are still around, and some of them still swear by that methodology.&lt;&#x2F;p&gt;
&lt;p&gt;Then assemblers gave those opcodes human-readable names. &lt;code&gt;MOV AX, BX&lt;&#x2F;code&gt; instead of &lt;code&gt;89 D8&lt;&#x2F;code&gt;. You were still writing for a specific architecture, still thinking in registers, but now you could read what you wrote. The first abstraction was not a new capability. It was legibility.&lt;&#x2F;p&gt;
&lt;p&gt;Then C arrived and gave us a portable abstraction over hardware. You stopped thinking in registers and started thinking in functions and pointers. C compiled down to architecture-specific assembly, but you did not have to care which architecture. One language, many targets. The reference manuals shifted from instruction sets to language specifications.&lt;&#x2F;p&gt;
&lt;p&gt;Then interpreters and virtual machines added another layer. The JVM, Python, Perl. You stopped thinking about memory layout and started thinking about objects, iterators, garbage collection. The abstraction was thicker, the feedback loop was faster, and the audience expanded from hardware engineers to anyone who could write a script.&lt;&#x2F;p&gt;
&lt;p&gt;Then IDEs changed how you interacted with the language itself. Syntax highlighting, autocomplete, integrated debuggers, refactoring tools. You stopped holding the entire API surface in your head because the editor held it for you. The language did not change, but the cognitive cost of using it dropped. IntelliSense was not a language feature. It was an abstraction over the programmer’s memory.&lt;&#x2F;p&gt;
&lt;p&gt;Then the internet changed how code moved. Open source repositories, package managers, shared libraries. You stopped writing everything from scratch and started composing from parts other people had built. The reference material moved from printed books to web documentation, wikis, tutorials.&lt;&#x2F;p&gt;
&lt;p&gt;Then StackOverflow changed how people learned. Instead of reading manuals cover to cover, you searched for the specific problem you had and found someone who had already solved it. The knowledge layer itself became an abstraction. You did not need to understand the full system. You needed to find the right answer and adapt it to your context. StackOverflow never compiled a line of code, but it was an intermediate layer between human confusion and working software, and it was arguably more important to the average developer’s productivity than any language feature shipped in the same decade.&lt;&#x2F;p&gt;
&lt;p&gt;And now StackOverflow is receding. Not because the answers got worse. Because the next abstraction layer arrived. AI coding agents do what StackOverflow did, finding known solutions to known problems, but they also do what StackOverflow never could: synthesize novel solutions, hold project context across files, and generate working code from intent descriptions. The pattern is the same as every previous transition. Each layer makes the previous one less essential, not by replacing it but by absorbing it into a higher-level abstraction.&lt;&#x2F;p&gt;
&lt;p&gt;Programming languages have always been shaped by who writes them. Assembly was shaped by hardware engineers who thought in registers and opcodes. C was shaped by systems programmers who needed portable abstractions over memory. Python was shaped by people who wanted to get things done without fighting the syntax. COBOL was shaped by business analysts who wanted code that read like English. Every language carries the fingerprints of its intended author. If AI becomes the primary author of code, it follows that the language should adapt to that new author’s strengths and weaknesses. This is not a break from the pattern. It is the pattern, doing what it has always done.&lt;&#x2F;p&gt;
&lt;p&gt;I kept coming back to three concrete possibilities. Three ways the intermediate layer could evolve. Not competing visions, exactly. More like three points on a timeline.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;make-the-language-explicit-enough-for-machines&quot;&gt;Make the language explicit enough for machines&lt;&#x2F;h2&gt;
&lt;p&gt;Anyone who has spent real time with coding agents has seen the failure mode. You ask an LLM to write a Python function. The output looks plausible. It passes linting. The variable names are reasonable. The structure follows common patterns. Then it fails at runtime because of a subtle implicit behavior the model did not track. A default mutable argument that gets shared across calls. A generator that gets silently exhausted on second iteration. A method that returns &lt;code&gt;None&lt;&#x2F;code&gt; instead of raising an exception because some library author decided that was more “Pythonic.” The model was not wrong about the algorithm. It was wrong about the language’s hidden behaviors.&lt;&#x2F;p&gt;
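&lt;p&gt;The mutable default argument is the canonical trap. Five lines that look perfectly reasonable and leak state across calls:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;python&quot; class=&quot;language-python &quot;&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;def add_tag(tag, tags=[]):   # the default list is created once, at definition
    tags.append(tag)
    return tags

add_tag(&amp;quot;a&amp;quot;)   # [&amp;quot;a&amp;quot;]
add_tag(&amp;quot;b&amp;quot;)   # [&amp;quot;a&amp;quot;, &amp;quot;b&amp;quot;]  state from the first call leaks in
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;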
&lt;p&gt;This is the problem Vera is trying to solve, and the first approach I had been contemplating. Take the simplicity and explicitness of Go, push it further, and design a language where every instruction, every method, every design pattern is as unambiguous as possible. No implicit behaviors. No naming ambiguity. No style choices. One canonical way to write everything, so that an LLM does not have to waste inference tokens reasoning about which of seventeen valid approaches to take.&lt;&#x2F;p&gt;
&lt;p&gt;The language would need excellent compiler diagnostics. Not just “type mismatch on line 47,” but structured feedback that a model can parse, understand, and act on immediately. Rust and Elixir already do this well for humans. Do it even better, and do it for machines.&lt;&#x2F;p&gt;
&lt;p&gt;The pipeline looks like this:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;Explicit language feedback loop&quot;&gt;+-------+     prompt      +-------+    explicit     +-----------+
| Human | -------------&amp;gt;  |  LLM  | -------------&amp;gt;  | Verifying |
+-------+                 +-------+    source       | Compiler  |
                             ^                      +-----------+
                             |                           |
                             |   structured error        |
                             |   + suggested fix         |
                             +---------------------------+
                                                         |
                                                    verified
                                                         |
                                                         v
                                                  [correct program]&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The key insight is the feedback loop. The compiler does not just reject bad code. It explains what is wrong in terms the model can act on, with a concrete fix suggestion. The model re-generates. The compiler re-checks. You converge on correct code through iteration, and the tightness of that loop depends on how unambiguous the language is and how actionable the errors are.&lt;&#x2F;p&gt;
&lt;p&gt;Vera is exactly this idea, executed with conviction. Here is what a function looks like:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo&quot;&gt;&lt;code data-lang=&quot;vera&quot;&gt;public fn safe_divide(@Int, @Int -&amp;gt; @Int)
  requires(@Int.1 != 0)
  ensures(@Int.result == @Int.0 &amp;#x2F; @Int.1)
  effects(pure)
{
  @Int.0 &amp;#x2F; @Int.1
}&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;No variable names at all. Parameters are referenced by type and positional index using De Bruijn slot notation. &lt;code&gt;@Int.0&lt;&#x2F;code&gt; is the most recently bound integer, &lt;code&gt;@Int.1&lt;&#x2F;code&gt; is the one before that. Every function must declare its preconditions (&lt;code&gt;requires&lt;&#x2F;code&gt;), postconditions (&lt;code&gt;ensures&lt;&#x2F;code&gt;), and side effects (&lt;code&gt;effects&lt;&#x2F;code&gt;). The compiler verifies contracts statically using Z3 where possible and falls back to runtime checks for what it cannot decide at compile time.&lt;&#x2F;p&gt;
&lt;p&gt;The design principle is sharp: the model does not need to be right, it needs to be checkable. The language constrains the space of valid programs so tightly that the compiler catches mistakes before execution and explains them in natural language. Division by zero is not a runtime exception. It is a contract violation caught during compilation.&lt;&#x2F;p&gt;
&lt;p&gt;Think of it like this. Traditional languages are an open field. You can walk in any direction, and you might end up somewhere useful or you might walk off a cliff. Vera is a guided path with guardrails. You can only go certain directions, and every time you try to step off the path, a sign tells you exactly where to step instead. An LLM on an open field will wander. An LLM on a guided path will converge.&lt;&#x2F;p&gt;
&lt;p&gt;The early benchmark results are interesting, if mixed. Kimi K2.5 apparently writes perfect Vera code, scoring 100% on VeraBench and beating its own Python and TypeScript scores. Other models do not fare as well. Claude Opus 4 scores 88% in Vera versus 96% in Python. The Vera team is honest about this variance, which I appreciate. The &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47696263&quot;&gt;HackerNews discussion&lt;&#x2F;a&gt; was thin, twelve points and three skeptical comments, which tells you how early this conversation still is. The broader developer community has not engaged with this idea seriously yet. That will change.&lt;&#x2F;p&gt;
&lt;p&gt;But there is something that bothers me about optimizing the language for how machines iterate on solutions. Look at the pipeline diagram again. The LLM is still generating step-by-step instructions. It is still describing HOW to do things, just with the ambiguity stripped out and the contracts made explicit. The machine still reasons through the process sequentially. You have reduced the noise in the feedback loop, but you have not changed the nature of the signal. The model is still writing recipes. They are just more precise recipes.&lt;&#x2F;p&gt;
&lt;p&gt;What if you stopped writing recipes entirely?&lt;&#x2F;p&gt;
&lt;h2 id=&quot;describe-what-not-how&quot;&gt;Describe what, not how&lt;&#x2F;h2&gt;
&lt;p&gt;The second idea starts from a different premise. Instead of making procedural code easier for AI to write correctly, change what you ask the AI to express.&lt;&#x2F;p&gt;
&lt;p&gt;Let me make this concrete. Say you need to find the ten most recent server errors in a log. Here is how you would describe that process in a procedural language:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;python&quot; class=&quot;language-python &quot;&gt;&lt;code class=&quot;language-python&quot; data-lang=&quot;python&quot;&gt;errors = []
for entry in log_entries:
    if entry.status &amp;gt;= 500:
        errors.append({
            &amp;quot;time&amp;quot;: entry.timestamp,
            &amp;quot;path&amp;quot;: entry.path,
            &amp;quot;code&amp;quot;: entry.status
        })
errors.sort(key=lambda e: e[&amp;quot;time&amp;quot;], reverse=True)
return errors[:10]
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;You are telling the machine: create an empty list. Walk through each entry. Check a condition. If it matches, build a dictionary and append it. Then sort the accumulated list by a key. Then take the first ten elements. Every step is an instruction. The machine has to track the mutable list, the iteration state, the sort, the slice. And if you get any step wrong, the others might still succeed, producing output that looks correct but is subtly broken. A missing &lt;code&gt;reverse=True&lt;&#x2F;code&gt; and you silently get the oldest errors instead of the most recent.&lt;&#x2F;p&gt;
&lt;p&gt;Here is the same thing in Haskell:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;import Data.Function (on)
import Data.List (sortBy)

recentErrors :: [LogEntry] -&amp;gt; [ErrorSummary]
recentErrors =
    take 10
  . sortBy (flip compare `on` time)
  . map toSummary
  . filter isServerError
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Read it bottom to top: filter server errors, transform each into a summary, sort by time descending, take ten. There is no mutable accumulator. No loop variable. No intermediate state. You are not describing a process. You are describing a relationship between the input and the output. The function says what the result IS, not how to compute it step by step.&lt;&#x2F;p&gt;
&lt;p&gt;The type signature at the top, &lt;code&gt;[LogEntry] -&amp;gt; [ErrorSummary]&lt;&#x2F;code&gt;, is a contract the compiler enforces. If &lt;code&gt;toSummary&lt;&#x2F;code&gt; returns the wrong type, if &lt;code&gt;isServerError&lt;&#x2F;code&gt; does not take a &lt;code&gt;LogEntry&lt;&#x2F;code&gt;, if you accidentally compose functions in an order that does not type-check, the compiler rejects the program before it runs. Not with a vague “object has no attribute” at runtime. With a precise type error at compile time that tells you exactly which piece does not fit.&lt;&#x2F;p&gt;
&lt;p&gt;This distinction matters enormously for AI. Think about what an LLM actually has to track in each case:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;Procedural vs declarative complexity&quot;&gt;Procedural (Vera, Python, Go)          Declarative (Haskell)
================================       ================================
- mutable variables and their          - input type
  current state at each step           - output type
- loop iteration progress              - which transformations to compose
- conditional branching outcomes       - whether types align
- order of side effects
- implicit language behaviors
- names and what they refer to&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The procedural model asks the AI to simulate execution in its head. The declarative model asks the AI to describe a transformation and let the compiler verify it. One plays to an LLM’s weakness (tracking state across many steps). The other plays to its strength (recognizing and generating patterns that satisfy formal constraints).&lt;&#x2F;p&gt;
&lt;p&gt;The pipeline changes fundamentally:&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;Type-driven proof pipeline&quot;&gt;+-------+     prompt      +-------+   type sigs      +-----------+
| Human | -------------&amp;gt;  |  LLM  | -------------&amp;gt;    | Compiler  |
+-------+                 +-------+   pure exprs      +-----------+
                                                           |
                                                      types align?
                                                        &amp;#x2F;      \
                                                      yes       no
                                                      |          |
                                                      v          v
                                              [proven correct]  [precise type error:
                                                                 &amp;quot;Expected LogEntry,
                                                                   got String at ...&amp;quot;]&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;No feedback loop needed in the happy path. If the types align, the program is correct by construction for the properties the type system tracks. The compiler is not iterating with the model. It is checking a proof.&lt;&#x2F;p&gt;
&lt;p&gt;This is the approach I bet on. It is the reason this blog exists.&lt;&#x2F;p&gt;
&lt;p&gt;Raskell, the name behind this site, is a portmanteau of Rascal (as in raccoon, which is my mascot) and Haskell. The language I have loved for years because of how elegantly it describes things close to mathematical proofs. QEDs, not TODOs. When you write a well-typed Haskell function, you are not just writing code. You are writing a proof that a certain transformation is valid, and the compiler is the proof checker.&lt;&#x2F;p&gt;
&lt;p&gt;But loving Haskell and shipping Haskell in production are different experiences, and the gap between them is mostly tooling. The ecosystem is fragmented in a way that has frustrated people for over a decade. &lt;code&gt;cabal&lt;&#x2F;code&gt;, &lt;code&gt;stack&lt;&#x2F;code&gt;, &lt;code&gt;ghcup&lt;&#x2F;code&gt;. Three tools that do overlapping jobs with different opinions about how dependencies should work. If you come from Python, imagine if &lt;code&gt;pip&lt;&#x2F;code&gt;, &lt;code&gt;poetry&lt;&#x2F;code&gt;, and &lt;code&gt;pyenv&lt;&#x2F;code&gt; were all developed independently, with different lockfile formats, different resolver algorithms, and occasional incompatibilities. That was Haskell. Build times were slow. Error messages ranged from helpful to cryptic. The runtime assumed that one performance profile fit every use case. If you wanted Haskell for edge functions, for embedded systems, or for GPU-accelerated numerics, you spent as much time fighting the toolchain as writing the actual code.&lt;&#x2F;p&gt;
&lt;p&gt;The language was right. The surrounding infrastructure was not.&lt;&#x2F;p&gt;
&lt;p&gt;So I started building &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;arcanist.sh&quot;&gt;arcanist.sh&lt;&#x2F;a&gt;, taking the same approach that &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;astral.sh&quot;&gt;astral.sh&lt;&#x2F;a&gt; brought to Python tooling. When astral.sh released &lt;code&gt;uv&lt;&#x2F;code&gt; and &lt;code&gt;ruff&lt;&#x2F;code&gt;, it showed that you could take a mature ecosystem with entrenched tooling, rebuild the developer experience from scratch in Rust, and make everything dramatically faster and more coherent. I wanted to do the same for Haskell. arcanist.sh houses two projects.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;arcanist.sh&#x2F;hx&#x2F;&quot;&gt;hx&lt;&#x2F;a&gt; is a fast, opinionated, next-gen toolchain for Haskell, built in Rust. One tool that replaces the fragmented stack. Managed compiler versions pinned per-project. Deterministic TOML lockfiles with fingerprint verification. 5.6x faster cold builds than cabal. 7.8x faster incremental rebuilds.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;curl -fsSL https:&amp;#x2F;&amp;#x2F;arcanist.sh&amp;#x2F;install.sh | sh
hx new my-app &amp;amp;&amp;amp; cd my-app
hx run
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;No ghcup. No stack. No cabal-install. Just &lt;code&gt;hx&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;arcanist.sh&#x2F;bhc&#x2F;&quot;&gt;BHC&lt;&#x2F;a&gt;, the Basel Haskell Compiler, goes further. It is a clean-slate Haskell compiler written in Rust, not a GHC fork, targeting the Haskell 2026 Platform specification. It uses LLVM for native code generation and offers six runtime profiles that you select at compile time:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Profile&lt;&#x2F;th&gt;&lt;th&gt;Designed for&lt;&#x2F;th&gt;&lt;th&gt;Key trait&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;default&lt;&#x2F;td&gt;&lt;td&gt;General applications&lt;&#x2F;td&gt;&lt;td&gt;Lazy evaluation, GC-managed, GHC-compatible semantics&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;server&lt;&#x2F;td&gt;&lt;td&gt;Backend services&lt;&#x2F;td&gt;&lt;td&gt;Structured concurrency, automatic cancellation, observability hooks&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;numeric&lt;&#x2F;td&gt;&lt;td&gt;ML and scientific compute&lt;&#x2F;td&gt;&lt;td&gt;Strict numerics, tensor lowering, SIMD, GPU backends (CUDA&#x2F;ROCm)&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;edge&lt;&#x2F;td&gt;&lt;td&gt;WASM and CDN workers&lt;&#x2F;td&gt;&lt;td&gt;Minimal footprint, direct WASM emission without LLVM&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;realtime&lt;&#x2F;td&gt;&lt;td&gt;Games, audio, robotics&lt;&#x2F;td&gt;&lt;td&gt;Bounded GC pauses under 1ms, arena allocation&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;embedded&lt;&#x2F;td&gt;&lt;td&gt;Bare metal, microcontrollers&lt;&#x2F;td&gt;&lt;td&gt;No GC at all, static allocation, targets like ARM Cortex-M&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;The same Haskell source, different runtime contracts depending on what you are building. Your security policy engine compiles with the server profile and gets structured concurrency with tracing. Your tensor pipeline compiles with the numeric profile and gets GPU acceleration. Your edge function compiles to WASM and runs on Cloudflare Workers. Same language. Same type safety. Different performance envelopes.&lt;&#x2F;p&gt;
&lt;p&gt;The conviction behind this work is specific. When AI writes the code, the language that survives is not the one optimized for procedural explicitness. It is the one that brings consistency and purity by describing what something is, not how to compute it. AI is extraordinarily good at generating expressions that satisfy formal constraints. And source code that reads like a proof can survive time. It can survive maintainer burnout. It can survive the fact that three years from now, nobody remembers why a function was written the way it was. The types remember. The proof is self-documenting in a way that procedural code never is, because the types encode the intent.&lt;&#x2F;p&gt;
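&lt;p&gt;A concrete, if invented, example of what that self-documentation looks like. Two identifiers that are both plain integers at runtime, yet the compiler will never let anyone confuse them, no matter how many maintainers have come and gone:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;newtype UserId  = UserId Int
newtype OrderId = OrderId Int

-- The signature is the documentation: this takes an OrderId,
-- and passing a UserId is a compile error, not a 3 AM incident.
cancelOrder :: OrderId -&amp;gt; IO ()
cancelOrder (OrderId n) = putStrLn (&amp;quot;cancelling order &amp;quot; ++ show n)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;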
&lt;p&gt;Vera and arcanist.sh accept the same premise: the intermediate layer is changing. They disagree about which direction it should change in. Vera optimizes for reducing errors in the generation loop. Valuable. But hx and BHC optimize for making the generated code correct by construction, because the language itself constrains what valid programs look like at a structural level.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;skip-the-language-entirely&quot;&gt;Skip the language entirely&lt;&#x2F;h2&gt;
&lt;p&gt;The third possibility is the one I think about late at night and do not have a concrete project for. Not yet. It is also the one I find most fascinating and most unsettling.&lt;&#x2F;p&gt;
&lt;p&gt;What if AI stops writing source code at all?&lt;&#x2F;p&gt;
&lt;p&gt;To understand why this is plausible, it helps to look at what source code actually is. It is not the final product. It never was. Source code is a set of instructions that a compiler transforms into machine code. Machine code is what the hardware executes. Source code exists because humans needed an abstraction layer between their intentions and the silicon. We think in concepts like “sort this list” or “reject unauthorized requests.” The CPU thinks in register moves, memory loads, and conditional jumps. Source code bridges that gap.&lt;&#x2F;p&gt;
&lt;p&gt;But that bridge was built for human authors. If the author is no longer human, the bridge serves a different purpose. It becomes an audit trail. A way for humans to read and verify what the AI produced. Not an authoring medium, but a transparency layer.&lt;&#x2F;p&gt;
&lt;p&gt;I see this playing out in two distinct phases.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;phase-one-ai-targets-existing-machine-code&quot;&gt;Phase one: AI targets existing machine code&lt;&#x2F;h3&gt;
&lt;p&gt;The first phase is closer than most people think, and it is conceptually straightforward. We already train models on source code. What happens when we also train them extensively on compiled artifacts? On binaries, object files, intermediate representations, the actual output of compilers?&lt;&#x2F;p&gt;
&lt;p&gt;Consider what a compiler does. It takes source code and transforms it into machine instructions following well-defined, deterministic rules. There is a mapping between source patterns and output patterns. A &lt;code&gt;for&lt;&#x2F;code&gt; loop in C becomes a specific sequence of compare, branch, and increment instructions on x86. A function call follows a specific calling convention. Memory allocation follows specific system call patterns. These mappings are learnable. They are patterns, and pattern recognition is exactly what LLMs excel at.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;Skipping the source layer&quot;&gt;Today:
human intent → prompt → LLM → source code → compiler → x86&amp;#x2F;ARM → CPU

Phase one:
human intent → prompt → LLM → x86&amp;#x2F;ARM directly → CPU
                                (trained on source +
                                 compiled artifact pairs)&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;In this phase, the model skips the source code layer and generates machine code directly. Not by “compiling” in the traditional sense. By having learned the patterns well enough to produce valid executables from intent descriptions. The way a fluent translator does not parse grammar rules consciously but produces correct sentences from meaning directly.&lt;&#x2F;p&gt;
&lt;p&gt;This sounds radical until you remember that we already trust compilers we do not read the output of. When was the last time you inspected the assembly output of &lt;code&gt;gcc -O3&lt;&#x2F;code&gt; to verify it correctly compiled your C program? You trust the compiler. You test the behavior of the resulting binary. You do not audit the intermediate representation. If an AI can produce binaries that pass the same behavioral tests, the practical difference between “AI-generated machine code” and “compiler-generated machine code” becomes a question of trust calibration, not fundamental possibility.&lt;&#x2F;p&gt;
&lt;p&gt;The analogy I keep returning to is aviation. Early pilots flew by hand and understood every mechanical system in the aircraft. Fly-by-wire changed that. The pilot communicates intent (climb, turn, maintain altitude). The computer translates that into control surface movements. The pilot does not manually adjust ailerons and elevators for every gust of wind. They trust the system. They verify outcomes (altitude, heading, airspeed), not intermediate steps. Phase one of post-language programming is fly-by-wire for software.&lt;&#x2F;p&gt;
&lt;p&gt;If this sounds speculative, consider what happened this week. Anthropic announced &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.anthropic.com&#x2F;glasswing&quot;&gt;Project Glasswing&lt;&#x2F;a&gt;, a coalition including AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike, Palo Alto Networks, the Linux Foundation, and others, formed to secure the world’s most critical software using AI. Dario Amodei, Anthropic’s CEO, put it plainly:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;p&gt;“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”&lt;&#x2F;p&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;The proof point he offered: “For OpenBSD, we found a bug that’s been present for 27 years.”&lt;&#x2F;p&gt;
&lt;p&gt;Think about what that means. OpenBSD is one of the most carefully audited codebases in the world. Decades of security-focused human review by some of the most meticulous systems programmers alive. And an AI model found something that every human reviewer missed for twenty-seven years. If AI can understand existing code deeply enough to find vulnerabilities that humans cannot, it can understand code deeply enough to generate it without human-readable source as an intermediate step. The question is no longer whether AI comprehends code at a structural level. That question was answered this week. The question is what it does with that comprehension next.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;phase-two-a-new-kind-of-machine&quot;&gt;Phase two: a new kind of machine&lt;&#x2F;h3&gt;
&lt;p&gt;The second phase is further out and more speculative. But I think it is where things ultimately go.&lt;&#x2F;p&gt;
&lt;p&gt;If AI is generating code for machines to execute and no human needs to read it, there is no reason that code needs to target instruction sets designed for human comprehension. x86 and ARM were designed with the assumption that someone, at least occasionally, would look at the instructions. They have mnemonics. They follow conventions that make disassembly feasible. They are organized into instructions that map, loosely, to operations humans understand.&lt;&#x2F;p&gt;
&lt;p&gt;But what if the execution target was designed from scratch for AI-generated code? A virtual machine or runtime that consumes a new kind of bytecode. Not optimized for human readability. Not optimized for hand-authored assembly. Optimized purely for execution density and machine generation.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;AI-native execution target&quot;&gt;Phase two:
human intent → prompt → AI → dense symbolic bytecode → AI-native VM
                               (opaque to humans,         |
                                optimized for machine     v
                                generation + execution)  [result]&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;I keep thinking about information density. Chinese characters encode meaning in individual symbols that carry far more semantic weight than Latin alphabet words. A single character can represent a concept that takes an entire English phrase to express. When a system is designed for readers who can process dense symbols natively, the representation compresses. It becomes more efficient at the cost of being less accessible to readers who were not part of the design audience.&lt;&#x2F;p&gt;
&lt;p&gt;AI-native bytecode could follow the same principle. Each instruction could encode complex composite operations that would take dozens of conventional instructions to express. The bytecode would be dense in ways that make current machine code look verbose. Entirely opaque when decompiled or analyzed. Not obfuscated on purpose. Just natively incomprehensible to human cognition, the same way a trained neural network’s weights are incomprehensible even though they encode real, functional knowledge.&lt;&#x2F;p&gt;
&lt;p&gt;The virtual machine running this bytecode would itself be a different kind of system. Not a stack machine or a register machine in the traditional sense. Possibly something closer to a dataflow engine, where the bytecode describes transformation graphs rather than sequential instructions. Think of it as the difference between giving someone turn-by-turn driving directions (go north, turn left, continue for two miles) versus handing them a map with the destination marked. The bytecode is the map. The VM figures out the route.&lt;&#x2F;p&gt;
&lt;p&gt;I want to be honest about where we are. We are not at phase one. Not in April 2026. Current models still need the intermediate layer. They produce better code when they can reason through it step by step. They benefit from type systems and contracts and explicit error messages. The first and second approaches are not just viable, they are necessary right now.&lt;&#x2F;p&gt;
&lt;p&gt;But the trajectory is visible. Each new model generation is better at producing correct programs than the last. Formal verification is becoming more practical. Hardware is getting cheaper. The gap between “prompt that describes intent” and “correct executable output” is shrinking. Phase one will arrive when models trained on enough source-plus-binary pairs can reliably produce correct executables. Phase two will arrive when someone asks: if the model is already generating the binary, why are we targeting an instruction set that was designed for a species that is no longer doing the writing?&lt;&#x2F;p&gt;
&lt;p&gt;At some point the intermediate layer becomes optional. And optional things, given enough time, become vestigial.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-this-means&quot;&gt;What this means&lt;&#x2F;h2&gt;
&lt;p&gt;I do not think these three approaches are in competition. They are three points on a timeline, and the timeline is the story of the intermediate layer contracting.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;The intermediate layer timeline&quot;&gt;Near term          Medium term           Long term
(now)              (2-5 years)           (5-15 years)

Explicit           Declarative           Post-language
languages          languages             (AI-native targets)
(Vera)             (Haskell + BHC)

Reduce noise  --&amp;gt;  Change the signal --&amp;gt; Remove the layer
in the loop        entirely              entirely

HOW, but           WHAT, verified        Intent to
unambiguous        by types              execution&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;In the near term, explicit languages like Vera make AI-generated code more reliable by constraining the generation space and providing machine-readable diagnostics. This is useful today. If you are building an AI coding pipeline right now and need to ship next quarter, this approach works.&lt;&#x2F;p&gt;
&lt;p&gt;In the medium term, declarative languages like Haskell, especially with modern tooling and modern runtimes, make AI-generated code correct by construction. The type system does the heavy verification work at a fundamental level. The code that survives is the code that describes invariants, not procedures. This is the era I am building for with arcanist.sh.&lt;&#x2F;p&gt;
&lt;p&gt;In the long term, the language disappears. First into existing machine code generated directly by AI. Then into new execution formats designed for AI generation from the ground up. The intermediate layer that has defined software engineering for seventy years becomes an implementation detail.&lt;&#x2F;p&gt;
&lt;p&gt;That last possibility makes some people uncomfortable. It made me uncomfortable when I first thought about it in December. It means that the craft of programming as we know it, the fluency in syntax, the mastery of idioms, the instinct for an elegant implementation, becomes less like a core skill and more like knowing how to operate a manual lathe. Valuable in the right context. Still needed for the hard cases. But no longer the primary way most software gets built.&lt;&#x2F;p&gt;
&lt;p&gt;This has happened before. There was a time when every programmer understood assembly. Then C abstracted it away, and most programmers stopped reading machine code. Then Python and JavaScript abstracted C away, and most programmers stopped thinking about memory management. Each time, the previous layer did not disappear. It became the domain of specialists who maintained the infrastructure everyone else stood on. The same thing will happen to source code. It will not vanish. It will specialize.&lt;&#x2F;p&gt;
&lt;p&gt;I still write code every day. I still think in types. I am still building hx and BHC because I believe the medium-term future is both real and long, and Haskell’s strengths are exactly what that future demands. Pure functions, strong types, provable correctness. These are not luxuries in a world where AI writes the implementation. They are the load-bearing structure. But I do it with one eye on the horizon, knowing that the intermediate layer I am investing in is exactly that. Intermediate.&lt;&#x2F;p&gt;
&lt;p&gt;The person who built Vera saw the same thing I saw, probably around the same time. They chose the first approach. I chose the second. Somebody, eventually, will build the third. And then we will all need to figure out what we mean by “programming” when nobody writes programs anymore.&lt;&#x2F;p&gt;
&lt;p&gt;I am already curious about the answer.&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Notes from RSAC 2026</title>
          <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/notes-from-rsac-2026/</link>
          <guid>https://raskell.io/articles/notes-from-rsac-2026/</guid>
          <description xml:base="https://raskell.io/articles/notes-from-rsac-2026/">&lt;p&gt;I am writing this about two weeks after RSAC 2026 closed. In between, my coworkers and I drove down the West Coast, through Big Sur to LA, then out to Las Vegas for Easter weekend. That was deliberate. Not the route, exactly, but the space. The conference gave me a lot to process, and I have learned over the years that I process better when I am moving, when I am not sitting at a desk trying to force conclusions out of raw impressions.&lt;&#x2F;p&gt;
&lt;p&gt;So here is what I took away. Not a session-by-session recap. Not a vendor roundup. Just the things that are still with me now that the noise has faded and the Pacific Coast Highway is behind me.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-talk&quot;&gt;The talk&lt;&#x2F;h2&gt;
&lt;p&gt;Milan Duric and I presented “Self-Learning WAF: Using Generative AI to Tame ModSecurity False Positives” on Wednesday morning, March 25, in Moscone West 3020. We had an 8:30 AM slot, which is either a curse or a blessing depending on your audience. It turned out to be a blessing. The room was full of people who had chosen to be there at that hour, which means they cared about the topic, and the energy reflected that.&lt;&#x2F;p&gt;
&lt;p&gt;The talk went flawlessly. If you have ever presented at a conference of that scale, you know that “flawless” is not something you take for granted. There is always the moment before you start where you wonder if the demo will work, if the projector will behave, if your timing will hold. All of it held. Milan and I had rehearsed enough that the talk felt natural rather than performed, which is the line you want to hit.&lt;&#x2F;p&gt;
&lt;p&gt;I will write a separate post about the content of the talk itself, what we built, what we learned, how the audience responded to specific ideas. This post is about everything else.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;third-time-first-time&quot;&gt;Third time, first time&lt;&#x2F;h2&gt;
&lt;p&gt;This was my third RSA Conference. I attended in 2024, 2025, and now 2026. But it was my first time as a speaker, and that changed the experience in ways I did not fully expect.&lt;&#x2F;p&gt;
&lt;p&gt;When you attend as a participant, you are a consumer of the conference. You pick sessions, you walk the expo floor, you absorb. When you are a speaker, even for just one session, you become part of the fabric. People approach you after the talk. They reference something you said in a hallway conversation two days later. You are on the other side of the dynamic, and it gives you a different relationship with the event.&lt;&#x2F;p&gt;
&lt;p&gt;I have grown to genuinely appreciate the sheer volume and quality of the whole thing. RSAC is not one conference. It is several conferences layered on top of each other. There are deeply technical sessions where people walk you through real implementations, real code, real incident response timelines. There are strategic talks where CISOs and policy architects work through the organizational and regulatory implications of what is changing. And then there are the keynotes, the big voices, people who have shaped the field for decades, sharing what they see on the horizon.&lt;&#x2F;p&gt;
&lt;p&gt;Because it happens in San Francisco, in the heart of the Bay Area, the reach is different from any other security event. You are not just at a conference. You are at the geographic center of the industry that is driving the transformation everyone is trying to understand. The density of talent, capital, and ambition in that city during RSAC week is difficult to describe if you have not experienced it. The only comparable events I can think of are Black Hat and DEF CON, but even those have a different energy. RSAC pulls in a wider spectrum of the industry, from the deeply technical to the deeply strategic, from startup founders to government officials, and puts them all in the same building for a week. That range is what makes it valuable.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-year-of-agents&quot;&gt;The year of agents&lt;&#x2F;h2&gt;
&lt;p&gt;Ever since I started attending in 2024, AI has been a substantial part of the conference. That makes sense. The developments in AI over the past few years have had a prominent cybersecurity dimension from the start, and the industry has been working through what that means, both as a threat to defend against and as a capability to harness.&lt;&#x2F;p&gt;
&lt;p&gt;But this year felt qualitatively different from the previous two. In 2024 and 2025, the AI conversation was broad and somewhat exploratory. What can large language models do for security? How do we detect AI-generated phishing? What does the threat landscape look like when attackers have access to the same models we do? Important questions, but still in the “what is possible” phase.&lt;&#x2F;p&gt;
&lt;p&gt;2026 was past that. The conversation had narrowed and deepened. It was specifically about agents. Not AI in general, not language models as a capability, but autonomous agents as a new category of infrastructure. Enterprise-grade agentic systems. Agentic orchestration patterns. Agent-native architectures. The shift from “can we use AI?” to “how do we architect our systems around autonomous agents that are already here?” was palpable in almost every session I attended.&lt;&#x2F;p&gt;
&lt;p&gt;The concept that kept coming up was NHI: non-human identities. This term has existed in the identity and access management world for a while, but at RSAC 2026 it had taken on a new meaning. The old NHI conversation was about service accounts, API keys, machine certificates. The new NHI conversation is about agents backed by LLM inference, which operate as something fundamentally different from traditional automated systems. These are entities that do not just execute a fixed pipeline. They reason, they make judgment calls, they interact with systems and data in ways that look more like what a human analyst does than what a cron job does. But they operate at machine speed, they do not sleep, and they do not have the contextual judgment or accountability that comes with a human in the seat.&lt;&#x2F;p&gt;
&lt;p&gt;The trust problem this creates is real, and it is not just a theoretical concern. Human employees were already risk factors before AI entered the picture. Insider threats, social engineering, credential compromise, accidental misconfiguration. These are well-understood attack surfaces. Now add entities that move faster than any human, that can touch more systems in a minute than a human employee touches in a day, and that are harder to audit because their reasoning process is opaque. The attack surface did not just grow. It changed shape in ways that existing security architectures were not designed for.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-human-in-the-loop-debate&quot;&gt;The human-in-the-loop debate&lt;&#x2F;h2&gt;
&lt;p&gt;This was the most interesting tension I observed across the conference, and I think it is going to be one of the defining questions for cybersecurity in the next few years.&lt;&#x2F;p&gt;
&lt;p&gt;A small number of talks presented approaches that kept a human firmly in the loop. AI assists, human decides. AI flags, human acts. AI generates recommendations, human approves or rejects. These were careful, measured presentations, and some of them were good. The argument is intuitive and appeals to anyone who has been burned by automation gone wrong: keep a human in the critical path because humans have judgment that machines do not.&lt;&#x2F;p&gt;
&lt;p&gt;But the majority of the conference had reached a different consensus, and it was stated with increasing confidence as the week went on: the human in the loop is a bottleneck. Not philosophically. Operationally. In terms of the speed at which threats materialize and the speed at which defenses need to respond.&lt;&#x2F;p&gt;
&lt;p&gt;The argument is straightforward once you lay it out. Adversaries and threat actors are already leveraging AI to accelerate their operations. They are scanning for vulnerabilities at machine speed. They are generating novel attack variations faster than any human analyst can write detection rules. They are using AI to identify and exploit zero-day vulnerabilities in timeframes that make traditional patch cycles look like geological processes. If your defensive response depends on a human reading an alert, understanding the context, making a judgment call, and clicking a button before a countermeasure activates, you have introduced a rate limiter into your defense that your attacker does not have. You are playing at human speed against an adversary operating at machine speed.&lt;&#x2F;p&gt;
&lt;p&gt;I agree with this assessment, and I want to be precise about what I mean by that. I do not think humans are irrelevant to security. They are not. Human judgment, human understanding of organizational context, human ability to reason about novel situations, these remain essential. But I think the role of the human needs to shift fundamentally. The human should be a supervisor, not a gatekeeper. The human should set policy, define constraints, establish acceptable parameters, review outcomes, and intervene when something goes wrong. But the human should not be the bottleneck whose reaction time determines how fast sophisticated defense measures can respond to a threat that is moving at inference speed.&lt;&#x2F;p&gt;
&lt;p&gt;This is an engineering problem, not a philosophical one. How do you design agent-native architectures where the human is still in control, still has full visibility, still sets the rules, but is not the limiting factor in the response loop? That is the challenge of 2026. I did not hear anyone at the conference claim to have fully solved it. But I heard a lot of people working on it seriously, and the framing had matured beyond the naive “just automate everything” takes that dominated the early AI-security conversation.&lt;&#x2F;p&gt;
&lt;p&gt;This is also, incidentally, part of why I built &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;&quot;&gt;Zentinel&lt;&#x2F;a&gt; the way I did. A reverse proxy that sits at the edge, where policy enforcement happens at wire speed, is exactly the kind of system that needs to operate autonomously within human-defined constraints. The agent architecture in Zentinel, where security logic runs in isolated processes with bounded resources and explicit failure modes, is my answer to the question of how you let autonomous systems make real-time decisions while keeping the human in the position of supervisor rather than bottleneck. I wrote about this in more detail in &lt;a href=&quot;&#x2F;articles&#x2F;what-zentinel-is-really-optimizing-for&#x2F;&quot;&gt;What Zentinel Is Really Optimizing For&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-four-factors&quot;&gt;The four factors&lt;&#x2F;h2&gt;
&lt;p&gt;I attended a panel with four former NSA directors and US Cyber Command commanders. Both positions are held by the same person at any given time, so these were people who had sat at the intersection of signals intelligence and military cyber operations at the highest level the United States has. Regardless of how you feel about the NSA or US foreign policy, the caliber of strategic thinking in that room was extraordinary.&lt;&#x2F;p&gt;
&lt;p&gt;Paul Nakasone said something that has stayed with me ever since. He laid out what he considers the four most important factors when assessing the strategic potential of a nation state in this era. Not military strength. Not GDP. Four specific things: chips, data, talent, and energy.&lt;&#x2F;p&gt;
&lt;p&gt;Chips, meaning silicon, meaning raw compute power. How many advanced GPUs can you deploy? How advanced are they relative to the frontier? And critically: can you manufacture them domestically, or are you dependent on someone else’s fabrication capacity? Right now, the entire world depends on TSMC in Taiwan for leading-edge chip fabrication, and the geopolitical implications of that single point of dependency are staggering.&lt;&#x2F;p&gt;
&lt;p&gt;Data, meaning access to the raw material that AI systems learn from. Who has it, how much of it, how diverse is it, and under what legal and political constraints can it be used for training and inference?&lt;&#x2F;p&gt;
&lt;p&gt;Talent, meaning the human capital that knows how to build, train, deploy, secure, and govern these systems. Where that talent lives, where it wants to live, and what it takes to attract and retain it. This is not just about researchers at frontier labs. It is about the entire pipeline: the engineers who build the infrastructure, the operators who keep it running, the security professionals who defend it, the policy people who regulate it.&lt;&#x2F;p&gt;
&lt;p&gt;Energy, meaning access to cheap, abundant, reliable power. Because the compute demands of frontier AI are measured in gigawatts now, not megawatts. A single frontier training run can consume more electricity than a small city. The question of whether you can physically power your AI ambitions is no longer abstract.&lt;&#x2F;p&gt;
&lt;p&gt;I could not stop thinking about &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;ai-2027.com&#x2F;&quot;&gt;AI 2027&lt;&#x2F;a&gt; while listening to Nakasone. The scenario work by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, which I first wrote about in &lt;a href=&quot;&#x2F;articles&#x2F;how-i-work-these-days&#x2F;&quot;&gt;How I Work These Days&lt;&#x2F;a&gt;, makes strikingly similar assessments from a technology trajectory perspective rather than a national security one. Their scenario tracks the distribution of global AI compute (the US holding roughly 70% of frontier capacity through its companies, China around 12%), the geopolitical competition for chip manufacturing, the energy infrastructure required to sustain frontier operations (their projection of global AI datacenter spending reaching the trillion-dollar range by 2026 no longer reads like speculation), and the talent concentration in a handful of US-based labs.&lt;&#x2F;p&gt;
&lt;p&gt;What makes AI 2027 feel prophetic, and I do not use that word casually, is that its core thesis keeps holding up month after month. The idea that automating AI research itself creates a self-reinforcing feedback loop, that the cycle between capability and capability-building is compressing, that the timeline for transformative change is shorter than most institutional planning horizons assume. The specific dates and milestones may shift. The authors themselves have revised some timelines. But the directional assessment, the shape of the curve, still looks right to me as of April 2026. The scenario’s detailed treatment of compute distribution, espionage risks, and the escalatory dynamics between nation states competing for AI dominance maps remarkably well onto what I heard discussed in more guarded terms on the RSAC floor.&lt;&#x2F;p&gt;
&lt;p&gt;Listening to Nakasone lay out those four factors, I kept thinking about Europe. And about Switzerland specifically, since that is where I live and work. Europe is behind on all four. The continent does not manufacture frontier chips. It does not host the leading AI labs. Its regulatory environment, while well-intentioned, has optimized more for constraint than for capability. Its energy infrastructure is in the middle of a complex transition. And its talent pipeline, while strong in research, struggles to retain builders who can turn research into deployed systems at scale, because many of them leave for the Bay Area, London, Singapore, the Gulf states, or other places where the ecosystem is more supportive of what they want to build.&lt;&#x2F;p&gt;
&lt;p&gt;This is part of why I co-founded &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;die-zukunft.ch&#x2F;&quot;&gt;Die Zukunft&lt;&#x2F;a&gt;, a new Swiss political party focused on structural transformation. The name means “The Future” in German. The party exists because I believe the political infrastructure in Switzerland, and in most European countries, is not designed to respond to the kind of structural shift that Nakasone was describing. The decisions being made right now about compute sovereignty, energy policy, talent retention, and regulatory frameworks will determine whether Europe is a participant in the next decade or a consumer of other people’s technology. Die Zukunft’s platform addresses these questions directly: digital sovereignty defined in infrastructure terms, faster permitting for critical energy and compute projects, open standards as a hard requirement for government systems, and immigration policy designed to attract the talent that builds these systems. It is not a technology party in the narrow sense. It is a party built on the recognition that the structural transformation AI is driving is too consequential to be left to the current pace of European political response.&lt;&#x2F;p&gt;
&lt;p&gt;And this is also why I built &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&#x2F;&quot;&gt;Archipelag&lt;&#x2F;a&gt;. If Europe wants digital sovereignty, it needs sovereign compute infrastructure. Not just policy positions about data residency, but actual physical capacity to run AI workloads within European jurisdictions, at competitive cost, without depending on American hyperscalers. Archipelag is a decentralized AI compute network that routes inference jobs to community-operated nodes with jurisdiction-aware routing baked into the infrastructure layer. It is designed so that a European company can run AI workloads with cryptographic guarantees about where their data is processed, using idle GPU capacity that already exists across the continent. It is my direct answer to the infrastructure gap that Nakasone’s four factors expose so clearly for Europe.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-enterprise-problem&quot;&gt;The enterprise problem&lt;&#x2F;h2&gt;
&lt;p&gt;This connects to something broader that I have been thinking about since well before RSAC, but that the conference brought into sharper focus.&lt;&#x2F;p&gt;
&lt;p&gt;I work for a large Swiss financial institution. My perspective is shaped by what I see inside that kind of organization every day. But I believe the dynamic applies to enterprises of all sizes and in all sectors, even if the scale and specifics differ.&lt;&#x2F;p&gt;
&lt;p&gt;The biggest threat I see right now is not a specific vulnerability, not a particular attack vector, not a novel exploit technique. It is inertia. It is the widening gap between what is happening at the frontier of AI capability and what most organizations are actually doing about it. Too many companies still think they can outsource their way through this transition. Buy an AI-powered security product from a vendor. Subscribe to a managed detection service that mentions “AI” somewhere in its marketing materials. Check the compliance box and move on to the next quarter.&lt;&#x2F;p&gt;
&lt;p&gt;I do not think that is going to work. Not because those vendors are bad. Some of them are genuinely good at what they do. But because the organizations that will remain competitive and secure over the next few years are the ones that build internal AI capability, not just consume external AI services. That means investing in your own GPU compute. That means building the internal expertise to deploy, fine-tune, and operate models on your own infrastructure. That means treating AI as a core organizational competency, not a procurement line item.&lt;&#x2F;p&gt;
&lt;p&gt;This is not a popular opinion in many boardrooms. It is expensive. It is hard. It requires talent that is difficult to hire and even harder to retain when they can work at a frontier lab or a well-funded startup instead. But the alternative, waiting and relying on service providers to package AI innovation for you at their pace and under their terms, means you are always operating one step behind. You are consuming someone else’s capability with someone else’s priorities, under someone else’s constraints. In a landscape that is moving as fast as this one, that delay compounds.&lt;&#x2F;p&gt;
&lt;p&gt;The companies that understand this and invest now are going to have a structural advantage that widens over time. The ones that wait are going to find themselves trying to close a gap that gets larger with every quarter of inaction. I saw enough at RSAC to believe that some organizations have internalized this. And I saw enough to believe that many more have not.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-vendor-floor&quot;&gt;The vendor floor&lt;&#x2F;h2&gt;
&lt;p&gt;I want to be fair about this, because I know how the previous section sounds, and I do not want to come across as someone who dismisses the entire vendor ecosystem. I am not easily swayed by big tech vendors and their keynote promises, and I think anyone who works in security should maintain a healthy skepticism toward product demos and polished presentations. That is just professional hygiene.&lt;&#x2F;p&gt;
&lt;p&gt;That said, I genuinely enjoyed many of the keynotes this year. Some of these companies have real long-term vision. They see the shape of what is coming, and their best presenters can communicate that vision with clarity and conviction. I respect that, even when I disagree with their specific approach or their business model.&lt;&#x2F;p&gt;
&lt;p&gt;But enjoying a keynote and trusting that buying a product will translate into sustainable cybersecurity for your organization are very different things. Actual security is hands-on work. It is understanding your own systems, your own architecture, your own threat model, your own failure modes. It is the boring, unglamorous work of knowing what runs where, what talks to what, what happens when something fails, and what your actual attack surface looks like on a Tuesday afternoon. That work cannot be fully offloaded to a vendor. It cannot be outsourced to a dashboard, no matter how sophisticated the analytics behind it.&lt;&#x2F;p&gt;
&lt;p&gt;The vendors that impressed me most this year were the ones that acknowledged this honestly. The ones that positioned their tools as force multipliers for competent teams rather than replacements for the need to have competent teams in the first place. That distinction matters, and the vendors who understand it tend to build better products because they are designing for operators, not for procurement committees.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;closing-night&quot;&gt;Closing night&lt;&#x2F;h2&gt;
&lt;p&gt;The last session of the conference was Hugh Jackman in conversation with Hugh Thompson. I was not sure what to expect from a Hollywood actor closing out a cybersecurity conference, and I suspect a lot of people in the audience had the same reservation going in. But it worked. Jackman is funny, self-aware, and surprisingly thoughtful about creativity, discipline, and the craft of doing hard things well. He talked about preparation, about the difference between performing and connecting, about the years of work that go into making something look effortless.&lt;&#x2F;p&gt;
&lt;p&gt;At one point he taught the audience that if you say “raise up lights” in an American accent, you are saying “razor blades” in Australian. The room loved it. It was one of those moments where several thousand cybersecurity professionals all became delighted seven-year-olds for about ten seconds, and it was a good reminder that conferences are also about shared human moments, not just information transfer.&lt;&#x2F;p&gt;
&lt;p&gt;It was the right way to end a dense week. Light enough to let people exhale after five days of intense content, but substantive enough in its own way that it did not feel like filler.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-road-after&quot;&gt;The road after&lt;&#x2F;h2&gt;
&lt;p&gt;After the conference closed, a few of us stayed on. We rented a car and drove south from San Francisco, down Highway 1 through Big Sur. If you have not done that drive, I do not know how to describe it adequately except to say that it recalibrates your sense of scale. The Pacific is very large and very indifferent, and spending a few hours winding along cliff-edge roads with that water stretching out to the horizon below you is a useful counterweight to a week of thinking about the future of everything.&lt;&#x2F;p&gt;
&lt;p&gt;We spent time in LA. Santa Monica and Venice Beach, walking the boardwalk, eating food that was too expensive and not caring. The kind of aimless, unstructured time that my brain needed after five days of absorbing information at high density. I find that the most useful thinking often happens when you are not trying to think. When you are just watching the ocean or walking on a beach and letting your subconscious do whatever it does with the raw material you fed it.&lt;&#x2F;p&gt;
&lt;p&gt;Then Las Vegas for Easter weekend. Spring break crowds, desert heat starting to build, the particular surreality of the Strip. It was not productive time in any conventional sense, and it was not meant to be. It was decompression. Space for the conference to settle from a collection of impressions into something more like understanding.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-stays-with-me&quot;&gt;What stays with me&lt;&#x2F;h2&gt;
&lt;p&gt;Two weeks out, here is what I think is different.&lt;&#x2F;p&gt;
&lt;p&gt;I went to RSAC 2026 with a set of convictions about where things are heading. Agents are going to be the primary operating model for security infrastructure. Human-in-the-loop is going to shift to human-as-supervisor. Organizations that do not build their own AI infrastructure are going to fall behind structurally. Europe needs to wake up to the compute sovereignty problem before it becomes irreversible. These were things I already believed before I got on the plane to San Francisco.&lt;&#x2F;p&gt;
&lt;p&gt;What the conference did was sharpen them. Hearing Nakasone frame national potential in terms of chips, data, talent, and energy gave me a cleaner lens for thinking about the geostrategic dimension. Seeing the breadth and depth of the agentic conversation on the conference floor confirmed that this is not a niche position or an edge case anymore. It is the emerging consensus of the industry. And presenting our own work, standing in front of a room and showing what we actually built, made it more real in a way that writing code in a terminal at midnight does not.&lt;&#x2F;p&gt;
&lt;p&gt;Conferences like RSAC have always had this effect on me. They compress a year’s worth of signals into a week, and then you spend the following weeks unpacking what you heard and figuring out what it means for what you are building. After RSAC 2024, I started thinking seriously about edge security architectures, which eventually became Zentinel. After RSAC 2025, the urgency around AI-native infrastructure solidified into the work that became Archipelag. This year, I expect the sharpened understanding of agentic systems and the geostrategic landscape to feed directly into what I build next.&lt;&#x2F;p&gt;
&lt;p&gt;I also came away with a renewed sense of urgency about the gap between what the frontier looks like and what most organizations are doing about it. That gap, between the leading edge and the institutional mean, is the real risk. Not any single threat actor, not any specific vulnerability class. The systemic inability of large institutions to move at the pace that the situation demands. That is what keeps me up at night, and that is what I am trying to address in my own work, whether it is building infrastructure, writing about it, or working on the political dimension through Die Zukunft.&lt;&#x2F;p&gt;
&lt;p&gt;I am going to write a separate post about the talk itself, about what Milan and I built with self-learning WAFs and what we learned along the way. That is coming soon. For now, these are the notes I wanted to capture while the impressions are still sharp enough to be useful.&lt;&#x2F;p&gt;
&lt;p&gt;I am already looking forward to RSAC 2027.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.rsaconference.com&#x2F;&quot;&gt;RSAC 2026&lt;&#x2F;a&gt; - The conference itself&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;ai-2027.com&#x2F;&quot;&gt;AI 2027&lt;&#x2F;a&gt; - Scenario work by Kokotajlo, Alexander, Larsen, Lifland, and Dean&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;articles&#x2F;how-i-work-these-days&#x2F;&quot;&gt;How I Work These Days&lt;&#x2F;a&gt; - Where I first wrote about AI 2027 and the shift in how I build&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;articles&#x2F;what-zentinel-is-really-optimizing-for&#x2F;&quot;&gt;What Zentinel Is Really Optimizing For&lt;&#x2F;a&gt; - The design philosophy behind Zentinel and why agent isolation matters&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;&quot;&gt;Zentinel&lt;&#x2F;a&gt; - The security-first reverse proxy I built on Pingora&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&#x2F;&quot;&gt;Archipelag&lt;&#x2F;a&gt; - Decentralized, sovereignty-first AI compute network&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;die-zukunft.ch&#x2F;&quot;&gt;Die Zukunft&lt;&#x2F;a&gt; - Swiss political party for structural transformation&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>What Zentinel Is Really Optimizing For</title>
          <pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/what-zentinel-is-really-optimizing-for/</link>
          <guid>https://raskell.io/articles/what-zentinel-is-really-optimizing-for/</guid>
          <description xml:base="https://raskell.io/articles/what-zentinel-is-really-optimizing-for/">&lt;p&gt;The clearest way I can describe the motivation behind Zentinel is this: I got tired of not trusting the thing that stood between my users and the internet.&lt;&#x2F;p&gt;
&lt;p&gt;Not because the proxies I ran were bad. They were not. I have genuine respect for the engineering in Nginx, HAProxy, Envoy. I learned a lot from operating them, and I mean that sincerely. But over years of running these systems in production, a pattern kept repeating, and it was always some version of the same story: the proxy did something I could not predict from reading its configuration.&lt;&#x2F;p&gt;
&lt;p&gt;A WAF module gets slow under load, and because it runs inside the proxy process, the entire data path backs up. A retry storm starts because the default retry policy is implicit rather than explicit. A configuration reload takes effect partially because there is no atomic swap. An unbounded queue grows until memory runs out and the OOM killer takes the proxy down along with everything else on the box.&lt;&#x2F;p&gt;
&lt;p&gt;None of these are exotic. If you have operated proxies at any real scale, you have seen most of them. And every time, the root cause pointed at the same structural issue: the proxy was optimized for something other than what I actually needed from it.&lt;&#x2F;p&gt;
&lt;p&gt;That is not a criticism. It is a statement about time.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;every-proxy-was-built-for-its-era&quot;&gt;Every proxy was built for its era&lt;&#x2F;h2&gt;
&lt;p&gt;HAProxy was born in 2000. Willy Tarreau built it to solve a specific, urgent problem: distributing TCP connections across a pool of backend servers. The internet was scaling fast. Sites needed load balancing. HAProxy did that one job with extraordinary precision, and twenty-five years later it still does. It is one of the most reliable pieces of infrastructure software ever written. But it was built as a load balancer, and when you need it to do security enforcement, you are extending a load balancer.&lt;&#x2F;p&gt;
&lt;p&gt;Nginx arrived in 2004. Igor Sysoev was solving the C10K problem: how do you handle 10,000 concurrent connections without the process-per-connection model falling over? Nginx was built as a web server. An event-driven architecture that served static files and handled connections with remarkable efficiency. The reverse proxy capability came later, almost as a side effect of how well it handled connections, and eventually became one of its most important use cases. But the core was always a web server. The assumptions about configuration, about reload behavior, about how modules interact with the request path, those assumptions come from web serving.&lt;&#x2F;p&gt;
&lt;p&gt;Varnish showed up in 2006. Poul-Henning Kamp built it because dynamic web pages were slow and caching was the answer. Varnish sat in front of your web server, cached responses in memory, and served them fast. That was the whole job. A caching proxy, and a beautiful one.&lt;&#x2F;p&gt;
&lt;p&gt;Envoy was born at Lyft around 2016. Microservices had created a new problem: how do you route, observe, and control traffic between hundreds of internal services that come and go? The service mesh was the answer, and Envoy was the data plane. It brought observability, retries, circuit breaking, and policy enforcement to a world where the network topology was no longer something you could draw on a whiteboard and expect to remain accurate for more than a week.&lt;&#x2F;p&gt;
&lt;p&gt;Traefik arrived in the same era, optimized for automatic service discovery in container environments. Services appear and disappear. The proxy figures out the routing on its own.&lt;&#x2F;p&gt;
&lt;p&gt;Every single one of these was the right tool at the right time. They solved the problem that mattered most when they were built, and they solved it well. I used most of them. I admired most of them. I am not here to argue that any of them were wrong.&lt;&#x2F;p&gt;
&lt;p&gt;But I am here to say that their time shaped what they became, and so did the economics of how software was built.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-generality-trap&quot;&gt;The generality trap&lt;&#x2F;h2&gt;
&lt;p&gt;Here is something I noticed over years of operating these tools: they all converge.&lt;&#x2F;p&gt;
&lt;p&gt;Nginx adds load balancing. HAProxy adds HTTP&#x2F;2 and Lua scripting. Envoy adds caching, WAF capabilities, ext_proc for external processing. Traefik adds middleware chains. Every proxy, over time, becomes a Swiss army knife.&lt;&#x2F;p&gt;
&lt;p&gt;This is not a design failure. It is an economic inevitability.&lt;&#x2F;p&gt;
&lt;p&gt;Building a production-grade reverse proxy takes a team, sometimes a large one, working over many years. Nginx took Igor years before it was production-ready. Envoy is maintained by hundreds of contributors across multiple organizations. HAProxy has been continuously refined for a quarter of a century by one of the most skilled systems programmers alive.&lt;&#x2F;p&gt;
&lt;p&gt;When the cost of building software is that high, you need the result to serve a broad market. You cannot afford to optimize for one narrow concern. You need your proxy to be useful to web servers and API gateways and service meshes and CDN edges and everything in between. The economic pressure pushes relentlessly toward generality. Toward one more feature. Toward covering one more use case. Toward becoming the tool that everyone can use, even if nobody uses it for exactly the thing they wish it was designed for.&lt;&#x2F;p&gt;
&lt;p&gt;This creates a particular kind of friction. You adopt a proxy for its core strength, and then you spend years working around its assumptions in every other area. You chose Nginx because it handles connections well, but now you are fighting its reload model and its embedded Lua modules that share a fate with the worker process. You chose Envoy because it observes service traffic brilliantly, but now you are wrestling with an xDS configuration surface that could fill a textbook, and a C++ codebase where memory safety is a matter of programmer discipline rather than compiler guarantee.&lt;&#x2F;p&gt;
&lt;p&gt;I lived in that friction for a long time. I knew what I wanted. I could describe it in detail to anyone who would listen. But wanting a purpose-built proxy and building one are different things when building one requires a team and years of sustained effort.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-i-kept-wishing-for&quot;&gt;What I kept wishing for&lt;&#x2F;h2&gt;
&lt;p&gt;After enough 3 AM incidents and enough post-mortems, the shape of what I was looking for became concrete enough to write down.&lt;&#x2F;p&gt;
&lt;p&gt;I wanted a reverse proxy where the operator can reason about what will happen under any condition. Including conditions they did not anticipate. That is the whole thesis. Everything else follows from it.&lt;&#x2F;p&gt;
&lt;p&gt;When you are on call, “reason about what will happen” is not philosophical. It means concrete things.&lt;&#x2F;p&gt;
&lt;p&gt;It means every queue has a maximum depth. Every timeout is explicit and declared. Every connection pool has a ceiling. No unbounded allocations anywhere. If you set a body size limit of 10 MB, that is a hard limit, not a suggestion. If a security agent’s concurrency is capped at 100, the 101st request gets the configured failure mode, not a silent queue that grows until the box dies.&lt;&#x2F;p&gt;
&lt;p&gt;It means every route declares what happens when a security agent is unreachable. Not a global toggle. Per route, per agent. Your API fails closed when the WAF is down (deny everything, because your API handles sensitive data and you do not want it exposed without inspection). Your marketing site fails open (allow traffic, log the gap, because a few minutes of unfiltered marketing pages is better than a full outage). You decide. You write it down. The system enforces it.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;kdl&quot; class=&quot;language-kdl &quot;&gt;&lt;code class=&quot;language-kdl&quot; data-lang=&quot;kdl&quot;&gt;agents {
    agent &amp;quot;waf&amp;quot; {
        transport { unix-socket &amp;quot;&amp;#x2F;var&amp;#x2F;run&amp;#x2F;zentinel-waf.sock&amp;quot; }
        events &amp;quot;request-headers&amp;quot; &amp;quot;request-body&amp;quot;
        timeout-ms 50
        max-concurrent-calls 100
        failure-mode &amp;quot;closed&amp;quot;

        circuit-breaker {
            failure-threshold 5
            success-threshold 3
            timeout-seconds 30
        }
    }
}

routes {
    route &amp;quot;api&amp;quot; {
        priority 100
        matches { path-prefix &amp;quot;&amp;#x2F;api&amp;#x2F;&amp;quot; }
        upstream &amp;quot;backend&amp;quot;
        filters &amp;quot;waf&amp;quot;
        failure-mode &amp;quot;closed&amp;quot;
    }

    route &amp;quot;marketing&amp;quot; {
        priority 50
        matches { path-prefix &amp;quot;&amp;#x2F;&amp;quot; }
        upstream &amp;quot;static-backend&amp;quot;
        filters &amp;quot;waf&amp;quot;
        failure-mode &amp;quot;open&amp;quot;
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;It means every security decision gets a trace ID and ends up in structured logs. When someone asks “why was this request blocked at 3:47 AM?”, you can answer with a correlation ID and a full trace: which agent decided, which rule matched, how long it took. Not “the WAF blocked it, probably.” The actual chain of events.&lt;&#x2F;p&gt;
&lt;p&gt;It means configuration reloads are atomic. You send SIGHUP, the new configuration is parsed, validated, and swapped in. In-flight requests finish on the old config. New requests pick up the new one. No window where half the routes are on the old version and half on the new.&lt;&#x2F;p&gt;
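&lt;p&gt;Zentinel’s actual reload path has more moving parts, but the heart of an atomic swap is ordinary Rust: a shared handle that request paths snapshot and that the reload path replaces wholesale. Here is a minimal sketch of that pattern with a stand-in parser; it is the idiom, not Zentinel’s code.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::sync::{Arc, RwLock};

struct Config {
    routes: Vec&amp;lt;String&amp;gt;, &amp;#x2F;&amp;#x2F; simplified; the real thing is a full route table
}

&amp;#x2F;&amp;#x2F; Shared handle: request paths snapshot the Arc, the reload path swaps it.
struct ConfigHandle {
    current: RwLock&amp;lt;Arc&amp;lt;Config&amp;gt;&amp;gt;,
}

impl ConfigHandle {
    &amp;#x2F;&amp;#x2F; Per request: a cheap Arc clone, never blocked mid-reload.
    fn snapshot(&amp;amp;self) -&amp;gt; Arc&amp;lt;Config&amp;gt; {
        self.current.read().unwrap().clone()
    }

    &amp;#x2F;&amp;#x2F; On SIGHUP: parse and validate first, swap only on success.
    fn reload(&amp;amp;self, raw: &amp;amp;str) -&amp;gt; Result&amp;lt;(), String&amp;gt; {
        let parsed = parse_and_validate(raw)?; &amp;#x2F;&amp;#x2F; bad config: the old one stays live
        *self.current.write().unwrap() = Arc::new(parsed);
        Ok(()) &amp;#x2F;&amp;#x2F; in-flight requests keep their old snapshot until they finish
    }
}

fn parse_and_validate(raw: &amp;amp;str) -&amp;gt; Result&amp;lt;Config, String&amp;gt; {
    &amp;#x2F;&amp;#x2F; stand-in for the real KDL parser and validator
    if raw.trim().is_empty() {
        return Err(&amp;quot;empty config&amp;quot;.into());
    }
    Ok(Config { routes: raw.lines().map(String::from).collect() })
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;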
&lt;p&gt;I could describe all of this clearly. I had been able to for years. Describing what you want and having the means to build it are different things.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-insight-about-isolation&quot;&gt;The insight about isolation&lt;&#x2F;h2&gt;
&lt;p&gt;The single biggest realization I had was about failure domains.&lt;&#x2F;p&gt;
&lt;p&gt;In every proxy I had operated, extension logic ran inside the proxy process. Nginx has embedded Lua via OpenResty. HAProxy has SPOE and Lua. Envoy has Wasm filters and ext_proc. They all share the same structural problem: the extension and the proxy share a fate.&lt;&#x2F;p&gt;
&lt;p&gt;If your Lua WAF script enters an infinite loop in nginx, the nginx worker is stuck. If your Wasm filter in Envoy allocates too much memory, the Envoy process pays for it. A slow SPOE agent in HAProxy pushes backpressure into the proxy’s request handling. I have been on the receiving end of all three patterns, and they all end the same way: you are awake at 3 AM trying to figure out why your entire proxy fleet is degraded because one security module is having a bad day.&lt;&#x2F;p&gt;
&lt;p&gt;This is the classic shared-fate problem. When everything runs in one process, everything fails together. A slow WAF does not just slow down WAF-protected routes. It consumes worker resources that affect all routes. A memory leak in an auth filter does not just take down auth. It takes down the process.&lt;&#x2F;p&gt;
&lt;p&gt;The answer I kept coming back to was process isolation. Not as a compromise or a workaround, but as the foundational design principle. Security and policy logic should live in separate processes, each with its own memory, its own concurrency limits, and its own circuit breaker.&lt;&#x2F;p&gt;
&lt;pre class=&quot;giallo diagram&quot;&gt;&lt;code data-lang=&quot;diagram&quot; data-title=&quot;Zentinel agent isolation&quot;&gt;              ┌─────────────────────┐
             │   Proxy Core        │
             │   (Rust &amp;#x2F; Pingora)  │
             └──────────┬──────────┘
                        │
          ┌─────────────┼─────────────┐
          │             │             │
 ┌─────────▼──────┐ ┌────▼────────┐ ┌──▼─────────────┐
 │  WAF Agent     │ │ Auth Agent  │ │ Custom Agent   │
 │  semaphore: 100│ │ semaphore:50│ │ semaphore: 25  │
 │  timeout: 50ms │ │ timeout:30ms│ │ timeout: 200ms │
 │  fail: closed  │ │ fail: closed│ │ fail: open     │
 └────────────────┘ └─────────────┘ └────────────────┘

 Each agent: own process, own memory, own circuit breaker.
 Slow WAF ≠ slow auth. Crashed agent ≠ crashed proxy.&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;If the WAF agent gets slow, the WAF agent’s semaphore fills up. The auth agent keeps running on its own semaphore. The proxy core keeps routing. Nobody shares a failure domain unless you explicitly configure them to.&lt;&#x2F;p&gt;
&lt;p&gt;This is not just about crash isolation, though that matters. The deeper point is queue isolation. In a shared-process model, a slow filter creates backpressure that affects all traffic. With process isolation, a slow agent only affects the routes that use it, and only up to the concurrency limit you configured. The blast radius is bounded and declared. You can look at the config and know the worst case.&lt;&#x2F;p&gt;
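&lt;p&gt;The config earlier capped the WAF at 100 concurrent calls with a declared failure mode, and the enforcement is exactly as plain as it should be: a semaphore with no queue behind it. A sketch of that gate using tokio’s semaphore; this is the pattern, not Zentinel’s code.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::time::Duration;
use tokio::sync::Semaphore;
use tokio::time::timeout;

enum FailureMode { Open, Closed }
enum Verdict { Allow, Deny }

struct AgentGate {
    permits: Semaphore,     &amp;#x2F;&amp;#x2F; hard cap on in-flight calls to this agent
    call_timeout: Duration, &amp;#x2F;&amp;#x2F; per-call budget, e.g. 50ms for the WAF
    failure_mode: FailureMode,
}

impl AgentGate {
    async fn check(&amp;amp;self, request_id: &amp;amp;str) -&amp;gt; Verdict {
        &amp;#x2F;&amp;#x2F; the 101st caller under a cap of 100: no permit, and no hidden queue
        let Ok(_permit) = self.permits.try_acquire() else {
            return self.on_failure(request_id, &amp;quot;saturated&amp;quot;);
        };
        match timeout(self.call_timeout, call_agent(request_id)).await {
            Ok(verdict) =&amp;gt; verdict,
            Err(_elapsed) =&amp;gt; self.on_failure(request_id, &amp;quot;timeout&amp;quot;),
        }
    }

    fn on_failure(&amp;amp;self, request_id: &amp;amp;str, reason: &amp;amp;str) -&amp;gt; Verdict {
        eprintln!(&amp;quot;agent unavailable ({reason}) for {request_id}&amp;quot;);
        match self.failure_mode {
            FailureMode::Open =&amp;gt; Verdict::Allow,  &amp;#x2F;&amp;#x2F; log the gap, keep serving
            FailureMode::Closed =&amp;gt; Verdict::Deny, &amp;#x2F;&amp;#x2F; sensitive route: deny
        }
    }
}

async fn call_agent(_request_id: &amp;amp;str) -&amp;gt; Verdict {
    Verdict::Allow &amp;#x2F;&amp;#x2F; stand-in for the real Unix-socket round trip
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;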
&lt;p&gt;The agents communicate over Unix domain sockets or gRPC, and they can be written in any language. There are SDKs for Rust, Go, Python, TypeScript, Elixir, Kotlin, and Haskell. The protocol is simple: 4-byte length, 1-byte type, JSON or MessagePack payload.&lt;&#x2F;p&gt;
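&lt;p&gt;That framing is small enough to show in full. A minimal encoder and decoder, assuming a big-endian length that counts only the payload; the protocol spec in the docs is the authority on both details. Anything that implements &lt;code&gt;Read&lt;&#x2F;code&gt; and &lt;code&gt;Write&lt;&#x2F;code&gt;, including a Unix socket, works here.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::io::{Read, Result, Write};

&amp;#x2F;&amp;#x2F; Frame layout: 4-byte length, 1-byte message type, then the JSON or
&amp;#x2F;&amp;#x2F; MessagePack payload.
fn write_frame&amp;lt;W: Write&amp;gt;(w: &amp;amp;mut W, msg_type: u8, payload: &amp;amp;[u8]) -&amp;gt; Result&amp;lt;()&amp;gt; {
    w.write_all(&amp;amp;(payload.len() as u32).to_be_bytes())?;
    w.write_all(&amp;amp;[msg_type])?;
    w.write_all(payload)
}

fn read_frame&amp;lt;R: Read&amp;gt;(r: &amp;amp;mut R) -&amp;gt; Result&amp;lt;(u8, Vec&amp;lt;u8&amp;gt;)&amp;gt; {
    let mut len = [0u8; 4];
    r.read_exact(&amp;amp;mut len)?;
    let mut ty = [0u8; 1];
    r.read_exact(&amp;amp;mut ty)?;
    let mut payload = vec![0u8; u32::from_be_bytes(len) as usize];
    r.read_exact(&amp;amp;mut payload)?;
    Ok((ty[0], payload))
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;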
&lt;p&gt;The operational consequence is what I care about most: you can deploy, restart, and update agents independently. Roll out a new WAF rule set without touching the proxy. Restart a misbehaving auth agent without dropping a single connection. When you are trying to fix one thing at 3 AM without breaking three others, that independence matters more than any benchmark number.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;a-product-of-this-moment&quot;&gt;A product of this moment&lt;&#x2F;h2&gt;
&lt;p&gt;Every piece of software is shaped by when it was built.&lt;&#x2F;p&gt;
&lt;p&gt;The proxies I described earlier were products of their time not just in what problems they solved, but in how they could be built. Nginx needed Igor working for years. Envoy needed Google, Lyft, and a large open source community. HAProxy needed Willy Tarreau’s decades of sustained refinement. That was the only way to build infrastructure of that quality. One person, or even a small team, could not realistically build a production-grade reverse proxy with a novel architecture and ship it in months. The economics did not allow it.&lt;&#x2F;p&gt;
&lt;p&gt;That structural reality is changing, and I think the change matters more than most people in infrastructure have absorbed yet.&lt;&#x2F;p&gt;
&lt;p&gt;I wrote about the broader shift in &lt;a href=&quot;&#x2F;articles&#x2F;how-i-work-these-days&#x2F;&quot;&gt;How I Work These Days&lt;&#x2F;a&gt;. The short version: I had been using Claude Code since May 2025, but it was not until Christmas 2025, working with Opus 4.5, that something fundamentally clicked. The constraint I had lived with for years, the gap between knowing exactly what I wanted and having the bandwidth to build it, narrowed in a way that I still find hard to fully describe. Not because the model wrote the code for me. But because the feedback loop between design intent and working implementation compressed from weeks to hours.&lt;&#x2F;p&gt;
&lt;p&gt;When Cloudflare open-sourced Pingora in 2024, I paid attention immediately. A proxy framework written in Rust, battle-tested at over a trillion requests per day in Cloudflare’s own network. The TCP listener, the HTTP parser, the TLS termination, the connection pooling, the async runtime. All the low-level machinery that you do not want to write from scratch. I had watched River, the community Pingora-based reverse proxy, hoping it would become the thing I could reach for and trust. It never got there.&lt;&#x2F;p&gt;
&lt;p&gt;So I stopped waiting and started building. Pingora as a foundation. Rust for memory safety at the boundary. An agentic workflow that let one person move at the pace of a small team.&lt;&#x2F;p&gt;
&lt;p&gt;What came out was not a general-purpose proxy. It was not a Swiss army knife. It was a purpose-built tool, tailored from the start to one specific problem: safe, observable, operable edge traffic enforcement. No more, no less.&lt;&#x2F;p&gt;
&lt;p&gt;This is the part I think matters beyond Zentinel itself: when the cost of building serious software drops dramatically, you can afford to be specialized. You do not need to serve a broad market to justify the investment. You can build exactly the thing that solves exactly your problem, with exactly the tradeoffs you want. No feature creep driven by needing to justify a twenty-person team. No compromises driven by needing to appeal to every possible use case.&lt;&#x2F;p&gt;
&lt;p&gt;I think of it as bespoke infrastructure. Software that is tailored to a specific problem by someone who deeply understands that problem, made viable by tools that compress the gap between “I know exactly what this should be” and “it exists.” The same way that a load balancer was the right thing to build in 2000, and a service mesh data plane was the right thing to build in 2016, a purpose-built safe reverse proxy is the right thing to build in 2026. Not because the world suddenly needs one more proxy, but because the world can finally have proxies that are designed for specific jobs instead of being general enough to justify their development cost.&lt;&#x2F;p&gt;
&lt;p&gt;Zentinel is a post-agentic reverse proxy. Not because it uses AI internally (it does not, unless you count the inference-aware rate limiting for LLM traffic). But because it could only exist in a world where agentic development made bespoke infrastructure viable. One person. Three months. A clear vision that had been accumulating for years. That is not a story that was possible to tell in 2020.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;glass-box-infrastructure&quot;&gt;Glass-box infrastructure&lt;&#x2F;h2&gt;
&lt;p&gt;There is a conviction behind Zentinel that is not technical, and I want to be honest about it.&lt;&#x2F;p&gt;
&lt;p&gt;I believe critical web infrastructure should be open. Not “open core” with the important parts behind a license. Not “source available” with restrictions on how you run it. Open in the way that matters: you can read it, fork it, modify it, run it on your own hardware, never call anyone for permission.&lt;&#x2F;p&gt;
&lt;p&gt;Zentinel is Apache-2.0-licensed. Every agent is open source. The configuration format is documented. The protocol is specified. There is no hidden control plane, no phone-home telemetry, no vendor dependency.&lt;&#x2F;p&gt;
&lt;p&gt;But open source alone is not what I mean by transparent. Plenty of projects publish their source and remain effectively opaque. The code is there, technically, but understanding what a particular configuration will actually do still requires reading thousands of lines of parser logic, or just deploying it and hoping for the best.&lt;&#x2F;p&gt;
&lt;p&gt;This is where the Rust decision pays off in a way I did not fully anticipate when I started.&lt;&#x2F;p&gt;
&lt;p&gt;Because Zentinel is written in Rust, the core crates compile to WebAssembly. Not as a side project or a reimplementation. The same &lt;code&gt;zentinel-config&lt;&#x2F;code&gt; crate that parses and validates your KDL configuration in production compiles to a Wasm module that runs in a browser tab.&lt;&#x2F;p&gt;
&lt;p&gt;This means the &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;playground&#x2F;&quot;&gt;config playground&lt;&#x2F;a&gt; on the Zentinel website is not a JavaScript approximation of what the parser does. It is the parser. The actual Rust code, compiled to Wasm, running against your configuration in real time. When it says your config is valid, that is the same validation logic that will run when Zentinel starts up on your server. When it flags an error, that is the same error you would see in production.&lt;&#x2F;p&gt;
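&lt;p&gt;Mechanically, the export is unremarkable, which is the point. The sketch below shows the shape with wasm-bindgen and a stand-in validator; the real &lt;code&gt;zentinel-config&lt;&#x2F;code&gt; API is not reproduced here.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use wasm_bindgen::prelude::*;

&amp;#x2F;&amp;#x2F; Illustrative only: the function name and the validator are stand-ins.
&amp;#x2F;&amp;#x2F; The mechanism is the point: the same Rust that validates on the server
&amp;#x2F;&amp;#x2F; compiles to a Wasm export the browser can call directly.
#[wasm_bindgen]
pub fn validate_config(kdl_source: &amp;amp;str) -&amp;gt; String {
    match parse_and_validate(kdl_source) {
        Ok(()) =&amp;gt; &amp;quot;valid&amp;quot;.into(),
        Err(e) =&amp;gt; format!(&amp;quot;error: {e}&amp;quot;),
    }
}

fn parse_and_validate(source: &amp;amp;str) -&amp;gt; Result&amp;lt;(), String&amp;gt; {
    &amp;#x2F;&amp;#x2F; stand-in for the real KDL parser; one code path for server, CI, browser
    if source.trim().is_empty() {
        return Err(&amp;quot;empty configuration&amp;quot;.into());
    }
    Ok(())
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;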
&lt;p&gt;The same applies to the &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;converter&#x2F;&quot;&gt;config converter&lt;&#x2F;a&gt;. If you are migrating from nginx, HAProxy, or Traefik, the conversion tool runs the actual Zentinel config crate in your browser. You paste your existing config, you get KDL output, and you can validate the result on the spot. No round-trip to a server. No “upload your infrastructure config to our cloud service.” It runs locally, in your browser, using the production code.&lt;&#x2F;p&gt;
&lt;p&gt;This matters more than it might sound. When you can run the same code that your proxy runs in production, right in your browser, you can reason about the proxy’s internal behavior directly. You are not trusting documentation about how the parser interprets a particular KDL construct. You are running the parser. You are not guessing what happens when two routes have the same priority. You are watching the actual matching logic evaluate your routes.&lt;&#x2F;p&gt;
&lt;p&gt;The proxy becomes glass-like. Not transparent in the “we published the source, good luck reading it” sense. Transparent in the sense that you can interact with its internals, poke at its logic, verify its behavior before it ever touches production traffic. The same Rust, the same types, the same validation rules, running wherever you need them: on the server, in CI, in your browser.&lt;&#x2F;p&gt;
&lt;p&gt;That is what I mean by trustworthy infrastructure. Not “trust us, it works.” Trust it because you can verify it yourself, using the same code, without asking anyone for permission.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;where-this-leaves-me&quot;&gt;Where this leaves me&lt;&#x2F;h2&gt;
&lt;p&gt;Every generation of proxies solved the problem of its era. HAProxy solved load balancing. Nginx solved web serving. Envoy solved service mesh routing. Varnish solved caching. Each was the right answer at the right time, and each was shaped by what was possible when it was built.&lt;&#x2F;p&gt;
&lt;p&gt;The problem I kept running into was different. Not throughput. Not feature count. Not service discovery. Just: can I understand what this system will do at 3 AM when something I did not plan for happens? Can I reason about its failure modes from reading its configuration? Can I trust it enough to sleep?&lt;&#x2F;p&gt;
&lt;p&gt;No existing proxy was designed from scratch for that question, because no existing proxy could afford to be that specialized. The economics of building infrastructure software pushed everything toward generality.&lt;&#x2F;p&gt;
&lt;p&gt;What changed is that the economics changed. One person, with the right foundation and the right tools, can now build purpose-built infrastructure that would have required a team and a multi-year roadmap before.&lt;&#x2F;p&gt;
&lt;p&gt;Zentinel is the proxy I needed and could not find, built in the specific window of time when building it became possible. It is a product of this moment, in the same way that every proxy before it was a product of its own.&lt;&#x2F;p&gt;
&lt;p&gt;And maybe that is what this era of software is really about. Not that AI writes code for you. That framing misses the point entirely. It is that the gap between knowing what should exist and making it exist got smaller, and the things people build when that gap closes are going to be very specific, very opinionated, and very good at exactly one thing.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;zentinelproxy&#x2F;zentinel&quot;&gt;Zentinel&lt;&#x2F;a&gt; - The source code, Apache-2.0-licensed&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.zentinelproxy.io&#x2F;&quot;&gt;Zentinel documentation&lt;&#x2F;a&gt; - Architecture, configuration reference, agent protocol&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;playground&#x2F;&quot;&gt;Zentinel playground&lt;&#x2F;a&gt; - Browser-based config validation using the actual Rust crate compiled to Wasm&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;converter&#x2F;&quot;&gt;Zentinel config converter&lt;&#x2F;a&gt; - Migrate from nginx, HAProxy, or Traefik configs to KDL&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;manifesto&#x2F;&quot;&gt;Zentinel manifesto&lt;&#x2F;a&gt; - The design philosophy in full&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; - Cloudflare’s open source proxy framework that Zentinel builds on&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;&#x2F;articles&#x2F;how-i-work-these-days&#x2F;&quot;&gt;How I Work These Days&lt;&#x2F;a&gt; - The broader shift in how I build software, and where Zentinel fits in that story&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;http:&#x2F;&#x2F;www.haproxy.org&#x2F;&quot;&gt;HAProxy&lt;&#x2F;a&gt; - Willy Tarreau’s load balancer, still one of the best pieces of infrastructure software ever written&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;nginx.org&#x2F;&quot;&gt;Nginx&lt;&#x2F;a&gt; - Igor Sysoev’s web server that became the internet’s default reverse proxy&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.envoyproxy.io&#x2F;&quot;&gt;Envoy&lt;&#x2F;a&gt; - The service mesh data plane that brought observability to distributed systems&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;varnish-cache.org&#x2F;&quot;&gt;Varnish&lt;&#x2F;a&gt; - Poul-Henning Kamp’s caching proxy&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>How I Work These Days</title>
          <pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/how-i-work-these-days/</link>
          <guid>https://raskell.io/articles/how-i-work-these-days/</guid>
          <description xml:base="https://raskell.io/articles/how-i-work-these-days/">&lt;p&gt;If you had asked me a few years ago what kind of shift would truly change software again, I would probably have said something vague about machine learning becoming more useful, more accessible, more integrated into normal tooling. I would not have said that within a few years I would be spending large parts of my day in conversation with models, building products at a pace that used to feel unrealistic for one person.&lt;&#x2F;p&gt;
&lt;p&gt;But that is where I am now, and the path here did not start with ChatGPT. It started earlier.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;before-the-shock&quot;&gt;Before the shock&lt;&#x2F;h2&gt;
&lt;p&gt;I had been on Kaggle and Hugging Face since 2020. I was already paying attention. I had a decent understanding of machine learning, enough to know that something important was happening. I was not looking at this space as an outsider who suddenly discovered AI in a news cycle. I had been around it long enough to see that the ingredients were there.&lt;&#x2F;p&gt;
&lt;p&gt;Still, understanding a field and feeling a historical shift are not the same thing. When OpenAI released ChatGPT, then powered by GPT-3.5, in November 2022, something in me changed almost immediately. I do not mean that in a mystical way. I mean I recognized, very quickly, that this was not just another incremental product launch. It felt like a boundary marker, one of those moments where you can see a new layer of the technology stack forming in front of you.&lt;&#x2F;p&gt;
&lt;p&gt;At the time, I thought: this is going to be enormous. Bigger than most people realize. Bigger, maybe, than the web itself in terms of how deeply it will alter the shape of work, software, and the distribution of capability. That sounds exaggerated when people say it too casually. I know that. But that was honestly my reaction back then. Not hype. Recognition.&lt;&#x2F;p&gt;
&lt;p&gt;I was one of the early people willing to pay OpenAI. That mattered to me. I wanted access, and I wanted to stay close to the frontier as it moved. I did not want to be reading second-hand summaries while something this consequential was taking shape in real time.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-part-i-still-underestimated&quot;&gt;The part I still underestimated&lt;&#x2F;h2&gt;
&lt;p&gt;Even then, I still underestimated one thing: the speed. I understood generative AI was around the corner. I did not understand just how fast it would become operationally useful for actual software creation.&lt;&#x2F;p&gt;
&lt;p&gt;I had read AI 2027, the scenario work by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean. It stayed with me, and it still does now in March 2026. I still think the basic direction it sketches is largely correct, even if the path is turning out a bit differently in practice than any single forecast can capture. But even with that framing in my head, I did not fully expect that less than two years after ChatGPT launched, coding agents would already start to feel like a real category rather than a novelty.&lt;&#x2F;p&gt;
&lt;p&gt;That part came faster than I thought.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;paying-attention-waiting-for-the-right-moment&quot;&gt;Paying attention, waiting for the right moment&lt;&#x2F;h2&gt;
&lt;p&gt;In 2025 I was trying different tools seriously. I was paying for Claude Code from May 2025 onward, but I was not especially impressed at first. I liked the CLI orientation. I liked that it felt coder-friendly. That part made sense to me immediately. But the model quality at that moment, and the rate limiting, left me cold. It was interesting. It was not yet transformative for my own workflow.&lt;&#x2F;p&gt;
&lt;p&gt;So I mostly leaned on Zed’s offering. I liked the editor experience, and I still do. I still reach for Zed when I want to inspect, edit, or move through files outside of Vim in the terminal. It fit me better in that phase.&lt;&#x2F;p&gt;
&lt;p&gt;Then November 2025 arrived, three years after ChatGPT’s launch, and then December came with Christmas break. I gave Claude Code another real try, this time with Opus 4.5, and that was the moment it really landed for me. Not politely. Not academically. It hit me hard.&lt;&#x2F;p&gt;
&lt;p&gt;I remember the feeling very clearly: this is it. This is the first time the whole thing feels like more than an assistant and less than a gimmick. This is a tool I can genuinely build with.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;zentinel-was-the-proof&quot;&gt;Zentinel was the proof&lt;&#x2F;h2&gt;
&lt;p&gt;Zentinel had been sitting in the back of my mind for a long time. The idea was not new. The frustration behind it was not new either. At my day job, we had been dealing with unreliable reverse proxies for long enough that the pain was familiar. I had always wanted River, the Pingora-based reverse proxy, to succeed. I wanted that project to become the thing I could reach for and trust. But it never got there.&lt;&#x2F;p&gt;
&lt;p&gt;So I did what this new moment suddenly made possible: I stopped waiting for somebody else to build the thing I wanted to exist.&lt;&#x2F;p&gt;
&lt;p&gt;I built Zentinel with Opus 4.5, and it worked.&lt;&#x2F;p&gt;
&lt;p&gt;That is the part that still feels a little surreal when I say it plainly. Three months later it is up, it is real, and people are using it. Not as a demo. Not as an abandoned prototype. As actual software in the hands of actual users.&lt;&#x2F;p&gt;
&lt;p&gt;That changed something fundamental in how I think about work. Once one long-held idea made it through that bottleneck, a lot of others started moving too. It was not just that I had a new tool. It was that the relationship between ambition and execution had changed. The old constraint, the one that said “yes, this could exist, but not with your current time and current bandwidth,” had weakened.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;then-everything-else-started-moving&quot;&gt;Then everything else started moving&lt;&#x2F;h2&gt;
&lt;p&gt;In the time since, I have built a whole range of things that had been accumulating in my head for years: Cyanea, Archipelag, Humankind, Arcanist and its Rust-based &lt;code&gt;hx&lt;&#x2F;code&gt; Haskell toolchain, the new Basel Haskell Compiler, and other pieces besides.&lt;&#x2F;p&gt;
&lt;p&gt;It has been a wild ride, but not in the shallow “everything is crazy” sense people often write about. More in the sense that an internal dam broke. For a long time I had more ideas than I had time, more design clarity than I had execution bandwidth, and more conviction than I had manpower. That is a frustrating place to live in for years. You learn to carry around a quiet backlog of unrealized things. Some of them stay alive as notes. Some become recurring thoughts during commutes or late at night. Some start to hurt a little because you know they are viable, but you also know you are not going to get to them with the tools and energy available to you at that point in your life.&lt;&#x2F;p&gt;
&lt;p&gt;Now the world is different. I can finally push through the backlog that used to exist only in notebooks, mental sketches, half-written design docs, and conversations with myself.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-my-days-look-like-now&quot;&gt;What my days look like now&lt;&#x2F;h2&gt;
&lt;p&gt;My daily routine changed dramatically.&lt;&#x2F;p&gt;
&lt;p&gt;I still have a day job, and then I have the rest of my work. Together it often adds up to something close to sixteen hours a day. That would sound bleak if I were forcing it. It does not feel bleak. It feels like release.&lt;&#x2F;p&gt;
&lt;p&gt;My workspace reflects that change. These days I use mostly Apple hardware, which is funny if you know how much of a Linux person I am, and how much of an OpenBSD person I still am. That part of me has not gone anywhere. I still love those systems. I still think they matter deeply. But if I am being honest about the practical question of where I am most productive right now, Apple has become the answer.&lt;&#x2F;p&gt;
&lt;p&gt;I work across a MacBook Pro M4, an iMac, and an Apple Vision Pro. I spend time talking ideas out in real time with ChatGPT’s voice mode, using it less like a search engine and more like a sparring partner. Sometimes intimate, sometimes brutally honest, sometimes audacious in exactly the way a good thinking partner should be. I push an idea, it pushes back, I sharpen it, it sharpens me, and we keep going until something solid emerges.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;&#x2F;how-i-work-these-days-desk.avif&quot; alt=&quot;My desk setup right now&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Then there is the terminal, where much of the actual implementation happens in Ghostty, usually with four panes open, often with multiple Claude Code agents running in parallel on the Max plan. Two hundred dollars a month for that level of leverage is, for me, one of the clearest trades I have ever made.&lt;&#x2F;p&gt;
&lt;p&gt;This is not a lifestyle performance. It is just the current shape of my work: ideas moving between speech, terminal, editor, design notes, code, back to speech, then back to code again.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;beast-mode-for-lack-of-a-better-term&quot;&gt;Beast mode, for lack of a better term&lt;&#x2F;h2&gt;
&lt;p&gt;There is a part of this that feels almost embarrassingly direct to say, but it would be dishonest to leave it out: I am the happiest I have ever been.&lt;&#x2F;p&gt;
&lt;p&gt;Not because everything is easy. It is not. Not because every project succeeds. They will not. Not because the industry suddenly became sane. It did not.&lt;&#x2F;p&gt;
&lt;p&gt;I am happy because the mismatch that used to define so much of my working life has narrowed. For years I had to live with the feeling that my ideas were outrunning my available hours and my available hands. Now, for the first time, it feels like I can actually meet myself where my ambition has been waiting.&lt;&#x2F;p&gt;
&lt;p&gt;There is a phrase people use, “beast mode,” and usually I would avoid it because it sounds like posturing. But I do not really have a cleaner shorthand for the intensity of this period. I am working hard, very hard, but with a degree of joy and clarity that makes the effort feel proportionate.&lt;&#x2F;p&gt;
&lt;p&gt;I am in conquest mode.&lt;&#x2F;p&gt;
&lt;p&gt;Not conquest in the empty startup sense. Not domination, not vanity metrics, not growth for its own sake. I mean conquest over the inertia that used to keep good ideas trapped inside my head. Conquest over backlog. Conquest over hesitation. Conquest over the old excuses about lacking time, lacking team, lacking the right moment.&lt;&#x2F;p&gt;
&lt;p&gt;And yes, some of it is for me. To feel better. To feel more whole. To stop carrying around years of deferred execution. But some of it is also because I genuinely want to make useful things. I want to build software that improves the texture of work, that makes systems more reliable, that gives people better tools, that opens up possibilities that were previously too expensive or too cumbersome to pursue.&lt;&#x2F;p&gt;
&lt;p&gt;That still matters to me. Probably more than ever.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-real-change&quot;&gt;The real change&lt;&#x2F;h2&gt;
&lt;p&gt;So when I say that the way I work these days has changed, I do not just mean that I use different tools.&lt;&#x2F;p&gt;
&lt;p&gt;I mean that the relation between thought and execution changed. The lag collapsed. The emotional burden of unrealized ideas shrank. The number of things that are now viable to attempt expanded dramatically. That is the real story.&lt;&#x2F;p&gt;
&lt;p&gt;I was already paying attention in 2020. I recognized the significance of ChatGPT in 2022. I underestimated the speed anyway. Then late 2025 arrived, the tools crossed a threshold, and my daily life reorganized itself around that fact.&lt;&#x2F;p&gt;
&lt;p&gt;Three years is not a long time. It feels longer when you live through a real transition.&lt;&#x2F;p&gt;
&lt;p&gt;And I suspect we are still early.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;ai-2027.com&#x2F;&quot;&gt;AI 2027&lt;&#x2F;a&gt; - Scenario work that influenced how I thought about the trajectory of this space&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zed.dev&#x2F;&quot;&gt;Zed&lt;&#x2F;a&gt; - Editor I still use alongside Vim&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;ghostty.org&#x2F;&quot;&gt;Ghostty&lt;&#x2F;a&gt; - Terminal I use for most of my agent-heavy coding sessions&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Reverse proxy project that became the first real proof point for this workflow&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&#x2F;&quot;&gt;Cyanea&lt;&#x2F;a&gt; - One of the projects that came to life during this period&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&#x2F;&quot;&gt;Archipelag&lt;&#x2F;a&gt; - Another product that moved from idea to reality in this new working mode&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Archipelag.io Is in Open Beta: Here&#x27;s Why I Built It</title>
          <pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/archipelag-io-distributed-compute-from-mining-rigs-to-open-beta/</link>
          <guid>https://raskell.io/articles/archipelag-io-distributed-compute-from-mining-rigs-to-open-beta/</guid>
          <description xml:base="https://raskell.io/articles/archipelag-io-distributed-compute-from-mining-rigs-to-open-beta/">&lt;p&gt;There is an abandoned factory building in Glarus, a small town wedged between mountains in eastern Switzerland. In 2016, the building was loud. Not machinery-loud, fan-loud. Rows of bare motherboards bolted to open-air frames, each bristling with GPUs and daisy-chained power supplies. The air tasted like warm dust and ozone. Cables ran everywhere, held in place by zip ties and optimism. This was an Ethereum mining operation, and I was standing in the middle of it, watching people I knew convert their gaming rigs, hardware they loved, into money-printing machines.&lt;&#x2F;p&gt;
&lt;p&gt;I was there because Vitalik Buterin had decided to visit. He had flown into Geneva on a private jet, been driven up in a black limousine with tinted windows, and walked into this dusty, chaotic space to see what Swiss miners were building. It was surreal. The creator of Ethereum, stepping over power cables in an industrial ruin, nodding at rack after rack of GPUs humming away at proof-of-work hashes. I do not think he was impressed by the elegance of the setup. Nobody was. But something about that scene stuck with me.&lt;&#x2F;p&gt;
&lt;p&gt;People were willing to sacrifice their gaming entertainment, their &lt;em&gt;leisure hardware&lt;&#x2F;em&gt;, to chase the dream of sovereign financial independence using fundamentally nerdy equipment: PCs, internet connections, blockchain protocols, and GPU graphics cards. They were converting consumer-grade technology into economic infrastructure, and they were doing it themselves. No data center leases. No vendor contracts. No permission from anyone. Just people, hardware, and a protocol that made it worth their while.&lt;&#x2F;p&gt;
&lt;p&gt;I had skin in the game too. I invested (gambled, honestly) in crypto during that era. I watched the charts, rode the swings, felt the dopamine spikes and the stomach-drops. The financial side was wild and ultimately unsustainable for most people. But the &lt;em&gt;infrastructure&lt;&#x2F;em&gt; side, the part where ordinary humans turned their homes into compute nodes and got paid for it: that part was real, and that part stayed with me long after the crypto hype faded and the rigs went quiet.&lt;&#x2F;p&gt;
&lt;p&gt;This is the story of how that factory visit turned into &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag.io&lt;&#x2F;a&gt;, a distributed compute network that entered open beta today. It has been ten years of thinking, one year of building, and a lot of being wrong about the right things at the wrong time.&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>How AI Makes Bare Metal Viable Again</title>
          <pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/how-ai-makes-bare-metal-viable-again/</link>
          <guid>https://raskell.io/articles/how-ai-makes-bare-metal-viable-again/</guid>
          <description xml:base="https://raskell.io/articles/how-ai-makes-bare-metal-viable-again/">&lt;p&gt;I was paying over two hundred dollars a month to run two apps that had zero paying users.&lt;&#x2F;p&gt;
&lt;p&gt;Not because the apps were complex. Not because they needed high availability across regions. Because I was running Kubernetes on DigitalOcean, and Kubernetes has opinions about how much infrastructure you need. A control plane. Worker nodes. Load balancers. Persistent volumes. Managed databases. Each line item modest on its own, adding up to a bill that felt absurd for two Phoenix applications in their bootstrapping phase.&lt;&#x2F;p&gt;
&lt;p&gt;The apps are &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;archipelag.io&lt;&#x2F;a&gt; and &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;cyanea.bio&lt;&#x2F;a&gt;. Both are Elixir&#x2F;Phoenix projects. Archipelag uses PostgreSQL and NATS for its messaging layer. Cyanea uses SQLite. Neither gets meaningful traffic yet. Both are real products I am actively building, not side projects I will abandon next month. But they are pre-revenue, and every dollar I spend on infrastructure is a dollar I am betting against future income that does not exist yet.&lt;&#x2F;p&gt;
&lt;p&gt;Something had to change.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-kubernetes-trap&quot;&gt;The Kubernetes trap&lt;&#x2F;h2&gt;
&lt;p&gt;Here is the thing about Kubernetes: it solves problems you might not have. If you are running fifty microservices across three regions with autoscaling requirements and a platform team to manage it, Kubernetes earns its keep. If you are running two BEAM applications that each consume less than 512 MB of memory, you are paying a complexity tax for infrastructure capabilities you will never touch.&lt;&#x2F;p&gt;
&lt;p&gt;My K8s setup on DigitalOcean looked like this: a managed cluster with two worker nodes (the minimum for any reasonable availability), a managed PostgreSQL instance for Archipelag, a load balancer for ingress, persistent volumes for Cyanea’s SQLite database. Each component had its own monthly cost. The cluster management fee alone was more than what I would eventually pay for an entire bare metal server.&lt;&#x2F;p&gt;
&lt;p&gt;The operational overhead was worse than the cost. Helm charts. Ingress controllers. Certificate managers. Pod disruption budgets. Every time I wanted to deploy a new version, I was wrangling YAML files that described infrastructure concerns my apps did not care about. A Phoenix release does not need a pod spec. It needs a port, an environment, and someone to restart it if it crashes.&lt;&#x2F;p&gt;
&lt;p&gt;And the YAML, my God, the YAML. A simple Phoenix app that listens on a port and serves HTTP needs, at minimum, a Deployment manifest, a Service manifest, and an Ingress manifest. Add a ConfigMap for environment variables, a Secret for credentials, a PersistentVolumeClaim if you need disk, a HorizontalPodAutoscaler if you want autoscaling. For Cyanea alone, I had six Kubernetes manifests totaling a few hundred lines of YAML, all to describe an application that boils down to: run this binary, give it a port, point a domain at it.&lt;&#x2F;p&gt;
&lt;p&gt;The cognitive load compounds. You learn the Kubernetes resource model, then the DigitalOcean-specific annotations for their load balancer, then the cert-manager CRDs for TLS, then the quirks of persistent volumes on managed K8s (spoiler: they are not as persistent as you think if you do not get the reclaim policy right). Each layer has its own documentation, its own failure modes, its own upgrade cycle. I spent more time debugging infrastructure than building product.&lt;&#x2F;p&gt;
&lt;p&gt;The irony is not lost on me. Kubernetes was designed for teams running hundreds of services at Google-scale. I was running two apps. The orchestrator had more moving parts than the things it was orchestrating. It was like hiring a logistics fleet to deliver two packages across town.&lt;&#x2F;p&gt;
&lt;p&gt;I knew I was over-engineered. But the alternative, at the time, seemed like a step backward.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-nomad-detour&quot;&gt;The Nomad detour&lt;&#x2F;h2&gt;
&lt;p&gt;I should mention that Kubernetes was never the only orchestrator I considered. For the past five years, while the industry went all-in on K8s, I had been quietly admiring HashiCorp’s &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.nomadproject.io&#x2F;&quot;&gt;Nomad&lt;&#x2F;a&gt;. Where Kubernetes is a sprawling ecosystem of CRDs, operators, and control loops, Nomad is refreshingly minimal. A single binary. A simple job spec. No opinions about networking, no built-in service mesh, no mandatory etcd cluster. You tell it what to run, it runs it.&lt;&#x2F;p&gt;
&lt;p&gt;That minimalism appealed to me. Nomad treats workload scheduling as the core problem and stays out of everything else. No built-in networking layer means you bring your own, which sounds like a drawback until you realize it means you are not locked into someone else’s networking model.&lt;&#x2F;p&gt;
&lt;p&gt;And I happened to have my own networking layer already. I had been building &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;&quot;&gt;Zentinel&lt;&#x2F;a&gt; in parallel, a security-first reverse proxy built on Cloudflare’s &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; framework in Rust. Zentinel handles TLS termination, WAF inspection, rate limiting, domain-based routing, all the edge concerns I care about. It also supports sleepable ops, where backend instances can be suspended and woken on demand, which is perfect for apps that do not need to be running 24&#x2F;7.&lt;&#x2F;p&gt;
&lt;p&gt;So I tried pairing them. Nomad for workload scheduling, Zentinel for the network layer. And it worked. The combination gave me a lightweight orchestrator that did not try to own every concern, paired with a reverse proxy that handled edge traffic the way I wanted. Two focused tools, each doing one thing well.&lt;&#x2F;p&gt;
&lt;p&gt;But then IBM acquired HashiCorp, and the calculus changed.&lt;&#x2F;p&gt;
&lt;p&gt;The acquisition itself was not the problem. Companies get acquired. It happens. The problem was the trajectory. HashiCorp had already re-licensed Terraform from MPL to BSL (Business Source License) in 2023, a move that fractured the community and spawned the &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;opentofu.org&#x2F;&quot;&gt;OpenTofu&lt;&#x2F;a&gt; fork. The pattern was familiar: open-source project gains adoption, company monetizes through enterprise features, company gets acquired, new owner tightens the screws. I had watched it happen with Redis, with Elasticsearch, with MongoDB. Each time the community forks, there is a period of uncertainty, split maintenance effort, and feature divergence.&lt;&#x2F;p&gt;
&lt;p&gt;I did not want to build my infrastructure on a foundation where the governance could shift at any time. Nomad is still open source today. But “still open source” and “will remain open source” are different statements, and after the Terraform situation, I was not confident in the latter. The BSL license change had been a signal, and IBM’s acquisition amplified it. I did not need to go down that road with another HashiCorp product.&lt;&#x2F;p&gt;
&lt;p&gt;The Nomad experiment did teach me something valuable, though. It confirmed that the KISS approach to deployment was right. You do not need the full Kubernetes machinery. A scheduler that starts processes, checks their health, and restarts them when they crash is sufficient for a wide range of workloads. And a dedicated reverse proxy that handles TLS and routing is cleaner than bundling networking into the orchestrator.&lt;&#x2F;p&gt;
&lt;p&gt;That insight, Nomad’s minimalism plus Zentinel’s Pingora-based proxy architecture, became the design seed for what I would eventually build.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-fly-io-middle-ground&quot;&gt;The fly.io middle ground&lt;&#x2F;h2&gt;
&lt;p&gt;With Nomad off the table as a long-term bet, I migrated to &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;fly.io&quot;&gt;fly.io&lt;&#x2F;a&gt; in late 2025. It was genuinely better than K8s for my use case. Fly understands BEAM applications at a fundamental level. The BEAM runtime is designed for the kind of lightweight, long-lived processes that Fly’s infrastructure optimizes for. You push a release, it runs it. No YAML. No ingress controllers. No cluster management.&lt;&#x2F;p&gt;
&lt;p&gt;Fly also made the service dependencies painless. Managed Postgres with a few commands. NATS was straightforward to set up. Tigris (Fly’s S3-compatible object storage) handled blob storage for Cyanea’s file uploads. The developer experience was genuinely excellent, and I mean that without reservation. The Fly team has built something thoughtful.&lt;&#x2F;p&gt;
&lt;p&gt;The cost dropped meaningfully. No cluster management fee. No minimum node count. Pay-per-VM pricing that scales down to fractions of a shared CPU. Fly’s model is honest about what small applications actually need, and the pricing reflects that. I went from over two hundred dollars a month on DigitalOcean K8s to roughly a quarter of that.&lt;&#x2F;p&gt;
&lt;p&gt;For a while, it was the right answer. And if I had been scaling horizontally, adding regions, needing the kind of elastic compute that cloud-native platforms excel at, I would have stayed. If my apps suddenly got traction and I needed instances in Tokyo, Frankfurt, and Virginia, Fly would be the obvious choice. The multi-region story is one of Fly’s genuine strengths. You deploy once, it runs everywhere. That is hard to replicate.&lt;&#x2F;p&gt;
&lt;p&gt;But I was not scaling horizontally. I was running two apps in one location. On a good day, they handled maybe a few hundred requests. The compute they needed was trivial, a fraction of a shared CPU core. And I was still paying for a platform designed to scale to thousands of instances across dozens of regions, even though I needed exactly one instance of each app, in exactly one place, doing very little work.&lt;&#x2F;p&gt;
&lt;p&gt;There is also a subtler cost that managed platforms carry: the abstraction tax. When something goes wrong on Fly (and it did, occasionally, things like deployment timeouts or the odd networking hiccup), you are debugging at the platform level, not the system level. You file a support ticket or check the status page. You do not SSH in and look at processes, because there are no processes you can see. The platform is the intermediary, and the intermediary has its own failure modes that you cannot inspect or fix.&lt;&#x2F;p&gt;
&lt;p&gt;The cloud-native model, even the lean version that Fly offers, has a floor. You are always paying for the platform’s capabilities, not just your usage of them. When your usage is “two small apps, one location, no scale,” that floor matters. And when the platform sits between you and your processes, you lose the ability to debug at the level where the answers actually live.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-bare-metal-math&quot;&gt;The bare metal math&lt;&#x2F;h2&gt;
&lt;p&gt;I started looking at dedicated servers. Not VPS instances, not cloud VMs. Actual hardware you can SSH into, where your processes run on real cores and your data sits on real disks.&lt;&#x2F;p&gt;
&lt;p&gt;Hetzner runs a &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.hetzner.com&#x2F;sb&#x2F;&quot;&gt;server auction&lt;&#x2F;a&gt; where they sell refurbished dedicated machines at steep discounts. These are servers that have been running in Hetzner’s data centers, got rotated out of customer contracts, and are resold at prices that make cloud compute look like a luxury good. The hardware is used but maintained, and Hetzner’s data centers are well-run: proper cooling, redundant power, good network connectivity.&lt;&#x2F;p&gt;
&lt;p&gt;I found a box with a multi-core Intel CPU, 128 GB of DDR4 RAM, and two 1 TB NVMe drives that I configured in RAID 1 for redundancy. EUR 38 a month. About forty-two dollars. Fixed price. No bandwidth metering (Hetzner includes 20 TB of traffic on dedicated servers, which for my workload might as well be unlimited). No surprises on the bill.&lt;&#x2F;p&gt;
&lt;p&gt;Let that sink in for a moment. For less than what I was paying for managed Postgres alone on either platform, I could have an entire server with more RAM than I know what to do with, fast NVMe storage with mirror redundancy, and enough compute headroom to run not two but twenty applications without breaking a sweat. The two NVMe drives alone, if bought retail, would cost more than a year of hosting.&lt;&#x2F;p&gt;
&lt;p&gt;I ran the numbers on capacity. My two Phoenix apps, even under load, would use maybe 1-2 GB of RAM combined. PostgreSQL with a modest dataset, another gig or two. NATS, negligible. That leaves well over 120 GB of RAM sitting idle. The CPU tells a similar story. Phoenix on the BEAM is remarkably efficient with CPU resources, the scheduler does its own preemptive multitasking across lightweight processes, and my workloads are I&#x2F;O-bound, not compute-bound. I could run my entire current stack and barely register on a load graph.&lt;&#x2F;p&gt;
&lt;p&gt;The headroom is the point. On a cloud platform, headroom costs money. More RAM, higher tier. More CPU, higher tier. On bare metal, the headroom is already paid for. Growing from two apps to ten does not change my monthly bill. Adding a staging environment does not change my monthly bill. Running background workers, a metrics stack, a CI runner, none of it changes my monthly bill. The marginal cost of additional workloads on existing hardware is zero.&lt;&#x2F;p&gt;
&lt;p&gt;The math was obvious. The problem was everything else.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-bare-metal-was-hard&quot;&gt;Why bare metal was hard&lt;&#x2F;h2&gt;
&lt;p&gt;Bare metal has always been cheap. That was never the issue. The issue was everything you had to build and maintain yourself.&lt;&#x2F;p&gt;
&lt;p&gt;On a managed platform, you get deployment pipelines, TLS certificate management, process supervision, reverse proxying, log aggregation, health checks, and rollback mechanisms out of the box. On bare metal, you get a Linux login prompt and a blinking cursor.&lt;&#x2F;p&gt;
&lt;p&gt;Historically, going bare metal for web applications meant weeks of setup:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Install and configure nginx or HAProxy as a reverse proxy&lt;&#x2F;li&gt;
&lt;li&gt;Set up Certbot or acme.sh for Let’s Encrypt certificates, and hope the renewal cron does not silently break&lt;&#x2F;li&gt;
&lt;li&gt;Write deployment scripts (rsync, symlinks, restart commands) and debug them for months&lt;&#x2F;li&gt;
&lt;li&gt;Configure systemd services for each app, with the right restart policies and environment files&lt;&#x2F;li&gt;
&lt;li&gt;Build a process supervision layer that handles crashes, port allocation, and graceful shutdowns&lt;&#x2F;li&gt;
&lt;li&gt;Figure out zero-downtime deploys (which means running two instances, health checking the new one, swapping traffic, draining the old one)&lt;&#x2F;li&gt;
&lt;li&gt;Set up log rotation, monitoring, backups&lt;&#x2F;li&gt;
&lt;li&gt;Harden the server (firewall, SSH config, automatic security updates)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Each of these is a solved problem in isolation. There are blog posts and Stack Overflow answers for every one of them. But stitching them together into a coherent, reliable deployment system is a full-time job for a week or two, and maintaining it is an ongoing tax on your attention.&lt;&#x2F;p&gt;
&lt;p&gt;This is why the cloud won. Not because bare metal is expensive. Because the operational cost of doing it yourself was prohibitive for small teams. The cloud sold you a package deal: we handle the infrastructure, you handle the application. Worth it, even at a premium.&lt;&#x2F;p&gt;
&lt;p&gt;But what if that operational cost dropped to near zero?&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-ai-shift&quot;&gt;The AI shift&lt;&#x2F;h2&gt;
&lt;p&gt;I had Claude Code with Opus 4.6 available. I had spent months working with it on other projects. Compilers, CRDT engines, reverse proxies. I knew what it could do with a clear spec and a well-defined problem domain.&lt;&#x2F;p&gt;
&lt;p&gt;And deploying web applications to bare metal is a well-defined problem domain.&lt;&#x2F;p&gt;
&lt;p&gt;The core requirements are straightforward: upload an artifact, start it on a port, check that it is healthy, route traffic to it, stop the old one. Everything else, TLS, process supervision, rollback, log capture, is layered on top of that core loop. The problem space is wide but shallow. Lots of features, few genuinely novel algorithms.&lt;&#x2F;p&gt;
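&lt;p&gt;That loop is worth writing down, because it really is the entire conceptual surface. A sketch with hypothetical types, not the actual code of what I ended up building:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::path::Path;

#[derive(Debug)]
enum DeployError { Unhealthy }

&amp;#x2F;&amp;#x2F; What any deploy target must support; each method is plain SSH work.
trait Server {
    fn upload(&amp;amp;self, artifact: &amp;amp;Path) -&amp;gt; Result&amp;lt;String, DeployError&amp;gt;;
    fn free_port(&amp;amp;self) -&amp;gt; Result&amp;lt;u16, DeployError&amp;gt;;
    fn start(&amp;amp;self, release: &amp;amp;str, port: u16) -&amp;gt; Result&amp;lt;u32, DeployError&amp;gt;;
    fn health_check(&amp;amp;self, port: u16) -&amp;gt; Result&amp;lt;bool, DeployError&amp;gt;;
    fn stop(&amp;amp;self, pid: u32) -&amp;gt; Result&amp;lt;(), DeployError&amp;gt;;
    fn route_traffic_to(&amp;amp;self, port: u16) -&amp;gt; Result&amp;lt;(), DeployError&amp;gt;;
    fn drain_and_stop_previous(&amp;amp;self) -&amp;gt; Result&amp;lt;(), DeployError&amp;gt;;
}

&amp;#x2F;&amp;#x2F; The whole deploy, in order. TLS, log capture, and rollback history
&amp;#x2F;&amp;#x2F; all hang off these steps.
fn deploy(server: &amp;amp;dyn Server, artifact: &amp;amp;Path) -&amp;gt; Result&amp;lt;(), DeployError&amp;gt; {
    let release = server.upload(artifact)?;  &amp;#x2F;&amp;#x2F; push the artifact over SSH
    let port = server.free_port()?;          &amp;#x2F;&amp;#x2F; pick a port for the new instance
    let pid = server.start(&amp;amp;release, port)?; &amp;#x2F;&amp;#x2F; boot it
    if !server.health_check(port)? {         &amp;#x2F;&amp;#x2F; not serving: abort the deploy,
        server.stop(pid)?;                   &amp;#x2F;&amp;#x2F; the old instance was never touched
        return Err(DeployError::Unhealthy);
    }
    server.route_traffic_to(port)?;          &amp;#x2F;&amp;#x2F; swap traffic at the proxy
    server.drain_and_stop_previous()         &amp;#x2F;&amp;#x2F; old instance finishes, then exits
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;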
&lt;p&gt;This is exactly the kind of work where AI shines. Not because it writes perfect code on the first try. But because it can iterate through a feature list at a pace that would take a solo developer weeks, producing working implementations in hours. The feedback loop is tight: describe what you want, get code, test it, refine. The domain knowledge exists in a thousand deployment tools that came before. The AI has seen all of them.&lt;&#x2F;p&gt;
&lt;p&gt;So I decided to build my own deployment tool. From scratch. With AI as my co-engineer.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;building-vela&quot;&gt;Building Vela&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;vela&quot;&gt;Vela&lt;&#x2F;a&gt; is what came out of that process. A single Rust binary that handles everything I listed above: reverse proxy, auto-TLS, process supervision, zero-downtime deploys, health checks, secret management, log streaming, rollbacks. No containers. No Docker. No YAML.&lt;&#x2F;p&gt;
&lt;p&gt;The design draws from both of its ancestors. From Nomad, the suckless philosophy: a single binary, minimal configuration, no opinions about things that are not its problem. From Zentinel, the Pingora-inspired proxy architecture: hyper-based reverse proxy with TLS termination, domain-based routing, and WebSocket support baked into the same process. Vela is what happens when you take the best ideas from tools you admire and combine them into something purpose-built for your exact workload.&lt;&#x2F;p&gt;
&lt;p&gt;The design philosophy is blunt: one binary, two modes.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────────┐
│  Your server                                │
│                                             │
│  Vela daemon                                │
│  ├── Reverse proxy (:80&amp;#x2F;:443, auto-TLS)     │
│  ├── Process manager (start, health, swap)  │
│  └── IPC socket                             │
│                                             │
│  Apps                                       │
│  ├── cyanea.bio      → :10001               │
│  └── archipelag.io   → :10002               │
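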
└─────────────────────────────────────────────┘

┌─────────────────────────────────────────────┐
│  Your laptop                                │
│                                             │
│  vela deploy  →  scp + ssh  →  server       │
└─────────────────────────────────────────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;code&gt;vela serve&lt;&#x2F;code&gt; runs on the server. It is the reverse proxy, the process manager, and the IPC daemon, all in one process. &lt;code&gt;vela deploy&lt;&#x2F;code&gt; runs on your laptop. It reads a manifest, uploads your artifact over SSH, and tells the server to activate it.&lt;&#x2F;p&gt;
&lt;p&gt;SSH is the control plane. No tokens, no API keys, no custom authentication layer. If you can SSH into the server, you can deploy. This is a deliberate choice. SSH key management is a solved problem. Every developer already has it configured. Every server already has it running. Building a custom auth system on top would be adding complexity for no practical gain.&lt;&#x2F;p&gt;
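&lt;p&gt;Concretely, the client side needs nothing more exotic than the OpenSSH binaries already on your machine. A sketch of what “SSH as the control plane” can look like; the paths and the remote command are hypothetical, not Vela’s real protocol:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::process::Command;

&amp;#x2F;&amp;#x2F; Illustrative sketch: push an artifact and trigger activation over
&amp;#x2F;&amp;#x2F; plain OpenSSH. If your key works, you can deploy.
fn push_and_activate(server: &amp;amp;str, artifact: &amp;amp;str) -&amp;gt; std::io::Result&amp;lt;()&amp;gt; {
    let scp = Command::new(&amp;quot;scp&amp;quot;)
        .args([&amp;quot;-r&amp;quot;, artifact, &amp;amp;format!(&amp;quot;{server}:&amp;#x2F;tmp&amp;#x2F;release&amp;quot;)])
        .status()?;
    assert!(scp.success(), &amp;quot;upload failed&amp;quot;);

    &amp;#x2F;&amp;#x2F; The daemon listens on a local IPC socket; ssh is just the transport.
    let ssh = Command::new(&amp;quot;ssh&amp;quot;)
        .args([server, &amp;quot;vela&amp;quot;, &amp;quot;activate&amp;quot;, &amp;quot;&amp;#x2F;tmp&amp;#x2F;release&amp;quot;])
        .status()?;
    assert!(ssh.success(), &amp;quot;activation failed&amp;quot;);
    Ok(())
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;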
&lt;h3 id=&quot;the-manifest&quot;&gt;The manifest&lt;&#x2F;h3&gt;
&lt;p&gt;Each app gets a &lt;code&gt;Vela.toml&lt;&#x2F;code&gt; in its project root:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[app]
name = &amp;quot;cyanea&amp;quot;
domain = &amp;quot;app.cyanea.bio&amp;quot;

[deploy]
server = &amp;quot;deploy@my-server&amp;quot;
type = &amp;quot;beam&amp;quot;
binary = &amp;quot;server&amp;quot;
health = &amp;quot;&amp;#x2F;health&amp;quot;
strategy = &amp;quot;sequential&amp;quot;
pre_start = &amp;quot;bin&amp;#x2F;cyanea eval &amp;#x27;Cyanea.Release.migrate()&amp;#x27;&amp;quot;

[env]
DATABASE_PATH = &amp;quot;${data_dir}&amp;#x2F;cyanea.db&amp;quot;
SECRET_KEY_BASE = &amp;quot;${secret:SECRET_KEY_BASE}&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;That is the entire deploy configuration. The app type tells Vela how to start it (&lt;code&gt;beam&lt;&#x2F;code&gt; runs Elixir releases, &lt;code&gt;binary&lt;&#x2F;code&gt; runs compiled executables). The health path tells it where to check. The strategy tells it how to swap traffic. The &lt;code&gt;pre_start&lt;&#x2F;code&gt; hook runs database migrations before the new instance starts, and if migrations fail, the deploy aborts and the old instance keeps running.&lt;&#x2F;p&gt;
&lt;p&gt;Environment variables support two substitution patterns: &lt;code&gt;${data_dir}&lt;&#x2F;code&gt; expands to the app’s persistent data directory (which survives deploys), and &lt;code&gt;${secret:KEY}&lt;&#x2F;code&gt; pulls from the server-side secret store. Secrets never live in your repo.&lt;&#x2F;p&gt;
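&lt;p&gt;The substitution itself is plain string rewriting. A minimal sketch of how those two patterns might be expanded, assuming a server-side &lt;code&gt;secrets&lt;&#x2F;code&gt; lookup table; this is illustrative, not Vela’s exact code:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::HashMap;

&amp;#x2F;&amp;#x2F; Expand ${data_dir} and ${secret:KEY} in a manifest value.
fn expand(value: &amp;amp;str, data_dir: &amp;amp;str, secrets: &amp;amp;HashMap&amp;lt;String, String&amp;gt;) -&amp;gt; String {
    let mut out = value.replace(&amp;quot;${data_dir}&amp;quot;, data_dir);
    &amp;#x2F;&amp;#x2F; Replace each ${secret:KEY} with the stored secret, if present.
    while let Some(start) = out.find(&amp;quot;${secret:&amp;quot;) {
        let Some(end) = out[start..].find(&amp;#x27;}&amp;#x27;) else { break };
        let key = &amp;amp;out[start + 9..start + end];
        let secret = secrets.get(key).cloned().unwrap_or_default();
        out.replace_range(start..start + end + 1, &amp;amp;secret);
    }
    out
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;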
&lt;p&gt;Deploying looks like this:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;MIX_ENV=prod mix release
vela deploy .&amp;#x2F;_build&amp;#x2F;prod&amp;#x2F;rel&amp;#x2F;cyanea
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Two commands. The artifact goes up, the health check passes, traffic swaps, done.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;zero-downtime-deploys&quot;&gt;Zero-downtime deploys&lt;&#x2F;h3&gt;
&lt;p&gt;Vela supports two deploy strategies, and the choice matters.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Blue-green&lt;&#x2F;strong&gt; is the default. The new instance starts alongside the old one on a fresh port. Vela runs a health check against it (30 retries, one per second, five-second timeout per attempt). Once the health check passes, the reverse proxy atomically swaps the route table entry for that domain to point at the new port. The old instance gets a configurable drain period to finish in-flight requests, then receives SIGTERM. If it does not exit within the drain window, SIGKILL.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Time ──────────────────────────────────────────►

Old instance     ████████████████████░░░░  (draining)
New instance              ░░░░████████████████████
                          ▲   ▲
                    start │   │ health passes, swap
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Zero downtime. The user never sees a blip. This works for stateless apps and apps backed by PostgreSQL (where both instances can connect to the same database simultaneously).&lt;&#x2F;p&gt;
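&lt;p&gt;The health gate is the part worth getting right, because it is what makes the swap safe. A sketch of the retry loop described above, 30 one-second attempts with a five-second timeout each, using tokio and reqwest for illustration; Vela’s internals may differ:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::time::Duration;

&amp;#x2F;&amp;#x2F; Poll the new instance until it answers 2xx: 30 retries, one per
&amp;#x2F;&amp;#x2F; second, five-second timeout per attempt. Illustrative sketch.
async fn wait_healthy(port: u16, path: &amp;amp;str) -&amp;gt; bool {
    let url = format!(&amp;quot;http:&amp;#x2F;&amp;#x2F;localhost:{port}{path}&amp;quot;);
    for _ in 0..30 {
        let attempt = tokio::time::timeout(
            Duration::from_secs(5),
            reqwest::get(url.as_str()),
        );
        if let Ok(Ok(resp)) = attempt.await {
            if resp.status().is_success() {
                return true; &amp;#x2F;&amp;#x2F; healthy: safe to swap traffic
            }
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    false &amp;#x2F;&amp;#x2F; never came up: abort, the old instance keeps serving
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;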
&lt;p&gt;&lt;strong&gt;Sequential&lt;&#x2F;strong&gt; is for SQLite apps. You cannot have two processes writing to the same SQLite database (WAL mode helps, but concurrent writers from two separate instances are asking for trouble). So Vela stops the old instance first, starts the new one, health-checks it, and activates it. Sub-second blip. Acceptable for apps where the alternative is write contention.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Time ──────────────────────────────────────────►

Old instance     ████████████████████
New instance                          ░░░░████████████████████
                                 ▲   ▲
                            stop │   │ start + health check
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The decision is per-app, configured in the manifest. Cyanea uses sequential (SQLite). Archipelag uses blue-green (PostgreSQL).&lt;&#x2F;p&gt;
&lt;h3 id=&quot;process-supervision&quot;&gt;Process supervision&lt;&#x2F;h3&gt;
&lt;p&gt;Vela does not just start your app and walk away. It supervises it. If a process crashes, Vela detects the exit (via non-blocking &lt;code&gt;try_wait&lt;&#x2F;code&gt; on the child process handle), logs it, and restarts from the stored launch configuration:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;pub async fn check_and_restart(&amp;amp;mut self) -&amp;gt; Vec&amp;lt;String&amp;gt; {
    let mut to_restart = Vec::new();

    for (key, process) in &amp;amp;mut self.running {
        match process.child.try_wait() {
            Ok(Some(status)) if !status.success() =&amp;gt; {
                &amp;#x2F;&amp;#x2F; Process exited unexpectedly
                to_restart.push((
                    key.clone(),
                    process.launch_config.clone(),
                ));
            }
            _ =&amp;gt; {}
        }
    }

    &amp;#x2F;&amp;#x2F; Restart outside the scan so the running map can be mutated
    let mut restarted = Vec::new();
    for (key, config) in to_restart {
        &amp;#x2F;&amp;#x2F; Restart on same port if available, allocate new otherwise
        self.restart_from_config(&amp;amp;key, &amp;amp;config).await;
        restarted.push(key);
    }
    restarted
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Each app’s &lt;code&gt;LaunchConfig&lt;&#x2F;code&gt; (release directory, binary name, app type, environment variables, data directory) is stored so that restarts use the exact same configuration. The daemon also persists app state to disk, so if Vela itself restarts (server reboot, daemon upgrade), it restores all running apps from their saved configurations.&lt;&#x2F;p&gt;
&lt;p&gt;This is the kind of feature that would take a day to specify and a week to implement if you were writing it from scratch. With Claude, it took about an hour of iteration, including the edge cases around port reallocation and the pending&#x2F;active state split during deploys.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;built-in-services&quot;&gt;Built-in services&lt;&#x2F;h3&gt;
&lt;p&gt;Both of my apps have service dependencies. Archipelag needs PostgreSQL and NATS. Rather than managing these separately, Vela handles service provisioning directly:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[services.postgres]
version = &amp;quot;17&amp;quot;
databases = [&amp;quot;archipelag_prod&amp;quot;]

[services.nats]
version = &amp;quot;2.10&amp;quot;
jetstream = true
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;On first deploy, Vela installs PostgreSQL (via apt), creates the database with a generated password, and injects &lt;code&gt;DATABASE_URL&lt;&#x2F;code&gt; into the app’s environment. For NATS, it downloads the binary, generates a config, and starts it as a supervised child process with &lt;code&gt;NATS_URL&lt;&#x2F;code&gt; injected. Service credentials persist across deploys and daemon restarts.&lt;&#x2F;p&gt;
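&lt;p&gt;Under the hood this is ordinary systems plumbing. A hedged sketch of the PostgreSQL half, with password generation elided; the flow is a simplification of what a tool like Vela does on first deploy, not its exact code:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::process::Command;

&amp;#x2F;&amp;#x2F; Illustrative first-deploy provisioning: install the package, create
&amp;#x2F;&amp;#x2F; a role and database, and return the URL to inject into the app env.
fn provision_postgres(db: &amp;amp;str, password: &amp;amp;str) -&amp;gt; std::io::Result&amp;lt;String&amp;gt; {
    Command::new(&amp;quot;apt-get&amp;quot;)
        .args([&amp;quot;install&amp;quot;, &amp;quot;-y&amp;quot;, &amp;quot;postgresql&amp;quot;])
        .status()?;

    let sql = format!(
        &amp;quot;CREATE ROLE {db} LOGIN PASSWORD &amp;#x27;{password}&amp;#x27;; \
         CREATE DATABASE {db} OWNER {db};&amp;quot;
    );
    Command::new(&amp;quot;sudo&amp;quot;)
        .args([&amp;quot;-u&amp;quot;, &amp;quot;postgres&amp;quot;, &amp;quot;psql&amp;quot;, &amp;quot;-c&amp;quot;, &amp;amp;sql])
        .status()?;

    &amp;#x2F;&amp;#x2F; Persisted server-side so credentials survive deploys and restarts.
    Ok(format!(&amp;quot;postgres:&amp;#x2F;&amp;#x2F;{db}:{password}@localhost:5432&amp;#x2F;{db}&amp;quot;))
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;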
&lt;p&gt;This was one of those features where the AI really earned its keep. The NATS lifecycle management alone, downloading the right binary for the platform, generating config, supervising the process, health-checking the monitoring endpoint, persisting credentials, involved touching six or seven modules. Claude handled the plumbing while I focused on the design decisions.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-reverse-proxy&quot;&gt;The reverse proxy&lt;&#x2F;h3&gt;
&lt;p&gt;Vela embeds its own reverse proxy built on hyper. It handles TLS termination (auto-provisioned via Let’s Encrypt ACME HTTP-01, or static certificates for Cloudflare setups), domain-based routing, WebSocket upgrades, and HTTP-to-HTTPS redirects.&lt;&#x2F;p&gt;
&lt;p&gt;The routing model is simple. A thread-safe hash map from domain to port:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;pub struct RouteTable {
    routes: Arc&amp;lt;RwLock&amp;lt;HashMap&amp;lt;String, u16&amp;gt;&amp;gt;&amp;gt;,
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;When a request arrives, Vela extracts the Host header, looks up the port, and forwards the request to &lt;code&gt;localhost:{port}&lt;&#x2F;code&gt;. When a deploy swaps traffic, it is a single write-lock on the hash map to update the port number. Atomic. No configuration reload. No proxy restart.&lt;&#x2F;p&gt;
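&lt;p&gt;The swap itself is a few lines. A sketch of lookup and swap against that table, with illustrative method names:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::HashMap;
use std::sync::{Arc, RwLock};

pub struct RouteTable {
    routes: Arc&amp;lt;RwLock&amp;lt;HashMap&amp;lt;String, u16&amp;gt;&amp;gt;&amp;gt;,
}

impl RouteTable {
    &amp;#x2F;&amp;#x2F; Hot path: one read lock per request.
    pub fn lookup(&amp;amp;self, host: &amp;amp;str) -&amp;gt; Option&amp;lt;u16&amp;gt; {
        self.routes.read().unwrap().get(host).copied()
    }

    &amp;#x2F;&amp;#x2F; Deploy path: one write lock, and the new port is live.
    pub fn swap(&amp;amp;self, host: &amp;amp;str, port: u16) {
        self.routes.write().unwrap().insert(host.to_string(), port);
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;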
&lt;p&gt;For WebSocket connections (which both Phoenix apps use for LiveView), Vela detects the &lt;code&gt;Upgrade: websocket&lt;&#x2F;code&gt; header and switches to raw TCP tunneling with bidirectional I&#x2F;O. This was important for my use case: Phoenix LiveView is WebSocket-native, and if the proxy does not handle upgrades correctly, the entire UI breaks.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;from-empty-box-to-production-in-a-day&quot;&gt;From empty box to production in a day&lt;&#x2F;h2&gt;
&lt;p&gt;Here is the timeline of the actual migration. I bought the Hetzner server and within about 48 hours, both apps were running in production with HTTPS, process supervision, automated backups, and daily health reports.&lt;&#x2F;p&gt;
&lt;p&gt;The sequence went roughly like this:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hardware validation&lt;&#x2F;strong&gt;: Check NVMe drive health, run memory tests, verify RAID configuration. The drives had about 25,000 power-on hours (these are auction servers, they have been used), but SMART health passed and wear levels were well within acceptable range.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OS provisioning&lt;&#x2F;strong&gt;: Debian, RAID 1 across both NVMe drives. Straightforward.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Server hardening&lt;&#x2F;strong&gt;: Firewall rules, SSH hardening (key-only auth, non-default port, rate limiting), automatic security updates, intrusion detection. This is the part I am deliberately vague about. If you are running a public-facing server, hardening is non-negotiable, but I am not going to publish my exact firewall configuration.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Vela installation&lt;&#x2F;strong&gt;: Download the binary, create a config file, install the systemd service. Five minutes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;First app deployed (Cyanea)&lt;&#x2F;strong&gt;: Built the Elixir release on the server, set secrets, ran migrations, deployed. The entire build-and-deploy cycle for a Phoenix app with a Rust NIF took about fifteen minutes, most of which was compilation.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Second app deployed (Archipelag)&lt;&#x2F;strong&gt;: Same flow, plus provisioning PostgreSQL and restoring a database dump from Fly, plus setting up NATS. About thirty minutes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TLS certificates&lt;&#x2F;strong&gt;: Updated DNS records, Let’s Encrypt certificates issued automatically. Vela handles the ACME challenge internally, no Certbot, no cron job, no manual cert management.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring&lt;&#x2F;strong&gt;: A daily health report script that checks system metrics, service status, and app health, then emails a summary. Simple but effective.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;The most time-consuming part was not the tooling. It was migrating the PostgreSQL data from Fly and verifying that both apps behaved correctly in their new environment. The infrastructure setup itself, the part that would have taken weeks without Vela, took hours.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-broader-thesis&quot;&gt;The broader thesis&lt;&#x2F;h2&gt;
&lt;p&gt;Here is what I think is happening, and I think it is bigger than my personal infrastructure bill.&lt;&#x2F;p&gt;
&lt;p&gt;The cloud won because it sold a bundle: compute, networking, storage, deployment, monitoring, scaling, security, all integrated, all managed. The alternative was building each piece yourself, and the labor cost made that prohibitive for small teams. Managed infrastructure was cheaper than an ops engineer.&lt;&#x2F;p&gt;
&lt;p&gt;AI changes that equation. Not by making the cloud cheaper, but by making bespoke tooling economically viable.&lt;&#x2F;p&gt;
&lt;p&gt;Consider what I got with Vela. A deployment tool that does exactly what I need and nothing more. No container orchestration, because I do not use containers. No multi-region routing, because I run in one location. No autoscaling, because two apps do not need to autoscale. Every feature exists because I needed it. Every feature works with my specific stack (Elixir&#x2F;BEAM, Rust, SQLite, PostgreSQL, NATS). The tool is tailored to my workload the way a bespoke suit is tailored to a body.&lt;&#x2F;p&gt;
&lt;p&gt;This kind of custom tooling used to be a luxury. You needed either a platform team that could invest weeks of engineering time, or the rare individual who was both a skilled systems programmer and willing to spend their evenings writing deployment tools instead of building products. The economics did not make sense for a solo founder or a two-person team.&lt;&#x2F;p&gt;
&lt;p&gt;With AI, the cost of building bespoke tooling drops by an order of magnitude. Not to zero, you still need to know what you want, you still need to test and iterate, you still need to understand the domain well enough to evaluate the output. But the gap between “I know what I need” and “I have a working implementation” shrinks from weeks to hours.&lt;&#x2F;p&gt;
&lt;p&gt;And when bespoke tooling is cheap, the cloud’s bundle becomes less compelling. You do not need the managed Kubernetes service if you can build a deployment tool that fits your exact needs. You do not need the managed database service if you can install PostgreSQL yourself and the AI helps you set up backups, monitoring, and failover. You do not need the managed TLS service if your deployment tool handles ACME natively.&lt;&#x2F;p&gt;
&lt;p&gt;What you are left paying for is compute and bandwidth. And for compute and bandwidth, bare metal is drastically cheaper than the cloud.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-api-is-bespoke&quot;&gt;The API is bespoke&lt;&#x2F;h2&gt;
&lt;p&gt;There is a subtlety here that I think is worth calling out. When people talk about the cloud’s advantages, they often point to the API-driven experience. Infrastructure as code. Declarative configuration. Programmable everything. And that is real. The cloud’s API layer is genuinely valuable.&lt;&#x2F;p&gt;
&lt;p&gt;But the API does not have to come from a cloud provider. It can come from your own tooling.&lt;&#x2F;p&gt;
&lt;p&gt;Vela gives me an API-driven experience. I declare my app’s configuration in a TOML manifest. I run a single command to deploy. I can check status, stream logs, manage secrets, trigger backups, and roll back releases, all from my laptop, all through a CLI that speaks SSH to a daemon on the server. The experience is not worse than Fly or Heroku. In some ways it is better, because the tool does exactly what I need and nothing else, and when something goes wrong, I can read the source code.&lt;&#x2F;p&gt;
&lt;p&gt;The difference is that my “API” is a 5,000-line Rust binary instead of a multi-billion-dollar cloud platform. And that is fine. I do not need the platform. I need the interface. AI lets me build the interface.&lt;&#x2F;p&gt;
&lt;p&gt;This is, I think, the pattern that will play out more broadly. The cloud’s value was never just compute. It was the operational layer on top of compute, the tooling that made raw hardware usable. AI makes it possible to build that operational layer yourself, tailored to your needs, at a fraction of the cost. The cloud becomes optional. The server becomes a commodity. The differentiator is the tooling, and the tooling is something AI can help you build.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-the-numbers-look-like&quot;&gt;What the numbers look like&lt;&#x2F;h2&gt;
&lt;p&gt;Let me be concrete about costs, because this is ultimately an economic argument.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes on DigitalOcean&lt;&#x2F;strong&gt; (my original setup):&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Managed K8s cluster: ~$12&#x2F;month (control plane fee)&lt;&#x2F;li&gt;
&lt;li&gt;Worker nodes (2x smallest): ~$24&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Managed PostgreSQL: ~$15&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Load balancer: ~$12&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Persistent volumes, bandwidth, extras: ~$15&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$78-80&#x2F;month&lt;&#x2F;strong&gt; (and this was after I trimmed it)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Fly.io&lt;&#x2F;strong&gt; (the middle ground):&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Two Phoenix apps (shared-cpu-1x, 256MB each): ~$14&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Managed Postgres: ~$25&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Managed NATS: ~$20&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Bandwidth, extras: ~$10&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$70&#x2F;month&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The managed services were the killer. Fly’s compute pricing is fair, but managed Postgres and managed NATS added up fast. And that was at near-zero traffic. Egress pricing on Fly is metered, so if either app had started getting real user load, the bandwidth bill alone would have pushed the total well past a hundred dollars a month.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Hetzner bare metal&lt;&#x2F;strong&gt; (current):&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Dedicated server (auction): EUR 38&#x2F;month (~$42)&lt;&#x2F;li&gt;
&lt;li&gt;That is it. PostgreSQL, NATS, TLS, everything runs on the box.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$42&#x2F;month&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The Hetzner box is cheaper than Fly right now, and the gap only widens as usage grows. But the raw dollar comparison understates the difference. Look at what I am getting. 128 GB of RAM versus 512 MB. Multi-core CPU versus shared fractional cores. Two terabytes of NVMe storage versus a few gigs. Bandwidth that is essentially unlimited (Hetzner includes 20 TB of traffic) versus metered egress that scales with every user you add.&lt;&#x2F;p&gt;
&lt;p&gt;The capacity gap is the real story. On Fly, scaling from two apps to ten means linearly increasing costs, more VMs, more managed database instances, more bandwidth charges. On my Hetzner box, scaling from two apps to ten means… nothing. The resources are already there. I paid for them. PostgreSQL, NATS, any other service I want to run, it all fits on the same box with room to spare.&lt;&#x2F;p&gt;
&lt;p&gt;And there is no surprise bill. No bandwidth overage. No “your database exceeded the row limit” fee. No managed service add-on creep. Thirty-eight euros a month, every month, regardless of what I run on it.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;when-this-does-not-apply&quot;&gt;When this does not apply&lt;&#x2F;h2&gt;
&lt;p&gt;I would be dishonest if I pretended bare metal is the right answer for everyone. It is not.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If you need multi-region presence&lt;&#x2F;strong&gt;, the cloud still wins. Running your own hardware in three continents is a different kind of problem. Edge computing, CDN-native architectures (which I &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;raskell.io&#x2F;articles&#x2F;edge-systems-are-the-new-backend&#x2F;&quot;&gt;wrote about previously&lt;&#x2F;a&gt;), and platforms like Fly or Cloudflare Workers are the right tools for workloads that need to be close to users worldwide.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If you need elastic scaling&lt;&#x2F;strong&gt;, bare metal does not flex. A server has fixed resources. If your traffic spikes 10x for an hour, you cannot add capacity on demand. You can over-provision (and at these prices, generous over-provisioning is affordable), but it is not the same as true elasticity.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If you do not understand the operational basics&lt;&#x2F;strong&gt;, bare metal will bite you. Server hardening, backup strategies, disk monitoring, security patching, these are your responsibility. The cloud abstracts them away. On bare metal, a missed security update is your problem. A full disk is your problem. A failed drive (RAID helps, but is not magic) is your problem.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If your team is large and needs guardrails&lt;&#x2F;strong&gt;, managed platforms provide consistency and governance that bare metal does not. Kubernetes is complex, but it is complex in a standardized way. Everyone knows how to deploy to K8s. Everyone knows how to debug a pod. Your custom Vela setup is legible to exactly the people who built it.&lt;&#x2F;p&gt;
&lt;p&gt;The sweet spot for bare metal, especially AI-assisted bare metal, is small teams building products that need reliability but not scale, performance but not elasticity, control but not standardization. Solo founders. Two-person startups. Side projects that might become real businesses.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-i-learned&quot;&gt;What I learned&lt;&#x2F;h2&gt;
&lt;p&gt;The migration took about 48 hours from “I have an empty server” to “both apps are in production with HTTPS, monitoring, and automated backups.” Most of that time was data migration and validation, not infrastructure setup.&lt;&#x2F;p&gt;
&lt;p&gt;Vela is now at version 0.5.0 with a feature list I am genuinely proud of: blue-green and sequential deploys, process supervision with auto-restart, built-in reverse proxy with auto-TLS, service dependency management (Postgres and NATS), secret management, log streaming, rollbacks, remote builds, scheduled backups, deploy hooks, and machine-readable status output for monitoring integration.&lt;&#x2F;p&gt;
&lt;p&gt;I built most of it in a few focused sessions with Claude Code. Not because the code is trivial, it is about 5,000 lines of Rust with async IPC, Unix socket communication, ACME certificate management, process lifecycle handling, and a reverse proxy with WebSocket support. But because the problem domain is well-understood, the requirements were clear, and AI is remarkably good at turning clear requirements into working implementations.&lt;&#x2F;p&gt;
&lt;p&gt;The thing I keep coming back to: the cloud was never selling compute. It was selling convenience. And convenience used to require a company with thousands of engineers to build platforms that abstracted away the hard parts. Now, a developer with a clear idea of what they need and an AI that can write systems code can build a fit-for-purpose operational layer in a weekend.&lt;&#x2F;p&gt;
&lt;p&gt;That does not make the cloud irrelevant. It makes the cloud optional for a much larger class of workloads than it was before.&lt;&#x2F;p&gt;
&lt;p&gt;Buy a server. Build your tools. Ship your product. The infrastructure should be boring. With AI, it finally can be.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;tools-and-platforms&quot;&gt;Tools and platforms&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;vela&quot;&gt;Vela&lt;&#x2F;a&gt; - The bare-metal deployment tool built in this article&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.hetzner.com&#x2F;sb&#x2F;&quot;&gt;Hetzner Server Auction&lt;&#x2F;a&gt; - Refurbished dedicated servers at steep discounts&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;fly.io&quot;&gt;fly.io&lt;&#x2F;a&gt; - The managed platform I migrated from&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.nomadproject.io&#x2F;&quot;&gt;Nomad&lt;&#x2F;a&gt; - HashiCorp’s minimal workload orchestrator&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;opentofu.org&#x2F;&quot;&gt;OpenTofu&lt;&#x2F;a&gt; - Community fork of Terraform after the BSL relicense&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; - Cloudflare’s Rust framework for building programmable proxies&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;hyper.rs&#x2F;&quot;&gt;hyper&lt;&#x2F;a&gt; - Rust HTTP library powering Vela’s reverse proxy&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;letsencrypt.org&#x2F;&quot;&gt;Let’s Encrypt&lt;&#x2F;a&gt; - Free TLS certificates, automated via ACME in Vela&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;nats.io&#x2F;&quot;&gt;NATS&lt;&#x2F;a&gt; - Lightweight messaging system used by Archipelag&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;frameworks-and-runtimes&quot;&gt;Frameworks and runtimes&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.phoenixframework.org&#x2F;&quot;&gt;Phoenix Framework&lt;&#x2F;a&gt; - Elixir web framework powering both apps&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.erlang.org&#x2F;&quot;&gt;Erlang&#x2F;OTP&lt;&#x2F;a&gt; - The BEAM virtual machine that runs Phoenix and Elixir&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.rust-lang.org&#x2F;&quot;&gt;Rust&lt;&#x2F;a&gt; - Systems language Vela and Zentinel are written in&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;projects-referenced&quot;&gt;Projects referenced&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Security-first reverse proxy built on Pingora&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; - Distributed compute platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt; - Bioinformatics platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;anthropics&#x2F;claude-code&quot;&gt;Claude Code&lt;&#x2F;a&gt; - AI coding tool used to build Vela&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Edge Systems Are the New Backend</title>
          <pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/edge-systems-are-the-new-backend/</link>
          <guid>https://raskell.io/articles/edge-systems-are-the-new-backend/</guid>
          <description xml:base="https://raskell.io/articles/edge-systems-are-the-new-backend/">&lt;p&gt;A request arrives at your system. In the next 50 milliseconds, before any application code runs, this happens: TLS termination, route matching, WAF inspection against 285 detection rules, JWT validation, rate limit evaluation, request body validation against a JSON schema, and trace context generation. The request either dies at the edge or arrives at your backend pre-authenticated, pre-validated, and pre-authorized.&lt;&#x2F;p&gt;
&lt;p&gt;Five years ago, your backend did all of this. Every service validated its own tokens, enforced its own rate limits, ran its own security checks. Today, the backend might not even exist in the form you expect. It might be a static site served from edge nodes, a thin persistence API, or a headless CMS that publishes content to a CDN and never handles a user request directly.&lt;&#x2F;p&gt;
&lt;p&gt;Something shifted. Not just at the edge. On both ends.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-three-tier-past&quot;&gt;The three-tier past&lt;&#x2F;h2&gt;
&lt;p&gt;The architecture most of us learned was simple. Client, backend, database. The browser rendered HTML, maybe ran some jQuery. The backend did everything: authentication, authorization, business logic, rendering, validation, rate limiting, session management. The database stored state. Clean separation, one direction, easy to reason about.&lt;&#x2F;p&gt;
&lt;p&gt;This model worked because the browser was dumb. It could render markup and submit forms. Any real computation had to happen on the server. The backend was fat by necessity, not by design.&lt;&#x2F;p&gt;
&lt;p&gt;Microservices made it worse. Consider a typical setup: a user service, an order service, a payment service, a notification service, an inventory service. Each one needs to validate JWTs. Each one needs to enforce rate limits. Each one needs input validation, request logging, and error handling. That is five services times six concerns. Thirty implementations of logic that should exist exactly once.&lt;&#x2F;p&gt;
&lt;p&gt;Now multiply. Real organizations have 15, 50, 200 services. Each team implements auth slightly differently. One uses a shared library, one copied the code two years ago, one rolled their own because the library did not support their token format. The rate limiting configurations drift. The logging formats diverge. A security patch to the JWT validation logic means PRs across every repository, coordinated deployments, and someone asking “did we get all of them?”&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;                 ┌──────────┬──────────┬──────────┐
                 │ Users    │ Orders   │ Payments │
                 │ Service  │ Service  │ Service  │
                 ├──────────┼──────────┼──────────┤
  Auth           │ ✓ (v2.1) │ ✓ (v1.9) │ ✓ (v2.0) │
  Rate limiting  │ ✓ (lib)  │ ✓ (copy) │ ✗ (none) │
  Validation     │ ✓        │ ✓        │ ✓        │
  WAF&amp;#x2F;Security   │ ✗        │ ✗        │ ✗        │
  Logging        │ JSON     │ text     │ JSON     │
  Tracing        │ ✓        │ ✗        │ ✓        │
                 └──────────┴──────────┴──────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Libraries helped. Service meshes helped more. But the complexity was still distributed across every service, in every team’s codebase, in every deployment pipeline. The mesh moved networking concerns to a sidecar. It did not move application-level concerns like auth, validation, or security inspection.&lt;&#x2F;p&gt;
&lt;p&gt;The edge was an afterthought. A reverse proxy. TLS termination. Maybe Varnish for caching. Maybe a CDN for static assets. It was infrastructure plumbing, not a place where decisions happened.&lt;&#x2F;p&gt;
&lt;p&gt;That model is over.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;two-migrations-one-hollowing&quot;&gt;Two migrations, one hollowing&lt;&#x2F;h2&gt;
&lt;p&gt;Here is the thing I keep coming back to: business logic is migrating in two directions simultaneously.&lt;&#x2F;p&gt;
&lt;p&gt;Upward, to the edge. Infrastructure concerns like auth, WAF, and rate limiting now execute at the edge layer, before requests reach any backend. But it goes further than that. Edge Workers run actual application code. Containers deploy at the edge. Server-side rendering happens at edge nodes 50ms from the user, not in a data center 200ms away.&lt;&#x2F;p&gt;
&lt;p&gt;Downward, to the client. The browser is no longer dumb. WebAssembly runs near-native code. WebGPU puts the GPU to work on ML inference and image processing. Web Workers handle background computation. Service Workers intercept network requests and serve cached responses offline. CRDTs let the client own its data and sync when it feels like it.&lt;&#x2F;p&gt;
&lt;p&gt;The backend is caught in the middle. Squeezed from both sides. And what remains is not a “backend” in any traditional sense. It is a persistence layer. A place where data rests and syncs. The interesting work happens elsewhere.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-moved-to-the-edge&quot;&gt;What moved to the edge&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;infrastructure-concerns&quot;&gt;Infrastructure concerns&lt;&#x2F;h3&gt;
&lt;p&gt;The first wave was obvious. Cross-cutting concerns that every service needed are better handled once, at the point of entry.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Authentication.&lt;&#x2F;strong&gt; Validating a JWT does not require application context. The token is self-contained: a signature, an issuer, an expiry, a set of claims. Parse it, verify the signature against a JWKS endpoint, check the expiry, extract the claims, attach them as headers. Done. The backend receives &lt;code&gt;X-User-Id: alice&lt;&#x2F;code&gt; and &lt;code&gt;X-User-Role: admin&lt;&#x2F;code&gt; instead of a raw Bearer token it has to decode itself.&lt;&#x2F;p&gt;
&lt;p&gt;This is not hypothetical. Here is what this looks like in practice:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;kdl&quot; class=&quot;language-kdl &quot;&gt;&lt;code class=&quot;language-kdl&quot; data-lang=&quot;kdl&quot;&gt;agent &amp;quot;auth&amp;quot; {
    type &amp;quot;auth&amp;quot;
    grpc address=&amp;quot;http:&amp;#x2F;&amp;#x2F;localhost:50051&amp;quot;
    events &amp;quot;request_headers&amp;quot;
    timeout-ms 100
    failure-mode &amp;quot;closed&amp;quot;
    max-concurrent-calls 100
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;That agent handles JWT, OIDC, SAML, mTLS, and API key validation. Every route behind it gets authentication for free. Every backend service trusts the edge to have done the work. The auth agent crashes? Failure mode is “closed”. Requests stop, but the proxy stays up.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Rate limiting.&lt;&#x2F;strong&gt; Token bucket algorithms with per-client keys. The edge layer sees every request before the backend does. It is the natural place to enforce rate limits because it can reject bad traffic before it consumes backend resources. A rejected request at the edge costs microseconds. A rejected request at the backend costs a database query, a connection slot, and whatever work happened before the check.&lt;&#x2F;p&gt;
&lt;p&gt;There are two flavors. Local rate limiting uses in-process token buckets. Fast, no network hops, but each edge node tracks its own counters. If you have 10 edge nodes and a limit of 100 requests per second, each node allows 100, so the effective limit is 1,000. For most use cases, this is fine. Abuse does not distribute itself evenly across your infrastructure.&lt;&#x2F;p&gt;
&lt;p&gt;Distributed rate limiting uses a shared store (Redis, typically). Accurate across nodes, but adds a network hop per request. The tradeoff is latency versus precision. I default to local rate limiting and switch to distributed only when the use case demands exact global limits, like API billing or token budgets for LLM inference.&lt;&#x2F;p&gt;
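&lt;p&gt;For reference, the local flavor fits in a screenful. A minimal token bucket sketch, illustrative rather than any particular proxy’s implementation; per-client state would live in a keyed map:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::time::Instant;

&amp;#x2F;&amp;#x2F; Local token bucket: `rate` tokens per second, bursts up to `capacity`.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    rate: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, capacity: f64) -&amp;gt; Self {
        Self { capacity, tokens: capacity, rate, last: Instant::now() }
    }

    fn allow(&amp;amp;mut self) -&amp;gt; bool {
        &amp;#x2F;&amp;#x2F; Refill in proportion to elapsed time, capped at capacity.
        let now = Instant::now();
        let elapsed = (now - self.last).as_secs_f64();
        self.tokens = (self.tokens + self.rate * elapsed).min(self.capacity);
        self.last = now;
        if self.tokens &amp;gt;= 1.0 {
            self.tokens -= 1.0;
            true &amp;#x2F;&amp;#x2F; request passes
        } else {
            false &amp;#x2F;&amp;#x2F; rejected in microseconds, before any backend work
        }
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;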
&lt;p&gt;&lt;strong&gt;Security inspection.&lt;&#x2F;strong&gt; WAFs used to be appliances. Expensive, opaque, binary. A request was either blocked or allowed. Modern WAFs use anomaly scoring. Each rule contributes a score, and the total determines the action:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Score 0-9:    Allow
Score 10-24:  Log (warning, investigate later)
Score 25+:    Block
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is a fundamentally different model than binary block&#x2F;allow. It lets you tune aggressively without breaking legitimate traffic. I run 285 detection rules at the edge and process 912K requests per second on clean traffic. That is 30x faster than ModSecurity’s C implementation. The performance gap matters because it means WAF inspection can happen on every request, not just suspicious ones.&lt;&#x2F;p&gt;
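&lt;p&gt;In code, the scoring model is a fold over matched rules. A sketch, with the thresholds from above and the rule matching elided:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Each matched rule contributes a score; the total picks the action.
enum Action { Allow, Log, Block }

struct Rule { score: u32 &amp;#x2F;* plus a matcher, elided here *&amp;#x2F; }

fn evaluate(matched: &amp;amp;[&amp;amp;Rule]) -&amp;gt; Action {
    let total: u32 = matched.iter().map(|r| r.score).sum();
    match total {
        0..=9 =&amp;gt; Action::Allow,
        10..=24 =&amp;gt; Action::Log, &amp;#x2F;&amp;#x2F; warning, investigate later
        _ =&amp;gt; Action::Block,
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;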
&lt;p&gt;&lt;strong&gt;API validation.&lt;&#x2F;strong&gt; If your API has a JSON Schema, why validate request bodies in your application code? Validate at the edge. Reject malformed requests before they consume a connection, a goroutine, a database transaction. The backend receives only structurally valid payloads.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Observability.&lt;&#x2F;strong&gt; Trace context should originate at the edge, not at the application. The edge is where the request enters your system. It is where you assign a trace ID, start the clock, and record the first span. If you originate traces in your application, you miss everything that happened before: TLS negotiation time, WAF processing time, the fact that the request sat in a rate limit queue for 50ms. Starting traces at the edge gives you the full picture.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-isolation-problem&quot;&gt;The isolation problem&lt;&#x2F;h3&gt;
&lt;p&gt;You cannot put all of this in a monolithic proxy. That is how you end up with nginx and 47 modules where nobody understands the interaction effects. A WAF bug should not take down your routing. A slow auth provider should not block rate limit checks.&lt;&#x2F;p&gt;
&lt;p&gt;The answer is process isolation. Thin dataplane, crash-isolated external agents. Each agent runs as a separate process with its own failure domain:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────────────────────────────┐
│ Edge Proxy (thin dataplane)              │
│ Routing │ TLS │ Caching │ Load Balancing │
└─────┬──────────┬──────────┬──────────────┘
      │          │          │
      ▼          ▼          ▼
   [WAF]      [Auth]    [Rate Limit]
  process     process     process
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Each agent gets its own concurrency semaphore. A slow WAF cannot starve auth. Each agent has a circuit breaker: cross the failure threshold and the circuit opens, shedding calls until the agent proves healthy again. Each agent has a configurable failure mode, and this is where the design gets interesting:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;kdl&quot; class=&quot;language-kdl &quot;&gt;&lt;code class=&quot;language-kdl&quot; data-lang=&quot;kdl&quot;&gt;agent &amp;quot;waf&amp;quot; {
    type &amp;quot;waf&amp;quot;
    timeout-ms 100
    failure-mode &amp;quot;closed&amp;quot;
    max-concurrent-calls 50
    circuit-breaker {
        failure-threshold 5
        success-threshold 2
        timeout-seconds 30
    }
}

agent &amp;quot;rate-limit&amp;quot; {
    type &amp;quot;rate-limit&amp;quot;
    timeout-ms 50
    failure-mode &amp;quot;open&amp;quot;
    max-concurrent-calls 200
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The WAF fails closed. If it crashes or times out, requests are blocked. You lose availability to preserve security. Rate limiting fails open. If it crashes, requests are allowed. You lose rate enforcement to preserve availability. These are explicit choices per agent, not global defaults. The operator decides which tradeoff to make for each concern, and the decision is visible in the config, not buried in code.&lt;&#x2F;p&gt;
&lt;p&gt;Agents return decisions. The proxy merges them. A blocking decision from any agent wins. Otherwise, header mutations accumulate. The model is simple: agents advise, the proxy decides. No agent can override another agent’s block. No agent can force a request through. The proxy owns the final call.&lt;&#x2F;p&gt;
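&lt;p&gt;The merge rule is worth writing down, because it is the whole trust model. A sketch with hypothetical types:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Agents advise, the proxy decides: any block wins, otherwise header
&amp;#x2F;&amp;#x2F; mutations accumulate. Types are illustrative.
enum Decision {
    Allow { add_headers: Vec&amp;lt;(String, String)&amp;gt; },
    Block { status: u16 },
}

fn merge(decisions: Vec&amp;lt;Decision&amp;gt;) -&amp;gt; Decision {
    let mut headers = Vec::new();
    for d in decisions {
        match d {
            &amp;#x2F;&amp;#x2F; First blocking decision short-circuits; nothing overrides it.
            Decision::Block { status } =&amp;gt; return Decision::Block { status },
            Decision::Allow { add_headers } =&amp;gt; headers.extend(add_headers),
        }
    }
    Decision::Allow { add_headers: headers }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;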
&lt;p&gt;This is not a workaround. It is the fundamental design choice. Complex logic lives outside the core, behind process boundaries. The proxy stays small, fast, and boring. The agents handle the interesting work in isolation. A bug in a Lua scripting agent does not corrupt the routing table. A memory leak in the WAF agent does not exhaust the proxy’s memory. The process boundary is the blast radius.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;edge-workers-business-logic-at-the-edge&quot;&gt;Edge Workers: business logic at the edge&lt;&#x2F;h3&gt;
&lt;p&gt;Infrastructure concerns were the first wave. The second wave is actual business logic.&lt;&#x2F;p&gt;
&lt;p&gt;Cloudflare Workers, Deno Deploy, Fastly Compute, Vercel Edge Functions. These are not just “serverless at the CDN.” They are full compute environments running at edge nodes around the world. V8 isolates spin up in under 5ms. Cold starts are measured in single-digit milliseconds, not seconds. Your code runs 50ms from the user instead of 200ms away in us-east-1.&lt;&#x2F;p&gt;
&lt;p&gt;The constraints matter, because they shape what belongs here. Typical Edge Worker limits: 10-50ms CPU time per request (not wall time, actual CPU), 128MB memory, no raw TCP sockets, no persistent file system. You get a request, key-value storage, and the ability to make sub-requests to origins. That is it. These constraints are not bugs. They are what makes single-digit-millisecond cold starts possible. V8 isolates are cheap because they are small and short-lived.&lt;&#x2F;p&gt;
&lt;p&gt;What fits within these constraints is surprisingly broad:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;API routing and transformation.&lt;&#x2F;strong&gt; A request comes in for &lt;code&gt;&#x2F;api&#x2F;v2&#x2F;users&lt;&#x2F;code&gt;. The edge Worker rewrites it, fans out to two backend services (user profiles from one, preferences from another), merges the responses, and returns a single payload. The backend services are simple data sources. The edge Worker is the API layer.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;A&#x2F;B testing and feature flags.&lt;&#x2F;strong&gt; Read the experiment cookie, hash the user ID, assign a variant (see the sketch after this list), route to the right origin or rewrite the response. No round trip to a feature flag service. The decision happens in microseconds at the node closest to the user.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Personalization.&lt;&#x2F;strong&gt; Look up the user’s segment in KV storage, inject the right content block, set cache headers accordingly. The backend generated all variants at build time. The edge picks the right one per request.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Server-side rendering.&lt;&#x2F;strong&gt; Render HTML at the edge node closest to the user. Frameworks like Next.js and Remix already support this. React Server Components run at the edge. The “server” in server-side rendering is not your server. It is an edge node in 300 locations.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Authentication and session management.&lt;&#x2F;strong&gt; Validate tokens, refresh sessions, set secure cookies. The auth flow never touches your origin. Cloudflare Workers KV or Durable Objects store session state at the edge.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
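&lt;p&gt;The variant assignment in the A&#x2F;B bullet above is just a stable hash. A sketch in Rust (which compiles to WASM for edge runtimes); the function is illustrative, not any platform’s API:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

&amp;#x2F;&amp;#x2F; Deterministic assignment: the same user always lands in the same
&amp;#x2F;&amp;#x2F; bucket, with no feature-flag service in the request path.
fn assign_variant&amp;lt;&amp;#x27;a&amp;gt;(user_id: &amp;amp;str, variants: &amp;amp;[&amp;amp;&amp;#x27;a str]) -&amp;gt; &amp;amp;&amp;#x27;a str {
    let mut h = DefaultHasher::new();
    user_id.hash(&amp;amp;mut h);
    variants[(h.finish() % variants.len() as u64) as usize]
}

&amp;#x2F;&amp;#x2F; assign_variant(&amp;quot;alice&amp;quot;, &amp;amp;[&amp;quot;control&amp;quot;, &amp;quot;treatment&amp;quot;]) returns the
&amp;#x2F;&amp;#x2F; same arm on every request, on every edge node.
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;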
&lt;p&gt;The pattern: compute that depends on request context but not on deep application state moves to the edge. If you can do it with a request, a key-value lookup, and a response, it probably belongs here. If it needs a complex database query or a multi-step transaction, it does not.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;containers-at-the-edge&quot;&gt;Containers at the edge&lt;&#x2F;h3&gt;
&lt;p&gt;Edge Workers hit a ceiling when you need persistent connections, large memory, or long-running processes. For those workloads, containers at the edge.&lt;&#x2F;p&gt;
&lt;p&gt;Fly.io, Railway, and Lambda@Edge deploy containers or full processes to edge locations worldwide. Your application runs with real file systems, TCP connections, and whatever runtime you need. But it runs close to users, not in a centralized data center. Latency drops from 200ms to 20ms.&lt;&#x2F;p&gt;
&lt;p&gt;The interesting problem is data gravity. Compute is easy to distribute. Data is not. If your container runs in Tokyo but your database is in Frankfurt, you have not solved the latency problem. You have moved it from the user-to-server hop to the server-to-database hop. The solutions are still maturing: read replicas at the edge (Turso, Neon), embedded databases that sync (LiteFS, libSQL), and eventually-consistent stores designed for multi-region (DynamoDB Global Tables, CockroachDB).&lt;&#x2F;p&gt;
&lt;p&gt;This model makes sense when compute and data can be co-located:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Regional APIs&lt;&#x2F;strong&gt; that comply with data residency requirements. Run the container and the database replica in the same region. GDPR data stays in the EU. Japanese user data stays in Japan.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Real-time applications&lt;&#x2F;strong&gt; where 200ms round trips kill the experience. Collaborative editing, multiplayer, live dashboards. A WebSocket server 20ms away feels instant. One 200ms away feels sluggish.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Stateful edge compute&lt;&#x2F;strong&gt; where you need more than a request&#x2F;response cycle. Background processing, scheduled jobs, long-running connections.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The line between “edge” and “origin” blurs. If your container runs in 30 regions and handles requests locally with a local database replica, is that an edge deployment or a distributed backend? The distinction stops mattering. What matters is that the compute and the data are close to the user.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-moved-to-the-client&quot;&gt;What moved to the client&lt;&#x2F;h2&gt;
&lt;p&gt;The other half of the migration goes downward. The browser is not the thin client it used to be.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;webassembly&quot;&gt;WebAssembly&lt;&#x2F;h3&gt;
&lt;p&gt;WASM runs at near-native speed in every modern browser. Not “fast for JavaScript.” Actually fast. Compiled from Rust, C++, Go, or any language with an LLVM backend. Sandboxed, portable, deterministic.&lt;&#x2F;p&gt;
&lt;p&gt;What this enables:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Image and video processing&lt;&#x2F;strong&gt; in the browser. No upload to a server, no round trip, no privacy concern. The pixels never leave the device.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Document parsing and transformation.&lt;&#x2F;strong&gt; PDF rendering, spreadsheet computation, file format conversion. Libraries compiled to WASM and running client-side.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Cryptographic operations.&lt;&#x2F;strong&gt; End-to-end encryption where the server never sees plaintext. Key derivation, signing, verification, all in the browser.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Compute offloading from the backend.&lt;&#x2F;strong&gt; This is the one that changes how you think about server sizing.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Full relational databases in the browser.&lt;&#x2F;strong&gt; This is the one that changes architectures.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;I build on this pattern directly. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt;, a bioinformatics platform, uses WASM to offload computation from the backend to the client’s browser. Sequence analysis, structure visualization, dataset filtering, these are CPU-intensive operations that traditionally require beefy server infrastructure. Instead, the computation runs right there on the researcher’s device. The backend stays thin: it stores datasets and coordinates collaboration, but the heavy lifting happens in the browser. This means I can run the platform on a modest server and still deliver real computational capability, because the “compute fleet” is the users’ own machines.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; takes the same idea in a different direction. It is a distributed compute platform where users contribute their browser’s idle compute power via WASM. The workloads compile to WASM modules, ship to participating browsers, execute in the sandbox, and return results. The browser is not just a client consuming a service, it is a compute node in a distributed system. The WASM sandbox is what makes this safe: untrusted code runs in a constrained environment with no access to the host file system, network, or memory beyond what is explicitly granted.&lt;&#x2F;p&gt;
&lt;p&gt;SQLite compiled to WASM (via projects like sql.js, wa-sqlite, or the official SQLite WASM build) gives the browser a real relational database. Not a key-value store. Not IndexedDB’s awkward object store API. Actual SQL with joins, indexes, transactions, and triggers. Backed by the Origin Private File System (OPFS) for persistence, it survives page reloads and browser restarts.&lt;&#x2F;p&gt;
&lt;p&gt;The implications are significant. Your application can run complex queries locally. Filter, sort, aggregate, full-text search. All instant, all offline. The server becomes a sync endpoint. It ships a database snapshot down and accepts change sets back. The client does the querying. The server does the storing.&lt;&#x2F;p&gt;
&lt;p&gt;This pattern scales down elegantly. A note-taking app with SQLite-in-WASM needs no backend API for reads. A project management tool can filter and search 10,000 tasks without a network request. A CMS authoring interface can work fully offline and sync when the author reconnects. The read path is local. The write path syncs eventually.&lt;&#x2F;p&gt;
&lt;p&gt;WASI (WebAssembly System Interface) extends this further. It gives WASM modules controlled access to file systems, clocks, and network sockets outside the browser. WASM becomes a universal runtime: the same binary runs in the browser, at the edge (Cloudflare Workers use WASM under the hood), and on bare metal. Write once, deploy to every layer of the stack.&lt;&#x2F;p&gt;
&lt;p&gt;The pattern: anything that is CPU-bound, privacy-sensitive, or latency-sensitive is a candidate for client-side WASM. If the computation does not need server-side state, it should not round-trip to a server.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;webgpu&quot;&gt;WebGPU&lt;&#x2F;h3&gt;
&lt;p&gt;WebGPU landed in Chrome in 2023 and has since reached Firefox and Safari, and it changes the math on what the client can compute. This is not WebGL with a new name. WebGL exposes a graphics pipeline. WebGPU exposes compute shaders. Direct, general-purpose GPU computation from JavaScript or WASM.&lt;&#x2F;p&gt;
&lt;p&gt;The immediate application is ML inference. Run a language model, an image classifier, or a recommendation engine on the user’s GPU. No server call, no API cost per token, no latency. The model weights download once (cached by the browser) and run locally. Privacy by default, because the data never leaves the device.&lt;&#x2F;p&gt;
&lt;p&gt;This is not theoretical. Stable Diffusion generates images in the browser via WebGPU. Small language models (Phi-2, Gemma 2B, Llama 3.2 1B) run at usable speeds on consumer hardware. MediaPipe runs pose detection, face tracking, and hand gesture recognition in real time. The trajectory is clear: models get smaller through distillation and quantization, consumer GPUs get faster, and the gap between “cloud inference” and “local inference” narrows every quarter.&lt;&#x2F;p&gt;
&lt;p&gt;Both Cyanea and Archipelag use WebGPU alongside WASM. In Cyanea, WebGPU accelerates molecular visualization and large-scale dataset operations, the kind of parallel computation that bioinformatics demands but that would be prohibitively expensive to run server-side for every user session. In Archipelag, WebGPU-capable nodes can take on GPU-accelerated workloads from the compute pool, turning a user’s idle GPU into a productive resource. The combination of WASM for general compute and WebGPU for parallel workloads gives the browser a compute profile that would have required dedicated server hardware five years ago.&lt;&#x2F;p&gt;
&lt;p&gt;But inference is not the only use case. WebGPU handles any parallel computation: physics simulations for games, signal processing for audio applications, particle systems for data visualization, and large-scale matrix operations. Anything you would reach for CUDA or Metal for on native can now run in the browser. The compute budget of the client just increased by orders of magnitude.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;web-workers-and-service-workers&quot;&gt;Web Workers and Service Workers&lt;&#x2F;h3&gt;
&lt;p&gt;Web Workers give you background threads. Heavy computation does not block the UI. Parse a large file, run a simulation, index a search corpus. All off the main thread, all without janking the interface.&lt;&#x2F;p&gt;
&lt;p&gt;Service Workers sit between the browser and the network. They intercept every fetch request and decide what to do: serve from cache, go to network, do both and race them. This enables:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Offline-first applications.&lt;&#x2F;strong&gt; The app works without a network connection. Data syncs when connectivity returns.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Background sync.&lt;&#x2F;strong&gt; Queue mutations while offline, replay them when online.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Push notifications.&lt;&#x2F;strong&gt; Wake the app without the user having it open.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Intelligent caching.&lt;&#x2F;strong&gt; Cache API responses, serve stale data while revalidating, pre-fetch resources the user is likely to need.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The Service Worker is the client-side equivalent of the edge proxy. It intercepts, caches, validates, and routes. It makes the client self-sufficient.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;local-first-and-crdts&quot;&gt;Local-first and CRDTs&lt;&#x2F;h3&gt;
&lt;p&gt;Here is where it gets interesting. If the client has compute (WASM, WebGPU, Web Workers) and storage (IndexedDB, OPFS) and offline capability (Service Workers), why does it need a server at all?&lt;&#x2F;p&gt;
&lt;p&gt;CRDTs (Conflict-free Replicated Data Types) answer the consistency question. Multiple clients can edit the same data independently, offline, with no coordination. When they reconnect, their changes merge automatically without conflicts. No server-mediated locking. No “last write wins” data loss. Mathematical guarantees that concurrent edits converge to the same state.&lt;&#x2F;p&gt;
&lt;p&gt;The architecture:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Client A (offline)     Client B (offline)
    │                      │
    ├── Local edits        ├── Local edits
    │   (CRDT ops)         │   (CRDT ops)
    │                      │
    └──────┐      ┌────────┘
           ▼      ▼
      ┌───────────────┐
      │ Sync service  │  (thin, stateless)
      │ (persistence  │
      │  + relay)     │
      └───────────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The sync service is not a backend. It stores operations and relays them between clients. It does not run business logic. It does not validate (the CRDT handles consistency). It does not transform (the merge function is built into the data type). It is a persistence layer with a WebSocket attached.&lt;&#x2F;p&gt;
&lt;p&gt;I build systems like this. The concrete model: a document is a flat &lt;code&gt;HashMap&amp;lt;EntityId, Entity&amp;gt;&lt;&#x2F;code&gt; where each entity holds CRDT-typed fields. The field types determine how concurrent edits merge:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;CRDT type&lt;&#x2F;th&gt;&lt;th&gt;Merge behavior&lt;&#x2F;th&gt;&lt;th&gt;Use case&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;LwwRegister&amp;lt;T&amp;gt;&lt;&#x2F;td&gt;&lt;td&gt;Last writer wins (by timestamp)&lt;&#x2F;td&gt;&lt;td&gt;Simple values: name, status, URL&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;GrowOnlySet&amp;lt;T&amp;gt;&lt;&#x2F;td&gt;&lt;td&gt;Union of both sides&lt;&#x2F;td&gt;&lt;td&gt;Tags, labels, immutable references&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;ObservedRemoveSet&amp;lt;T&amp;gt;&lt;&#x2F;td&gt;&lt;td&gt;Add wins over concurrent remove&lt;&#x2F;td&gt;&lt;td&gt;Collaborator lists, mutable collections&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;MaxRegister&lt;&#x2F;td&gt;&lt;td&gt;Higher value wins&lt;&#x2F;td&gt;&lt;td&gt;Version counters, progress indicators&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;MinRegister&lt;&#x2F;td&gt;&lt;td&gt;Lower value wins&lt;&#x2F;td&gt;&lt;td&gt;Earliest timestamps, priority values&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;Each field carries a hybrid logical clock (HLC) timestamp. The HLC combines physical time with a logical counter, so causality is preserved even when wall clocks drift. Two clients edit the same field at the “same” time? The HLC ordering is deterministic. Both clients converge to the same value without coordination.&lt;&#x2F;p&gt;
&lt;p&gt;The merge function has three properties that make this work: it is associative (grouping does not matter), commutative (order does not matter), and idempotent (applying the same operation twice has no additional effect). These are not implementation details. They are the mathematical foundation that makes server-free consistency possible. You can sync operations in any order, from any number of clients, through any number of intermediate relays, and every replica converges to the same state.&lt;&#x2F;p&gt;
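&lt;p&gt;To make that concrete, here is a minimal sketch of a last-writer-wins register ordered by an HLC, in Rust. The names are illustrative, not Conflux’s actual API; the point is that merging with &lt;code&gt;max&lt;&#x2F;code&gt; over a total order gets all three properties for free:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Minimal sketch: a last-writer-wins register ordered by a hybrid
&amp;#x2F;&amp;#x2F; logical clock. Illustrative names, not taken from a real library.

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Hlc {
    physical_ms: u64, &amp;#x2F;&amp;#x2F; wall-clock milliseconds
    logical: u32,     &amp;#x2F;&amp;#x2F; counter for events within the same millisecond
    node_id: u16,     &amp;#x2F;&amp;#x2F; tie-breaker, so the order is total and deterministic
}

#[derive(Clone, Debug)]
struct LwwRegister&amp;lt;T&amp;gt; {
    value: T,
    stamp: Hlc,
}

impl&amp;lt;T: Clone&amp;gt; LwwRegister&amp;lt;T&amp;gt; {
    &amp;#x2F;&amp;#x2F; The higher stamp wins. Because max over a total order is
    &amp;#x2F;&amp;#x2F; associative, commutative, and idempotent, replicas can merge
    &amp;#x2F;&amp;#x2F; in any order, any number of times, and still converge.
    fn merge(&amp;amp;self, other: &amp;amp;Self) -&amp;gt; Self {
        if other.stamp &amp;gt; self.stamp {
            other.clone()
        } else {
            self.clone()
        }
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The derived ordering compares &lt;code&gt;physical_ms&lt;&#x2F;code&gt;, then &lt;code&gt;logical&lt;&#x2F;code&gt;, then &lt;code&gt;node_id&lt;&#x2F;code&gt;, so two stamps from different replicas are never equal and “same time” edits resolve the same way everywhere.&lt;&#x2F;p&gt;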
&lt;p&gt;The client owns its data. The server is optional. When the server exists, it persists operations and relays them. It does not arbitrate, transform, or validate beyond authentication.&lt;&#x2F;p&gt;
&lt;p&gt;This is not a niche pattern for collaborative text editors. Any application where users create and modify data can benefit. Notes, task managers, project planning tools, CMS authoring, form builders. The question is not “should this be local-first?” The question is “does this need a server, and if so, for what?”&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-the-backend-becomes&quot;&gt;What the backend becomes&lt;&#x2F;h2&gt;
&lt;p&gt;If the edge handles infrastructure concerns and business logic that depends on request context, and the client handles computation, rendering, and local state, what is left for the backend?&lt;&#x2F;p&gt;
&lt;p&gt;A persistence layer.&lt;&#x2F;p&gt;
&lt;p&gt;The backend becomes the place where data rests between sessions and syncs between devices. Not an application server.&lt;&#x2F;p&gt;
&lt;p&gt;Consider the spectrum of what “backend” looks like now:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Static sites.&lt;&#x2F;strong&gt; This site is an example. raskell.io is built with Zola. Markdown files compile to HTML at build time and deploy to edge CDN nodes. No application server. No database. No runtime process. The “backend” is a git repository and a CI pipeline. Content lives as files. Serving happens at the edge. The total monthly infrastructure cost is the price of a domain name.&lt;&#x2F;p&gt;
&lt;p&gt;This is not limited to blogs. Documentation sites, marketing pages, product landing pages, e-commerce storefronts with pre-rendered product pages. Any content that changes at author-time rather than request-time can be static. The headless CMS (Contentful, Sanity, Strapi, or just a git repo) publishes content. The static site generator builds HTML. The CDN serves it. The “backend” runs at build time, not at request time.&lt;&#x2F;p&gt;
&lt;p&gt;I take this further than most. All of my projects are CDN-first, even the ones with dedicated backends. The principle: if the backend goes down, the user should still see something useful. The static layer is the safety net.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt;, a local-first second brain app, is the purest expression of this. It is a Progressive Web App served entirely from CDN edge nodes with Service Workers handling offline capability. There is no backend server. Notes sync between devices through CRDTs when connectivity exists, but the app works fully offline. The entire “infrastructure” is a static deployment and an optional sync relay.&lt;&#x2F;p&gt;
&lt;p&gt;But the CDN-first pattern also applies to applications that have real backends. Cyanea has a Phoenix&#x2F;Elixir backend that manages datasets, user accounts, and collaboration, but the public-facing surface (landing pages, category pages, trending spaces, protocols, and datasets) is statically generated. The backend exports JSON snapshots of its database objects on a timed interval. A static site generator picks up those snapshots and rebuilds the public pages: what labs are active, which protocols are trending, which datasets were recently published. The result is a set of HTML pages sitting on a CDN that stay current without depending on the backend being up at the moment a visitor arrives.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────┐     JSON export       ┌───────────────┐
│  Cyanea Backend  │ ──── (interval) ────&amp;gt; │  Static Site  │
│  (Phoenix&amp;#x2F;BEAM)  │                       │  Generator    │
│                  │                       │  (Zola)       │
│  - datasets      │                       │               │
│  - protocols     │                       │  → CDN edge   │
│  - labs          │                       │    nodes      │
│  - spaces        │                       │               │
└──────────────────┘                       └───────────────┘
         │                                        │
         │ dynamic app                    static pages
         ▼                                        ▼
   app.cyanea.bio                          cyanea.bio
   (logged-in users,                  (public, always up,
    real-time features)                fast, no backend
                                       dependency)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is a genuinely resilient architecture. The backend can be down for maintenance, mid-deploy, or experiencing load, and the public site keeps serving. The static pages are never stale by more than one generation interval. For a site where “trending this week” is sufficient freshness, that interval can be hours. The CDN handles traffic spikes that would overwhelm a backend. The backend handles the dynamic work that requires real-time data.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Thin persistence APIs.&lt;&#x2F;strong&gt; For applications with dynamic data, the backend shrinks to a database with an API in front of it. Accept writes, serve reads, enforce schema constraints. GraphQL or REST over Postgres. No rendering. No business logic beyond data integrity. The API exists so that clients and edge workers have somewhere to store and retrieve state.&lt;&#x2F;p&gt;
&lt;p&gt;The interesting shift: even the persistence API is getting thinner. Services like Supabase, PlanetScale, and Turso expose the database directly over HTTP or WebSockets with built-in auth. Your “backend” becomes a hosted database with row-level security policies. No application code at all.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Sync relays.&lt;&#x2F;strong&gt; For local-first applications, the backend is even simpler. Accept CRDT operations from clients, persist them to durable storage, fan them out to other connected clients via WebSocket. No merge logic (the CRDT handles that). No transformation. No validation beyond authentication. The relay does not understand the data. It stores and forwards.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Event logs.&lt;&#x2F;strong&gt; Append-only storage. Clients sync by replaying events from their last known position. The log is the source of truth. Everything else (search indexes, analytics dashboards, recommendation models) is a materialized view built asynchronously. The hot path is the append. The read path is the replay. Both are simple.&lt;&#x2F;p&gt;
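&lt;p&gt;How thin this gets is easier to see in code. Here is a sketch of a relay that doubles as an event log, assuming tokio’s broadcast channel; the types are illustrative, and a &lt;code&gt;Vec&lt;&#x2F;code&gt; stands in for durable storage:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Sketch of a store-and-forward sync relay with event-log replay.
use tokio::sync::broadcast;

#[derive(Clone, Debug)]
struct Op {
    doc_id: String,
    payload: Vec&amp;lt;u8&amp;gt;, &amp;#x2F;&amp;#x2F; encoded CRDT operation; the relay never decodes it
}

struct Relay {
    log: Vec&amp;lt;Op&amp;gt;,              &amp;#x2F;&amp;#x2F; append-only log, the source of truth
    tx: broadcast::Sender&amp;lt;Op&amp;gt;, &amp;#x2F;&amp;#x2F; fan-out to currently connected clients
}

impl Relay {
    fn new() -&amp;gt; Self {
        let (tx, _rx) = broadcast::channel(1024);
        Relay { log: Vec::new(), tx }
    }

    &amp;#x2F;&amp;#x2F; Accept an operation: persist it, then relay it. No merge logic,
    &amp;#x2F;&amp;#x2F; no transformation, no validation beyond what auth did upstream.
    fn accept(&amp;amp;mut self, op: Op) {
        self.log.push(op.clone());
        let _ = self.tx.send(op); &amp;#x2F;&amp;#x2F; an Err only means no one is connected
    }

    &amp;#x2F;&amp;#x2F; A reconnecting client replays everything after the last position
    &amp;#x2F;&amp;#x2F; it saw, then subscribes for live operations.
    fn replay_from(&amp;amp;self, position: usize) -&amp;gt; (&amp;amp;[Op], broadcast::Receiver&amp;lt;Op&amp;gt;) {
        let start = position.min(self.log.len());
        (&amp;amp;self.log[start..], self.tx.subscribe())
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Everything interesting (merging, conflict resolution) already happened, or will happen, on the clients. The relay only provides durability and fan-out.&lt;&#x2F;p&gt;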
&lt;p&gt;&lt;strong&gt;Batch processors.&lt;&#x2F;strong&gt; The one place where traditional backend compute survives: jobs that require access to the full dataset. Analytics aggregation, report generation, search index building, ML model training. These run on schedules or triggers, not in the request path. They read from the event log or the database, compute, and write results back. The user never waits for them.&lt;&#x2F;p&gt;
&lt;p&gt;The common thread: the backend does not touch the hot path. User requests hit the edge and the client. The backend runs in the background, on its own schedule, when no one is waiting.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-architecture-that-makes-this-work&quot;&gt;The architecture that makes this work&lt;&#x2F;h2&gt;
&lt;p&gt;Pushing logic to the edge and the client is not free. Both environments have constraints, and ignoring them is how you build fragile systems.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;at-the-edge-bounded-resources&quot;&gt;At the edge: bounded resources&lt;&#x2F;h3&gt;
&lt;p&gt;Every operation at the edge needs explicit limits. No open-ended computations, no unbounded queues, no surprise behavior. This is not just good practice. It is existential. The edge proxy sits between the internet and your infrastructure. If it behaves unpredictably, everything behind it suffers.&lt;&#x2F;p&gt;
&lt;p&gt;Concretely:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Resource&lt;&#x2F;th&gt;&lt;th&gt;Bound&lt;&#x2F;th&gt;&lt;th&gt;Why&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Agent concurrency&lt;&#x2F;td&gt;&lt;td&gt;Per-agent semaphore (default: 100)&lt;&#x2F;td&gt;&lt;td&gt;Prevents noisy neighbor between agents&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Agent timeout&lt;&#x2F;td&gt;&lt;td&gt;100ms default&lt;&#x2F;td&gt;&lt;td&gt;Prevents latency cascade&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Connection pool&lt;&#x2F;td&gt;&lt;td&gt;Explicit max (default: 10K)&lt;&#x2F;td&gt;&lt;td&gt;Prevents file descriptor exhaustion&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Request body&lt;&#x2F;td&gt;&lt;td&gt;Streaming, not buffered&lt;&#x2F;td&gt;&lt;td&gt;Prevents memory exhaustion&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Route cache&lt;&#x2F;td&gt;&lt;td&gt;LRU with size limit&lt;&#x2F;td&gt;&lt;td&gt;Prevents unbounded growth&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Rate limit queues&lt;&#x2F;td&gt;&lt;td&gt;Bounded with max delay&lt;&#x2F;td&gt;&lt;td&gt;Prevents request pile-up&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;If you cannot articulate the bound for every resource your edge system uses, you do not have an architecture. You have an accident waiting for load.&lt;&#x2F;p&gt;
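&lt;p&gt;The two bounds that matter most, concurrency and latency, fit in a few lines. A sketch assuming tokio; the types and defaults are illustrative stand-ins, not Zentinel’s actual internals:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Sketch: bounding agent concurrency and latency at the edge.
use std::{sync::Arc, time::Duration};
use tokio::{sync::Semaphore, time::timeout};

struct Request;
struct Response;
enum AgentError { Saturated, TimedOut }

&amp;#x2F;&amp;#x2F; Stand-in for the actual call into an agent.
async fn run_agent(_req: Request) -&amp;gt; Result&amp;lt;Response, AgentError&amp;gt; {
    Ok(Response)
}

struct AgentHandle {
    permits: Arc&amp;lt;Semaphore&amp;gt;, &amp;#x2F;&amp;#x2F; e.g. Semaphore::new(100), the concurrency cap
    budget: Duration,         &amp;#x2F;&amp;#x2F; e.g. 100ms, the per-call latency budget
}

impl AgentHandle {
    async fn call(&amp;amp;self, req: Request) -&amp;gt; Result&amp;lt;Response, AgentError&amp;gt; {
        &amp;#x2F;&amp;#x2F; Fail fast when saturated instead of queueing without bound.
        let _permit = self
            .permits
            .try_acquire()
            .map_err(|_| AgentError::Saturated)?;

        &amp;#x2F;&amp;#x2F; A slow agent degrades to its configured failure mode rather
        &amp;#x2F;&amp;#x2F; than stalling every request behind it.
        match timeout(self.budget, run_agent(req)).await {
            Ok(result) =&amp;gt; result,
            Err(_elapsed) =&amp;gt; Err(AgentError::TimedOut),
        }
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;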
&lt;h3 id=&quot;on-the-client-isolation-and-sandboxing&quot;&gt;On the client: isolation and sandboxing&lt;&#x2F;h3&gt;
&lt;p&gt;The client has different constraints. Battery life, memory pressure, the user closing the tab at any moment.&lt;&#x2F;p&gt;
&lt;p&gt;WASM runs in a sandbox. No file system access, no network access, no shared memory (unless explicitly granted). This is the security model that makes client-side compute viable. Untrusted code (your own, running on someone else’s device) cannot escape the sandbox.&lt;&#x2F;p&gt;
&lt;p&gt;Web Workers run in separate threads with message-passing. No shared mutable state. No locks. No data races. The isolation is enforced by the runtime, not by programmer discipline.&lt;&#x2F;p&gt;
&lt;p&gt;Service Workers have a lifecycle managed by the browser. They can be terminated at any time to save resources. Your offline logic must handle graceful shutdown. This means: durable state in IndexedDB, idempotent sync operations, no in-memory state that cannot be reconstructed.&lt;&#x2F;p&gt;
&lt;p&gt;CRDTs provide consistency guarantees without coordination. But they are not magic. They consume memory (tombstones for deleted items, version vectors for causal ordering). They need garbage collection. They need careful schema design because not every data model maps cleanly to CRDT primitives. A counter works. A last-writer-wins register works. A rich text document with formatting, comments, and embedded media requires careful thought.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-trust-boundary&quot;&gt;The trust boundary&lt;&#x2F;h3&gt;
&lt;p&gt;Here is the part most edge-computing articles skip: trust.&lt;&#x2F;p&gt;
&lt;p&gt;If the edge handles auth, the backend trusts the edge to have done auth correctly. If the client handles business logic, the server trusts the client to have computed correctly. These are real trust boundaries with real failure modes.&lt;&#x2F;p&gt;
&lt;p&gt;At the edge, trust is earned through:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Failure isolation.&lt;&#x2F;strong&gt; Agent crashes do not take down the proxy. Bad config is validated before activation.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Observability.&lt;&#x2F;strong&gt; Every decision is logged, metered, and traceable. If the WAF blocked a request, you can see exactly which rules fired and why.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Bounded behavior.&lt;&#x2F;strong&gt; No surprise modes. Every resource has explicit limits. Every failure mode is configured, not assumed.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;On the client, trust is conditional:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Never trust the client for security decisions.&lt;&#x2F;strong&gt; Validate at the edge or the backend. Client-side checks are UX, not security.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Trust the client for its own data.&lt;&#x2F;strong&gt; If the user is editing their own document, the client is authoritative. CRDTs handle consistency. The server persists, it does not arbitrate.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Verify at the boundary.&lt;&#x2F;strong&gt; When client data syncs to the server, validate schema and authorization. Trust the merge, verify the input.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;when-not-to-do-this&quot;&gt;When not to do this&lt;&#x2F;h2&gt;
&lt;p&gt;Not everything belongs at the edge or on the client. Here is what stays in the backend:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Multi-service transactions.&lt;&#x2F;strong&gt; If an operation needs to read from three databases, check inventory, charge a payment, and send a notification, that is a backend workflow. Distributed transactions need coordination, and coordination needs a central authority.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Heavy data joins.&lt;&#x2F;strong&gt; If your query joins six tables with complex filters and aggregations, it runs next to the database, not at an edge node 200ms away from the data.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Regulatory requirements.&lt;&#x2F;strong&gt; Some industries mandate that data processing happens in specific locations, on specific infrastructure, with specific audit trails. Edge deployment may not satisfy these constraints.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Small teams with simple needs.&lt;&#x2F;strong&gt; If you have one backend, ten users, and no latency problems, this architecture is overhead. A Django app behind nginx is fine. Optimize when you have a reason to optimize, not before.&lt;&#x2F;p&gt;
&lt;p&gt;The edge handles cross-cutting concerns and request-context computation. The client handles local state and user-facing compute. The backend handles coordination, persistence, and anything that needs the full dataset. Know which is which.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;where-this-is-going&quot;&gt;Where this is going&lt;&#x2F;h2&gt;
&lt;p&gt;Five years ago, the stack was: browser (thin) renders server-generated HTML, backend (fat) runs everything, database stores state. The mental model was request&#x2F;response, and the backend was the center of gravity.&lt;&#x2F;p&gt;
&lt;p&gt;The stack now:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────────────────────────────────────────┐
│ Client                                               │
│ WASM │ WebGPU │ Web Workers │ Service Workers │ CRDT │
│ (compute, render, offline, local state)              │
└────────────────────┬─────────────────────────────────┘
                     │
┌────────────────────┴─────────────────────────────────┐
│ Edge                                                 │
│ Proxy │ Workers │ Containers │ KV │ Durable Objects  │
│ (auth, WAF, routing, SSR, API aggregation, policy)   │
└────────────────────┬─────────────────────────────────┘
                     │
┌────────────────────┴─────────────────────────────────┐
│ Backend                                              │
│ Database │ Sync relay │ Event log │ Batch processing │
│ (persistence, coordination, async compute)           │
└──────────────────────────────────────────────────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The client is fat. The edge is fat. The backend is thin. The center of gravity moved to both ends simultaneously.&lt;&#x2F;p&gt;
&lt;p&gt;Every year, this accelerates. Models get smaller and run on consumer GPUs. WASM runtimes get faster and gain more system APIs through WASI. Edge platforms add durable storage, queues, and cron triggers. CRDTs mature from academic curiosities to production libraries. SQLite-in-the-browser goes from experiment to default architecture for offline-capable apps.&lt;&#x2F;p&gt;
&lt;p&gt;The backend will not disappear. Data needs to live somewhere durable, and cross-device sync needs a relay. Coordination problems need a central authority. Batch processing needs access to the full dataset. But the backend’s role is narrowing to exactly these things. It is becoming infrastructure, not application. Plumbing, not logic.&lt;&#x2F;p&gt;
&lt;p&gt;I find myself building systems where the most interesting engineering happens at the boundaries. A reverse proxy (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt;) that inspects 912K requests per second through 285 WAF rules, authenticates with sub-millisecond latency, and routes with crash-isolated agents. A bioinformatics platform (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt;) where the browser runs the computation and the backend exports JSON for statically generated pages. A distributed compute platform (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt;) where users’ browsers are the compute fleet via WASM and WebGPU. A note-taking app (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt;) that works fully offline with CRDTs and never touches a server for reads. Between all of them, a database, a sync relay, or just a CDN. Necessary and boring.&lt;&#x2F;p&gt;
&lt;p&gt;The backend is not dead. It is just not where the interesting work happens anymore.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;edge-platforms-and-proxies&quot;&gt;Edge platforms and proxies&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;workers.cloudflare.com&#x2F;&quot;&gt;Cloudflare Workers&lt;&#x2F;a&gt; - V8 isolate-based edge compute&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;deno.com&#x2F;deploy&quot;&gt;Deno Deploy&lt;&#x2F;a&gt; - Edge runtime built on the Deno JavaScript runtime&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.fastly.com&#x2F;products&#x2F;edge-compute&quot;&gt;Fastly Compute&lt;&#x2F;a&gt; - Wasm-based edge compute platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;vercel.com&#x2F;docs&#x2F;functions&#x2F;edge-functions&quot;&gt;Vercel Edge Functions&lt;&#x2F;a&gt; - Edge compute integrated with Next.js&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;fly.io&quot;&gt;fly.io&lt;&#x2F;a&gt; - Container-based edge deployment platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; - Cloudflare’s Rust framework for programmable proxies&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Security-first reverse proxy with crash-isolated agents&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;client-side-compute&quot;&gt;Client-side compute&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;webassembly.org&#x2F;&quot;&gt;WebAssembly&lt;&#x2F;a&gt; - Portable binary instruction format for the web&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wasi.dev&#x2F;&quot;&gt;WASI&lt;&#x2F;a&gt; - WebAssembly System Interface for running Wasm outside the browser&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.w3.org&#x2F;TR&#x2F;webgpu&#x2F;&quot;&gt;WebGPU specification&lt;&#x2F;a&gt; - W3C standard for GPU compute in the browser&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developers.google.com&#x2F;mediapipe&quot;&gt;MediaPipe&lt;&#x2F;a&gt; - ML inference framework running client-side via Wasm and WebGPU&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;sqlite.org&#x2F;wasm&quot;&gt;SQLite Wasm&lt;&#x2F;a&gt; - Official SQLite build targeting WebAssembly&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;sql.js.org&#x2F;&quot;&gt;sql.js&lt;&#x2F;a&gt; - SQLite compiled to Wasm via Emscripten&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;File_System_API&#x2F;Origin_private_file_system&quot;&gt;Origin Private File System&lt;&#x2F;a&gt; - MDN reference for persistent browser storage&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;Service_Worker_API&quot;&gt;Service Worker API&lt;&#x2F;a&gt; - MDN reference for offline-capable web apps&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;Web_Workers_API&quot;&gt;Web Workers API&lt;&#x2F;a&gt; - MDN reference for background threads in the browser&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;crdts-and-local-first&quot;&gt;CRDTs and local-first&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=x7drE24geUw&quot;&gt;CRDTs: The Hard Parts&lt;&#x2F;a&gt; - Martin Kleppmann’s talk on practical CRDT challenges&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.inkandswitch.com&#x2F;local-first&#x2F;&quot;&gt;Local-first software&lt;&#x2F;a&gt; - Ink and Switch research paper on local-first architectures&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;automerge.org&#x2F;&quot;&gt;Automerge&lt;&#x2F;a&gt; - CRDT library for collaborative applications&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;yjs.dev&#x2F;&quot;&gt;Yjs&lt;&#x2F;a&gt; - High-performance CRDT framework for the web&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;hal.inria.fr&#x2F;inria-00555588&#x2F;document&quot;&gt;A comprehensive study of CRDTs&lt;&#x2F;a&gt; - Shapiro et al., the foundational survey paper&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;databases-at-the-edge&quot;&gt;Databases at the edge&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;turso.tech&#x2F;&quot;&gt;Turso&lt;&#x2F;a&gt; - SQLite-compatible database with edge replicas&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;neon.tech&#x2F;&quot;&gt;Neon&lt;&#x2F;a&gt; - Serverless PostgreSQL with branching&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;superfly&#x2F;litefs&quot;&gt;LiteFS&lt;&#x2F;a&gt; - Distributed SQLite replication by Fly.io&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.cockroachlabs.com&#x2F;&quot;&gt;CockroachDB&lt;&#x2F;a&gt; - Distributed SQL database designed for multi-region&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;supabase.com&#x2F;&quot;&gt;Supabase&lt;&#x2F;a&gt; - Open-source Firebase alternative built on PostgreSQL&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;projects-referenced&quot;&gt;Projects referenced&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt; - Bioinformatics platform using Wasm and WebGPU for client-side compute&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; - Distributed compute platform with browser-based Wasm nodes&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt; - Local-first second brain app with CRDT sync&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt; - Schema-aware CRDT engine for deterministic merge&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Looking back on 2025</title>
          <pubDate>Wed, 31 Dec 2025 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/looking-back-on-2025/</link>
          <guid>https://raskell.io/articles/looking-back-on-2025/</guid>
          <description xml:base="https://raskell.io/articles/looking-back-on-2025/">&lt;p&gt;I spent part of this year on the shores of Okinawa. The water there is something else entirely, this impossible azure that shifts to turquoise in the shallows, so clear you can see the coral formations from the surface. I found myself thinking about systems while I was there, the way you do when you’re floating in salt water with nothing pressing to attend to.&lt;&#x2F;p&gt;
&lt;p&gt;Between swims, I read Tim Berners-Lee’s “This is for everyone.” I’ve been building web software for over a decade now, and I thought I understood what the web was. But reading TBL’s words while watching that reef ecosystem do its thing, thousands of species in constant exchange, no central coordinator, just emergent complexity from simple rules, something shifted in how I saw it all.&lt;&#x2F;p&gt;
&lt;p&gt;The web TBL imagined was supposed to work like that reef. A commons. Many small nodes, each doing their own thing, connected through open protocols. Information flowing freely. The beauty of it wasn’t in any single node but in the connections between them, the way the whole became more than the sum of its parts. The same principle that makes a reef resilient makes a network powerful: diversity, redundancy, local adaptation.&lt;&#x2F;p&gt;
&lt;p&gt;What we built instead looks more like industrial aquaculture. Five platforms. Algorithmic monoculture. Content optimized for engagement metrics rather than usefulness. We took a system designed for decentralization and built the most centralized information infrastructure in human history.&lt;&#x2F;p&gt;
&lt;p&gt;I keep thinking about how that happened. The web itself never changed. HTTP still works the same way, HTML still does what it always did. What changed was the economics. Publishing became free, but being &lt;em&gt;found&lt;&#x2F;em&gt; became expensive. The platforms positioned themselves as the gatekeepers of attention, and suddenly you couldn’t reach people without paying the toll, whether in ad spend, in algorithmic compliance, or in the slow erosion that came from doing whatever it took to game SEO.&lt;&#x2F;p&gt;
&lt;p&gt;The thing about monocultures is they’re efficient right up until they’re not. A reef can lose a species and adapt. A monoculture gets one disease and collapses. We’ve been watching the web’s monoculture show stress fractures for years. The enshittification of platforms, the SEO content farms drowning out signal with noise, the way social media stopped being social and started being a feed of engagement-optimized content from strangers.&lt;&#x2F;p&gt;
&lt;p&gt;Then 2025 happened, and AI started breaking things in interesting ways.&lt;&#x2F;p&gt;
&lt;p&gt;The obvious take is that AI makes the content problem worse. And superficially, that’s true. If you thought SEO spam was bad before, wait until you see what happens when generating ten thousand pages of plausible-sounding garbage costs essentially nothing. The content farms went into overdrive. Social platforms filled with synthetic engagement.&lt;&#x2F;p&gt;
&lt;p&gt;But here’s the thing I keep coming back to: maybe that’s the fever that breaks the infection.&lt;&#x2F;p&gt;
&lt;p&gt;The old economics of the web depended on a particular scarcity. Human attention is finite, and the platforms controlled access to it. You wanted eyeballs, you played their game. SEO worked because Google was the gateway and you could optimize for what Google wanted. Platform distribution mattered because that’s where the people were.&lt;&#x2F;p&gt;
&lt;p&gt;AI disrupts this in ways that I think are genuinely interesting. When an AI assistant can synthesize information from across the web and deliver it directly to the user, the value of ranking first on Google diminishes. Why click through to a content farm when the answer is already in front of you? When AI agents can find and surface relevant content directly, you don’t need to be on the platform where the eyeballs gather. The middleman’s leverage starts to evaporate.&lt;&#x2F;p&gt;
&lt;p&gt;And crucially: when everyone can generate infinite content at zero marginal cost, content quantity becomes worthless. What matters is provenance. Accuracy. Usefulness. The things that are actually hard. The things that require a human perspective, or at least require &lt;em&gt;being right&lt;&#x2F;em&gt; in ways that matter.&lt;&#x2F;p&gt;
&lt;p&gt;I find myself unexpectedly optimistic about what comes next.&lt;&#x2F;p&gt;
&lt;p&gt;If AI breaks the distribution stranglehold that platforms have, the economics of the web could flip in interesting directions. The old model needed scale because reaching people was expensive. But if AI handles discovery, finding relevant content and bringing it to users, then maybe you don’t need scale anymore. Maybe small becomes viable again.&lt;&#x2F;p&gt;
&lt;p&gt;Think about what this means concretely. A static site costs nearly nothing to run. No databases to scale, no servers to babysit, just files sitting on edge nodes around the world. If you don’t need to capture user data for ad-driven personalization, you don’t need the complexity of the surveillance stack. If you don’t need platform distribution, you don’t need to play platform games.&lt;&#x2F;p&gt;
&lt;p&gt;There’s another piece to this that I think most people are missing: edge computing changes what personalization can mean. The conventional wisdom is that personalization requires surveillance, that you need to know everything about a user to show them relevant content. But that’s only true if personalization happens in a centralized database somewhere. If personalization happens at the edge, at the moment of request, you can adapt content to context without ever needing to know who the user is. The edge function doesn’t need a profile. It just needs to know what was asked for and what context it’s being asked in.&lt;&#x2F;p&gt;
&lt;p&gt;This is the architecture I keep thinking about: static content at the origin, edge functions that adapt it anonymously, AI agents that find and surface it based on actual relevance rather than SEO gaming. No surveillance required. No platform dependency. No scaling costs that force you into growth-at-all-costs mode.&lt;&#x2F;p&gt;
&lt;p&gt;It looks more like a reef than a fish farm.&lt;&#x2F;p&gt;
&lt;p&gt;I don’t want to oversell this. The transition, if it happens, won’t be clean. The platforms aren’t going to quietly cede control. The incentives that built the current web are still operating. And AI itself could go in directions that make things worse rather than better. There are plenty of dystopian paths from here.&lt;&#x2F;p&gt;
&lt;p&gt;But when I think about what I want to build toward, it’s that reef model. Many small, specialized nodes. Interconnected through open protocols. Resilient because distributed. Sustainable because the economics work at small scale.&lt;&#x2F;p&gt;
&lt;p&gt;This site is part of that bet. Static content, no tracking, no platform dependencies. The tools I’m working on (Zentinel, Sango, Ushio) are all about making edge infrastructure more accessible, making it easier to build and operate systems that are distributed and independent.&lt;&#x2F;p&gt;
&lt;p&gt;2025 was the year AI started breaking the old model. I don’t know exactly what grows in its place. But floating in that Okinawan water, watching the reef do what reefs do, I got a sense of what healthy systems look like. Diverse. Interconnected. Resilient. Not optimized for any single metric, but somehow working anyway.&lt;&#x2F;p&gt;
&lt;p&gt;That’s what I’m betting on.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.w3.org&#x2F;People&#x2F;Berners-Lee&#x2F;&quot;&gt;Tim Berners-Lee&lt;&#x2F;a&gt; - Creator of the World Wide Web and author of “This is for everyone”&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Enshittification&quot;&gt;Enshittification&lt;&#x2F;a&gt; - Cory Doctorow’s term for the pattern of platform decay&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;indieweb.org&#x2F;&quot;&gt;IndieWeb&lt;&#x2F;a&gt; - Community building the independent web with open standards&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.getzola.org&#x2F;&quot;&gt;Zola&lt;&#x2F;a&gt; - Static site generator used to build this site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Security-first reverse proxy for the open web&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;sango&quot;&gt;Sango&lt;&#x2F;a&gt; - Edge diagnostics CLI for TLS, HTTP, and security header analysis&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;ushio&quot;&gt;Ushio&lt;&#x2F;a&gt; - Deterministic edge traffic replay tool&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Mise ate my Makefile</title>
          <pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/mise-ate-my-makefile/</link>
          <guid>https://raskell.io/articles/mise-ate-my-makefile/</guid>
          <description xml:base="https://raskell.io/articles/mise-ate-my-makefile/">&lt;p&gt;I maintain around forty repositories across four GitHub organizations. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; alone accounts for over thirty: the core proxy, a Rust SDK, and a growing collection of agents for WAF inspection, auth, rate limiting, GraphQL security, and a dozen other edge concerns. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; spans an Elixir coordinator, a Rust node agent, Python and TypeScript SDKs, mobile agents in Kotlin and Swift, and infrastructure-as-code repos. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt; is Elixir with Rust NIFs and a separate Rust bioinformatics library. Then there are the standalone tools: &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt; (Rust CRDT engine), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;sango&quot;&gt;Sango&lt;&#x2F;a&gt; (Rust edge diagnostics CLI), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; (Rust agentic orchestrator), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;vela&quot;&gt;Vela&lt;&#x2F;a&gt; (Rust bare-metal deployment), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;refrakt&quot;&gt;Refrakt&lt;&#x2F;a&gt; (Gleam web framework), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt; (Svelte local-first app), and this site you are reading (Zola).&lt;&#x2F;p&gt;
&lt;p&gt;The languages span Rust, Elixir, Gleam, Python, TypeScript, Kotlin, Swift, and whatever shell scripts accumulated over the years. Every project needs a toolchain. Most need task automation. All of them need to be approachable for a contributor who clones the repo for the first time.&lt;&#x2F;p&gt;
&lt;p&gt;The Makefile approach was breaking down. So was everything else I tried.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-was-failing&quot;&gt;What was failing&lt;&#x2F;h2&gt;
&lt;p&gt;The standard setup for most of my Rust projects was a Makefile with targets for &lt;code&gt;build&lt;&#x2F;code&gt;, &lt;code&gt;test&lt;&#x2F;code&gt;, &lt;code&gt;clippy&lt;&#x2F;code&gt;, &lt;code&gt;fmt&lt;&#x2F;code&gt;, and &lt;code&gt;release&lt;&#x2F;code&gt;. Simple enough for one repo. The problem surfaces when you maintain thirty of them.&lt;&#x2F;p&gt;
&lt;p&gt;GNU Make and BSD Make disagree on syntax in ways that cause silent failures. A Makefile that works on my Linux CI runner breaks on a contributor’s macOS laptop because of a conditional or a shell invocation difference. The fix is always “use GNU make,” but that means documenting it, adding a check, and fielding issues from people who forget.&lt;&#x2F;p&gt;
&lt;p&gt;Worse, Makefiles cannot declare tool dependencies. A Rust project needs a specific Rust version, maybe &lt;code&gt;protoc&lt;&#x2F;code&gt; for gRPC, maybe &lt;code&gt;cargo-watch&lt;&#x2F;code&gt; for development convenience. The Makefile assumes these tools exist. When they do not, the developer gets a cryptic error five minutes into their first build.&lt;&#x2F;p&gt;
&lt;p&gt;So projects accumulated scaffolding:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;.rust-version
.tool-versions
Makefile
scripts&amp;#x2F;setup.sh
scripts&amp;#x2F;ci.sh
scripts&amp;#x2F;release.sh
.envrc
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Seven files to express what amounts to: “this project uses Rust 1.83, needs protoc, and has five things you can run.” Multiply that by forty repos and you have a maintenance surface that nobody wants to touch. The &lt;code&gt;scripts&#x2F;&lt;&#x2F;code&gt; folder in particular had a way of growing silently. Someone adds a helper. Someone else copies it from another project with modifications. Six months later you have three slightly different versions of the same release script across three orgs.&lt;&#x2F;p&gt;
&lt;p&gt;The Elixir projects had it worse. Elixir needs Erlang&#x2F;OTP at a specific version, then Elixir itself at a matching version, then Node for asset compilation in Phoenix, then possibly Rust for NIFs (Cyanea compiles Rust bioinformatics code into the BEAM release). Four tool dependencies before you write a line of application code. &lt;code&gt;asdf&lt;&#x2F;code&gt; handled the version management, but slowly and without task automation, so you still needed a Makefile on top.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-not-nix&quot;&gt;Why not Nix&lt;&#x2F;h2&gt;
&lt;p&gt;I gave Nix a serious try. The promise is appealing: declare your entire development environment in a single file, get reproducible builds, never worry about system state. The Nix shell concept is genuinely elegant.&lt;&#x2F;p&gt;
&lt;p&gt;In practice, the cost was too high for my use case. Nix’s learning curve is steep even for experienced engineers. The language is its own thing. The documentation assumes you already understand the Nix store model. When something breaks, the error messages point at derivation hashes, not at the thing you actually did wrong.&lt;&#x2F;p&gt;
&lt;p&gt;The bigger issue was onboarding. If a contributor wants to fix a typo in a Zentinel agent’s README, asking them to install Nix and understand flakes is a non-starter. The tool that manages your development environment should not itself become a project you have to learn. Nix solves a harder problem than I have. I do not need bit-for-bit reproducible builds across machines. I need “install Rust 1.83 and run the tests.”&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-not-asdf&quot;&gt;Why not asdf&lt;&#x2F;h2&gt;
&lt;p&gt;asdf was my default for years. It handled the version management problem well enough. The plugin system meant I could manage Rust, Elixir, Erlang, Node, and Python versions with a single &lt;code&gt;.tool-versions&lt;&#x2F;code&gt; file.&lt;&#x2F;p&gt;
&lt;p&gt;Three things pushed me away.&lt;&#x2F;p&gt;
&lt;p&gt;First, speed. asdf is shell scripts. Every invocation pays the cost of sourcing plugins, resolving versions, and shimming binaries. On a fast machine you barely notice. On CI, where you run &lt;code&gt;asdf install&lt;&#x2F;code&gt; in a fresh environment, the overhead adds up. Mise is a compiled Rust binary. It is meaningfully faster at both installation and version resolution.&lt;&#x2F;p&gt;
&lt;p&gt;Second, no task automation. asdf manages tool versions. That is all it does. You still need Make or a scripts folder for project tasks. That means two tools, two configuration surfaces, two things to document.&lt;&#x2F;p&gt;
&lt;p&gt;Third, plugin quality varied. The core plugins for Node and Ruby were solid. Plugins for less mainstream tools could be stale, broken, or missing. Mise started as an asdf-compatible rewrite and inherited the plugin ecosystem, but its built-in backends for common tools (Rust, Node, Python, Go, Erlang, Elixir) are faster and more reliable than shelling out to plugins.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-mise-actually-does&quot;&gt;What mise actually does&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;mise.jdx.dev&#x2F;&quot;&gt;Mise&lt;&#x2F;a&gt; is a single Rust binary that combines tool version management and task running into one configuration file per project. It does asdf’s job and Make’s job in a single tool.&lt;&#x2F;p&gt;
&lt;p&gt;Here is this site’s configuration. The entire thing:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;# mise.toml (raskell.io)
[tools]
zola = &amp;quot;0.19&amp;quot;

[env]
_.file = &amp;quot;.env&amp;quot;

[tasks.serve]
description = &amp;quot;Start the Zola development server&amp;quot;
run = &amp;quot;zola serve&amp;quot;

[tasks.build]
description = &amp;quot;Build the site for production&amp;quot;
run = &amp;quot;zola build&amp;quot;

[tasks.check]
description = &amp;quot;Check the site for errors without building&amp;quot;
run = &amp;quot;zola check&amp;quot;

[tasks.new]
description = &amp;quot;Create a new article&amp;quot;
run = &amp;quot;&amp;quot;&amp;quot;
#!&amp;#x2F;usr&amp;#x2F;bin&amp;#x2F;env bash
if [ -z &amp;quot;$1&amp;quot; ]; then
  echo &amp;quot;Usage: mise run new &amp;lt;article-slug&amp;gt;&amp;quot;
  exit 1
fi
SLUG=&amp;quot;$1&amp;quot;
DATE=$(date +%Y-%m-%d)
FILE=&amp;quot;content&amp;#x2F;articles&amp;#x2F;${SLUG}.md&amp;quot;
cat &amp;gt; &amp;quot;$FILE&amp;quot; &amp;lt;&amp;lt; ARTICLE
+++
title = &amp;quot;&amp;quot;
date = ${DATE}
description = &amp;quot;&amp;quot;
[taxonomies]
tags = []
categories = []
[extra]
author = &amp;quot;Raffael&amp;quot;
+++
ARTICLE
echo &amp;quot;Created $FILE&amp;quot;
&amp;quot;&amp;quot;&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;One file. Declares the tool (Zola 0.19), loads environment variables, and defines every task a contributor needs. &lt;code&gt;mise install&lt;&#x2F;code&gt; sets up the toolchain. &lt;code&gt;mise tasks&lt;&#x2F;code&gt; shows what is available. &lt;code&gt;mise run serve&lt;&#x2F;code&gt; starts the dev server. No Makefile. No scripts folder. No documentation page explaining how to get Zola at the right version.&lt;&#x2F;p&gt;
&lt;p&gt;For a Rust project like &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; (the agentic orchestrator), the configuration is larger but follows the same pattern:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;# .mise.toml (shiioo)
[tools]
rust = &amp;quot;latest&amp;quot;

[env]
RUST_LOG = &amp;quot;info&amp;quot;
RUST_BACKTRACE = &amp;quot;1&amp;quot;
_.path = [&amp;quot;.&amp;#x2F;target&amp;#x2F;release&amp;quot;, &amp;quot;.&amp;#x2F;target&amp;#x2F;debug&amp;quot;]

[tasks.build]
description = &amp;quot;Build all crates in release mode&amp;quot;
run = &amp;quot;cargo build --release&amp;quot;

[tasks.test]
description = &amp;quot;Run all tests&amp;quot;
run = &amp;quot;cargo test&amp;quot;

[tasks.clippy]
description = &amp;quot;Run clippy lints&amp;quot;
run = &amp;quot;cargo clippy --all-targets -- -D warnings&amp;quot;

[tasks.fmt]
description = &amp;quot;Format code with rustfmt&amp;quot;
run = &amp;quot;cargo fmt --all&amp;quot;

[tasks.fmt-check]
description = &amp;quot;Check formatting without modifying files&amp;quot;
run = &amp;quot;cargo fmt --all -- --check&amp;quot;

[tasks.check]
description = &amp;quot;Type-check all crates&amp;quot;
run = &amp;quot;cargo check --all-targets&amp;quot;

[tasks.ci]
description = &amp;quot;CI pipeline: format check, clippy, test&amp;quot;
depends = [&amp;quot;fmt-check&amp;quot;, &amp;quot;clippy&amp;quot;, &amp;quot;test&amp;quot;]

[tasks.dev]
description = &amp;quot;Full development build and run&amp;quot;
depends = [&amp;quot;fmt&amp;quot;, &amp;quot;check&amp;quot;, &amp;quot;test&amp;quot;]
run = &amp;quot;cargo run -p shiioo-server&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The &lt;code&gt;depends&lt;&#x2F;code&gt; key is where mise replaces the one thing Make was genuinely good at: task dependency ordering. &lt;code&gt;mise run ci&lt;&#x2F;code&gt; runs format checking, then clippy, then tests, in sequence. If clippy fails, tests do not run. It is not as expressive as Make’s file-based dependency graph, but for project automation tasks (as opposed to build tasks, which cargo or mix handle), it covers what I actually need.&lt;&#x2F;p&gt;
&lt;p&gt;For a multi-language project like Cyanea, the value is even clearer. The Elixir app needs Erlang, Elixir, Node, and Rust. One &lt;code&gt;[tools]&lt;&#x2F;code&gt; section pins all four. One &lt;code&gt;mise install&lt;&#x2F;code&gt; gets a contributor from zero to a working environment. Without mise, that setup involved installing asdf, adding four plugins, running &lt;code&gt;asdf install&lt;&#x2F;code&gt;, then installing direnv for environment variables, then reading the Makefile to figure out how to run things. With mise, it is two commands: &lt;code&gt;mise install&lt;&#x2F;code&gt; and &lt;code&gt;mise run dev&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
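&lt;p&gt;Roughly, the pinned section looks like this (the version numbers here are illustrative, not Cyanea’s actual pins):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;# mise.toml (sketch of a multi-runtime [tools] section)
[tools]
erlang = &amp;quot;27.1&amp;quot;
elixir = &amp;quot;1.17&amp;quot;
node = &amp;quot;22&amp;quot;
rust = &amp;quot;1.83&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;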
&lt;h2 id=&quot;the-cross-project-pattern&quot;&gt;The cross-project pattern&lt;&#x2F;h2&gt;
&lt;p&gt;The real payoff is not in any single project. It is the consistency across all of them.&lt;&#x2F;p&gt;
&lt;p&gt;Every repo in every org follows the same contract:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Clone the repo&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;mise install&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;mise tasks&lt;&#x2F;code&gt; to see what is available&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;mise run dev&lt;&#x2F;code&gt; or &lt;code&gt;mise run test&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;That is it. Whether the project is a Rust reverse proxy with thirty modules, an Elixir Phoenix application with LiveView and a NATS integration, a Gleam web framework, or a static site built with Zola, the entry point is identical. The person cloning the repo does not need to know which build system the project uses internally. They do not need to read a CONTRIBUTING.md to find out whether it is &lt;code&gt;make test&lt;&#x2F;code&gt; or &lt;code&gt;cargo test&lt;&#x2F;code&gt; or &lt;code&gt;mix test&lt;&#x2F;code&gt;. It is always &lt;code&gt;mise run test&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;This matters more than it sounds. When you maintain projects across four orgs and multiple languages, the cognitive overhead per context switch is the actual bottleneck. I work on Zentinel (Rust) in the morning, switch to Archipelag (Elixir) after lunch, then fix something on this site (Zola) in the evening. Without a consistent interface, each switch means recalling which project uses which conventions. With mise, the interface is always the same. The implementation behind &lt;code&gt;mise run test&lt;&#x2F;code&gt; differs (cargo, mix, zola check), but I do not care about that. I type the same command and the right thing happens.&lt;&#x2F;p&gt;
&lt;p&gt;For new contributors, the effect is more pronounced. Zentinel’s agent ecosystem has over twenty Rust repos. A contributor who submits a PR to the WAF agent and then wants to help with the auth agent does not need to learn a new setup process. Same structure, same task names, same workflow. The consistency compounds.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-mise-handles-that-make-does-not&quot;&gt;What mise handles that Make does not&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Environment variables.&lt;&#x2F;strong&gt; Mise loads environment from the config file or from &lt;code&gt;.env&lt;&#x2F;code&gt; files, scoped to the project directory. When I &lt;code&gt;cd&lt;&#x2F;code&gt; into a project, the right environment is active. When I leave, it deactivates. No direnv, no &lt;code&gt;.envrc&lt;&#x2F;code&gt;, no &lt;code&gt;source .env&lt;&#x2F;code&gt; in every shell session.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Tool installation.&lt;&#x2F;strong&gt; &lt;code&gt;mise install&lt;&#x2F;code&gt; in a fresh clone gets every tool the project needs at the exact specified version. Make cannot do this. Make assumes the tools exist. That assumption breaks on new machines, in CI, and for every new contributor.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Task discovery.&lt;&#x2F;strong&gt; &lt;code&gt;mise tasks&lt;&#x2F;code&gt; lists every available task with its description. Make has &lt;code&gt;make help&lt;&#x2F;code&gt; patterns, but those are conventions, not built-in features. With mise, discoverability is the default.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;File-based tasks.&lt;&#x2F;strong&gt; Any executable file in &lt;code&gt;.mise&#x2F;tasks&#x2F;&lt;&#x2F;code&gt; becomes a task automatically. No registration, no config entry needed. For tasks that outgrow a one-liner in TOML but do not warrant a standalone script in &lt;code&gt;scripts&#x2F;&lt;&#x2F;code&gt;, this is the right middle ground. The task is discoverable through &lt;code&gt;mise tasks&lt;&#x2F;code&gt; but lives as a normal shell script you can test independently.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-breaks&quot;&gt;What breaks&lt;&#x2F;h2&gt;
&lt;p&gt;Mise is not perfect. Honest assessment after running it across forty repos:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Dynamic dependencies.&lt;&#x2F;strong&gt; Make can express “rebuild this if that file changed.” Mise tasks are imperative: they run or they do not. If you need file-level dependency tracking, you still need a build system (cargo, mix, webpack). Mise orchestrates tasks. It does not replace the build tool.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Ecosystem maturity.&lt;&#x2F;strong&gt; Mise is younger than Make and asdf. The documentation is good but not exhaustive. Some features (like hooks and watch mode) are recent additions. The pace of development is fast, which means features arrive quickly but occasionally change between minor versions.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Team familiarity.&lt;&#x2F;strong&gt; Make is universal. Every engineer has encountered a Makefile. Mise is still relatively unknown. Introducing it to a team requires a short pitch, but the pitch is easy: “it is Make plus asdf in one tool, configured in TOML.”&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Complex shell tasks.&lt;&#x2F;strong&gt; When a task grows beyond a few lines, the inline TOML string syntax gets awkward. The workaround is file-based tasks in &lt;code&gt;.mise&#x2F;tasks&#x2F;&lt;&#x2F;code&gt;, which works well but means the task definition lives in two places (TOML for metadata and task list, shell file for implementation).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-migration&quot;&gt;The migration&lt;&#x2F;h2&gt;
&lt;p&gt;If you are moving an existing project, here is the approach I settled on after migrating across all four orgs:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Add a &lt;code&gt;mise.toml&lt;&#x2F;code&gt; (or &lt;code&gt;.mise.toml&lt;&#x2F;code&gt;) at the project root. Start with just &lt;code&gt;[tools]&lt;&#x2F;code&gt; to declare the required versions.&lt;&#x2F;li&gt;
&lt;li&gt;Move the most-used Make targets to &lt;code&gt;[tasks]&lt;&#x2F;code&gt; one at a time. Keep the Makefile around until everything is ported.&lt;&#x2F;li&gt;
&lt;li&gt;Add &lt;code&gt;[env]&lt;&#x2F;code&gt; entries to replace &lt;code&gt;.envrc&lt;&#x2F;code&gt; or &lt;code&gt;.env.example&lt;&#x2F;code&gt; files.&lt;&#x2F;li&gt;
&lt;li&gt;Move standalone scripts from &lt;code&gt;scripts&#x2F;&lt;&#x2F;code&gt; to &lt;code&gt;.mise&#x2F;tasks&#x2F;&lt;&#x2F;code&gt; as file-based tasks.&lt;&#x2F;li&gt;
&lt;li&gt;Delete the Makefile last.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Do not try to migrate everything at once. Start with the three tasks developers use daily (usually &lt;code&gt;dev&lt;&#x2F;code&gt;, &lt;code&gt;test&lt;&#x2F;code&gt;, and &lt;code&gt;build&lt;&#x2F;code&gt;). The rest can move incrementally. I also settled on a few naming conventions that help across projects: prefix task names by domain, as in &lt;code&gt;db-reset&lt;&#x2F;code&gt;, &lt;code&gt;cache-clear&lt;&#x2F;code&gt;, and &lt;code&gt;test-unit&lt;&#x2F;code&gt;. Consistent naming makes task discovery predictable even before you run &lt;code&gt;mise tasks&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-bottom-line&quot;&gt;The bottom line&lt;&#x2F;h2&gt;
&lt;p&gt;Mise is not a revolutionary tool. It does not do anything that was previously impossible. You could always install the right Rust version, write a Makefile, set up direnv, and maintain a scripts folder. What mise does is collapse all of that into a single file that is readable, portable, and consistent.&lt;&#x2F;p&gt;
&lt;p&gt;The compound effect is what matters. Forty repositories, four organizations, six languages, one pattern. Clone, install, run. No guessing which build system this particular project uses. No debugging a Makefile that works on Linux but breaks on macOS. No explaining to a contributor that they need asdf plus three plugins plus direnv plus GNU make before they can run the tests.&lt;&#x2F;p&gt;
&lt;p&gt;Every new project starts with a &lt;code&gt;mise.toml&lt;&#x2F;code&gt;. Setup takes two commands instead of a page of instructions. Contributors do not message me asking how to run things. They run &lt;code&gt;mise tasks&lt;&#x2F;code&gt; and figure it out.&lt;&#x2F;p&gt;
&lt;p&gt;That is the tool working.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;mise.jdx.dev&#x2F;&quot;&gt;mise&lt;&#x2F;a&gt; - Official documentation and installation guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;jdx&#x2F;mise&quot;&gt;mise source code&lt;&#x2F;a&gt; - GitHub repository and issue tracker&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;asdf-vm.com&#x2F;&quot;&gt;asdf&lt;&#x2F;a&gt; - The version manager mise was originally inspired by&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;nixos.org&#x2F;&quot;&gt;Nix&lt;&#x2F;a&gt; - Reproducible builds and development environments&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.gnu.org&#x2F;software&#x2F;make&#x2F;&quot;&gt;GNU Make&lt;&#x2F;a&gt; - The build tool mise replaces for task automation&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;toml.io&#x2F;&quot;&gt;TOML specification&lt;&#x2F;a&gt; - The configuration format mise uses&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;direnv.net&#x2F;&quot;&gt;direnv&lt;&#x2F;a&gt; - Environment variable manager that mise’s &lt;code&gt;[env]&lt;&#x2F;code&gt; section replaces&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; - Real-world mise configuration referenced in this article&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;mise-hx&quot;&gt;mise-hx&lt;&#x2F;a&gt; - Example of a custom mise plugin (for the hx Haskell toolchain)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Disk space maintenance on Void Linux</title>
          <pubDate>Wed, 01 May 2024 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/disk-space-void-linux-maintenance/</link>
          <guid>https://raskell.io/articles/disk-space-void-linux-maintenance/</guid>
          <description xml:base="https://raskell.io/articles/disk-space-void-linux-maintenance/">&lt;h2 id=&quot;monday-morning-surprise&quot;&gt;Monday morning surprise&lt;&#x2F;h2&gt;
&lt;p&gt;Since I spend most of my time actually using my computer rather than configuring my beloved Linux distribution, Void Linux, I have developed the tendency not to bother with Void at all until something crucial becomes unusable. In the almost two years since I switched from Arch to Void, I had never encountered a major problem and felt I had made the right decision.&lt;&#x2F;p&gt;
&lt;p&gt;Curious whether the 250GB solid-state drive would be enough, I checked my disk usage. And there came the surprise:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;$ df -H
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        8.4G     0  8.4G   0% &amp;#x2F;dev
tmpfs           8.4G  1.9M  8.4G   1% &amp;#x2F;dev&amp;#x2F;shm
tmpfs           8.4G  1.4M  8.4G   1% &amp;#x2F;run
&amp;#x2F;dev&amp;#x2F;nvme0n1p3  138G  117G   21G  85% &amp;#x2F;
efivarfs        158k   85k   69k  56% &amp;#x2F;sys&amp;#x2F;firmware&amp;#x2F;efi&amp;#x2F;efivars
cgroup          8.4G     0  8.4G   0% &amp;#x2F;sys&amp;#x2F;fs&amp;#x2F;cgroup
&amp;#x2F;dev&amp;#x2F;nvme0n1p4  366G   34G  332G  10% &amp;#x2F;home
&amp;#x2F;dev&amp;#x2F;nvme0n1p1  536M  152k  536M   1% &amp;#x2F;boot&amp;#x2F;efi
tmpfs           8.4G   25k  8.4G   1% &amp;#x2F;tmp
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;My root partition was full, way too full in my opinion. Did I miss something? Is Void not what I was looking for after all? I don’t enjoy babysitting my OS &lt;em&gt;du jour&lt;&#x2F;em&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-painless-solution&quot;&gt;The painless solution&lt;&#x2F;h2&gt;
&lt;p&gt;After a quick Brave search, I ended up finding what I was looking for. A kind fellow software engineer from China had not shied away from writing a blog post about his journey through the very same problem. Annoyed at having to deal with it at all, I copy-pasted his three commands as quickly as possible, without minding what side effects I might run into.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;1-cleaning-the-package-cache&quot;&gt;1. Cleaning the package cache&lt;&#x2F;h3&gt;
&lt;p&gt;All the knowledge I was lacking was to be found in the man page of &lt;code&gt;xbps-remove&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# xbps-remove -yO
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.voidlinux.org&#x2F;xbps-remove.1#O,&quot;&gt;man page&lt;&#x2F;a&gt; of &lt;code&gt;xbps-remove&lt;&#x2F;code&gt; tells us the &lt;code&gt;-O&lt;&#x2F;code&gt; parameter takes care of &lt;em&gt;cleaning the cache directory removing obsolete binary packages.&lt;&#x2F;em&gt; Obsolete binary packages? Good riddance! I was surprised to learn that this step alone freed up almost half of the used space on my root partition.&lt;&#x2F;p&gt;
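&lt;p&gt;If you are curious what the clean-up will reclaim before running it, the cache lives under &lt;code&gt;&#x2F;var&#x2F;cache&#x2F;xbps&lt;&#x2F;code&gt; by default, so a quick look is:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;$ du -sh &amp;#x2F;var&amp;#x2F;cache&amp;#x2F;xbps   # size of the package cache that -O cleans
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;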
&lt;h3 id=&quot;2-removing-orphaned-packages&quot;&gt;2. Removing orphaned packages&lt;&#x2F;h3&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# xbps-remove -yo
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here the same &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.voidlinux.org&#x2F;xbps-remove.1#o,&quot;&gt;man page&lt;&#x2F;a&gt; tells us that the &lt;code&gt;-o&lt;&#x2F;code&gt; parameter takes care of &lt;em&gt;removing installed package orphans that were installed automatically (as dependencies) and are not currently dependencies of any installed package.&lt;&#x2F;em&gt; As before, good riddance!&lt;&#x2F;p&gt;
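&lt;p&gt;If you would rather see the list before deleting anything, &lt;code&gt;xbps-remove&lt;&#x2F;code&gt; also has a dry-run flag; a cautious sketch:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# xbps-remove -no   # dry run: print the orphans that would go
# xbps-remove -yo   # then actually remove them
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;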
&lt;h3 id=&quot;3-purging-old-unused-kernels&quot;&gt;3. Purging old, unused kernels&lt;&#x2F;h3&gt;
&lt;p&gt;This one is interesting. While I knew that the people behind Void had developed their own package management ecosystem, I hadn’t fully realized that the upstream Void installation ships further utilities for managing my beloved OS. One of these is a &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;void-linux&#x2F;void-packages&#x2F;blob&#x2F;master&#x2F;srcpkgs&#x2F;base-files&#x2F;files&#x2F;vkpurge&quot;&gt;shell script&lt;&#x2F;a&gt; named &lt;code&gt;vkpurge&lt;&#x2F;code&gt;, which I assume is short for &lt;code&gt;Void kernel purge&lt;&#x2F;code&gt;. I like this type of naming that heavily implies its functionality.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# vkpurge rm all
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;It performed as expected. Old kernel files (and modules?) were indeed purged, freeing up even more disk space. I should add that this step is optional, as it is always useful to have an old kernel at hand for when things hit the fan (which, for me, they haven’t in a very, very long time).&lt;&#x2F;p&gt;
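&lt;p&gt;A more cautious variant, if you want to keep a fallback kernel around, is to list what is eligible first and purge individual versions (the version string below is illustrative):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# vkpurge list          # kernels that are safe to remove
6.1.31_1
# vkpurge rm 6.1.31_1   # purge just this one, keep the rest
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;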
&lt;h2 id=&quot;result&quot;&gt;Result&lt;&#x2F;h2&gt;
&lt;p&gt;I couldn’t be happier.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;$ df -H
Filesystem      Size  Used Avail Use% Mounted on
...
&amp;#x2F;dev&amp;#x2F;nvme0n1p3  138G   45G   93G  33% &amp;#x2F;
...
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;renewal-of-faith&quot;&gt;Renewal of faith&lt;&#x2F;h2&gt;
&lt;p&gt;Overall, why am I even writing this if some other fellow engineer already figured it out? Simply because it lets me explain why I have enjoyed my journey with Void as my go-to Linux distribution. It keeps things simple, with well-documented utilities. So simple that a single Brave search sufficed to find the answer to my problem.&lt;&#x2F;p&gt;
&lt;p&gt;This very aspect of Void is worth highlighting. I remember more arcane Linux distributions that had me in their grip for days while figuring things out. Many Google searches were necessary, and even more trial-and-error attempts, to get simple things fixed.&lt;&#x2F;p&gt;
&lt;p&gt;Now back to my Monday morning.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt; - Official project site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;xbps&#x2F;index.html&quot;&gt;Void Linux Handbook: XBPS&lt;&#x2F;a&gt; - Official documentation for the XBPS package manager&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.voidlinux.org&#x2F;xbps-remove.1&quot;&gt;xbps-remove(1) man page&lt;&#x2F;a&gt; - Manual page for package removal and cache cleaning&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;void-linux&#x2F;void-packages&#x2F;blob&#x2F;master&#x2F;srcpkgs&#x2F;base-files&#x2F;files&#x2F;vkpurge&quot;&gt;vkpurge source&lt;&#x2F;a&gt; - Shell script for purging old kernels on Void Linux&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;about&#x2F;faq.html&quot;&gt;Void Linux FAQ&lt;&#x2F;a&gt; - Common questions about running and maintaining Void&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;hr &#x2F;&gt;
&lt;div class=&quot;footnote-definition&quot; id=&quot;1&quot;&gt;&lt;sup class=&quot;footnote-definition-label&quot;&gt;1&lt;&#x2F;sup&gt;
&lt;p&gt;Painting in header image is “Seaside” by Aleksandr Deyneka&lt;&#x2F;p&gt;
&lt;&#x2F;div&gt;
</description>
      </item>
      <item>
          <title>All beginning is Haskell</title>
          <pubDate>Mon, 06 Mar 2023 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/all-beginning-is-haskell/</link>
          <guid>https://raskell.io/articles/all-beginning-is-haskell/</guid>
          <description xml:base="https://raskell.io/articles/all-beginning-is-haskell/">&lt;p&gt;This site is called raskell.io. That is not an accident.&lt;&#x2F;p&gt;
&lt;p&gt;I started learning Haskell because I liked mathematics and someone told me there was a programming language built on top of it. Not “inspired by” in the loose way that every language claims some mathematical foundation. Actually built on lambda calculus, category theory, and type theory, in a way where the math is not decoration but structure.&lt;&#x2F;p&gt;
&lt;p&gt;What I did not expect was how thoroughly it would rewire the way I think about building software. Not because Haskell is the best language for every task. It is not, and I write far more Rust than Haskell these days. But because Haskell teaches you to think about programs in a way that makes you better at everything else.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-haskell-actually-teaches-you&quot;&gt;What Haskell actually teaches you&lt;&#x2F;h2&gt;
&lt;p&gt;Most introductions to Haskell talk about pure functions, immutability, and monads. They are not wrong, but they miss the point. The point is not any single feature. It is how those features combine into a way of thinking about programs as compositions of well-typed transformations.&lt;&#x2F;p&gt;
&lt;p&gt;In an imperative language, you think about sequences of steps. Do this, then that, then check a condition, then loop. The program is a recipe. In Haskell, you think about transformations. What goes in, what comes out, what shape does the data have at each stage. The program is a pipeline.&lt;&#x2F;p&gt;
&lt;p&gt;This sounds abstract until you see it in practice. Suppose you need to process a list of user records: filter out inactive users, extract their email addresses, and normalize them to lowercase.&lt;&#x2F;p&gt;
&lt;p&gt;In an imperative style, you write a loop with conditions and mutations. In Haskell:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;activeEmails :: [User] -&amp;gt; [Email]
activeEmails = map (normalize . email) . filter isActive
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;One line. Read it right to left: filter active users, then map over the result, extracting and normalizing emails. The type signature tells you what goes in (&lt;code&gt;[User]&lt;&#x2F;code&gt;) and what comes out (&lt;code&gt;[Email]&lt;&#x2F;code&gt;). No mutation. No intermediate variables. No place for off-by-one errors or null pointer exceptions.&lt;&#x2F;p&gt;
&lt;p&gt;The type signature is not just documentation. It is a contract enforced by the compiler. If &lt;code&gt;isActive&lt;&#x2F;code&gt; expects a &lt;code&gt;User&lt;&#x2F;code&gt; and you pass it a &lt;code&gt;String&lt;&#x2F;code&gt;, the program will not compile. If &lt;code&gt;normalize&lt;&#x2F;code&gt; returns an &lt;code&gt;Email&lt;&#x2F;code&gt; but you try to use it as a &lt;code&gt;String&lt;&#x2F;code&gt;, the program will not compile. The compiler is your first reviewer, and it is tireless.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;types-as-design-tools&quot;&gt;Types as design tools&lt;&#x2F;h2&gt;
&lt;p&gt;The deeper lesson is that types are not just error catchers. They are design tools.&lt;&#x2F;p&gt;
&lt;p&gt;When I design a system in Haskell, I start with the types. What are the entities? What are the relationships? What transformations are valid? The type system forces you to be precise about these questions before you write any logic. This precision surfaces design problems early, when they are cheap to fix.&lt;&#x2F;p&gt;
&lt;p&gt;Consider modeling a document that can be in one of several states:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;data Document
  = Draft { content :: Text, author :: UserId }
  | UnderReview { content :: Text, author :: UserId, reviewer :: UserId }
  | Published { content :: Text, author :: UserId, publishedAt :: UTCTime }
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is an algebraic data type. Each variant carries exactly the data that makes sense for that state. A &lt;code&gt;Draft&lt;&#x2F;code&gt; has no reviewer. A &lt;code&gt;Published&lt;&#x2F;code&gt; document has a timestamp. You cannot accidentally access a reviewer on a draft because the type system will not let you. The invalid state is unrepresentable.&lt;&#x2F;p&gt;
&lt;p&gt;This pattern, making illegal states unrepresentable, is perhaps the most valuable idea I took from Haskell. I use it in Rust constantly. Rust’s &lt;code&gt;enum&lt;&#x2F;code&gt; with associated data is directly descended from Haskell’s algebraic data types, and the same design principle applies: encode your invariants in the type system and let the compiler enforce them.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-monad-is-not-the-point&quot;&gt;The monad is not the point&lt;&#x2F;h2&gt;
&lt;p&gt;Every Haskell introduction eventually gets to monads, usually with a metaphor involving burritos or boxes. I will skip the metaphor.&lt;&#x2F;p&gt;
&lt;p&gt;A monad is a pattern for sequencing computations that carry some context. The &lt;code&gt;IO&lt;&#x2F;code&gt; monad carries the context of interacting with the outside world. The &lt;code&gt;Maybe&lt;&#x2F;code&gt; monad carries the context of possible failure. The &lt;code&gt;State&lt;&#x2F;code&gt; monad carries the context of mutable state. The pattern is the same in each case: take a value in a context, apply a function that produces a new value in a context, get back a combined context.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;lookupUser :: UserId -&amp;gt; IO (Maybe User)
lookupUser uid = do
  conn &amp;lt;- getConnection
  result &amp;lt;- query conn &amp;quot;SELECT * FROM users WHERE id = ?&amp;quot; [uid]
  return (listToMaybe result)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The &lt;code&gt;IO&lt;&#x2F;code&gt; monad here sequences database operations. The &lt;code&gt;Maybe&lt;&#x2F;code&gt; handles the case where no user is found. The types tell you both things at a glance: this function does I&#x2F;O and might not return a result.&lt;&#x2F;p&gt;
&lt;p&gt;The point of monads is not that they are clever. The point is that they make effects explicit and composable. In most languages, a function can do I&#x2F;O, throw exceptions, mutate global state, or launch missiles, and you cannot tell from its signature. In Haskell, the type signature tells you exactly what effects a function can have. &lt;code&gt;Int -&amp;gt; Int&lt;&#x2F;code&gt; is pure. &lt;code&gt;Int -&amp;gt; IO Int&lt;&#x2F;code&gt; does I&#x2F;O. &lt;code&gt;Int -&amp;gt; Maybe Int&lt;&#x2F;code&gt; can fail. The information is right there, enforced by the compiler.&lt;&#x2F;p&gt;
&lt;p&gt;This discipline, making effects explicit, changed how I design APIs even in languages that do not enforce it. When I write a Rust function that returns &lt;code&gt;Result&amp;lt;T, E&amp;gt;&lt;&#x2F;code&gt;, I am using the same pattern: making failure explicit in the type rather than hiding it behind an exception. Rust learned this from Haskell, and so did I.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-i-am-still-building-haskell-tooling&quot;&gt;Why I am still building Haskell tooling&lt;&#x2F;h2&gt;
&lt;p&gt;If Haskell taught me so much, why do I mostly write Rust?&lt;&#x2F;p&gt;
&lt;p&gt;The honest answer: Haskell’s ecosystem has gaps. The language itself is excellent. GHC is one of the most sophisticated compilers ever built. The type system is unmatched in its expressiveness among production languages. But the surrounding infrastructure, the package management, the build tooling, the deployment story, has not kept pace.&lt;&#x2F;p&gt;
&lt;p&gt;Dependency management in Haskell is fragmented. Cabal and Stack coexist with overlapping but incompatible approaches. Build times are long. Cross-compilation is painful. Setting up a Haskell development environment from scratch still involves more friction than it should in 2026.&lt;&#x2F;p&gt;
&lt;p&gt;This is why hx exists. hx is a Haskell toolchain CLI that I am building in Rust. The choice of implementation language is deliberate. Haskell’s tooling problems are partly caused by tooling that is itself written in Haskell, creating bootstrap problems and long compile times for the tools themselves. A Rust binary starts instantly, compiles to a single static executable, and cross-compiles trivially. The tool should not have the same dependencies as the thing it manages.&lt;&#x2F;p&gt;
&lt;p&gt;hx is distributed through &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;raskell.io&#x2F;articles&#x2F;mise-ate-my-makefile&#x2F;&quot;&gt;mise&lt;&#x2F;a&gt; (naturally), as well as through AUR, Homebrew, Scoop, and Chocolatey. The goal is that setting up a Haskell project should be as frictionless as setting up a Rust project: one command to install the toolchain, one command to build.&lt;&#x2F;p&gt;
&lt;p&gt;On the other end of the spectrum, bhc (the Basel Haskell Compiler) is an experiment in taking Haskell in a direction GHC was never designed for: compiling Haskell for low-latency runtimes without a garbage collector. The target is workloads like tensor pipelines and real-time systems where GC pauses are not acceptable. bhc is early and ambitious, but it comes from the same conviction: Haskell’s ideas deserve better infrastructure than they currently have.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-haskell-in-my-rust&quot;&gt;The Haskell in my Rust&lt;&#x2F;h2&gt;
&lt;p&gt;I write Rust the way Haskell taught me to think.&lt;&#x2F;p&gt;
&lt;p&gt;Rust’s ownership model is not the same as Haskell’s purity, but it serves a similar purpose: it forces you to think about data flow explicitly. In Haskell, you cannot mutate a value because the language will not let you. In Rust, you can mutate, but the borrow checker forces you to be explicit about who owns the data and who can see it. Both languages make you think before you write.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt;, my CRDT engine, uses algebraic data types for its merge semantics. Each CRDT field type (LwwRegister, GrowOnlySet, ObservedRemoveSet) is an enum variant with associated data, exactly the pattern I described above. The merge function is associative, commutative, and idempotent. These are mathematical properties that I learned to care about from Haskell, where such properties are often expressed as type class laws.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt;, the reverse proxy, uses Rust’s type system to enforce that WAF decisions are handled in the correct pipeline stage. An &lt;code&gt;AgentDecision&lt;&#x2F;code&gt; is either &lt;code&gt;Allow&lt;&#x2F;code&gt;, &lt;code&gt;Block&lt;&#x2F;code&gt;, or &lt;code&gt;Modify&lt;&#x2F;code&gt;, and the proxy’s merge logic ensures that a &lt;code&gt;Block&lt;&#x2F;code&gt; from any agent cannot be overridden. The pattern is a monoid (decisions combine associatively with &lt;code&gt;Block&lt;&#x2F;code&gt; as the absorbing element), though nobody would call it that in the Rust codebase. The concept came from Haskell. The implementation is pure Rust.&lt;&#x2F;p&gt;
&lt;p&gt;Even &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt;, the agentic orchestrator, uses Haskell-influenced patterns. DAG workflows are compositions of typed transformations. Events are algebraic data types with exhaustive pattern matching. The event-sourcing model treats state as a fold over an event stream. &lt;code&gt;foldl&lt;&#x2F;code&gt; in Haskell, &lt;code&gt;Iterator::fold&lt;&#x2F;code&gt; in Rust. Same idea, different syntax.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-raskell&quot;&gt;Why “raskell”&lt;&#x2F;h2&gt;
&lt;p&gt;The name is a portmanteau. Raffael plus Haskell. I chose it because Haskell is where my engineering thinking started to take its current shape. Not the first language I learned, but the first one that changed how I think about all the others.&lt;&#x2F;p&gt;
&lt;p&gt;I do not believe you need to write Haskell to benefit from Haskell. But I believe that learning it, really learning it, not just reading about monads but building something real with algebraic data types and type classes and higher-order functions, will make you a better engineer in whatever language you actually use.&lt;&#x2F;p&gt;
&lt;p&gt;All beginning is Haskell. The rest is implementation.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;learning-haskell&quot;&gt;Learning Haskell&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.haskell.org&#x2F;&quot;&gt;Haskell Language&lt;&#x2F;a&gt; - Official site with documentation and community links&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;learnyouahaskell.com&#x2F;&quot;&gt;Learn You a Haskell for Great Good!&lt;&#x2F;a&gt; - Approachable illustrated introduction&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;book.realworldhaskell.org&#x2F;&quot;&gt;Real World Haskell&lt;&#x2F;a&gt; - Practical Haskell for production use&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;&quot;&gt;Haskell Wiki&lt;&#x2F;a&gt; - Community-maintained reference and tutorials&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;Typeclassopedia&quot;&gt;Typeclassopedia&lt;&#x2F;a&gt; - Comprehensive guide to Haskell’s type class hierarchy&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;type-systems-and-theory&quot;&gt;Type systems and theory&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Haskell_Curry&quot;&gt;Haskell Curry&lt;&#x2F;a&gt; - The logician the language is named after&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Lambda_calculus&quot;&gt;Lambda calculus&lt;&#x2F;a&gt; - Alonzo Church’s formal system underlying Haskell&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Hindley%E2%80%93Milner_type_system&quot;&gt;Hindley-Milner type system&lt;&#x2F;a&gt; - The type inference algorithm at the core of Haskell and ML&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;blog.janestreet.com&#x2F;effective-ml-revisited&#x2F;&quot;&gt;Making illegal states unrepresentable&lt;&#x2F;a&gt; - Yaron Minsky’s influential talk on using types for correctness&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;Algebraic_data_type&quot;&gt;Algebraic data types&lt;&#x2F;a&gt; - Haskell wiki reference on sum and product types&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;monads-and-effects&quot;&gt;Monads and effects&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;homepages.inf.ed.ac.uk&#x2F;wadler&#x2F;papers&#x2F;marktoberdorf&#x2F;baastad.pdf&quot;&gt;Philip Wadler, “Monads for functional programming”&lt;&#x2F;a&gt; - The foundational paper on monads in programming&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;All_About_Monads&quot;&gt;All About Monads&lt;&#x2F;a&gt; - Haskell wiki guide to monadic programming&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;haskell-tooling&quot;&gt;Haskell tooling&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.haskell.org&#x2F;ghc&#x2F;&quot;&gt;GHC&lt;&#x2F;a&gt; - The Glasgow Haskell Compiler&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.haskell.org&#x2F;cabal&#x2F;&quot;&gt;Cabal&lt;&#x2F;a&gt; - Haskell’s build and package system&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.haskellstack.org&#x2F;&quot;&gt;Stack&lt;&#x2F;a&gt; - Alternative build tool with curated package sets&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;mise-hx&quot;&gt;mise-hx&lt;&#x2F;a&gt; - mise plugin for the hx Haskell toolchain CLI&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;projects-referenced&quot;&gt;Projects referenced&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt; - CRDT engine using algebraic data types for merge semantics&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Reverse proxy with monoid-based decision merging&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; - Agentic orchestrator using event-sourced state folds&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>My OpenBSD journey: Getting it virtualized with libvirt (1)</title>
          <pubDate>Mon, 06 Feb 2023 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/my-openbsd-journey-getting-it-virtualized-with-libvirt-1/</link>
          <guid>https://raskell.io/articles/my-openbsd-journey-getting-it-virtualized-with-libvirt-1/</guid>
          <description xml:base="https://raskell.io/articles/my-openbsd-journey-getting-it-virtualized-with-libvirt-1/">&lt;h2 id=&quot;void-linux-as-my-daily-driver&quot;&gt;Void Linux as my daily driver&lt;&#x2F;h2&gt;
&lt;p&gt;Around six months ago, I decided to ditch my long-in-the-tooth Arch-based setup on my beloved Thinkpad X1 Carbon. I had been very loyal over the years, and had almost come to believe that Arch would be a constant in my adult life. While I kept up with upcoming technologies, I somehow lost track of the ever-diversifying landscape of Linux distributions. For a while, I kept coming across references to what seemed to be yet another generically named Linux distribution. That outwardly generic-sounding name, &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt;, kept poking my curiosity: it supposedly felt like Arch Linux in the old days while sharing substantial DNA with the BSD operating systems. That is another story I might tell another day, but to remain brief: the BSDs, in particular OpenBSD with its famously uncompromising lead developer Theo de Raadt, were always what I considered the endgame. FreeBSD, NetBSD, and OpenBSD have always been on my personal radar as the holy grail of Unix operating systems, and I felt I had to earn the intellectual capacity to put them to proper use one day. Last year, when I made the (almost painless) switch from Arch Linux to Void Linux, the simplicity and especially the barebones experience of Void reignited the fascination and admiration I have always had for the BSD operating systems and their philosophy.&lt;&#x2F;p&gt;
&lt;p&gt;While I could write (and definitely will in the near future) about my journey onto Void Linux and how it has gone so far, I preferred to document, diary-style, every step of how one can approach and ultimately use &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt; in 2023. Big disclaimer: I have yet to install OpenBSD on the bare-metal server I ordered some days ago, but in the meantime I dabbled with virtualization to get it running. That is what brings me to &lt;em&gt;libvirt&lt;&#x2F;em&gt; and my surprise at learning that I would not need a full-fledged virtualization solution like the ones offered by VirtualBox or VMware to efficiently run a virtualized OpenBSD machine. So, OK, let’s recap; so far we have the following bill of materials:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Void Linux as the host system&lt;&#x2F;li&gt;
&lt;li&gt;OpenBSD to be virtualized on that host system&lt;&#x2F;li&gt;
&lt;li&gt;libvirt as the glue that makes virtualization feel like black magic&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;from-void-to-openbsd&quot;&gt;From Void to OpenBSD&lt;&#x2F;h3&gt;
&lt;p&gt;Before I tell you more about how things went down while setting up OpenBSD, let me give you some basic notions about both Void Linux (one of many Linux distributions, representing them all here for simplicity, since it ended up being my distribution of choice) and OpenBSD. As mentioned earlier, Void Linux and OpenBSD are both Unix-like operating systems, but they feature enough differences to make the comparison noteworthy. Here are a few similarities and differences between the two:&lt;&#x2F;p&gt;
&lt;p&gt;What is similar:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Both are free and open-source operating systems.&lt;&#x2F;li&gt;
&lt;li&gt;Both use a package manager for software management. Void Linux uses XBPS, while OpenBSD uses pkg_add.&lt;&#x2F;li&gt;
&lt;li&gt;Both prioritize security and stability in their development and design.&lt;&#x2F;li&gt;
&lt;li&gt;Both keep their package build definitions under version control, with changes contributed by users.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;What is different:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;License: Void Linux is licensed under the MIT License, while OpenBSD is licensed under the ISC License.&lt;&#x2F;li&gt;
&lt;li&gt;Philosophy: OpenBSD prioritizes security and privacy, while Void Linux prioritizes simplicity and modularity.&lt;&#x2F;li&gt;
&lt;li&gt;Package Management: Void Linux uses the XBPS binary package manager, while OpenBSD uses pkg_add (also binary). OpenBSD additionally has a ports system for building from source.&lt;&#x2F;li&gt;
&lt;li&gt;Package Repository: Void Linux has a large and diverse repository, while OpenBSD has a smaller and more curated repository.&lt;&#x2F;li&gt;
&lt;li&gt;Init System: Void Linux uses runit as its init system, while OpenBSD uses rc. Neither uses the infamous systemd init system.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;what-is-openbsd-in-a-nutshell&quot;&gt;What is OpenBSD in a nutshell&lt;&#x2F;h2&gt;
&lt;p&gt;OpenBSD is a free and open-source operating system that focuses on security, standardization, and robustness. It is based on the Berkeley Software Distribution (BSD) Unix operating system and is developed by a global community of volunteers. OpenBSD aims to provide a secure platform for both personal and enterprise use by implementing strong security features, including access control mechanisms, encryption, and auditing.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Theo_de_Raadt&quot;&gt;Theo de Raadt&lt;&#x2F;a&gt; is the founder and lead developer of OpenBSD. His main objective with OpenBSD is to create a secure operating system that is free from backdoors, vulnerabilities, and other security weaknesses. He is committed to auditing the source code of the operating system and third-party software included with it, to identify and remove any potential security risks. De Raadt is also dedicated to improving the overall quality of the codebase and ensuring compatibility with a wide range of hardware and software.&lt;&#x2F;p&gt;
&lt;p&gt;What makes OpenBSD really special and stand out is that it has developed a suite of tools that got adopted by other OSs like Linux, macOS, or even Windows. The most famous instance of such adoption is the now de facto standard OpenSSH suite, which emerged from within the development circle of the OpenBSD project. OpenBSD has also pioneered a wide range of OS features with its own spin on problems other systems solve differently: where Linux restricts what processes can do with mechanisms like cgroups and namespaces, OpenBSD restricts them with pledge and unveil. Go check out &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;why-openbsd.rocks&#x2F;fact&#x2F;freezero&#x2F;&quot;&gt;Why OpenBSD rocks&lt;&#x2F;a&gt; to get a feel for what makes OpenBSD so unique and interesting.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;virtualization-with-libvirt&quot;&gt;Virtualization with libvirt&lt;&#x2F;h3&gt;
&lt;p&gt;So now, let’s get back to our endeavour of virtualizing OpenBSD on a Void Linux installation. If you happen to be using another Linux distribution, most of the individual steps will be very similar. That brings me to the next technology we should explain a bit more here, and that is libvirt.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;libvirt.org&#x2F;&quot;&gt;libvirt&lt;&#x2F;a&gt; is an open-source virtualization management library that provides a simple and unified API for managing virtualization technologies, including KVM, QEMU, Xen, and others. It aims to simplify the process of creating, managing, and migrating virtual machines, storage, and networks, and to make it easier for administrators to manage virtual environments.&lt;&#x2F;p&gt;
&lt;p&gt;To virtualize an operating system like OpenBSD with libvirt, you need to follow these steps:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Install libvirt and the virtualization technology you want to use, such as KVM.&lt;&#x2F;li&gt;
&lt;li&gt;Download the OpenBSD iso file and place it in a location accessible by libvirt.&lt;&#x2F;li&gt;
&lt;li&gt;Create a new virtual machine in libvirt with the OpenBSD ISO as the installation media. This can be done through the command line or using a graphical user interface such as virt-manager.&lt;&#x2F;li&gt;
&lt;li&gt;Configure the virtual machine, including the amount of memory, CPU, and disk space, to meet the requirements of OpenBSD.&lt;&#x2F;li&gt;
&lt;li&gt;Start the virtual machine and install OpenBSD as you would on a physical machine.&lt;&#x2F;li&gt;
&lt;li&gt;Once the installation is complete, you can configure the virtual network, storage, and other settings as required.&lt;&#x2F;li&gt;
&lt;li&gt;Finally, you can use the libvirt API or the command line to manage and control the virtual machine, including starting, stopping, migrating, and snapshotting.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h3 id=&quot;step-by-step-guide&quot;&gt;Step-by-step guide&lt;&#x2F;h3&gt;
&lt;p&gt;Let’s first install the &lt;code&gt;libvirt&lt;&#x2F;code&gt; package and some related packages we need in order to connect via VNC. VNC will give us access to the graphical interface of the running OpenBSD instance.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo xbps-install -S dbus qemu libvirt virt-manager virt-viewer tigervnc
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now we need to add our user, in my case &lt;code&gt;raskell&lt;&#x2F;code&gt;, to the &lt;code&gt;libvirt&lt;&#x2F;code&gt; group, which was created automatically when libvirt was installed.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo usermod -aG libvirt raskell
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
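&lt;p&gt;One Void-specific step before going further: services are not started automatically at install time. With runit, you enable them by symlinking into &lt;code&gt;&#x2F;var&#x2F;service&lt;&#x2F;code&gt;. A sketch for the daemons libvirt relies on (check &lt;code&gt;&#x2F;etc&#x2F;sv&lt;&#x2F;code&gt; for the exact service names your packages shipped):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;dbus &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;libvirtd &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;virtlogd &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;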
&lt;p&gt;OpenBSD features a release cycle of six months, so we would need to update our system every six months to keep up with the latest packages. During a given release, only security and bug-fix patches are applied to the curated packages available through pkg_add. Therefore, in February 2023, we’re using the &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;72.html&quot;&gt;OpenBSD 7.2&lt;&#x2F;a&gt; release. As I’m living in Switzerland, I chose to pull the ISO image from a Swiss mirror, in this case from &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;mirror.ungleich.ch&#x2F;pub&#x2F;OpenBSD&#x2F;7.2&#x2F;&quot;&gt;&lt;code&gt;mirror.ungleich.ch&#x2F;pub&#x2F;OpenBSD&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; (check which mirror is closest to you to get the best download rate).&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# cd &amp;#x2F;var&amp;#x2F;lib&amp;#x2F;libvirt&amp;#x2F;boot&amp;#x2F;
# sudo wget https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;install72.iso
--2023-01-12 20:48:15--  https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;install72.iso
Resolving mirror.ungleich.ch (mirror.ungleich.ch)... 2a0a:e5c0:2:2:400:c8ff:fe68:bef3, 185.203.114.135
Connecting to mirror.ungleich.ch (mirror.ungleich.ch)|2a0a:e5c0:2:2:400:c8ff:fe68:bef3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 583352320 (556M) [application&amp;#x2F;octet-stream]
Saving to: ‘install72.iso.1’

install72.iso.1                      100%[====================================================================&amp;gt;] 556.33M  7.11MB&amp;#x2F;s    in 77s     

2023-01-12 20:49:33 (7.19 MB&amp;#x2F;s) - ‘install72.iso.1’ saved [583352320&amp;#x2F;583352320]
# sudo wget https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;SHA256
--2023-01-12 20:47:38--  https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;SHA256
Resolving mirror.ungleich.ch (mirror.ungleich.ch)... 2a0a:e5c0:2:2:400:c8ff:fe68:bef3, 185.203.114.135
Connecting to mirror.ungleich.ch (mirror.ungleich.ch)|2a0a:e5c0:2:2:400:c8ff:fe68:bef3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1992 (1.9K) [application&amp;#x2F;octet-stream]
Saving to: ‘SHA256.1’

SHA256.1                             100%[====================================================================&amp;gt;]   1.95K  --.-KB&amp;#x2F;s    in 0s      

2023-01-12 20:47:39 (742 MB&amp;#x2F;s) - ‘SHA256.1’ saved [1992&amp;#x2F;1992]
# grep install72.iso SHA256 &amp;gt; &amp;#x2F;tmp&amp;#x2F;x
# sha256sum -c &amp;#x2F;tmp&amp;#x2F;x
# rm &amp;#x2F;tmp&amp;#x2F;x
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
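&lt;p&gt;A shorter way to run the same verification, assuming GNU coreutils on the host, is to let &lt;code&gt;sha256sum&lt;&#x2F;code&gt; pick the matching line out of the &lt;code&gt;SHA256&lt;&#x2F;code&gt; file itself:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# sha256sum --ignore-missing -c SHA256
install72.iso: OK
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;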
&lt;p&gt;Before we can start the virtualization server and get our OpenBSD instance running, we need to define the configuration with which to virtualize and ultimately boot the system. This is done with &lt;code&gt;virt-install&lt;&#x2F;code&gt;. Noteworthy here is that we use &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.qemu.org&#x2F;&quot;&gt;QEMU&lt;&#x2F;a&gt; as our emulation solution of choice, and that we allocate up to 4GB of RAM and 4 CPU cores to the machine.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo virt-install \
      --name=openbsd \
      --virt-type=qemu \
      --memory=2048,maxmemory=4096 \
      --vcpus=2,maxvcpus=4 \
      --cpu host \
      --os-variant=openbsd7.0 \
      --cdrom=&amp;#x2F;var&amp;#x2F;lib&amp;#x2F;libvirt&amp;#x2F;boot&amp;#x2F;install72.iso \
      --network=bridge=virbr0,model=virtio \
      --graphics=vnc \
      --disk path=&amp;#x2F;var&amp;#x2F;lib&amp;#x2F;libvirt&amp;#x2F;images&amp;#x2F;openbsd.qcow2,size=40,bus=virtio,format=qcow2
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Once it is up and running, we can connect to the machine over VNC. To find the connection details, I chose &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.libvirt.org&#x2F;manpages&#x2F;virsh.html&quot;&gt;virsh&lt;&#x2F;a&gt; to do the job. virsh is a command-line interface tool for managing virtualization environments created with libvirt. It allows us to manage virtual machines, storage pools, and network interfaces, as well as other virtualization components, from the command line.&lt;&#x2F;p&gt;
&lt;p&gt;To establish a VNC connection with a running libvirt virtualized OpenBSD instance, you can use the following steps:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Start the virtual machine in libvirt: You can start the virtual machine using the virsh command &lt;strong&gt;&lt;code&gt;virsh start &amp;lt;vm-name&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;, where &lt;strong&gt;&lt;code&gt;&amp;lt;vm-name&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt; is the name of the virtual machine you want to start.&lt;&#x2F;li&gt;
&lt;li&gt;Find the VNC display: Once the virtual machine is running, you can find the VNC display number for the virtual machine using the virsh command &lt;strong&gt;&lt;code&gt;virsh vncdisplay &amp;lt;vm-name&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;Connect to the VNC display: You can connect to the VNC display using a VNC client, such as &lt;strong&gt;&lt;code&gt;vncviewer&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;, and specify the IP address of the host running the virtual machine and the VNC display number. For example, if the host’s IP address is &lt;strong&gt;&lt;code&gt;192.168.0.100&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt; and the VNC display number is &lt;strong&gt;&lt;code&gt;:0&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;, the command to connect would be &lt;strong&gt;&lt;code&gt;vncviewer 192.168.0.100:0&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;Authenticate to the VNC server: You may need to enter a password to authenticate to the VNC server. The password is set when the virtual machine is created in libvirt.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;With these steps, you can establish a VNC connection with a running libvirt virtualized OpenBSD instance and interact with the virtual machine’s graphical user interface.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ virsh dumpxml openbsd | grep vnc
&amp;lt;graphics type=&amp;#x27;vnc&amp;#x27; port=&amp;#x27;5900&amp;#x27; autoport=&amp;#x27;yes&amp;#x27; listen=&amp;#x27;127.0.0.1&amp;#x27;&amp;gt;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Like me, most of you will want to interact with a graphical interface such as X11. For that, we need yet another tool, a so-called VNC viewer. A very simple implementation of such a VNC viewer is &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;tigervnc.org&#x2F;&quot;&gt;tigervnc&lt;&#x2F;a&gt; (simply install it with &lt;code&gt;$ sudo xbps-install -S tigervnc&lt;&#x2F;code&gt;).&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo virsh --connect qemu:&amp;#x2F;&amp;#x2F;&amp;#x2F;system start openbsd
Domain &amp;#x27;openbsd&amp;#x27; started
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
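&lt;p&gt;Putting the pieces together, a minimal session looks like this (the display number is whatever &lt;code&gt;virsh vncdisplay&lt;&#x2F;code&gt; reports on your machine; &lt;code&gt;:0&lt;&#x2F;code&gt; maps to port 5900):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ virsh --connect qemu:&amp;#x2F;&amp;#x2F;&amp;#x2F;system vncdisplay openbsd
127.0.0.1:0
$ vncviewer 127.0.0.1:0
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;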
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;openbsd&quot;&gt;OpenBSD&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt; - Official project site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;faq&#x2F;&quot;&gt;OpenBSD FAQ&lt;&#x2F;a&gt; - Comprehensive installation and usage guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;why-openbsd.rocks&#x2F;&quot;&gt;Why OpenBSD Rocks&lt;&#x2F;a&gt; - Collection of OpenBSD innovations and features&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openssh.com&#x2F;&quot;&gt;OpenSSH&lt;&#x2F;a&gt; - The SSH suite that originated from the OpenBSD project&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Theo_de_Raadt&quot;&gt;Theo de Raadt&lt;&#x2F;a&gt; - OpenBSD founder and lead developer&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.openbsd.org&#x2F;pledge.2&quot;&gt;pledge(2)&lt;&#x2F;a&gt; - OpenBSD’s system call for restricting process capabilities&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.openbsd.org&#x2F;unveil.2&quot;&gt;unveil(2)&lt;&#x2F;a&gt; - OpenBSD’s system call for restricting filesystem visibility&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;void-linux&quot;&gt;Void Linux&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt; - Official project site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;&quot;&gt;Void Linux Handbook&lt;&#x2F;a&gt; - Official documentation&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;xbps&#x2F;index.html&quot;&gt;XBPS package manager&lt;&#x2F;a&gt; - Void’s binary package management system&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;virtualization&quot;&gt;Virtualization&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;libvirt.org&#x2F;&quot;&gt;libvirt&lt;&#x2F;a&gt; - Virtualization management library and API&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.qemu.org&#x2F;&quot;&gt;QEMU&lt;&#x2F;a&gt; - Open-source machine emulator and virtualizer&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.libvirt.org&#x2F;manpages&#x2F;virsh.html&quot;&gt;virsh(1)&lt;&#x2F;a&gt; - Command-line interface for managing libvirt guests&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;tigervnc.org&#x2F;&quot;&gt;TigerVNC&lt;&#x2F;a&gt; - VNC implementation for remote desktop access&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;guides&quot;&gt;Guides&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.reddit.com&#x2F;r&#x2F;voidlinux&#x2F;comments&#x2F;ghwvv5&#x2F;guide_how_to_setup_qemukvm_emulation_on_void_linux&#x2F;&quot;&gt;[Guide] How to setup QEMU&#x2F;KVM emulation on Void Linux&lt;&#x2F;a&gt; - Community guide on r&#x2F;voidlinux&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.cyberciti.biz&#x2F;faq&#x2F;kvmvirtualization-virt-install-openbsd-unix-guest&#x2F;&quot;&gt;KVM virt-install: Install OpenBSD as Guest Operating System&lt;&#x2F;a&gt; - Step-by-step KVM installation guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.skreutz.com&#x2F;posts&#x2F;autoinstall-openbsd-on-qemu&#x2F;&quot;&gt;Auto-install OpenBSD on QEMU&lt;&#x2F;a&gt; - Automated OpenBSD installation on QEMU&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Hello and outlook</title>
          <pubDate>Tue, 20 Sep 2022 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/hello-and-outlook/</link>
          <guid>https://raskell.io/articles/hello-and-outlook/</guid>
          <description xml:base="https://raskell.io/articles/hello-and-outlook/">&lt;h2 id=&quot;hello-world&quot;&gt;Hello world&lt;&#x2F;h2&gt;
&lt;p&gt;This is the first post on raskell.io. My name is Raffael. I build software, mostly in the space of platform automation, edge infrastructure, and applied security. I also have a long-running interest in open-source software, operating systems, and the kind of hardware you find in recycling bins and give a second life to.&lt;&#x2F;p&gt;
&lt;p&gt;This site exists to document real work and share patterns that other engineers can use. Not thought leadership. Not personal branding. Just notes from the workshop.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-to-expect&quot;&gt;What to expect&lt;&#x2F;h2&gt;
&lt;p&gt;At the time of writing, I planned to cover:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Linux and BSD systems, particularly &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt; and &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Open-source tooling and infrastructure&lt;&#x2F;li&gt;
&lt;li&gt;Platform automation and operability&lt;&#x2F;li&gt;
&lt;li&gt;Whatever hard problem I happened to be stuck on&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Looking back from 2026, the scope grew to include edge systems, AI-assisted engineering, and a few deep dives I did not expect to write. The thread that held it together was always the same: systems under pressure, and the tools that keep them running.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references&quot;&gt;References&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;www.raskell.io&quot;&gt;raskell.io source&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
    </channel>
</rss>
