<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
      <title>Raskell</title>
      <link>https://raskell.io</link>
      <description>Writing about platform automation, edge systems, applied security, and open standards. Building automation-first platforms that survive production reality.</description>
      <generator>Zola</generator>
      <language>en</language>
      <atom:link href="https://raskell.io/rss.xml" rel="self" type="application/rss+xml"/>
      <lastBuildDate>Fri, 13 Mar 2026 00:00:00 +0000</lastBuildDate>
      <item>
          <title>Archipelag.io Is in Open Beta: Here&#x27;s Why I Built It</title>
          <pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/archipelag-io-distributed-compute-from-mining-rigs-to-open-beta/</link>
          <guid>https://raskell.io/articles/archipelag-io-distributed-compute-from-mining-rigs-to-open-beta/</guid>
          <description xml:base="https://raskell.io/articles/archipelag-io-distributed-compute-from-mining-rigs-to-open-beta/">&lt;p&gt;There is an abandoned factory building in Glarus, a small town wedged between mountains in eastern Switzerland. In 2018, the building was loud. Not machinery-loud, fan-loud. Rows of bare motherboards bolted to open-air frames, each bristling with GPUs and daisy-chained power supplies. The air tasted like warm dust and ozone. Cables ran everywhere, held in place by zip ties and optimism. This was an Ethereum mining operation, and I was standing in the middle of it, watching people I knew convert their gaming rigs, hardware they loved, into money-printing machines.&lt;&#x2F;p&gt;
&lt;p&gt;I was there because Vitalik Buterin had decided to visit. He had flown in on a private jet to Geneva, driven up in a black limousine with tinted windows, and walked into this dusty, chaotic space to see what Swiss miners were building. It was surreal. The creator of Ethereum, stepping over power cables in an industrial ruin, nodding at rack after rack of GPUs humming away at proof-of-work hashes. I do not think he was impressed by the elegance of the setup. Nobody was. But something about that scene stuck with me.&lt;&#x2F;p&gt;
&lt;p&gt;People were willing to sacrifice their gaming entertainment, their &lt;em&gt;leisure hardware&lt;&#x2F;em&gt;, to chase the dream of sovereign financial independence using fundamentally nerdy equipment: PCs, internet connections, blockchain protocols, and GPU graphics cards. They were converting consumer-grade technology into economic infrastructure, and they were doing it themselves. No data center leases. No vendor contracts. No permission from anyone. Just people, hardware, and a protocol that made it worth their while.&lt;&#x2F;p&gt;
&lt;p&gt;I had skin in the game too. I invested (gambled, honestly) in crypto during that era. I watched the charts, rode the swings, felt the dopamine spikes and the stomach-drops. The financial side was wild and ultimately unsustainable for most people. But the &lt;em&gt;infrastructure&lt;&#x2F;em&gt; side, the part where ordinary humans turned their homes into compute nodes and got paid for it: that part was real, and that part stayed with me long after the crypto hype faded and the rigs went quiet.&lt;&#x2F;p&gt;
&lt;p&gt;This is the story of how that factory visit turned into &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag.io&lt;&#x2F;a&gt;, a distributed compute network that entered open beta today. It has been eight years of thinking, two years of building, and a lot of being wrong about the right things at the wrong time.&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>How AI Makes Bare Metal Viable Again</title>
          <pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/how-ai-makes-bare-metal-viable-again/</link>
          <guid>https://raskell.io/articles/how-ai-makes-bare-metal-viable-again/</guid>
          <description xml:base="https://raskell.io/articles/how-ai-makes-bare-metal-viable-again/">&lt;p&gt;I was paying over two hundred dollars a month to run two apps that had zero paying users.&lt;&#x2F;p&gt;
&lt;p&gt;Not because the apps were complex. Not because they needed high availability across regions. Because I was running Kubernetes on DigitalOcean, and Kubernetes has opinions about how much infrastructure you need. A control plane. Worker nodes. Load balancers. Persistent volumes. Managed databases. Each line item modest on its own, adding up to a bill that felt absurd for two Phoenix applications in their bootstrapping phase.&lt;&#x2F;p&gt;
&lt;p&gt;The apps are &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;archipelag.io&lt;&#x2F;a&gt; and &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;cyanea.bio&lt;&#x2F;a&gt;. Both are Elixir&#x2F;Phoenix projects. Archipelag uses PostgreSQL and NATS for its messaging layer. Cyanea uses SQLite. Neither gets meaningful traffic yet. Both are real products I am actively building, not side projects I will abandon next month. But they are pre-revenue, and every dollar I spend on infrastructure is a dollar I am betting against future income that does not exist yet.&lt;&#x2F;p&gt;
&lt;p&gt;Something had to change.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-kubernetes-trap&quot;&gt;The Kubernetes trap&lt;&#x2F;h2&gt;
&lt;p&gt;Here is the thing about Kubernetes: it solves problems you might not have. If you are running fifty microservices across three regions with autoscaling requirements and a platform team to manage it, Kubernetes earns its keep. If you are running two BEAM applications that each consume less than 512 MB of memory, you are paying a complexity tax for infrastructure capabilities you will never touch.&lt;&#x2F;p&gt;
&lt;p&gt;My K8s setup on DigitalOcean looked like this: a managed cluster with two worker nodes (the minimum for any reasonable availability), a managed PostgreSQL instance for Archipelag, a load balancer for ingress, persistent volumes for Cyanea’s SQLite database. Each component had its own monthly cost. The cluster management fee alone was more than what I would eventually pay for an entire bare metal server.&lt;&#x2F;p&gt;
&lt;p&gt;The operational overhead was worse than the cost. Helm charts. Ingress controllers. Certificate managers. Pod disruption budgets. Every time I wanted to deploy a new version, I was wrangling YAML files that described infrastructure concerns my apps did not care about. A Phoenix release does not need a pod spec. It needs a port, an environment, and someone to restart it if it crashes.&lt;&#x2F;p&gt;
&lt;p&gt;And the YAML, my God, the YAML. A simple Phoenix app that listens on a port and serves HTTP needs, at minimum, a Deployment manifest, a Service manifest, and an Ingress manifest. Add a ConfigMap for environment variables, a Secret for credentials, a PersistentVolumeClaim if you need disk, a HorizontalPodAutoscaler if you want autoscaling. For Cyanea alone, I had six Kubernetes manifests totaling a few hundred lines of YAML, all to describe an application that boils down to: run this binary, give it a port, point a domain at it.&lt;&#x2F;p&gt;
&lt;p&gt;The cognitive load compounds. You learn the Kubernetes resource model, then the DigitalOcean-specific annotations for their load balancer, then the cert-manager CRDs for TLS, then the quirks of persistent volumes on managed K8s (spoiler: they are not as persistent as you think if you do not get the reclaim policy right). Each layer has its own documentation, its own failure modes, its own upgrade cycle. I spent more time debugging infrastructure than building product.&lt;&#x2F;p&gt;
&lt;p&gt;The irony is not lost on me. Kubernetes was designed for teams running hundreds of services at Google-scale. I was running two apps. The orchestrator had more moving parts than the things it was orchestrating. It was like hiring a logistics fleet to deliver two packages across town.&lt;&#x2F;p&gt;
&lt;p&gt;I knew my setup was over-engineered. But the alternative, at the time, seemed like a step backward.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-nomad-detour&quot;&gt;The Nomad detour&lt;&#x2F;h2&gt;
&lt;p&gt;I should mention that Kubernetes was never the only orchestrator I considered. For the past five years, while the industry went all-in on K8s, I had been quietly admiring HashiCorp’s &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.nomadproject.io&#x2F;&quot;&gt;Nomad&lt;&#x2F;a&gt;. Where Kubernetes is a sprawling ecosystem of CRDs, operators, and control loops, Nomad is refreshingly minimal. A single binary. A simple job spec. No opinions about networking, no built-in service mesh, no mandatory etcd cluster. You tell it what to run, it runs it.&lt;&#x2F;p&gt;
&lt;p&gt;That minimalism appealed to me. Nomad treats workload scheduling as the core problem and stays out of everything else. No built-in networking layer means you bring your own, which sounds like a drawback until you realize it means you are not locked into someone else’s networking model.&lt;&#x2F;p&gt;
&lt;p&gt;And I happened to have my own networking layer already. I had been building &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&#x2F;&quot;&gt;Zentinel&lt;&#x2F;a&gt; in parallel, a security-first reverse proxy built on Cloudflare’s &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; framework in Rust. Zentinel handles TLS termination, WAF inspection, rate limiting, domain-based routing, all the edge concerns I care about. It also supports sleepable ops, where backend instances can be suspended and woken on demand, which is perfect for apps that do not need to be running 24&#x2F;7.&lt;&#x2F;p&gt;
&lt;p&gt;So I tried pairing them. Nomad for workload scheduling, Zentinel for the network layer. And it worked. The combination gave me a lightweight orchestrator that did not try to own every concern, paired with a reverse proxy that handled edge traffic the way I wanted. Two focused tools, each doing one thing well.&lt;&#x2F;p&gt;
&lt;p&gt;But then IBM acquired HashiCorp, and the calculus changed.&lt;&#x2F;p&gt;
&lt;p&gt;The acquisition itself was not the problem. Companies get acquired. It happens. The problem was the trajectory. HashiCorp had already re-licensed Terraform from MPL to BSL (Business Source License) in 2023, a move that fractured the community and spawned the &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;opentofu.org&#x2F;&quot;&gt;OpenTofu&lt;&#x2F;a&gt; fork. The pattern was familiar: open-source project gains adoption, company monetizes through enterprise features, company gets acquired, new owner tightens the screws. I had watched it happen with Redis, with Elasticsearch, with MongoDB. Each time the community forks, there is a period of uncertainty, split maintenance effort, and feature divergence.&lt;&#x2F;p&gt;
&lt;p&gt;I did not want to build my infrastructure on a foundation where the governance could shift at any time. Nomad is still open source today. But “still open source” and “will remain open source” are different statements, and after the Terraform situation, I was not confident in the latter. The BSL license change had been a signal, and IBM’s acquisition amplified it. I did not need to go down that road with another HashiCorp product.&lt;&#x2F;p&gt;
&lt;p&gt;The Nomad experiment did teach me something valuable, though. It confirmed that the KISS approach to deployment was right. You do not need the full Kubernetes machinery. A scheduler that starts processes, checks their health, and restarts them when they crash is sufficient for a wide range of workloads. And a dedicated reverse proxy that handles TLS and routing is cleaner than bundling networking into the orchestrator.&lt;&#x2F;p&gt;
&lt;p&gt;That insight, Nomad’s minimalism plus Zentinel’s Pingora-based proxy architecture, became the design seed for what I would eventually build.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-fly-io-middle-ground&quot;&gt;The fly.io middle ground&lt;&#x2F;h2&gt;
&lt;p&gt;With Nomad off the table as a long-term bet, I migrated to &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;fly.io&quot;&gt;fly.io&lt;&#x2F;a&gt; in late 2025. It was genuinely better than K8s for my use case. Fly understands BEAM applications at a fundamental level. The BEAM runtime is designed for the kind of lightweight, long-lived processes that Fly’s infrastructure optimizes for. You push a release, it runs it. No YAML. No ingress controllers. No cluster management.&lt;&#x2F;p&gt;
&lt;p&gt;Fly also made the service dependencies painless. Managed Postgres with a few commands. NATS was straightforward to set up. Tigris (Fly’s S3-compatible object storage) handled blob storage for Cyanea’s file uploads. The developer experience was genuinely excellent, and I mean that without reservation. The Fly team has built something thoughtful.&lt;&#x2F;p&gt;
&lt;p&gt;The cost dropped meaningfully. No cluster management fee. No minimum node count. Pay-per-VM pricing that scales down to fractions of a shared CPU. Fly’s model is honest about what small applications actually need, and the pricing reflects that. I went from over two hundred dollars a month on DigitalOcean K8s to roughly a quarter of that.&lt;&#x2F;p&gt;
&lt;p&gt;For a while, it was the right answer. And if I had been scaling horizontally, adding regions, needing the kind of elastic compute that cloud-native platforms excel at, I would have stayed. If my apps suddenly got traction and I needed instances in Tokyo, Frankfurt, and Virginia, Fly would be the obvious choice. The multi-region story is one of Fly’s genuine strengths. You deploy once, it runs everywhere. That is hard to replicate.&lt;&#x2F;p&gt;
&lt;p&gt;But I was not scaling horizontally. I was running two apps in one location. On a good day, they handled maybe a few hundred requests. The compute they needed was trivial, a fraction of a shared CPU core. And I was still paying for a platform designed to scale to thousands of instances across dozens of regions, even though I needed exactly one instance of each app, in exactly one place, doing very little work.&lt;&#x2F;p&gt;
&lt;p&gt;There is also a subtler cost that managed platforms carry: the abstraction tax. When something goes wrong on Fly (and it did, occasionally, things like deployment timeouts or the odd networking hiccup), you are debugging at the platform level, not the system level. You file a support ticket or check the status page. You do not SSH in and look at processes, because there are no processes you can see. The platform is the intermediary, and the intermediary has its own failure modes that you cannot inspect or fix.&lt;&#x2F;p&gt;
&lt;p&gt;The cloud-native model, even the lean version that Fly offers, has a floor. You are always paying for the platform’s capabilities, not just your usage of them. When your usage is “two small apps, one location, no scale,” that floor matters. And when the platform sits between you and your processes, you lose the ability to debug at the level where the answers actually live.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-bare-metal-math&quot;&gt;The bare metal math&lt;&#x2F;h2&gt;
&lt;p&gt;I started looking at dedicated servers. Not VPS instances, not cloud VMs. Actual hardware you can SSH into, where your processes run on real cores and your data sits on real disks.&lt;&#x2F;p&gt;
&lt;p&gt;Hetzner runs a &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.hetzner.com&#x2F;sb&#x2F;&quot;&gt;server auction&lt;&#x2F;a&gt; where they sell refurbished dedicated machines at steep discounts. These are servers that have been running in Hetzner’s data centers, got rotated out of customer contracts, and are resold at prices that make cloud compute look like a luxury good. The hardware is used but maintained, and Hetzner’s data centers are well-run, proper cooling, redundant power, good network connectivity.&lt;&#x2F;p&gt;
&lt;p&gt;I found a box with a multi-core Intel CPU, 128 GB of DDR4 RAM, and two 1 TB NVMe drives that I configured in RAID 1 for redundancy. EUR 38 a month. About forty-two dollars. Fixed price. No bandwidth metering (Hetzner includes 20 TB of traffic on dedicated servers, which for my workload might as well be unlimited). No surprises on the bill.&lt;&#x2F;p&gt;
&lt;p&gt;Let that sink in for a moment. For less than what I was paying for managed Postgres alone on either platform, I could have an entire server with more RAM than I know what to do with, fast NVMe storage with mirror redundancy, and enough compute headroom to run not two but twenty applications without breaking a sweat. The two NVMe drives alone, if bought retail, would cost more than a year of hosting.&lt;&#x2F;p&gt;
&lt;p&gt;I ran the numbers on capacity. My two Phoenix apps, even under load, would use maybe 1-2 GB of RAM combined. PostgreSQL with a modest dataset, another gig or two. NATS, negligible. That leaves well over 120 GB of RAM sitting idle. The CPU tells a similar story. Phoenix on the BEAM is remarkably efficient with CPU resources, the scheduler does its own preemptive multitasking across lightweight processes, and my workloads are I&#x2F;O-bound, not compute-bound. I could run my entire current stack and barely register on a load graph.&lt;&#x2F;p&gt;
&lt;p&gt;The headroom is the point. On a cloud platform, headroom costs money. More RAM, higher tier. More CPU, higher tier. On bare metal, the headroom is already paid for. Growing from two apps to ten does not change my monthly bill. Adding a staging environment does not change my monthly bill. Running background workers, a metrics stack, a CI runner, none of it changes my monthly bill. The marginal cost of additional workloads on existing hardware is zero.&lt;&#x2F;p&gt;
&lt;p&gt;The math was obvious. The problem was everything else.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-bare-metal-was-hard&quot;&gt;Why bare metal was hard&lt;&#x2F;h2&gt;
&lt;p&gt;Bare metal has always been cheap. That was never the issue. The issue was everything you had to build and maintain yourself.&lt;&#x2F;p&gt;
&lt;p&gt;On a managed platform, you get deployment pipelines, TLS certificate management, process supervision, reverse proxying, log aggregation, health checks, and rollback mechanisms out of the box. On bare metal, you get a Linux login prompt and a blinking cursor.&lt;&#x2F;p&gt;
&lt;p&gt;Historically, going bare metal for web applications meant weeks of setup:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Install and configure nginx or HAProxy as a reverse proxy&lt;&#x2F;li&gt;
&lt;li&gt;Set up Certbot or acme.sh for Let’s Encrypt certificates, and hope the renewal cron does not silently break&lt;&#x2F;li&gt;
&lt;li&gt;Write deployment scripts (rsync, symlinks, restart commands) and debug them for months&lt;&#x2F;li&gt;
&lt;li&gt;Configure systemd services for each app, with the right restart policies and environment files&lt;&#x2F;li&gt;
&lt;li&gt;Build a process supervision layer that handles crashes, port allocation, and graceful shutdowns&lt;&#x2F;li&gt;
&lt;li&gt;Figure out zero-downtime deploys (which means running two instances, health checking the new one, swapping traffic, draining the old one)&lt;&#x2F;li&gt;
&lt;li&gt;Set up log rotation, monitoring, backups&lt;&#x2F;li&gt;
&lt;li&gt;Harden the server (firewall, SSH config, automatic security updates)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Each of these is a solved problem in isolation. There are blog posts and Stack Overflow answers for every one of them. But stitching them together into a coherent, reliable deployment system is a full-time job for a week or two, and maintaining it is an ongoing tax on your attention.&lt;&#x2F;p&gt;
&lt;p&gt;This is why the cloud won. Not because bare metal is expensive. Because the operational cost of doing it yourself was prohibitive for small teams. The cloud sold you a package deal: we handle the infrastructure, you handle the application. Worth it, even at a premium.&lt;&#x2F;p&gt;
&lt;p&gt;But what if that operational cost dropped to near zero?&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-ai-shift&quot;&gt;The AI shift&lt;&#x2F;h2&gt;
&lt;p&gt;I had Claude Code with Opus 4.6 available. I had spent months working with it on other projects. Compilers, CRDT engines, reverse proxies. I knew what it could do with a clear spec and a well-defined problem domain.&lt;&#x2F;p&gt;
&lt;p&gt;And deploying web applications to bare metal is a well-defined problem domain.&lt;&#x2F;p&gt;
&lt;p&gt;The core requirements are straightforward: upload an artifact, start it on a port, check that it is healthy, route traffic to it, stop the old one. Everything else, TLS, process supervision, rollback, log capture, is layered on top of that core loop. The problem space is wide but shallow. Lots of features, few genuinely novel algorithms.&lt;&#x2F;p&gt;
&lt;p&gt;This is exactly the kind of work where AI shines. Not because it writes perfect code on the first try. But because it can iterate through a feature list at a pace that would take a solo developer weeks, producing working implementations in hours. The feedback loop is tight: describe what you want, get code, test it, refine. The domain knowledge exists in a thousand deployment tools that came before. The AI has seen all of them.&lt;&#x2F;p&gt;
&lt;p&gt;So I decided to build my own deployment tool. From scratch. With AI as my co-engineer.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;building-vela&quot;&gt;Building Vela&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;vela&quot;&gt;Vela&lt;&#x2F;a&gt; is what came out of that process. A single Rust binary that handles everything I listed above: reverse proxy, auto-TLS, process supervision, zero-downtime deploys, health checks, secret management, log streaming, rollbacks. No containers. No Docker. No YAML.&lt;&#x2F;p&gt;
&lt;p&gt;The design draws from both of its ancestors. From Nomad, the suckless philosophy: a single binary, minimal configuration, no opinions about things that are not its problem. From Zentinel, the Pingora-inspired proxy architecture: hyper-based reverse proxy with TLS termination, domain-based routing, and WebSocket support baked into the same process. Vela is what happens when you take the best ideas from tools you admire and combine them into something purpose-built for your exact workload.&lt;&#x2F;p&gt;
&lt;p&gt;The design philosophy is blunt: one binary, two modes.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────────┐
│  Your server                                │
│                                             │
│  Vela daemon                                │
│  ├── Reverse proxy (:80&amp;#x2F;:443, auto-TLS)     │
│  ├── Process manager (start, health, swap)  │
│  └── IPC socket                             │
│                                             │
│  Apps                                       │
│  ├── cyanea.bio      → :10001              │
│  └── archipelag.io   → :10002              │
└─────────────────────────────────────────────┘

┌─────────────────────────────────────────────┐
│  Your laptop                                │
│                                             │
│  vela deploy  →  scp + ssh  →  server       │
└─────────────────────────────────────────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;code&gt;vela serve&lt;&#x2F;code&gt; runs on the server. It is the reverse proxy, the process manager, and the IPC daemon, all in one process. &lt;code&gt;vela deploy&lt;&#x2F;code&gt; runs on your laptop. It reads a manifest, uploads your artifact over SSH, and tells the server to activate it.&lt;&#x2F;p&gt;
&lt;p&gt;SSH is the control plane. No tokens, no API keys, no custom authentication layer. If you can SSH into the server, you can deploy. This is a deliberate choice. SSH key management is a solved problem. Every developer already has it configured. Every server already has it running. Building a custom auth system on top would be adding complexity for no practical gain.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-manifest&quot;&gt;The manifest&lt;&#x2F;h3&gt;
&lt;p&gt;Each app gets a &lt;code&gt;Vela.toml&lt;&#x2F;code&gt; in its project root:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[app]
name = &amp;quot;cyanea&amp;quot;
domain = &amp;quot;app.cyanea.bio&amp;quot;

[deploy]
server = &amp;quot;deploy@my-server&amp;quot;
type = &amp;quot;beam&amp;quot;
binary = &amp;quot;server&amp;quot;
health = &amp;quot;&amp;#x2F;health&amp;quot;
strategy = &amp;quot;sequential&amp;quot;
pre_start = &amp;quot;bin&amp;#x2F;cyanea eval &amp;#x27;Cyanea.Release.migrate()&amp;#x27;&amp;quot;

[env]
DATABASE_PATH = &amp;quot;${data_dir}&amp;#x2F;cyanea.db&amp;quot;
SECRET_KEY_BASE = &amp;quot;${secret:SECRET_KEY_BASE}&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;That is the entire deploy configuration. The app type tells Vela how to start it (&lt;code&gt;beam&lt;&#x2F;code&gt; runs Elixir releases, &lt;code&gt;binary&lt;&#x2F;code&gt; runs compiled executables). The health path tells it where to check. The strategy tells it how to swap traffic. The &lt;code&gt;pre_start&lt;&#x2F;code&gt; hook runs database migrations before the new instance starts, and if migrations fail, the deploy aborts and the old instance keeps running.&lt;&#x2F;p&gt;
&lt;p&gt;Environment variables support two substitution patterns: &lt;code&gt;${data_dir}&lt;&#x2F;code&gt; expands to the app’s persistent data directory (which survives deploys), and &lt;code&gt;${secret:KEY}&lt;&#x2F;code&gt; pulls from the server-side secret store. Secrets never live in your repo.&lt;&#x2F;p&gt;
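&lt;p&gt;The substitution itself is plain string expansion. A minimal sketch of the idea (the function and names are illustrative, not Vela’s actual internals):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::HashMap;

&amp;#x2F;&amp;#x2F; Expand ${data_dir} and ${secret:KEY} placeholders in a manifest value.
&amp;#x2F;&amp;#x2F; Illustrative sketch only; not Vela&amp;#x27;s actual implementation.
fn expand(value: &amp;amp;str, data_dir: &amp;amp;str, secrets: &amp;amp;HashMap&amp;lt;String, String&amp;gt;) -&amp;gt; String {
    let mut out = value.replace(&amp;quot;${data_dir}&amp;quot;, data_dir);
    &amp;#x2F;&amp;#x2F; Resolve each ${secret:KEY} against the server-side store.
    while let Some(start) = out.find(&amp;quot;${secret:&amp;quot;) {
        let Some(end) = out[start..].find(&amp;#x27;}&amp;#x27;).map(|i| start + i) else { break };
        let key = out[start + 9..end].to_string();
        let resolved = secrets.get(&amp;amp;key).cloned().unwrap_or_default();
        out.replace_range(start..=end, &amp;amp;resolved);
    }
    out
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;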
&lt;p&gt;Deploying looks like this:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;MIX_ENV=prod mix release
vela deploy .&amp;#x2F;_build&amp;#x2F;prod&amp;#x2F;rel&amp;#x2F;cyanea
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Two commands. The artifact goes up, the health check passes, traffic swaps, done.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;zero-downtime-deploys&quot;&gt;Zero-downtime deploys&lt;&#x2F;h3&gt;
&lt;p&gt;Vela supports two deploy strategies, and the choice matters.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Blue-green&lt;&#x2F;strong&gt; is the default. The new instance starts alongside the old one on a fresh port. Vela runs a health check against it (30 retries, one per second, five-second timeout per attempt). Once the health check passes, the reverse proxy atomically swaps the route table entry for that domain to point at the new port. The old instance gets a configurable drain period to finish in-flight requests, then receives SIGTERM. If it does not exit within the drain window, SIGKILL.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Time ──────────────────────────────────────────►

Old instance     ████████████████████░░░░  (draining)
New instance              ░░░░████████████████████
                          ▲   ▲
                     start │   │ health passes, swap
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Zero downtime. The user never sees a blip. This works for stateless apps and apps backed by PostgreSQL (where both instances can connect to the same database simultaneously).&lt;&#x2F;p&gt;
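&lt;p&gt;The gate on that swap is the health check loop. A hedged sketch of its shape, with the HTTP probe abstracted into a closure (the real implementation differs in detail):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::thread::sleep;
use std::time::Duration;

&amp;#x2F;&amp;#x2F; Poll a health probe until it passes or retries run out. In the
&amp;#x2F;&amp;#x2F; blue-green flow this is 30 attempts, one per second; `probe` stands
&amp;#x2F;&amp;#x2F; in for an HTTP GET against the app&amp;#x27;s configured health path.
fn wait_healthy(mut probe: impl FnMut() -&amp;gt; bool, retries: u32, delay: Duration) -&amp;gt; bool {
    for _ in 0..retries {
        if probe() {
            return true; &amp;#x2F;&amp;#x2F; safe to swap the route table to the new port
        }
        sleep(delay);
    }
    false &amp;#x2F;&amp;#x2F; deploy aborts; the old instance keeps serving
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;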
&lt;p&gt;&lt;strong&gt;Sequential&lt;&#x2F;strong&gt; is for SQLite apps. You cannot have two processes writing to the same SQLite database (WAL mode helps, but concurrent writers from separate instances are asking for trouble). So Vela stops the old instance first, starts the new one, health checks it, and activates it. Sub-second blip. Acceptable for apps where the alternative is write contention.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Time ──────────────────────────────────────────►

Old instance     ████████████████████
New instance                          ░░░░████████████████████
                                 ▲   ▲
                            stop │   │ start + health check
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The decision is per-app, configured in the manifest. Cyanea uses sequential (SQLite). Archipelag uses blue-green (PostgreSQL).&lt;&#x2F;p&gt;
&lt;h3 id=&quot;process-supervision&quot;&gt;Process supervision&lt;&#x2F;h3&gt;
&lt;p&gt;Vela does not just start your app and walk away. It supervises it. If a process crashes, Vela detects the exit (via non-blocking &lt;code&gt;try_wait&lt;&#x2F;code&gt; on the child process handle), logs it, and restarts from the stored launch configuration:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;pub async fn check_and_restart(&amp;amp;mut self) -&amp;gt; Vec&amp;lt;String&amp;gt; {
    let mut to_restart = Vec::new();

    for (key, process) in &amp;amp;mut self.running {
        match process.child.try_wait() {
            Ok(Some(status)) if !status.success() =&amp;gt; {
                &amp;#x2F;&amp;#x2F; Process exited unexpectedly
                to_restart.push((
                    key.clone(),
                    process.launch_config.clone(),
                ));
            }
            _ =&amp;gt; {}
        }
    }

    let mut restarted = Vec::new();
    for (key, config) in to_restart {
        &amp;#x2F;&amp;#x2F; Restart on same port if available, allocate new otherwise
        self.restart_from_config(&amp;amp;key, &amp;amp;config).await;
        restarted.push(key);
    }
    restarted
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Each app’s &lt;code&gt;LaunchConfig&lt;&#x2F;code&gt; (release directory, binary name, app type, environment variables, data directory) is stored so that restarts use the exact same configuration. The daemon also persists app state to disk, so if Vela itself restarts (server reboot, daemon upgrade), it restores all running apps from their saved configurations.&lt;&#x2F;p&gt;
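&lt;p&gt;To make that concrete, a sketch of what such a launch record might look like (field names are illustrative, not Vela’s actual schema):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::HashMap;

&amp;#x2F;&amp;#x2F; Everything needed to relaunch an app exactly as it last ran.
&amp;#x2F;&amp;#x2F; Field names are illustrative, not Vela&amp;#x27;s actual schema.
#[derive(Clone, Debug, PartialEq)]
struct LaunchConfig {
    release_dir: String,
    binary: String,
    app_type: String,
    env: HashMap&amp;lt;String, String&amp;gt;,
    data_dir: String,
}

&amp;#x2F;&amp;#x2F; On daemon startup, every persisted config is relaunched as-is, so a
&amp;#x2F;&amp;#x2F; server reboot or daemon upgrade brings apps back without a redeploy.
fn restore_order(saved: &amp;amp;HashMap&amp;lt;String, LaunchConfig&amp;gt;) -&amp;gt; Vec&amp;lt;String&amp;gt; {
    let mut names: Vec&amp;lt;String&amp;gt; = saved.keys().cloned().collect();
    names.sort(); &amp;#x2F;&amp;#x2F; deterministic startup order for the sketch
    &amp;#x2F;&amp;#x2F; (the real daemon would spawn each process here)
    names
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;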
&lt;p&gt;This is the kind of feature that would take a day to specify and a week to implement if you were writing it from scratch. With Claude, it took about an hour of iteration, including the edge cases around port reallocation and the pending&#x2F;active state split during deploys.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;built-in-services&quot;&gt;Built-in services&lt;&#x2F;h3&gt;
&lt;p&gt;Both of my apps have service dependencies. Archipelag needs PostgreSQL and NATS. Rather than managing these separately, Vela handles service provisioning directly:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[services.postgres]
version = &amp;quot;17&amp;quot;
databases = [&amp;quot;archipelag_prod&amp;quot;]

[services.nats]
version = &amp;quot;2.10&amp;quot;
jetstream = true
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;On first deploy, Vela installs PostgreSQL (via apt), creates the database with a generated password, and injects &lt;code&gt;DATABASE_URL&lt;&#x2F;code&gt; into the app’s environment. For NATS, it downloads the binary, generates a config, and starts it as a supervised child process with &lt;code&gt;NATS_URL&lt;&#x2F;code&gt; injected. Service credentials persist across deploys and daemon restarts.&lt;&#x2F;p&gt;
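&lt;p&gt;The injection step amounts to folding the generated credentials into the app’s environment before launch. A sketch under assumed names (the URL shapes are standard; the function itself is illustrative):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::HashMap;

&amp;#x2F;&amp;#x2F; Fold generated service credentials into the app environment before
&amp;#x2F;&amp;#x2F; launch. Names and URL shapes are illustrative, not Vela&amp;#x27;s internals.
fn inject_service_env(env: &amp;amp;mut HashMap&amp;lt;String, String&amp;gt;, db: &amp;amp;str, user: &amp;amp;str, password: &amp;amp;str) {
    env.insert(
        &amp;quot;DATABASE_URL&amp;quot;.to_string(),
        format!(&amp;quot;postgres:&amp;#x2F;&amp;#x2F;{user}:{password}@localhost:5432&amp;#x2F;{db}&amp;quot;),
    );
    env.insert(&amp;quot;NATS_URL&amp;quot;.to_string(), &amp;quot;nats:&amp;#x2F;&amp;#x2F;127.0.0.1:4222&amp;quot;.to_string());
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;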
&lt;p&gt;This was one of those features where the AI really earned its keep. The NATS lifecycle management alone, downloading the right binary for the platform, generating config, supervising the process, health-checking the monitoring endpoint, persisting credentials, involved touching six or seven modules. Claude handled the plumbing while I focused on the design decisions.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-reverse-proxy&quot;&gt;The reverse proxy&lt;&#x2F;h3&gt;
&lt;p&gt;Vela embeds its own reverse proxy built on hyper. It handles TLS termination (auto-provisioned via Let’s Encrypt ACME HTTP-01, or static certificates for Cloudflare setups), domain-based routing, WebSocket upgrades, and HTTP-to-HTTPS redirects.&lt;&#x2F;p&gt;
&lt;p&gt;The routing model is simple. A thread-safe hash map from domain to port:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;pub struct RouteTable {
    routes: Arc&amp;lt;RwLock&amp;lt;HashMap&amp;lt;String, u16&amp;gt;&amp;gt;&amp;gt;,
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;When a request arrives, Vela extracts the Host header, looks up the port, and forwards the request to &lt;code&gt;localhost:{port}&lt;&#x2F;code&gt;. When a deploy swaps traffic, it is a single write-lock on the hash map to update the port number. Atomic. No configuration reload. No proxy restart.&lt;&#x2F;p&gt;
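&lt;p&gt;Filling in the two operations around that map (the method names are my assumption; the struct is the one above):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;use std::collections::HashMap;
use std::sync::{Arc, RwLock};

pub struct RouteTable {
    routes: Arc&amp;lt;RwLock&amp;lt;HashMap&amp;lt;String, u16&amp;gt;&amp;gt;&amp;gt;,
}

impl RouteTable {
    pub fn new() -&amp;gt; Self {
        Self { routes: Arc::new(RwLock::new(HashMap::new())) }
    }

    &amp;#x2F;&amp;#x2F; Hot path: every request takes a shared read lock to resolve
    &amp;#x2F;&amp;#x2F; the Host header to a local port.
    pub fn lookup(&amp;amp;self, host: &amp;amp;str) -&amp;gt; Option&amp;lt;u16&amp;gt; {
        self.routes.read().unwrap().get(host).copied()
    }

    &amp;#x2F;&amp;#x2F; Deploy path: one write lock swaps the port atomically.
    pub fn swap(&amp;amp;self, host: &amp;amp;str, port: u16) {
        self.routes.write().unwrap().insert(host.to_string(), port);
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;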
&lt;p&gt;For WebSocket connections (which both Phoenix apps use for LiveView), Vela detects the &lt;code&gt;Upgrade: websocket&lt;&#x2F;code&gt; header and switches to raw TCP tunneling with bidirectional I&#x2F;O. This was important for my use case: Phoenix LiveView is WebSocket-native, and if the proxy does not handle upgrades correctly, the entire UI breaks.&lt;&#x2F;p&gt;
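&lt;p&gt;The detection step itself is small. A standalone sketch over plain header pairs (hyper exposes typed headers, but the comparison is the same):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;&amp;#x2F;&amp;#x2F; Upgrade detection: header names and values are matched
&amp;#x2F;&amp;#x2F; case-insensitively, so an uppercase WebSocket also matches.
pub fn wants_websocket(headers: &amp;amp;[(String, String)]) -&amp;gt; bool {
    headers.iter().any(|(name, value)| {
        name.eq_ignore_ascii_case(&amp;quot;upgrade&amp;quot;)
            &amp;amp;&amp;amp; value.eq_ignore_ascii_case(&amp;quot;websocket&amp;quot;)
    })
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;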
&lt;h2 id=&quot;from-empty-box-to-production-in-a-day&quot;&gt;From empty box to production in a day&lt;&#x2F;h2&gt;
&lt;p&gt;Here is the timeline of the actual migration. I bought the Hetzner server and within about 48 hours, both apps were running in production with HTTPS, process supervision, automated backups, and daily health reports.&lt;&#x2F;p&gt;
&lt;p&gt;The sequence went roughly like this:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hardware validation&lt;&#x2F;strong&gt;: Check NVMe drive health, run memory tests, verify RAID configuration. The drives had about 25,000 power-on hours (these are auction servers, they have been used), but SMART health passed and wear levels were well within acceptable range.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OS provisioning&lt;&#x2F;strong&gt;: Debian, RAID 1 across both NVMe drives. Straightforward.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Server hardening&lt;&#x2F;strong&gt;: Firewall rules, SSH hardening (key-only auth, non-default port, rate limiting), automatic security updates, intrusion detection. This is the part I am deliberately vague about. If you are running a public-facing server, hardening is non-negotiable, but I am not going to publish my exact firewall configuration.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Vela installation&lt;&#x2F;strong&gt;: Download the binary, create a config file, install the systemd service. Five minutes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;First app deployed (Cyanea)&lt;&#x2F;strong&gt;: Built the Elixir release on the server, set secrets, ran migrations, deployed. The entire build-and-deploy cycle for a Phoenix app with a Rust NIF took about fifteen minutes, most of which was compilation.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Second app deployed (Archipelag)&lt;&#x2F;strong&gt;: Same flow, plus provisioning PostgreSQL and restoring a database dump from Fly, plus setting up NATS. About thirty minutes.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TLS certificates&lt;&#x2F;strong&gt;: Updated DNS records, Let’s Encrypt certificates issued automatically. Vela handles the ACME challenge internally, no Certbot, no cron job, no manual cert management.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring&lt;&#x2F;strong&gt;: A daily health report script that checks system metrics, service status, and app health, then emails a summary. Simple but effective.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;The most time-consuming part was not the tooling. It was migrating the PostgreSQL data from Fly and verifying that both apps behaved correctly in their new environment. The infrastructure setup itself, the part that would have taken weeks without Vela, took hours.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-broader-thesis&quot;&gt;The broader thesis&lt;&#x2F;h2&gt;
&lt;p&gt;Here is what I think is happening, and I think it is bigger than my personal infrastructure bill.&lt;&#x2F;p&gt;
&lt;p&gt;The cloud won because it sold a bundle: compute, networking, storage, deployment, monitoring, scaling, security, all integrated, all managed. The alternative was building each piece yourself, and the labor cost made that prohibitive for small teams. Managed infrastructure was cheaper than an ops engineer.&lt;&#x2F;p&gt;
&lt;p&gt;AI changes that equation. Not by making the cloud cheaper, but by making bespoke tooling economically viable.&lt;&#x2F;p&gt;
&lt;p&gt;Consider what I got with Vela. A deployment tool that does exactly what I need and nothing more. No container orchestration, because I do not use containers. No multi-region routing, because I run in one location. No autoscaling, because two apps do not need to autoscale. Every feature exists because I needed it. Every feature works with my specific stack (Elixir&#x2F;BEAM, Rust, SQLite, PostgreSQL, NATS). The tool is tailored to my workload the way a bespoke suit is tailored to a body.&lt;&#x2F;p&gt;
&lt;p&gt;This kind of custom tooling used to be a luxury. You needed either a platform team that could invest weeks of engineering time, or the rare individual who was both a skilled systems programmer and willing to spend their evenings writing deployment tools instead of building products. The economics did not make sense for a solo founder or a two-person team.&lt;&#x2F;p&gt;
&lt;p&gt;With AI, the cost of building bespoke tooling drops by an order of magnitude. Not to zero, you still need to know what you want, you still need to test and iterate, you still need to understand the domain well enough to evaluate the output. But the gap between “I know what I need” and “I have a working implementation” shrinks from weeks to hours.&lt;&#x2F;p&gt;
&lt;p&gt;And when bespoke tooling is cheap, the cloud’s bundle becomes less compelling. You do not need the managed Kubernetes service if you can build a deployment tool that fits your exact needs. You do not need the managed database service if you can install PostgreSQL yourself and the AI helps you set up backups, monitoring, and failover. You do not need the managed TLS service if your deployment tool handles ACME natively.&lt;&#x2F;p&gt;
&lt;p&gt;What you are left paying for is compute and bandwidth. And for compute and bandwidth, bare metal is drastically cheaper than the cloud.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-api-is-bespoke&quot;&gt;The API is bespoke&lt;&#x2F;h2&gt;
&lt;p&gt;There is a subtlety here that I think is worth calling out. When people talk about the cloud’s advantages, they often point to the API-driven experience. Infrastructure as code. Declarative configuration. Programmable everything. And that is real. The cloud’s API layer is genuinely valuable.&lt;&#x2F;p&gt;
&lt;p&gt;But the API does not have to come from a cloud provider. It can come from your own tooling.&lt;&#x2F;p&gt;
&lt;p&gt;Vela gives me an API-driven experience. I declare my app’s configuration in a TOML manifest. I run a single command to deploy. I can check status, stream logs, manage secrets, trigger backups, and roll back releases, all from my laptop, all through a CLI that speaks SSH to a daemon on the server. The experience is not worse than Fly or Heroku. In some ways it is better, because the tool does exactly what I need and nothing else, and when something goes wrong, I can read the source code.&lt;&#x2F;p&gt;
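&lt;p&gt;For a sense of what that looks like, a manifest might read roughly like this. The keys are illustrative, pieced together from the features described here, not Vela’s documented schema:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[app]
name = &amp;quot;archipelag&amp;quot;
domain = &amp;quot;archipelag.io&amp;quot;

[services.postgres]
version = &amp;quot;17&amp;quot;
databases = [&amp;quot;archipelag_prod&amp;quot;]

[services.nats]
version = &amp;quot;2.10&amp;quot;
jetstream = true
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;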
&lt;p&gt;The difference is that my “API” is a few thousand lines of Rust instead of a multi-billion-dollar cloud platform. And that is fine. I do not need the platform. I need the interface. AI lets me build the interface.&lt;&#x2F;p&gt;
&lt;p&gt;This is, I think, the pattern that will play out more broadly. The cloud’s value was never just compute. It was the operational layer on top of compute, the tooling that made raw hardware usable. AI makes it possible to build that operational layer yourself, tailored to your needs, at a fraction of the cost. The cloud becomes optional. The server becomes a commodity. The differentiator is the tooling, and the tooling is something AI can help you build.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-the-numbers-look-like&quot;&gt;What the numbers look like&lt;&#x2F;h2&gt;
&lt;p&gt;Let me be concrete about costs, because this is ultimately an economic argument.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes on DigitalOcean&lt;&#x2F;strong&gt; (my original setup):&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Managed K8s cluster: ~$12&#x2F;month (control plane fee)&lt;&#x2F;li&gt;
&lt;li&gt;Worker nodes (2x smallest): ~$24&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Managed PostgreSQL: ~$15&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Load balancer: ~$12&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Persistent volumes, bandwidth, extras: ~$15&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$78-80&#x2F;month&lt;&#x2F;strong&gt; (and this was after I trimmed it)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Fly.io&lt;&#x2F;strong&gt; (the middle ground):&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Two Phoenix apps (shared-cpu-1x, 256MB each): ~$14&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Managed Postgres: ~$25&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Managed NATS: ~$20&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;Bandwidth, extras: ~$10&#x2F;month&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$70&#x2F;month&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The managed services were the killer. Fly’s compute pricing is fair, but managed Postgres and managed NATS added up fast. And that was at near-zero traffic. Egress pricing on Fly is metered, so if either app had started getting real user load, the bandwidth bill alone would have pushed the total well past a hundred dollars a month.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Hetzner bare metal&lt;&#x2F;strong&gt; (current):&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Dedicated server (auction): EUR 38&#x2F;month (~$42)&lt;&#x2F;li&gt;
&lt;li&gt;That is it. PostgreSQL, NATS, TLS, everything runs on the box.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$42&#x2F;month&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The Hetzner box is cheaper than Fly right now, and the gap only widens as usage grows. But the raw dollar comparison understates the difference. Look at what I am getting. 128 GB of RAM versus 512 MB. Multi-core CPU versus shared fractional cores. Two terabytes of NVMe storage versus a few gigs. Bandwidth that is essentially unlimited (Hetzner includes 20 TB of traffic) versus metered egress that scales with every user you add.&lt;&#x2F;p&gt;
&lt;p&gt;The capacity gap is the real story. On Fly, scaling from two apps to ten means linearly increasing costs, more VMs, more managed database instances, more bandwidth charges. On my Hetzner box, scaling from two apps to ten means… nothing. The resources are already there. I paid for them. PostgreSQL, NATS, any other service I want to run, it all fits on the same box with room to spare.&lt;&#x2F;p&gt;
&lt;p&gt;And there is no surprise bill. No bandwidth overage. No “your database exceeded the row limit” fee. No managed service add-on creep. Thirty-eight euros a month, every month, regardless of what I run on it.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;when-this-does-not-apply&quot;&gt;When this does not apply&lt;&#x2F;h2&gt;
&lt;p&gt;I would be dishonest if I pretended bare metal is the right answer for everyone. It is not.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If you need multi-region presence&lt;&#x2F;strong&gt;, the cloud still wins. Running your own hardware in three continents is a different kind of problem. Edge computing, CDN-native architectures (which I &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;raskell.io&#x2F;articles&#x2F;edge-systems-are-the-new-backend&#x2F;&quot;&gt;wrote about previously&lt;&#x2F;a&gt;), and platforms like Fly or Cloudflare Workers are the right tools for workloads that need to be close to users worldwide.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If you need elastic scaling&lt;&#x2F;strong&gt;, bare metal does not flex. A server has fixed resources. If your traffic spikes 10x for an hour, you cannot add capacity on demand. You can over-provision (and at these prices, generous over-provisioning is affordable), but it is not the same as true elasticity.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If you do not understand the operational basics&lt;&#x2F;strong&gt;, bare metal will bite you. Server hardening, backup strategies, disk monitoring, security patching, these are your responsibility. The cloud abstracts them away. On bare metal, a missed security update is your problem. A full disk is your problem. A failed drive (RAID helps, but is not magic) is your problem.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;If your team is large and needs guardrails&lt;&#x2F;strong&gt;, managed platforms provide consistency and governance that bare metal does not. Kubernetes is complex, but it is complex in a standardized way. Everyone knows how to deploy to K8s. Everyone knows how to debug a pod. Your custom Vela setup is legible to exactly the people who built it.&lt;&#x2F;p&gt;
&lt;p&gt;The sweet spot for bare metal, especially AI-assisted bare metal, is small teams building products that need reliability but not scale, performance but not elasticity, control but not standardization. Solo founders. Two-person startups. Side projects that might become real businesses.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-i-learned&quot;&gt;What I learned&lt;&#x2F;h2&gt;
&lt;p&gt;The migration took about 48 hours from “I have an empty server” to “both apps are in production with HTTPS, monitoring, and automated backups.” Most of that time was data migration and validation, not infrastructure setup.&lt;&#x2F;p&gt;
&lt;p&gt;Vela is now at version 0.5.0 with a feature list I am genuinely proud of: blue-green and sequential deploys, process supervision with auto-restart, built-in reverse proxy with auto-TLS, service dependency management (Postgres and NATS), secret management, log streaming, rollbacks, remote builds, scheduled backups, deploy hooks, and machine-readable status output for monitoring integration.&lt;&#x2F;p&gt;
&lt;p&gt;I built most of it in a few focused sessions with Claude Code. Not because the code is trivial: it is about 4,000 lines of Rust with async IPC, Unix socket communication, ACME certificate management, process lifecycle handling, and a reverse proxy with WebSocket support. But because the problem domain is well-understood, the requirements were clear, and AI is remarkably good at turning clear requirements into working implementations.&lt;&#x2F;p&gt;
&lt;p&gt;The thing I keep coming back to: the cloud was never selling compute. It was selling convenience. And convenience used to require a company with thousands of engineers to build platforms that abstracted away the hard parts. Now, a developer with a clear idea of what they need and an AI that can write systems code can build a fit-for-purpose operational layer in a weekend.&lt;&#x2F;p&gt;
&lt;p&gt;That does not make the cloud irrelevant. It makes the cloud optional for a much larger class of workloads than it was before.&lt;&#x2F;p&gt;
&lt;p&gt;Buy a server. Build your tools. Ship your product. The infrastructure should be boring. With AI, it finally can be.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;tools-and-platforms&quot;&gt;Tools and platforms&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;vela&quot;&gt;Vela&lt;&#x2F;a&gt; - The bare-metal deployment tool described in this article&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.hetzner.com&#x2F;sb&#x2F;&quot;&gt;Hetzner Server Auction&lt;&#x2F;a&gt; - Refurbished dedicated servers at steep discounts&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;fly.io&quot;&gt;fly.io&lt;&#x2F;a&gt; - The managed platform I migrated from&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.nomadproject.io&#x2F;&quot;&gt;Nomad&lt;&#x2F;a&gt; - HashiCorp’s minimal workload orchestrator&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;opentofu.org&#x2F;&quot;&gt;OpenTofu&lt;&#x2F;a&gt; - Community fork of Terraform after the BSL relicense&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; - Cloudflare’s Rust framework for building programmable proxies&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;hyper.rs&#x2F;&quot;&gt;hyper&lt;&#x2F;a&gt; - Rust HTTP library powering Vela’s reverse proxy&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;letsencrypt.org&#x2F;&quot;&gt;Let’s Encrypt&lt;&#x2F;a&gt; - Free TLS certificates, automated via ACME in Vela&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;nats.io&#x2F;&quot;&gt;NATS&lt;&#x2F;a&gt; - Lightweight messaging system used by Archipelag&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;frameworks-and-runtimes&quot;&gt;Frameworks and runtimes&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.phoenixframework.org&#x2F;&quot;&gt;Phoenix Framework&lt;&#x2F;a&gt; - Elixir web framework powering both apps&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.erlang.org&#x2F;&quot;&gt;Erlang&#x2F;OTP&lt;&#x2F;a&gt; - The BEAM virtual machine that runs Phoenix and Elixir&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.rust-lang.org&#x2F;&quot;&gt;Rust&lt;&#x2F;a&gt; - Systems language Vela and Zentinel are written in&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;projects-referenced&quot;&gt;Projects referenced&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Security-first reverse proxy built on Pingora&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; - Distributed compute platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt; - Bioinformatics platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;anthropics&#x2F;claude-code&quot;&gt;Claude Code&lt;&#x2F;a&gt; - AI coding tool used to build Vela&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Edge Systems Are the New Backend</title>
          <pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/edge-systems-are-the-new-backend/</link>
          <guid>https://raskell.io/articles/edge-systems-are-the-new-backend/</guid>
          <description xml:base="https://raskell.io/articles/edge-systems-are-the-new-backend/">&lt;p&gt;A request arrives at your system. In the next 50 milliseconds, before any application code runs, this happens: TLS termination, route matching, WAF inspection against 285 detection rules, JWT validation, rate limit evaluation, request body validation against a JSON schema, and trace context generation. The request either dies at the edge or arrives at your backend pre-authenticated, pre-validated, and pre-authorized.&lt;&#x2F;p&gt;
&lt;p&gt;Five years ago, your backend did all of this. Every service validated its own tokens, enforced its own rate limits, ran its own security checks. Today, the backend might not even exist in the form you expect. It might be a static site served from edge nodes, a thin persistence API, or a headless CMS that publishes content to a CDN and never handles a user request directly.&lt;&#x2F;p&gt;
&lt;p&gt;Something shifted. Not just at the edge. On both ends.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-three-tier-past&quot;&gt;The three-tier past&lt;&#x2F;h2&gt;
&lt;p&gt;The architecture most of us learned was simple. Client, backend, database. The browser rendered HTML, maybe ran some jQuery. The backend did everything: authentication, authorization, business logic, rendering, validation, rate limiting, session management. The database stored state. Clean separation, one direction, easy to reason about.&lt;&#x2F;p&gt;
&lt;p&gt;This model worked because the browser was dumb. It could render markup and submit forms. Any real computation had to happen on the server. The backend was fat by necessity, not by design.&lt;&#x2F;p&gt;
&lt;p&gt;Microservices made it worse. Consider a typical setup: a user service, an order service, a payment service, a notification service, an inventory service. Each one needs to validate JWTs. Each one needs to enforce rate limits. Each one needs input validation, request logging, error handling, and distributed tracing. That is five services times six concerns. Thirty implementations of logic that should exist exactly once.&lt;&#x2F;p&gt;
&lt;p&gt;Now multiply. Real organizations have 15, 50, 200 services. Each team implements auth slightly differently. One uses a shared library, one copied the code two years ago, one rolled their own because the library did not support their token format. The rate limiting configurations drift. The logging formats diverge. A security patch to the JWT validation logic means PRs across every repository, coordinated deployments, and someone asking “did we get all of them?”&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;                 ┌──────────┬──────────┬──────────┐
                 │ Users    │ Orders   │ Payments │
                 │ Service  │ Service  │ Service  │
                 ├──────────┼──────────┼──────────┤
  Auth           │ ✓ (v2.1) │ ✓ (v1.9) │ ✓ (v2.0) │
  Rate limiting  │ ✓ (lib)  │ ✓ (copy) │ ✗ (none) │
  Validation     │ ✓        │ ✓        │ ✓        │
  WAF&amp;#x2F;Security   │ ✗        │ ✗        │ ✗        │
  Logging        │ JSON     │ text     │ JSON     │
  Tracing        │ ✓        │ ✗        │ ✓        │
                 └──────────┴──────────┴──────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Libraries helped. Service meshes helped more. But the complexity was still distributed across every service, in every team’s codebase, in every deployment pipeline. The mesh moved networking concerns to a sidecar. It did not move application-level concerns like auth, validation, or security inspection.&lt;&#x2F;p&gt;
&lt;p&gt;The edge was an afterthought. A reverse proxy. TLS termination. Maybe Varnish for caching. Maybe a CDN for static assets. It was infrastructure plumbing, not a place where decisions happened.&lt;&#x2F;p&gt;
&lt;p&gt;That model is over.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;two-migrations-one-hollowing&quot;&gt;Two migrations, one hollowing&lt;&#x2F;h2&gt;
&lt;p&gt;Here is the thing I keep coming back to: business logic is migrating in two directions simultaneously.&lt;&#x2F;p&gt;
&lt;p&gt;Upward, to the edge. Infrastructure concerns like auth, WAF, and rate limiting now execute at the edge layer, before requests reach any backend. But it goes further than that. Edge Workers run actual application code. Containers deploy at the edge. Server-side rendering happens at edge nodes 50ms from the user, not in a data center 200ms away.&lt;&#x2F;p&gt;
&lt;p&gt;Downward, to the client. The browser is no longer dumb. WebAssembly runs near-native code. WebGPU puts the GPU to work on ML inference and image processing. Web Workers handle background computation. Service Workers intercept network requests and serve cached responses offline. CRDTs let the client own its data and sync when it feels like it.&lt;&#x2F;p&gt;
&lt;p&gt;The backend is caught in the middle. Squeezed from both sides. And what remains is not a “backend” in any traditional sense. It is a persistence layer. A place where data rests and syncs. The interesting work happens elsewhere.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-moved-to-the-edge&quot;&gt;What moved to the edge&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;infrastructure-concerns&quot;&gt;Infrastructure concerns&lt;&#x2F;h3&gt;
&lt;p&gt;The first wave was obvious. Cross-cutting concerns that every service needed are better handled once, at the point of entry.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Authentication.&lt;&#x2F;strong&gt; Validating a JWT does not require application context. The token is self-contained: a signature, an issuer, an expiry, a set of claims. Parse it, verify the signature against a JWKS endpoint, check the expiry, extract the claims, attach them as headers. Done. The backend receives &lt;code&gt;X-User-Id: alice&lt;&#x2F;code&gt; and &lt;code&gt;X-User-Role: admin&lt;&#x2F;code&gt; instead of a raw Bearer token it has to decode itself.&lt;&#x2F;p&gt;
&lt;p&gt;This is not hypothetical. Here is what this looks like in practice:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;kdl&quot; class=&quot;language-kdl &quot;&gt;&lt;code class=&quot;language-kdl&quot; data-lang=&quot;kdl&quot;&gt;agent &amp;quot;auth&amp;quot; {
    type &amp;quot;auth&amp;quot;
    grpc address=&amp;quot;http:&amp;#x2F;&amp;#x2F;localhost:50051&amp;quot;
    events &amp;quot;request_headers&amp;quot;
    timeout-ms 100
    failure-mode &amp;quot;closed&amp;quot;
    max-concurrent-calls 100
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;That agent handles JWT, OIDC, SAML, mTLS, and API key validation. Every route behind it gets authentication for free. Every backend service trusts the edge to have done the work. The auth agent crashes? Failure mode is “closed”. Requests stop, but the proxy stays up.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Rate limiting.&lt;&#x2F;strong&gt; Token bucket algorithms with per-client keys. The edge layer sees every request before the backend does. It is the natural place to enforce rate limits because it can reject bad traffic before it consumes backend resources. A rejected request at the edge costs microseconds. A rejected request at the backend costs a database query, a connection slot, and whatever work happened before the check.&lt;&#x2F;p&gt;
&lt;p&gt;There are two flavors. Local rate limiting uses in-process token buckets. Fast, no network hops, but each edge node tracks its own counters. If you have 10 edge nodes and a limit of 100 requests per second, each node allows 100, so the effective limit is 1,000. For most use cases, this is fine. Abuse does not distribute itself evenly across your infrastructure.&lt;&#x2F;p&gt;
&lt;p&gt;Distributed rate limiting uses a shared store (Redis, typically). Accurate across nodes, but adds a network hop per request. The tradeoff is latency versus precision. I default to local rate limiting and switch to distributed only when the use case demands exact global limits, like API billing or token budgets for LLM inference.&lt;&#x2F;p&gt;
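&lt;p&gt;The local flavor is a few fields of state per client key. A minimal token bucket sketch (deterministic on purpose: the caller passes elapsed time instead of reading a clock):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    pub fn new(capacity: f64, refill_per_sec: f64) -&amp;gt; Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    &amp;#x2F;&amp;#x2F; Refill based on elapsed time, then try to spend one token.
    pub fn try_acquire(&amp;amp;mut self, elapsed_secs: f64) -&amp;gt; bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec)
            .min(self.capacity);
        if self.tokens &amp;gt;= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;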
&lt;p&gt;&lt;strong&gt;Security inspection.&lt;&#x2F;strong&gt; WAFs used to be appliances. Expensive, opaque, binary. A request was either blocked or allowed. Modern WAFs use anomaly scoring. Each rule contributes a score, and the total determines the action:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Score 0-9:    Allow
Score 10-24:  Log (warning, investigate later)
Score 25+:    Block
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is a fundamentally different model than binary block&#x2F;allow. It lets you tune aggressively without breaking legitimate traffic. I run 285 detection rules at the edge and process 912K requests per second on clean traffic. That is 30x faster than ModSecurity’s C implementation. The performance gap matters because it means WAF inspection can happen on every request, not just suspicious ones.&lt;&#x2F;p&gt;
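&lt;p&gt;The scoring model reduces to an accumulate-then-threshold step. A sketch, with the thresholds from the table above (per-rule weights are invented for illustration):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;#[derive(Debug, PartialEq)]
pub enum Action { Allow, Log, Block }

&amp;#x2F;&amp;#x2F; Each matched rule contributes a score; only the total decides.
pub fn decide(matched_rule_scores: &amp;amp;[u32]) -&amp;gt; Action {
    let total: u32 = matched_rule_scores.iter().sum();
    match total {
        0..=9 =&amp;gt; Action::Allow,
        10..=24 =&amp;gt; Action::Log,
        _ =&amp;gt; Action::Block,
    }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;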
&lt;p&gt;&lt;strong&gt;API validation.&lt;&#x2F;strong&gt; If your API has a JSON Schema, why validate request bodies in your application code? Validate at the edge. Reject malformed requests before they consume a connection, a goroutine, a database transaction. The backend receives only structurally valid payloads.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Observability.&lt;&#x2F;strong&gt; Trace context should originate at the edge, not at the application. The edge is where the request enters your system. It is where you assign a trace ID, start the clock, and record the first span. If you originate traces in your application, you miss everything that happened before: TLS negotiation time, WAF processing time, the fact that the request sat in a rate limit queue for 50ms. Starting traces at the edge gives you the full picture.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-isolation-problem&quot;&gt;The isolation problem&lt;&#x2F;h3&gt;
&lt;p&gt;You cannot put all of this in a monolithic proxy. That is how you end up with nginx and 47 modules where nobody understands the interaction effects. A WAF bug should not take down your routing. A slow auth provider should not block rate limit checks.&lt;&#x2F;p&gt;
&lt;p&gt;The answer is process isolation. Thin dataplane, crash-isolated external agents. Each agent runs as a separate process with its own failure domain:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────────────────────────────┐
│ Edge Proxy (thin dataplane)              │
│ Routing │ TLS │ Caching │ Load Balancing │
└─────┬──────────┬──────────┬──────────────┘
      │          │          │
      ▼          ▼          ▼
   [WAF]      [Auth]    [Rate Limit]
  process     process     process
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Each agent gets its own concurrency semaphore. A slow WAF cannot starve auth. Each agent has a circuit breaker: cross the failure threshold and the circuit opens, shedding calls until a cooldown passes. Each agent has a configurable failure mode, and this is where the design gets interesting:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;kdl&quot; class=&quot;language-kdl &quot;&gt;&lt;code class=&quot;language-kdl&quot; data-lang=&quot;kdl&quot;&gt;agent &amp;quot;waf&amp;quot; {
    type &amp;quot;waf&amp;quot;
    timeout-ms 100
    failure-mode &amp;quot;closed&amp;quot;
    max-concurrent-calls 50
    circuit-breaker {
        failure-threshold 5
        success-threshold 2
        timeout-seconds 30
    }
}

agent &amp;quot;rate-limit&amp;quot; {
    type &amp;quot;rate-limit&amp;quot;
    timeout-ms 50
    failure-mode &amp;quot;open&amp;quot;
    max-concurrent-calls 200
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The WAF fails closed. If it crashes or times out, requests are blocked. You lose availability to preserve security. Rate limiting fails open. If it crashes, requests are allowed. You lose rate enforcement to preserve availability. These are explicit choices per agent, not global defaults. The operator decides which tradeoff to make for each concern, and the decision is visible in the config, not buried in code.&lt;&#x2F;p&gt;
&lt;p&gt;Agents return decisions. The proxy merges them. A blocking decision from any agent wins. Otherwise, header mutations accumulate. The model is simple: agents advise, the proxy decides. No agent can override another agent’s block. No agent can force a request through. The proxy owns the final call.&lt;&#x2F;p&gt;
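&lt;p&gt;That merge rule fits in a dozen lines. A sketch with assumed types (a real decision type presumably carries more, redirects and body mutations included):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;rust&quot; class=&quot;language-rust &quot;&gt;&lt;code class=&quot;language-rust&quot; data-lang=&quot;rust&quot;&gt;#[derive(Debug, PartialEq)]
pub enum Decision {
    Allow { add_headers: Vec&amp;lt;(String, String)&amp;gt; },
    Block,
}

&amp;#x2F;&amp;#x2F; Any blocking decision wins; otherwise header mutations
&amp;#x2F;&amp;#x2F; accumulate in agent order.
pub fn merge(agent_decisions: Vec&amp;lt;Decision&amp;gt;) -&amp;gt; Decision {
    let mut headers = Vec::new();
    for decision in agent_decisions {
        match decision {
            Decision::Block =&amp;gt; return Decision::Block,
            Decision::Allow { add_headers } =&amp;gt; headers.extend(add_headers),
        }
    }
    Decision::Allow { add_headers: headers }
}
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;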
&lt;p&gt;This is not a workaround. It is the fundamental design choice. Complex logic lives outside the core, behind process boundaries. The proxy stays small, fast, and boring. The agents handle the interesting work in isolation. A bug in a Lua scripting agent does not corrupt the routing table. A memory leak in the WAF agent does not exhaust the proxy’s memory. The process boundary is the blast radius.&lt;&#x2F;p&gt;
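&lt;p&gt;The merge rule is small enough to show in full. A sketch with hypothetical types, assuming every agent decision reduces to either allow-with-header-mutations or block:&lt;&#x2F;p&gt;

```rust
// Illustrative decision merge: any Block wins; otherwise header
// mutations from all agents accumulate. Names are hypothetical.

enum Decision {
    Allow { add_headers: Vec<(String, String)> },
    Block { status: u16 },
}

fn merge(decisions: Vec<Decision>) -> Decision {
    let mut headers = Vec::new();
    for d in decisions {
        match d {
            // A single blocking decision short-circuits the merge:
            // no later agent can override it.
            Decision::Block { status } => return Decision::Block { status },
            Decision::Allow { add_headers } => headers.extend(add_headers),
        }
    }
    Decision::Allow { add_headers: headers }
}

fn main() {
    // Auth and rate-limit both allow: their header mutations accumulate.
    let merged = merge(vec![
        Decision::Allow { add_headers: vec![("x-user".into(), "42".into())] },
        Decision::Allow { add_headers: vec![("x-quota".into(), "ok".into())] },
    ]);
    match merged {
        Decision::Allow { add_headers } => assert_eq!(add_headers.len(), 2),
        Decision::Block { .. } => unreachable!(),
    }

    // One WAF block outvotes any number of allows.
    let merged = merge(vec![
        Decision::Allow { add_headers: vec![] },
        Decision::Block { status: 403 },
    ]);
    assert!(matches!(merged, Decision::Block { status: 403 }));
}
```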
&lt;h3 id=&quot;edge-workers-business-logic-at-the-edge&quot;&gt;Edge Workers: business logic at the edge&lt;&#x2F;h3&gt;
&lt;p&gt;Infrastructure concerns were the first wave. The second wave is actual business logic.&lt;&#x2F;p&gt;
&lt;p&gt;Cloudflare Workers, Deno Deploy, Fastly Compute, Vercel Edge Functions. These are not just “serverless at the CDN.” They are full compute environments running at edge nodes around the world. V8 isolates spin up in under 5ms. Cold starts are measured in single-digit milliseconds, not seconds. Your code runs 50ms from the user instead of 200ms away in us-east-1.&lt;&#x2F;p&gt;
&lt;p&gt;The constraints matter, because they shape what belongs here. Typical Edge Worker limits: 10-50ms CPU time per request (not wall time, actual CPU), 128MB memory, no raw TCP sockets, no persistent file system. You get a request, key-value storage, and the ability to make sub-requests to origins. That is it. These constraints are not bugs. They are what makes single-digit-millisecond cold starts possible. V8 isolates are cheap because they are small and short-lived.&lt;&#x2F;p&gt;
&lt;p&gt;What fits within these constraints is surprisingly broad:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;API routing and transformation.&lt;&#x2F;strong&gt; A request comes in for &lt;code&gt;&#x2F;api&#x2F;v2&#x2F;users&lt;&#x2F;code&gt;. The edge Worker rewrites it, fans out to two backend services (user profiles from one, preferences from another), merges the responses, and returns a single payload. The backend services are simple data sources. The edge Worker is the API layer.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;A&#x2F;B testing and feature flags.&lt;&#x2F;strong&gt; Read the experiment cookie, hash the user ID, assign a variant, route to the right origin or rewrite the response. No round trip to a feature flag service. The decision happens in microseconds at the node closest to the user.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Personalization.&lt;&#x2F;strong&gt; Look up the user’s segment in KV storage, inject the right content block, set cache headers accordingly. The backend generated all variants at build time. The edge picks the right one per request.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Server-side rendering.&lt;&#x2F;strong&gt; Render HTML at the edge node closest to the user. Frameworks like Next.js and Remix already support this. React Server Components run at the edge. The “server” in server-side rendering is not your server. It is an edge node in 300 locations.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Authentication and session management.&lt;&#x2F;strong&gt; Validate tokens, refresh sessions, set secure cookies. The auth flow never touches your origin. Cloudflare Workers KV or Durable Objects store session state at the edge.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The pattern: compute that depends on request context but not on deep application state moves to the edge. If you can do it with a request, a key-value lookup, and a response, it probably belongs here. If it needs a complex database query or a multi-step transaction, it does not.&lt;&#x2F;p&gt;
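&lt;p&gt;The A&#x2F;B assignment described above is exactly this kind of compute: a pure function of request context. A sketch, assuming a 50&#x2F;50 split; the names are illustrative, and a production system would pick a hash guaranteed stable across releases rather than the standard library default:&lt;&#x2F;p&gt;

```rust
// Sketch of edge-side A/B assignment: hash (experiment, user) to a
// stable bucket, no feature-flag service round trip. Illustrative
// names; a real deployment would use a hash with cross-version
// stability guarantees (e.g. a fixed-seed non-cryptographic hash).

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn assign_variant(user_id: &str, experiment: &str) -> &'static str {
    let mut h = DefaultHasher::new();
    // Hash experiment + user so each experiment buckets independently.
    experiment.hash(&mut h);
    user_id.hash(&mut h);
    if h.finish() % 100 < 50 { "control" } else { "treatment" }
}

fn main() {
    // Deterministic: the same user lands in the same bucket on every
    // edge node, with no shared state and no origin round trip.
    let a = assign_variant("user-123", "new-checkout");
    assert_eq!(a, assign_variant("user-123", "new-checkout"));
    // Different experiments bucket independently for the same user.
    let _b = assign_variant("user-123", "new-onboarding");
}
```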
&lt;h3 id=&quot;containers-at-the-edge&quot;&gt;Containers at the edge&lt;&#x2F;h3&gt;
&lt;p&gt;Edge Workers hit a ceiling when you need persistent connections, large memory, or long-running processes. For those workloads, containers at the edge.&lt;&#x2F;p&gt;
&lt;p&gt;Fly.io, Railway, and Lambda@Edge deploy containers or full processes to edge locations worldwide. Your application runs with real file systems, TCP connections, and whatever runtime you need. But it runs close to users, not in a centralized data center. Latency drops from 200ms to 20ms.&lt;&#x2F;p&gt;
&lt;p&gt;The interesting problem is data gravity. Compute is easy to distribute. Data is not. If your container runs in Tokyo but your database is in Frankfurt, you have not solved the latency problem. You have moved it from the user-to-server hop to the server-to-database hop. The solutions are still maturing: read replicas at the edge (Turso, Neon), embedded databases that sync (LiteFS, libSQL), and eventually-consistent stores designed for multi-region (DynamoDB Global Tables, CockroachDB).&lt;&#x2F;p&gt;
&lt;p&gt;This model makes sense when compute and data can be co-located:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Regional APIs&lt;&#x2F;strong&gt; that comply with data residency requirements. Run the container and the database replica in the same region. GDPR data stays in the EU. Japanese user data stays in Japan.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Real-time applications&lt;&#x2F;strong&gt; where 200ms round trips kill the experience. Collaborative editing, multiplayer, live dashboards. A WebSocket server 20ms away feels instant. One 200ms away feels sluggish.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Stateful edge compute&lt;&#x2F;strong&gt; where you need more than a request&#x2F;response cycle. Background processing, scheduled jobs, long-running connections.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The line between “edge” and “origin” blurs. If your container runs in 30 regions and handles requests locally with a local database replica, is that an edge deployment or a distributed backend? The distinction stops mattering. What matters is that the compute and the data are close to the user.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-moved-to-the-client&quot;&gt;What moved to the client&lt;&#x2F;h2&gt;
&lt;p&gt;The other half of the migration goes downward. The browser is not the thin client it used to be.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;webassembly&quot;&gt;WebAssembly&lt;&#x2F;h3&gt;
&lt;p&gt;WASM runs at near-native speed in every modern browser. Not “fast for JavaScript.” Actually fast. Compiled from Rust, C++, Go, or any other language with a WASM toolchain. Sandboxed, portable, deterministic.&lt;&#x2F;p&gt;
&lt;p&gt;What this enables:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Image and video processing&lt;&#x2F;strong&gt; in the browser. No upload to a server, no round trip, no privacy concern. The pixels never leave the device.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Document parsing and transformation.&lt;&#x2F;strong&gt; PDF rendering, spreadsheet computation, file format conversion. Libraries compiled to WASM and running client-side.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Cryptographic operations.&lt;&#x2F;strong&gt; End-to-end encryption where the server never sees plaintext. Key derivation, signing, verification, all in the browser.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Compute offloading from the backend.&lt;&#x2F;strong&gt; This is the one that changes how you think about server sizing.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Full relational databases in the browser.&lt;&#x2F;strong&gt; This is the one that changes architectures.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;I build on this pattern directly. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt;, a bioinformatics platform, uses WASM to offload computation from the backend to the client’s browser. Sequence analysis, structure visualization, dataset filtering, these are CPU-intensive operations that traditionally require beefy server infrastructure. Instead, the computation runs right there on the researcher’s device. The backend stays thin: it stores datasets and coordinates collaboration, but the heavy lifting happens in the browser. This means I can run the platform on a modest server and still deliver real computational capability, because the “compute fleet” is the users’ own machines.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; takes the same idea in a different direction. It is a distributed compute platform where users contribute their browser’s idle compute power via WASM. The workloads compile to WASM modules, ship to participating browsers, execute in the sandbox, and return results. The browser is not just a client consuming a service, it is a compute node in a distributed system. The WASM sandbox is what makes this safe: untrusted code runs in a constrained environment with no access to the host file system, network, or memory beyond what is explicitly granted.&lt;&#x2F;p&gt;
&lt;p&gt;SQLite compiled to WASM (via projects like sql.js, wa-sqlite, or the official SQLite WASM build) gives the browser a real relational database. Not a key-value store. Not IndexedDB’s awkward object store API. Actual SQL with joins, indexes, transactions, and triggers. Backed by the Origin Private File System (OPFS) for persistence, it survives page reloads and browser restarts.&lt;&#x2F;p&gt;
&lt;p&gt;The implications are significant. Your application can run complex queries locally. Filter, sort, aggregate, full-text search. All instant, all offline. The server becomes a sync endpoint. It ships a database snapshot down and accepts change sets back. The client does the querying. The server does the storing.&lt;&#x2F;p&gt;
&lt;p&gt;This pattern scales down elegantly. A note-taking app with SQLite-in-WASM needs no backend API for reads. A project management tool can filter and search 10,000 tasks without a network request. A CMS authoring interface can work fully offline and sync when the author reconnects. The read path is local. The write path syncs eventually.&lt;&#x2F;p&gt;
&lt;p&gt;WASI (WebAssembly System Interface) extends this further. It gives WASM modules controlled access to file systems, clocks, and network sockets outside the browser. WASM becomes a universal runtime: the same binary runs in the browser, at the edge (Fastly Compute runs WASM natively, and Cloudflare Workers can host WASM modules), and on bare metal. Write once, deploy to every layer of the stack.&lt;&#x2F;p&gt;
&lt;p&gt;The pattern: anything that is CPU-bound, privacy-sensitive, or latency-sensitive is a candidate for client-side WASM. If the computation does not need server-side state, it should not round-trip to a server.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;webgpu&quot;&gt;WebGPU&lt;&#x2F;h3&gt;
&lt;p&gt;WebGPU landed in Chrome in 2023, with Firefox and Safari following, and it changes the math on what the client can compute. This is not WebGL with a new name. WebGL exposes a graphics pipeline. WebGPU exposes compute shaders. Direct, general-purpose GPU computation from JavaScript or WASM.&lt;&#x2F;p&gt;
&lt;p&gt;The immediate application is ML inference. Run a language model, an image classifier, or a recommendation engine on the user’s GPU. No server call, no API cost per token, no latency. The model weights download once (cached by the browser) and run locally. Privacy by default, because the data never leaves the device.&lt;&#x2F;p&gt;
&lt;p&gt;This is not theoretical. Stable Diffusion generates images in the browser via WebGPU. Small language models (Phi-2, Gemma 2B, Llama 3.2 1B) run at usable speeds on consumer hardware. MediaPipe runs pose detection, face tracking, and hand gesture recognition in real time. The trajectory is clear: models get smaller through distillation and quantization, consumer GPUs get faster, and the gap between “cloud inference” and “local inference” narrows every quarter.&lt;&#x2F;p&gt;
&lt;p&gt;Both Cyanea and Archipelag use WebGPU alongside WASM. In Cyanea, WebGPU accelerates molecular visualization and large-scale dataset operations, the kind of parallel computation that bioinformatics demands but that would be prohibitively expensive to run server-side for every user session. In Archipelag, WebGPU-capable nodes can take on GPU-accelerated workloads from the compute pool, turning a user’s idle GPU into a productive resource. The combination of WASM for general compute and WebGPU for parallel workloads gives the browser a compute profile that would have required dedicated server hardware five years ago.&lt;&#x2F;p&gt;
&lt;p&gt;But inference is not the only use case. WebGPU handles any parallel computation: physics simulations for games, signal processing for audio applications, particle systems for data visualization, and large-scale matrix operations. Anything you would reach for CUDA or Metal for on native can now run in the browser. The compute budget of the client just increased by orders of magnitude.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;web-workers-and-service-workers&quot;&gt;Web Workers and Service Workers&lt;&#x2F;h3&gt;
&lt;p&gt;Web Workers give you background threads. Heavy computation does not block the UI. Parse a large file, run a simulation, index a search corpus. All off the main thread, all without janking the interface.&lt;&#x2F;p&gt;
&lt;p&gt;Service Workers sit between the browser and the network. They intercept every fetch request and decide what to do: serve from cache, go to network, do both and race them. This enables:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Offline-first applications.&lt;&#x2F;strong&gt; The app works without a network connection. Data syncs when connectivity returns.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Background sync.&lt;&#x2F;strong&gt; Queue mutations while offline, replay them when online.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Push notifications.&lt;&#x2F;strong&gt; Wake the app without the user having it open.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Intelligent caching.&lt;&#x2F;strong&gt; Cache API responses, serve stale data while revalidating, pre-fetch resources the user is likely to need.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The Service Worker is the client-side equivalent of the edge proxy. It intercepts, caches, validates, and routes. It makes the client self-sufficient.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;local-first-and-crdts&quot;&gt;Local-first and CRDTs&lt;&#x2F;h3&gt;
&lt;p&gt;Here is where it gets interesting. If the client has compute (WASM, WebGPU, Web Workers) and storage (IndexedDB, OPFS) and offline capability (Service Workers), why does it need a server at all?&lt;&#x2F;p&gt;
&lt;p&gt;CRDTs (Conflict-free Replicated Data Types) answer the consistency question. Multiple clients can edit the same data independently, offline, with no coordination. When they reconnect, their changes merge automatically without conflicts. No server-mediated locking. No “last write wins” data loss. Mathematical guarantees that concurrent edits converge to the same state.&lt;&#x2F;p&gt;
&lt;p&gt;The architecture:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;Client A (offline)     Client B (offline)
    │                      │
    ├── Local edits        ├── Local edits
    │   (CRDT ops)         │   (CRDT ops)
    │                      │
    └──────┐      ┌────────┘
           ▼      ▼
      ┌───────────────┐
      │ Sync service  │  (thin, stateless)
      │ (persistence  │
      │  + relay)     │
      └───────────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The sync service is not a backend. It stores operations and relays them between clients. It does not run business logic. It does not validate (the CRDT handles consistency). It does not transform (the merge function is built into the data type). It is a persistence layer with a WebSocket attached.&lt;&#x2F;p&gt;
&lt;p&gt;I build systems like this. The concrete model: a document is a flat &lt;code&gt;HashMap&amp;lt;EntityId, Entity&amp;gt;&lt;&#x2F;code&gt; where each entity holds CRDT-typed fields. The field types determine how concurrent edits merge:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;CRDT type&lt;&#x2F;th&gt;&lt;th&gt;Merge behavior&lt;&#x2F;th&gt;&lt;th&gt;Use case&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;LwwRegister&amp;lt;T&amp;gt;&lt;&#x2F;td&gt;&lt;td&gt;Last writer wins (by timestamp)&lt;&#x2F;td&gt;&lt;td&gt;Simple values: name, status, URL&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;GrowOnlySet&amp;lt;T&amp;gt;&lt;&#x2F;td&gt;&lt;td&gt;Union of both sides&lt;&#x2F;td&gt;&lt;td&gt;Tags, labels, immutable references&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;ObservedRemoveSet&amp;lt;T&amp;gt;&lt;&#x2F;td&gt;&lt;td&gt;Add wins over concurrent remove&lt;&#x2F;td&gt;&lt;td&gt;Collaborator lists, mutable collections&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;MaxRegister&lt;&#x2F;td&gt;&lt;td&gt;Higher value wins&lt;&#x2F;td&gt;&lt;td&gt;Version counters, progress indicators&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;MinRegister&lt;&#x2F;td&gt;&lt;td&gt;Lower value wins&lt;&#x2F;td&gt;&lt;td&gt;Earliest timestamps, priority values&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;Each field carries a hybrid logical clock (HLC) timestamp. The HLC combines physical time with a logical counter, so causality is preserved even when wall clocks drift. Two clients edit the same field at the “same” time? The HLC ordering is deterministic. Both clients converge to the same value without coordination.&lt;&#x2F;p&gt;
&lt;p&gt;The merge function has three properties that make this work: it is associative (grouping does not matter), commutative (order does not matter), and idempotent (applying the same operation twice has no additional effect). These are not implementation details. They are the mathematical foundation that makes server-free consistency possible. You can sync operations in any order, from any number of clients, through any number of intermediate relays, and every replica converges to the same state.&lt;&#x2F;p&gt;
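&lt;p&gt;The LwwRegister row of the table, combined with the HLC ordering, can be sketched concretely. The field layout is illustrative; a real HLC implementation also updates its clock on receive, which this sketch omits:&lt;&#x2F;p&gt;

```rust
// Sketch of an LWW register with an HLC-style timestamp. Merge is a
// pure max by (physical, logical, node_id), which is associative,
// commutative, and idempotent, so replicas converge in any sync order.
// Field and type names follow the article's table; details are illustrative.

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Hlc {
    physical_ms: u64, // wall-clock component
    logical: u32,     // counter for same-millisecond events
    node_id: u32,     // tie-breaker so the ordering is total
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct LwwRegister<T> {
    value: T,
    stamp: Hlc,
}

impl<T: Clone> LwwRegister<T> {
    // Keep whichever side carries the greater timestamp.
    fn merge(&self, other: &Self) -> Self {
        if other.stamp > self.stamp { other.clone() } else { self.clone() }
    }
}

fn main() {
    let a = LwwRegister { value: "draft",     stamp: Hlc { physical_ms: 1000, logical: 0, node_id: 1 } };
    let b = LwwRegister { value: "published", stamp: Hlc { physical_ms: 1000, logical: 1, node_id: 2 } };

    // Commutative: both replicas converge to the same value.
    assert_eq!(a.merge(&b), b.merge(&a));
    // Idempotent: re-applying the same state changes nothing.
    assert_eq!(a.merge(&b), a.merge(&b).merge(&b));
    // The HLC logical counter broke the same-millisecond tie.
    assert_eq!(a.merge(&b).value, "published");
}
```

&lt;p&gt;The derived lexicographic ordering on the timestamp fields is the whole trick: because the comparison is total and deterministic, no two replicas can disagree about which write wins.&lt;&#x2F;p&gt;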
&lt;p&gt;The client owns its data. The server is optional. When the server exists, it persists operations and relays them. It does not arbitrate, transform, or validate beyond authentication.&lt;&#x2F;p&gt;
&lt;p&gt;This is not a niche pattern for collaborative text editors. Any application where users create and modify data can benefit. Notes, task managers, project planning tools, CMS authoring, form builders. The question is not “should this be local-first?” The question is “does this need a server, and if so, for what?”&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-the-backend-becomes&quot;&gt;What the backend becomes&lt;&#x2F;h2&gt;
&lt;p&gt;If the edge handles infrastructure concerns and business logic that depends on request context, and the client handles computation, rendering, and local state, what is left for the backend?&lt;&#x2F;p&gt;
&lt;p&gt;A persistence layer.&lt;&#x2F;p&gt;
&lt;p&gt;The backend becomes the place where data rests between sessions and syncs between devices, not an application server.&lt;&#x2F;p&gt;
&lt;p&gt;Consider the spectrum of what “backend” looks like now:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Static sites.&lt;&#x2F;strong&gt; This site is an example. raskell.io is built with Zola. Markdown files compile to HTML at build time and deploy to edge CDN nodes. No application server. No database. No runtime process. The “backend” is a git repository and a CI pipeline. Content lives as files. Serving happens at the edge. The total monthly infrastructure cost is the price of a domain name.&lt;&#x2F;p&gt;
&lt;p&gt;This is not limited to blogs. Documentation sites, marketing pages, product landing pages, e-commerce storefronts with pre-rendered product pages. Any content that changes at author-time rather than request-time can be static. The headless CMS (Contentful, Sanity, Strapi, or just a git repo) publishes content. The static site generator builds HTML. The CDN serves it. The “backend” runs at build time, not at request time.&lt;&#x2F;p&gt;
&lt;p&gt;I take this further than most. All of my projects are CDN-first, even the ones with dedicated backends. The principle: if the backend goes down, the user should still see something useful. The static layer is the safety net.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt;, a local-first second brain app, is the purest expression of this. It is a Progressive Web App served entirely from CDN edge nodes with Service Workers handling offline capability. There is no backend server. Notes sync between devices through CRDTs when connectivity exists, but the app works fully offline. The entire “infrastructure” is a static deployment and an optional sync relay.&lt;&#x2F;p&gt;
&lt;p&gt;But the CDN-first pattern also applies to applications that have real backends. Cyanea has a Phoenix&#x2F;Elixir backend that manages datasets, user accounts, and collaboration. But the public-facing surface, the landing pages, category pages, trending spaces and protocols and datasets, is statically generated. The backend exports JSON snapshots of its database objects on a timed interval. A static site generator picks up those snapshots and rebuilds the public pages: what labs are active, which protocols are trending, which datasets were recently published. The result is a set of HTML pages sitting on a CDN that stay current without depending on the backend being up at the moment a visitor arrives.&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────┐      JSON export      ┌───────────────┐
│  Cyanea Backend  │ ──── (interval) ────&amp;gt; │  Static Site  │
│  (Phoenix&amp;#x2F;BEAM)  │                       │  Generator    │
│                  │                       │  (Zola)       │
│  - datasets      │                       │               │
│  - protocols     │                       │  → CDN edge   │
│  - labs          │                       │    nodes      │
│  - spaces        │                       │               │
└──────────────────┘                       └───────────────┘
         │                                         │
         │ dynamic app                             │ static pages
         ▼                                         ▼
    app.cyanea.bio                          cyanea.bio
    (logged-in users,                       (public, always up,
     real-time features)                     fast, no backend
                                             dependency)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is a genuinely resilient architecture. The backend can be down for maintenance, mid-deploy, or experiencing load, and the public site keeps serving. The static pages are never stale by more than one generation interval. For a site where “trending this week” is sufficient freshness, that interval can be hours. The CDN handles traffic spikes that would overwhelm a backend. The backend handles the dynamic work that requires real-time data.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Thin persistence APIs.&lt;&#x2F;strong&gt; For applications with dynamic data, the backend shrinks to a database with an API in front of it. Accept writes, serve reads, enforce schema constraints. GraphQL or REST over Postgres. No rendering. No business logic beyond data integrity. The API exists so that clients and edge workers have somewhere to store and retrieve state.&lt;&#x2F;p&gt;
&lt;p&gt;The interesting shift: even the persistence API is getting thinner. Services like Supabase, PlanetScale, and Turso expose the database directly over HTTP or WebSockets with built-in auth. Your “backend” becomes a hosted database with row-level security policies. No application code at all.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Sync relays.&lt;&#x2F;strong&gt; For local-first applications, the backend is even simpler. Accept CRDT operations from clients, persist them to durable storage, fan them out to other connected clients via WebSocket. No merge logic (the CRDT handles that). No transformation. No validation beyond authentication. The relay does not understand the data. It stores and forwards.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Event logs.&lt;&#x2F;strong&gt; Append-only storage. Clients sync by replaying events from their last known position. The log is the source of truth. Everything else (search indexes, analytics dashboards, recommendation models) is a materialized view built asynchronously. The hot path is the append. The read path is the replay. Both are simple.&lt;&#x2F;p&gt;
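&lt;p&gt;The append&#x2F;replay shape in miniature, with illustrative names and an in-memory Vec standing in for durable storage:&lt;&#x2F;p&gt;

```rust
// Sketch of the event-log pattern: append-only writes, clients replay
// from their last known offset. Names and types are illustrative; a
// real log would persist to durable storage.

struct EventLog {
    events: Vec<String>, // append-only; the Vec index is the offset
}

impl EventLog {
    fn new() -> Self {
        EventLog { events: Vec::new() }
    }

    // The hot path: append and return the new offset.
    fn append(&mut self, event: &str) -> usize {
        self.events.push(event.to_string());
        self.events.len()
    }

    // The read path: replay everything after a client's last offset.
    fn replay_from(&self, offset: usize) -> &[String] {
        &self.events[offset.min(self.events.len())..]
    }
}

fn main() {
    let mut log = EventLog::new();
    log.append("task-created:1");
    log.append("task-renamed:1");
    let checkpoint = log.append("task-created:2");

    // A client that synced at `checkpoint` sees only newer events.
    log.append("task-done:1");
    let newer = log.replay_from(checkpoint);
    assert_eq!(newer.len(), 1);
    assert_eq!(newer[0], "task-done:1");

    // A fresh client replays the full log to rebuild its state.
    assert_eq!(log.replay_from(0).len(), 4);
}
```

&lt;p&gt;Search indexes and dashboards consume the same log asynchronously; the hot path never waits for them.&lt;&#x2F;p&gt;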
&lt;p&gt;&lt;strong&gt;Batch processors.&lt;&#x2F;strong&gt; The one place where traditional backend compute survives: jobs that require access to the full dataset. Analytics aggregation, report generation, search index building, ML model training. These run on schedules or triggers, not in the request path. They read from the event log or the database, compute, and write results back. The user never waits for them.&lt;&#x2F;p&gt;
&lt;p&gt;The common thread: the backend does not touch the hot path. User requests hit the edge and the client. The backend runs in the background, on its own schedule, when no one is waiting.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-architecture-that-makes-this-work&quot;&gt;The architecture that makes this work&lt;&#x2F;h2&gt;
&lt;p&gt;Pushing logic to the edge and the client is not free. Both environments have constraints, and ignoring them is how you build fragile systems.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;at-the-edge-bounded-resources&quot;&gt;At the edge: bounded resources&lt;&#x2F;h3&gt;
&lt;p&gt;Every operation at the edge needs explicit limits. No open-ended computations, no unbounded queues, no surprise behavior. This is not just good practice. It is existential. The edge proxy sits between the internet and your infrastructure. If it behaves unpredictably, everything behind it suffers.&lt;&#x2F;p&gt;
&lt;p&gt;Concretely:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Resource&lt;&#x2F;th&gt;&lt;th&gt;Bound&lt;&#x2F;th&gt;&lt;th&gt;Why&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Agent concurrency&lt;&#x2F;td&gt;&lt;td&gt;Per-agent semaphore (default: 100)&lt;&#x2F;td&gt;&lt;td&gt;Prevents noisy neighbor between agents&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Agent timeout&lt;&#x2F;td&gt;&lt;td&gt;100ms default&lt;&#x2F;td&gt;&lt;td&gt;Prevents latency cascade&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Connection pool&lt;&#x2F;td&gt;&lt;td&gt;Explicit max (default: 10K)&lt;&#x2F;td&gt;&lt;td&gt;Prevents file descriptor exhaustion&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Request body&lt;&#x2F;td&gt;&lt;td&gt;Streaming, not buffered&lt;&#x2F;td&gt;&lt;td&gt;Prevents memory exhaustion&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Route cache&lt;&#x2F;td&gt;&lt;td&gt;LRU with size limit&lt;&#x2F;td&gt;&lt;td&gt;Prevents unbounded growth&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Rate limit queues&lt;&#x2F;td&gt;&lt;td&gt;Bounded with max delay&lt;&#x2F;td&gt;&lt;td&gt;Prevents request pile-up&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;If you cannot articulate the bound for every resource your edge system uses, you do not have an architecture. You have an accident waiting for load.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;on-the-client-isolation-and-sandboxing&quot;&gt;On the client: isolation and sandboxing&lt;&#x2F;h3&gt;
&lt;p&gt;The client has different constraints. Battery life, memory pressure, the user closing the tab at any moment.&lt;&#x2F;p&gt;
&lt;p&gt;WASM runs in a sandbox. No file system access, no network access, no shared memory (unless explicitly granted). This is the security model that makes client-side compute viable. Untrusted code (your own, running on someone else’s device) cannot escape the sandbox.&lt;&#x2F;p&gt;
&lt;p&gt;Web Workers run in separate threads with message-passing. No shared mutable state. No locks. No data races. The isolation is enforced by the runtime, not by programmer discipline.&lt;&#x2F;p&gt;
&lt;p&gt;Service Workers have a lifecycle managed by the browser. They can be terminated at any time to save resources. Your offline logic must handle graceful shutdown. This means: durable state in IndexedDB, idempotent sync operations, no in-memory state that cannot be reconstructed.&lt;&#x2F;p&gt;
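&lt;p&gt;Idempotent sync operations are what make abrupt termination survivable. A sketch of the receiving side, with invented names: every queued mutation carries a unique ID, and the receiver deduplicates, so replaying a whole queue after a crash cannot double-apply anything:&lt;&#x2F;p&gt;

```rust
// Sketch of idempotent sync: each mutation carries a unique op ID and
// the receiver applies it at most once. Names are illustrative; the
// applied-set would be durable (e.g. a table) in a real system.

use std::collections::HashSet;

struct SyncReceiver {
    applied: HashSet<u64>, // IDs of operations already applied
    counter: i64,          // the state being mutated
}

impl SyncReceiver {
    // Applying the same op twice has no additional effect.
    fn apply(&mut self, op_id: u64, delta: i64) {
        if self.applied.insert(op_id) {
            self.counter += delta;
        }
    }
}

fn main() {
    let mut rx = SyncReceiver { applied: HashSet::new(), counter: 0 };

    // The client was killed mid-sync and replays its whole queue:
    rx.apply(1, 5);
    rx.apply(2, -2);
    rx.apply(1, 5);  // duplicate from the replay, ignored
    rx.apply(2, -2); // duplicate, ignored

    assert_eq!(rx.counter, 3);
}
```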
&lt;p&gt;CRDTs provide consistency guarantees without coordination. But they are not magic. They consume memory (tombstones for deleted items, version vectors for causal ordering). They need garbage collection. They need careful schema design because not every data model maps cleanly to CRDT primitives. A counter works. A last-writer-wins register works. A rich text document with formatting, comments, and embedded media requires careful thought.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;the-trust-boundary&quot;&gt;The trust boundary&lt;&#x2F;h3&gt;
&lt;p&gt;Here is the part most edge-computing articles skip: trust.&lt;&#x2F;p&gt;
&lt;p&gt;If the edge handles auth, the backend trusts the edge to have done auth correctly. If the client handles business logic, the server trusts the client to have computed correctly. These are real trust boundaries with real failure modes.&lt;&#x2F;p&gt;
&lt;p&gt;At the edge, trust is earned through:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Failure isolation.&lt;&#x2F;strong&gt; Agent crashes do not take down the proxy. Bad config is validated before activation.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Observability.&lt;&#x2F;strong&gt; Every decision is logged, metered, and traceable. If the WAF blocked a request, you can see exactly which rules fired and why.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Bounded behavior.&lt;&#x2F;strong&gt; No surprise modes. Every resource has explicit limits. Every failure mode is configured, not assumed.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;On the client, trust is conditional:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Never trust the client for security decisions.&lt;&#x2F;strong&gt; Validate at the edge or the backend. Client-side checks are UX, not security.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Trust the client for its own data.&lt;&#x2F;strong&gt; If the user is editing their own document, the client is authoritative. CRDTs handle consistency. The server persists, it does not arbitrate.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Verify at the boundary.&lt;&#x2F;strong&gt; When client data syncs to the server, validate schema and authorization. Trust the merge, verify the input.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;when-not-to-do-this&quot;&gt;When not to do this&lt;&#x2F;h2&gt;
&lt;p&gt;Not everything belongs at the edge or on the client. Here is what stays in the backend:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Multi-service transactions.&lt;&#x2F;strong&gt; If an operation needs to read from three databases, check inventory, charge a payment, and send a notification, that is a backend workflow. Distributed transactions need coordination, and coordination needs a central authority.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Heavy data joins.&lt;&#x2F;strong&gt; If your query joins six tables with complex filters and aggregations, it runs next to the database, not at an edge node 200ms away from the data.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Regulatory requirements.&lt;&#x2F;strong&gt; Some industries mandate that data processing happens in specific locations, on specific infrastructure, with specific audit trails. Edge deployment may not satisfy these constraints.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Small teams with simple needs.&lt;&#x2F;strong&gt; If you have one backend, ten users, and no latency problems, this architecture is overhead. A Django app behind nginx is fine. Optimize when you have a reason to optimize, not before.&lt;&#x2F;p&gt;
&lt;p&gt;The edge handles cross-cutting concerns and request-context computation. The client handles local state and user-facing compute. The backend handles coordination, persistence, and anything that needs the full dataset. Know which is which.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;where-this-is-going&quot;&gt;Where this is going&lt;&#x2F;h2&gt;
&lt;p&gt;Five years ago, the stack was: browser (thin) renders server-generated HTML, backend (fat) runs everything, database stores state. The mental model was request&#x2F;response, and the backend was the center of gravity.&lt;&#x2F;p&gt;
&lt;p&gt;The stack now:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;┌─────────────────────────────────────────────────────┐
│ Client                                               │
│ WASM │ WebGPU │ Web Workers │ Service Workers │ CRDT │
│ (compute, render, offline, local state)              │
└────────────────────┬────────────────────────────────┘
                     │
┌────────────────────┴────────────────────────────────┐
│ Edge                                                 │
│ Proxy │ Workers │ Containers │ KV │ Durable Objects  │
│ (auth, WAF, routing, SSR, API aggregation, policy)   │
└────────────────────┬────────────────────────────────┘
                     │
┌────────────────────┴────────────────────────────────┐
│ Backend                                              │
│ Database │ Sync relay │ Event log │ Batch processing │
│ (persistence, coordination, async compute)           │
└─────────────────────────────────────────────────────┘
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The client is fat. The edge is fat. The backend is thin. The center of gravity moved to both ends simultaneously.&lt;&#x2F;p&gt;
&lt;p&gt;Every year, this accelerates. Models get smaller and run on consumer GPUs. WASM runtimes get faster and gain more system APIs through WASI. Edge platforms add durable storage, queues, and cron triggers. CRDTs mature from academic curiosities to production libraries. SQLite-in-the-browser goes from experiment to default architecture for offline-capable apps.&lt;&#x2F;p&gt;
&lt;p&gt;The backend will not disappear. Data needs to live somewhere durable, and cross-device sync needs a relay. Coordination problems need a central authority. Batch processing needs access to the full dataset. But the backend’s role is narrowing to exactly these things. It is becoming infrastructure, not application. Plumbing, not logic.&lt;&#x2F;p&gt;
&lt;p&gt;I find myself building systems where the most interesting engineering happens at the boundaries. A reverse proxy (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt;) that inspects 912K requests per second through 285 WAF rules, authenticates with sub-millisecond latency, and routes with crash-isolated agents. A bioinformatics platform (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt;) where the browser runs the computation and the backend exports JSON for statically generated pages. A distributed compute platform (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt;) where users’ browsers are the compute fleet via WASM and WebGPU. A note-taking app (&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt;) that works fully offline with CRDTs and never touches a server for reads. Between all of them, a database, a sync relay, or just a CDN. Necessary and boring.&lt;&#x2F;p&gt;
&lt;p&gt;The backend is not dead. It is just not where the interesting work happens anymore.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;edge-platforms-and-proxies&quot;&gt;Edge platforms and proxies&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;workers.cloudflare.com&#x2F;&quot;&gt;Cloudflare Workers&lt;&#x2F;a&gt; - V8 isolate-based edge compute&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;deno.com&#x2F;deploy&quot;&gt;Deno Deploy&lt;&#x2F;a&gt; - Edge runtime built on the Deno JavaScript runtime&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.fastly.com&#x2F;products&#x2F;edge-compute&quot;&gt;Fastly Compute&lt;&#x2F;a&gt; - Wasm-based edge compute platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;vercel.com&#x2F;docs&#x2F;functions&#x2F;edge-functions&quot;&gt;Vercel Edge Functions&lt;&#x2F;a&gt; - Edge compute integrated with Next.js&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;fly.io&quot;&gt;fly.io&lt;&#x2F;a&gt; - Container-based edge deployment platform&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudflare&#x2F;pingora&quot;&gt;Pingora&lt;&#x2F;a&gt; - Cloudflare’s Rust framework for programmable proxies&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Security-first reverse proxy with crash-isolated agents&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;client-side-compute&quot;&gt;Client-side compute&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;webassembly.org&#x2F;&quot;&gt;WebAssembly&lt;&#x2F;a&gt; - Portable binary instruction format for the web&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wasi.dev&#x2F;&quot;&gt;WASI&lt;&#x2F;a&gt; - WebAssembly System Interface for running Wasm outside the browser&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.w3.org&#x2F;TR&#x2F;webgpu&#x2F;&quot;&gt;WebGPU specification&lt;&#x2F;a&gt; - W3C standard for GPU compute in the browser&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developers.google.com&#x2F;mediapipe&quot;&gt;MediaPipe&lt;&#x2F;a&gt; - ML inference framework running client-side via Wasm and WebGPU&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;sqlite.org&#x2F;wasm&quot;&gt;SQLite Wasm&lt;&#x2F;a&gt; - Official SQLite build targeting WebAssembly&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;sql.js.org&#x2F;&quot;&gt;sql.js&lt;&#x2F;a&gt; - SQLite compiled to Wasm via Emscripten&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;File_System_API&#x2F;Origin_private_file_system&quot;&gt;Origin Private File System&lt;&#x2F;a&gt; - MDN reference for persistent browser storage&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;Service_Worker_API&quot;&gt;Service Worker API&lt;&#x2F;a&gt; - MDN reference for offline-capable web apps&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;Web_Workers_API&quot;&gt;Web Workers API&lt;&#x2F;a&gt; - MDN reference for background threads in the browser&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;crdts-and-local-first&quot;&gt;CRDTs and local-first&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=x7drE24geUw&quot;&gt;CRDTs: The Hard Parts&lt;&#x2F;a&gt; - Martin Kleppmann’s talk on practical CRDT challenges&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.inkandswitch.com&#x2F;local-first&#x2F;&quot;&gt;Local-first software&lt;&#x2F;a&gt; - Ink and Switch research paper on local-first architectures&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;automerge.org&#x2F;&quot;&gt;Automerge&lt;&#x2F;a&gt; - CRDT library for collaborative applications&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;yjs.dev&#x2F;&quot;&gt;Yjs&lt;&#x2F;a&gt; - High-performance CRDT framework for the web&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;hal.inria.fr&#x2F;inria-00555588&#x2F;document&quot;&gt;A comprehensive study of CRDTs&lt;&#x2F;a&gt; - Shapiro et al., the foundational survey paper&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;databases-at-the-edge&quot;&gt;Databases at the edge&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;turso.tech&#x2F;&quot;&gt;Turso&lt;&#x2F;a&gt; - SQLite-compatible database with edge replicas&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;neon.tech&#x2F;&quot;&gt;Neon&lt;&#x2F;a&gt; - Serverless PostgreSQL with branching&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;superfly&#x2F;litefs&quot;&gt;LiteFS&lt;&#x2F;a&gt; - Distributed SQLite replication by Fly.io&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.cockroachlabs.com&#x2F;&quot;&gt;CockroachDB&lt;&#x2F;a&gt; - Distributed SQL database designed for multi-region&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;supabase.com&#x2F;&quot;&gt;Supabase&lt;&#x2F;a&gt; - Open-source Firebase alternative built on PostgreSQL&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;projects-referenced&quot;&gt;Projects referenced&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt; - Bioinformatics platform using Wasm and WebGPU for client-side compute&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; - Distributed compute platform with browser-based Wasm nodes&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt; - Local-first second brain app with CRDT sync&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt; - Schema-aware CRDT engine for deterministic merge&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Looking back on 2025</title>
          <pubDate>Wed, 31 Dec 2025 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/looking-back-on-2025/</link>
          <guid>https://raskell.io/articles/looking-back-on-2025/</guid>
          <description xml:base="https://raskell.io/articles/looking-back-on-2025/">&lt;p&gt;I spent part of this year on the shores of Okinawa. The water there is something else entirely, this impossible azure that shifts to turquoise in the shallows, so clear you can see the coral formations from the surface. I found myself thinking about systems while I was there, the way you do when you’re floating in salt water with nothing pressing to attend to.&lt;&#x2F;p&gt;
&lt;p&gt;Between swims, I read Tim Berners-Lee’s “This is for everyone.” I’ve been building web software for over a decade now, and I thought I understood what the web was. But reading TBL’s words while watching that reef ecosystem do its thing, thousands of species in constant exchange, no central coordinator, just emergent complexity from simple rules, something shifted in how I saw it all.&lt;&#x2F;p&gt;
&lt;p&gt;The web TBL imagined was supposed to work like that reef. A commons. Many small nodes, each doing their own thing, connected through open protocols. Information flowing freely. The beauty of it wasn’t in any single node but in the connections between them, the way the whole became more than the sum of its parts. The same principle that makes a reef resilient makes a network powerful: diversity, redundancy, local adaptation.&lt;&#x2F;p&gt;
&lt;p&gt;What we built instead looks more like industrial aquaculture. Five platforms. Algorithmic monoculture. Content optimized for engagement metrics rather than usefulness. We took a system designed for decentralization and built the most centralized information infrastructure in human history.&lt;&#x2F;p&gt;
&lt;p&gt;I keep thinking about how that happened. The web itself never changed. HTTP still works the same way, HTML still does what it always did. What changed was the economics. Publishing became free, but being &lt;em&gt;found&lt;&#x2F;em&gt; became expensive. The platforms positioned themselves as the gatekeepers of attention, and suddenly you couldn’t reach people without paying the toll, whether in ad spend or in algorithmic compliance or in the slow erosion of doing whatever it took to game SEO.&lt;&#x2F;p&gt;
&lt;p&gt;The thing about monocultures is they’re efficient right up until they’re not. A reef can lose a species and adapt. A monoculture gets one disease and collapses. We’ve been watching the web’s monoculture show stress fractures for years. The enshittification of platforms, the SEO content farms drowning out signal with noise, the way social media stopped being social and started being a feed of engagement-optimized content from strangers.&lt;&#x2F;p&gt;
&lt;p&gt;Then 2025 happened, and AI started breaking things in interesting ways.&lt;&#x2F;p&gt;
&lt;p&gt;The obvious take is that AI makes the content problem worse. And superficially, that’s true. If you thought SEO spam was bad before, wait until you see what happens when generating ten thousand pages of plausible-sounding garbage costs essentially nothing. The content farms went into overdrive. Social platforms filled with synthetic engagement.&lt;&#x2F;p&gt;
&lt;p&gt;But here’s the thing I keep coming back to: maybe that’s the fever that breaks the infection.&lt;&#x2F;p&gt;
&lt;p&gt;The old economics of the web depended on a particular scarcity. Human attention is finite, and the platforms controlled access to it. You wanted eyeballs, you played their game. SEO worked because Google was the gateway and you could optimize for what Google wanted. Platform distribution mattered because that’s where the people were.&lt;&#x2F;p&gt;
&lt;p&gt;AI disrupts this in ways that I think are genuinely interesting. When an AI assistant can synthesize information from across the web and deliver it directly to the user, the value of ranking first on Google diminishes. Why click through to a content farm when the answer is already in front of you? When AI agents can find and surface relevant content directly, you don’t need to be on the platform where the eyeballs gather. The middleman’s leverage starts to evaporate.&lt;&#x2F;p&gt;
&lt;p&gt;And crucially: when everyone can generate infinite content at zero marginal cost, content quantity becomes worthless. What matters is provenance. Accuracy. Usefulness. The things that are actually hard. The things that require a human perspective, or at least require &lt;em&gt;being right&lt;&#x2F;em&gt; in ways that matter.&lt;&#x2F;p&gt;
&lt;p&gt;I find myself unexpectedly optimistic about what comes next.&lt;&#x2F;p&gt;
&lt;p&gt;If AI breaks the distribution stranglehold that platforms have, the economics of the web could flip in interesting directions. The old model needed scale because reaching people was expensive. But if AI handles discovery, finding relevant content and bringing it to users, then maybe you don’t need scale anymore. Maybe small becomes viable again.&lt;&#x2F;p&gt;
&lt;p&gt;Think about what this means concretely. A static site costs nearly nothing to run. No databases to scale, no servers to babysit, just files sitting on edge nodes around the world. If you don’t need to capture user data for ad-driven personalization, you don’t need the complexity of the surveillance stack. If you don’t need platform distribution, you don’t need to play platform games.&lt;&#x2F;p&gt;
&lt;p&gt;There’s another piece to this that I think most people are missing: edge computing changes what personalization can mean. The conventional wisdom is that personalization requires surveillance, that you need to know everything about a user to show them relevant content. But that’s only true if personalization happens in a centralized database somewhere. If personalization happens at the edge, at the moment of request, you can adapt content to context without ever needing to know who the user is. The edge function doesn’t need a profile. It just needs to know what was asked for and what context it’s being asked in.&lt;&#x2F;p&gt;
&lt;p&gt;This is the architecture I keep thinking about: static content at the origin, edge functions that adapt it anonymously, AI agents that find and surface it based on actual relevance rather than SEO gaming. No surveillance required. No platform dependency. No scaling costs that force you into growth-at-all-costs mode.&lt;&#x2F;p&gt;
&lt;p&gt;It looks more like a reef than a fish farm.&lt;&#x2F;p&gt;
&lt;p&gt;I don’t want to oversell this. The transition, if it happens, won’t be clean. The platforms aren’t going to quietly cede control. The incentives that built the current web are still operating. And AI itself could go in directions that make things worse rather than better. There are plenty of dystopian paths from here.&lt;&#x2F;p&gt;
&lt;p&gt;But when I think about what I want to build toward, it’s that reef model. Many small, specialized nodes. Interconnected through open protocols. Resilient because distributed. Sustainable because the economics work at small scale.&lt;&#x2F;p&gt;
&lt;p&gt;This site is part of that bet. Static content, no tracking, no platform dependencies. The tools I’m working on (Zentinel, Sango, Ushio) are all about making edge infrastructure more accessible, making it easier to build and operate systems that are distributed and independent.&lt;&#x2F;p&gt;
&lt;p&gt;2025 was the year AI started breaking the old model. I don’t know exactly what grows in its place. But floating in that Okinawan water, watching the reef do what reefs do, I got a sense of what healthy systems look like. Diverse. Interconnected. Resilient. Not optimized for any single metric, but somehow working anyway.&lt;&#x2F;p&gt;
&lt;p&gt;That’s what I’m betting on.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.w3.org&#x2F;People&#x2F;Berners-Lee&#x2F;&quot;&gt;Tim Berners-Lee&lt;&#x2F;a&gt; - Creator of the World Wide Web and author of “This is for everyone”&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Enshittification&quot;&gt;Enshittification&lt;&#x2F;a&gt; - Cory Doctorow’s term for the pattern of platform decay&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;indieweb.org&#x2F;&quot;&gt;IndieWeb&lt;&#x2F;a&gt; - Community building the independent web with open standards&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.getzola.org&#x2F;&quot;&gt;Zola&lt;&#x2F;a&gt; - Static site generator used to build this site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Security-first reverse proxy for the open web&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;sango&quot;&gt;Sango&lt;&#x2F;a&gt; - Edge diagnostics CLI for TLS, HTTP, and security header analysis&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;ushio&quot;&gt;Ushio&lt;&#x2F;a&gt; - Deterministic edge traffic replay tool&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Mise ate my Makefile</title>
          <pubDate>Sun, 14 Dec 2025 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/mise-ate-my-makefile/</link>
          <guid>https://raskell.io/articles/mise-ate-my-makefile/</guid>
          <description xml:base="https://raskell.io/articles/mise-ate-my-makefile/">&lt;p&gt;I maintain around forty repositories across four GitHub organizations. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; alone accounts for over thirty: the core proxy, a Rust SDK, and a growing collection of agents for WAF inspection, auth, rate limiting, GraphQL security, and a dozen other edge concerns. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;archipelag.io&quot;&gt;Archipelag&lt;&#x2F;a&gt; spans an Elixir coordinator, a Rust node agent, Python and TypeScript SDKs, mobile agents in Kotlin and Swift, and infrastructure-as-code repos. &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;cyanea.bio&quot;&gt;Cyanea&lt;&#x2F;a&gt; is Elixir with Rust NIFs and a separate Rust bioinformatics library. Then there are the standalone tools: &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt; (Rust CRDT engine), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;sango&quot;&gt;Sango&lt;&#x2F;a&gt; (Rust edge diagnostics CLI), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; (Rust agentic orchestrator), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;vela&quot;&gt;Vela&lt;&#x2F;a&gt; (Rust bare-metal deployment), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;refrakt&quot;&gt;Refrakt&lt;&#x2F;a&gt; (Gleam web framework), &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; 
href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;kurumi&quot;&gt;Kurumi&lt;&#x2F;a&gt; (Svelte local-first app), and this site you are reading (Zola).&lt;&#x2F;p&gt;
&lt;p&gt;The languages span Rust, Elixir, Gleam, Python, TypeScript, Kotlin, Swift, and whatever shell scripts accumulated over the years. Every project needs a toolchain. Most need task automation. All of them need to be approachable for a contributor who clones the repo for the first time.&lt;&#x2F;p&gt;
&lt;p&gt;The Makefile approach was breaking down. So was everything else I tried.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-was-failing&quot;&gt;What was failing&lt;&#x2F;h2&gt;
&lt;p&gt;The standard setup for most of my Rust projects was a Makefile with targets for &lt;code&gt;build&lt;&#x2F;code&gt;, &lt;code&gt;test&lt;&#x2F;code&gt;, &lt;code&gt;clippy&lt;&#x2F;code&gt;, &lt;code&gt;fmt&lt;&#x2F;code&gt;, and &lt;code&gt;release&lt;&#x2F;code&gt;. Simple enough for one repo. The problem surfaces when you maintain thirty of them.&lt;&#x2F;p&gt;
&lt;p&gt;GNU Make and BSD Make disagree on syntax in ways that cause silent failures. A Makefile that works on my Linux CI runner breaks on a contributor’s macOS laptop because of a conditional or a shell invocation difference. The fix is always “use GNU make,” but that means documenting it, adding a check, and fielding issues from people who forget.&lt;&#x2F;p&gt;
&lt;p&gt;Worse, Makefiles cannot declare tool dependencies. A Rust project needs a specific Rust version, maybe &lt;code&gt;protoc&lt;&#x2F;code&gt; for gRPC, maybe &lt;code&gt;cargo-watch&lt;&#x2F;code&gt; for development convenience. The Makefile simply assumes these tools exist. When they do not, the developer gets a cryptic error five minutes into their first build.&lt;&#x2F;p&gt;
&lt;p&gt;So projects accumulated scaffolding:&lt;&#x2F;p&gt;
&lt;pre&gt;&lt;code&gt;.rust-version
.tool-versions
Makefile
scripts&amp;#x2F;setup.sh
scripts&amp;#x2F;ci.sh
scripts&amp;#x2F;release.sh
.envrc
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Seven files to express what amounts to: “this project uses Rust 1.83, needs protoc, and has five things you can run.” Multiply that by forty repos and you have a maintenance surface that nobody wants to touch. The &lt;code&gt;scripts&#x2F;&lt;&#x2F;code&gt; folder in particular had a way of growing silently. Someone adds a helper. Someone else copies it from another project with modifications. Six months later you have three slightly different versions of the same release script across three orgs.&lt;&#x2F;p&gt;
&lt;p&gt;The Elixir projects had it worse. Elixir needs Erlang&#x2F;OTP at a specific version, then Elixir itself at a matching version, then Node for asset compilation in Phoenix, then possibly Rust for NIFs (Cyanea compiles Rust bioinformatics code into the BEAM release). Four tool dependencies before you write a line of application code. &lt;code&gt;asdf&lt;&#x2F;code&gt; handled the version management, but slowly and without task automation, so you still needed a Makefile on top.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-not-nix&quot;&gt;Why not Nix&lt;&#x2F;h2&gt;
&lt;p&gt;I gave Nix a serious try. The promise is appealing: declare your entire development environment in a single file, get reproducible builds, never worry about system state. The Nix shell concept is genuinely elegant.&lt;&#x2F;p&gt;
&lt;p&gt;In practice, the cost was too high for my use case. Nix’s learning curve is steep even for experienced engineers. The language is its own thing. The documentation assumes you already understand the Nix store model. When something breaks, the error messages point at derivation hashes, not at the thing you actually did wrong.&lt;&#x2F;p&gt;
&lt;p&gt;The bigger issue was onboarding. If a contributor wants to fix a typo in a Zentinel agent’s README, asking them to install Nix and understand flakes is a non-starter. The tool that manages your development environment should not itself become a project you have to learn. Nix solves a harder problem than I have. I do not need bit-for-bit reproducible builds across machines. I need “install Rust 1.83 and run the tests.”&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-not-asdf&quot;&gt;Why not asdf&lt;&#x2F;h2&gt;
&lt;p&gt;asdf was my default for years. It handled the version management problem well enough. The plugin system meant I could manage Rust, Elixir, Erlang, Node, and Python versions with a single &lt;code&gt;.tool-versions&lt;&#x2F;code&gt; file.&lt;&#x2F;p&gt;
&lt;p&gt;Three things pushed me away.&lt;&#x2F;p&gt;
&lt;p&gt;First, speed. asdf is shell scripts. Every invocation pays the cost of sourcing plugins, resolving versions, and shimming binaries. On a fast machine you barely notice. On CI, where you run &lt;code&gt;asdf install&lt;&#x2F;code&gt; in a fresh environment, the overhead adds up. Mise is a compiled Rust binary. It is meaningfully faster at both installation and version resolution.&lt;&#x2F;p&gt;
&lt;p&gt;Second, no task automation. asdf manages tool versions. That is all it does. You still need Make or a scripts folder for project tasks. That means two tools, two configuration surfaces, two things to document.&lt;&#x2F;p&gt;
&lt;p&gt;Third, plugin quality varied. The core plugins for Node and Ruby were solid. Plugins for less mainstream tools could be stale, broken, or missing. Mise started as an asdf-compatible rewrite and inherited the plugin ecosystem, but its built-in backends for common tools (Rust, Node, Python, Go, Erlang, Elixir) are faster and more reliable than shelling out to plugins.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-mise-actually-does&quot;&gt;What mise actually does&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;mise.jdx.dev&#x2F;&quot;&gt;Mise&lt;&#x2F;a&gt; is a single Rust binary that combines tool version management and task running into one configuration file per project. It does asdf’s job and Make’s job in a single tool.&lt;&#x2F;p&gt;
&lt;p&gt;Here is this site’s configuration. The entire thing:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;# mise.toml (raskell.io)
[tools]
zola = &amp;quot;0.19&amp;quot;

[env]
_.file = &amp;quot;.env&amp;quot;

[tasks.serve]
description = &amp;quot;Start the Zola development server&amp;quot;
run = &amp;quot;zola serve&amp;quot;

[tasks.build]
description = &amp;quot;Build the site for production&amp;quot;
run = &amp;quot;zola build&amp;quot;

[tasks.check]
description = &amp;quot;Check the site for errors without building&amp;quot;
run = &amp;quot;zola check&amp;quot;

[tasks.new]
description = &amp;quot;Create a new article&amp;quot;
run = &amp;quot;&amp;quot;&amp;quot;
#!&amp;#x2F;usr&amp;#x2F;bin&amp;#x2F;env bash
if [ -z &amp;quot;$1&amp;quot; ]; then
  echo &amp;quot;Usage: mise run new &amp;lt;article-slug&amp;gt;&amp;quot;
  exit 1
fi
SLUG=&amp;quot;$1&amp;quot;
DATE=$(date +%Y-%m-%d)
FILE=&amp;quot;content&amp;#x2F;articles&amp;#x2F;${SLUG}.md&amp;quot;
cat &amp;gt; &amp;quot;$FILE&amp;quot; &amp;lt;&amp;lt; ARTICLE
+++
title = &amp;quot;&amp;quot;
date = ${DATE}
description = &amp;quot;&amp;quot;
[taxonomies]
tags = []
categories = []
[extra]
author = &amp;quot;Raffael&amp;quot;
+++
ARTICLE
echo &amp;quot;Created $FILE&amp;quot;
&amp;quot;&amp;quot;&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;One file. Declares the tool (Zola 0.19), loads environment variables, and defines every task a contributor needs. &lt;code&gt;mise install&lt;&#x2F;code&gt; sets up the toolchain. &lt;code&gt;mise tasks&lt;&#x2F;code&gt; shows what is available. &lt;code&gt;mise run serve&lt;&#x2F;code&gt; starts the dev server. No Makefile. No scripts folder. No documentation page explaining how to get Zola at the right version.&lt;&#x2F;p&gt;
&lt;p&gt;For a Rust project like &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; (the agentic orchestrator), the configuration is larger but follows the same pattern:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;# .mise.toml (shiioo)
[tools]
rust = &amp;quot;latest&amp;quot;

[env]
RUST_LOG = &amp;quot;info&amp;quot;
RUST_BACKTRACE = &amp;quot;1&amp;quot;
_.path = [&amp;quot;.&amp;#x2F;target&amp;#x2F;release&amp;quot;, &amp;quot;.&amp;#x2F;target&amp;#x2F;debug&amp;quot;]

[tasks.build]
description = &amp;quot;Build all crates in release mode&amp;quot;
run = &amp;quot;cargo build --release&amp;quot;

[tasks.test]
description = &amp;quot;Run all tests&amp;quot;
run = &amp;quot;cargo test&amp;quot;

[tasks.clippy]
description = &amp;quot;Run clippy lints&amp;quot;
run = &amp;quot;cargo clippy --all-targets -- -D warnings&amp;quot;

[tasks.fmt]
description = &amp;quot;Format code with rustfmt&amp;quot;
run = &amp;quot;cargo fmt --all&amp;quot;

[tasks.ci]
description = &amp;quot;CI pipeline: format check, clippy, test&amp;quot;
depends = [&amp;quot;fmt-check&amp;quot;, &amp;quot;clippy&amp;quot;, &amp;quot;test&amp;quot;]

[tasks.dev]
description = &amp;quot;Full development build and run&amp;quot;
depends = [&amp;quot;fmt&amp;quot;, &amp;quot;check&amp;quot;, &amp;quot;test&amp;quot;]
run = &amp;quot;cargo run -p shiioo-server&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The &lt;code&gt;depends&lt;&#x2F;code&gt; key is where mise replaces the one thing Make was genuinely good at: task dependency ordering. &lt;code&gt;mise run ci&lt;&#x2F;code&gt; runs format checking, then clippy, then tests, in sequence. If clippy fails, tests do not run. It is not as expressive as Make’s file-based dependency graph, but for project automation tasks (as opposed to build tasks, which cargo or mix handle), it covers what I actually need.&lt;&#x2F;p&gt;
&lt;p&gt;For a multi-language project like Cyanea, the value is even clearer. The Elixir app needs Erlang, Elixir, Node, and Rust. One &lt;code&gt;[tools]&lt;&#x2F;code&gt; section pins all four. One &lt;code&gt;mise install&lt;&#x2F;code&gt; gets a contributor from zero to a working environment. Without mise, that setup involved installing asdf, adding four plugins, running &lt;code&gt;asdf install&lt;&#x2F;code&gt;, then installing direnv for environment variables, then reading the Makefile to figure out how to run things. With mise, it is two commands: &lt;code&gt;mise install&lt;&#x2F;code&gt; and &lt;code&gt;mise run dev&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
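&lt;p&gt;As a sketch, the whole polyglot toolchain fits in one section (the version pins here are illustrative, not Cyanea’s actual ones):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;# .mise.toml (illustrative pins for a polyglot Elixir app)
[tools]
erlang = &amp;quot;27&amp;quot;
elixir = &amp;quot;1.17&amp;quot;
node = &amp;quot;22&amp;quot;
rust = &amp;quot;latest&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;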
&lt;h2 id=&quot;the-cross-project-pattern&quot;&gt;The cross-project pattern&lt;&#x2F;h2&gt;
&lt;p&gt;The real payoff is not in any single project. It is the consistency across all of them.&lt;&#x2F;p&gt;
&lt;p&gt;Every repo in every org follows the same contract:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Clone the repo&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;mise install&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;mise tasks&lt;&#x2F;code&gt; to see what is available&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;mise run dev&lt;&#x2F;code&gt; or &lt;code&gt;mise run test&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;That is it. Whether the project is a Rust reverse proxy with thirty modules, an Elixir Phoenix application with LiveView and a NATS integration, a Gleam web framework, or a static site built with Zola, the entry point is identical. The person cloning the repo does not need to know which build system the project uses internally. They do not need to read a CONTRIBUTING.md to find out whether it is &lt;code&gt;make test&lt;&#x2F;code&gt; or &lt;code&gt;cargo test&lt;&#x2F;code&gt; or &lt;code&gt;mix test&lt;&#x2F;code&gt;. It is always &lt;code&gt;mise run test&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;This matters more than it sounds. When you maintain projects across four orgs and multiple languages, the cognitive overhead per context switch is the actual bottleneck. I work on Zentinel (Rust) in the morning, switch to Archipelag (Elixir) after lunch, then fix something on this site (Zola) in the evening. Without a consistent interface, each switch means recalling which project uses which conventions. With mise, the interface is always the same. The implementation behind &lt;code&gt;mise run test&lt;&#x2F;code&gt; differs (cargo, mix, zola check), but I do not care about that. I type the same command and the right thing happens.&lt;&#x2F;p&gt;
&lt;p&gt;For new contributors, the effect is more pronounced. Zentinel’s agent ecosystem has over twenty Rust repos. A contributor who submits a PR to the WAF agent and then wants to help with the auth agent does not need to learn a new setup process. Same structure, same task names, same workflow. The consistency compounds.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-mise-handles-that-make-does-not&quot;&gt;What mise handles that Make does not&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Environment variables.&lt;&#x2F;strong&gt; Mise loads environment from the config file or from &lt;code&gt;.env&lt;&#x2F;code&gt; files, scoped to the project directory. When I &lt;code&gt;cd&lt;&#x2F;code&gt; into a project, the right environment is active. When I leave, it deactivates. No direnv, no &lt;code&gt;.envrc&lt;&#x2F;code&gt;, no &lt;code&gt;source .env&lt;&#x2F;code&gt; in every shell session.&lt;&#x2F;p&gt;
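&lt;p&gt;A minimal sketch of that setup (the variable is a hypothetical example; &lt;code&gt;_.file&lt;&#x2F;code&gt; tells mise to load a local &lt;code&gt;.env&lt;&#x2F;code&gt; file if one exists):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[env]
# Loaded automatically when entering the project directory, unloaded on leave
_.file = &amp;quot;.env&amp;quot;
DATABASE_URL = &amp;quot;postgres:&amp;#x2F;&amp;#x2F;localhost&amp;#x2F;myapp_dev&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;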
&lt;p&gt;&lt;strong&gt;Tool installation.&lt;&#x2F;strong&gt; &lt;code&gt;mise install&lt;&#x2F;code&gt; in a fresh clone gets every tool the project needs at the exact specified version. Make cannot do this. Make assumes the tools exist. That assumption breaks on new machines, in CI, and for every new contributor.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Task discovery.&lt;&#x2F;strong&gt; &lt;code&gt;mise tasks&lt;&#x2F;code&gt; lists every available task with its description. Make has &lt;code&gt;make help&lt;&#x2F;code&gt; patterns, but those are conventions, not built-in features. With mise, discoverability is the default.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;File-based tasks.&lt;&#x2F;strong&gt; Any executable file in &lt;code&gt;.mise&#x2F;tasks&#x2F;&lt;&#x2F;code&gt; becomes a task automatically. No registration, no config entry needed. For tasks that outgrow a one-liner in TOML but do not warrant a standalone script in &lt;code&gt;scripts&#x2F;&lt;&#x2F;code&gt;, this is the right middle ground. The task is discoverable through &lt;code&gt;mise tasks&lt;&#x2F;code&gt; but lives as a normal shell script you can test independently.&lt;&#x2F;p&gt;
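&lt;p&gt;A minimal sketch of such a file-based task (the deploy commands are placeholders, not from one of my repos; mise reads the &lt;code&gt;#MISE&lt;&#x2F;code&gt; comment header for the description):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# .mise&amp;#x2F;tasks&amp;#x2F;deploy (must be executable)
#!&amp;#x2F;usr&amp;#x2F;bin&amp;#x2F;env bash
#MISE description=&amp;quot;Build the site and sync it to the server&amp;quot;
set -euo pipefail
zola build
rsync -az public&amp;#x2F; server:&amp;#x2F;var&amp;#x2F;www&amp;#x2F;site&amp;#x2F;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;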
&lt;h2 id=&quot;what-breaks&quot;&gt;What breaks&lt;&#x2F;h2&gt;
&lt;p&gt;Mise is not perfect. Honest assessment after running it across forty repos:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Dynamic dependencies.&lt;&#x2F;strong&gt; Make can express “rebuild this if that file changed.” Mise tasks are imperative: they run or they do not. If you need file-level dependency tracking, you still need a build system (cargo, mix, webpack). Mise orchestrates tasks. It does not replace the build tool.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Ecosystem maturity.&lt;&#x2F;strong&gt; Mise is younger than Make and asdf. The documentation is good but not exhaustive. Some features (like hooks and watch mode) are recent additions. The pace of development is fast, which means features arrive quickly but occasionally change between minor versions.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Team familiarity.&lt;&#x2F;strong&gt; Make is universal. Every engineer has encountered a Makefile. Mise is still relatively unknown. Introducing it to a team requires a short pitch, but the pitch is easy: “it is Make plus asdf in one tool, configured in TOML.”&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Complex shell tasks.&lt;&#x2F;strong&gt; When a task grows beyond a few lines, the inline TOML string syntax gets awkward. The workaround is file-based tasks in &lt;code&gt;.mise&#x2F;tasks&#x2F;&lt;&#x2F;code&gt;, which works well but means the task definition lives in two places (TOML for metadata and task list, shell file for implementation).&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-migration&quot;&gt;The migration&lt;&#x2F;h2&gt;
&lt;p&gt;If you are moving an existing project, here is the approach I settled on after migrating across all four orgs:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Add a &lt;code&gt;mise.toml&lt;&#x2F;code&gt; (or &lt;code&gt;.mise.toml&lt;&#x2F;code&gt;) at the project root. Start with just &lt;code&gt;[tools]&lt;&#x2F;code&gt; to declare the required versions.&lt;&#x2F;li&gt;
&lt;li&gt;Move the most-used Make targets to &lt;code&gt;[tasks]&lt;&#x2F;code&gt; one at a time. Keep the Makefile around until everything is ported.&lt;&#x2F;li&gt;
&lt;li&gt;Add &lt;code&gt;[env]&lt;&#x2F;code&gt; entries to replace &lt;code&gt;.envrc&lt;&#x2F;code&gt; or &lt;code&gt;.env.example&lt;&#x2F;code&gt; files.&lt;&#x2F;li&gt;
&lt;li&gt;Move standalone scripts from &lt;code&gt;scripts&#x2F;&lt;&#x2F;code&gt; to &lt;code&gt;.mise&#x2F;tasks&#x2F;&lt;&#x2F;code&gt; as file-based tasks.&lt;&#x2F;li&gt;
&lt;li&gt;Delete the Makefile last.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Do not try to migrate everything at once. Start with the three tasks developers use daily (usually &lt;code&gt;dev&lt;&#x2F;code&gt;, &lt;code&gt;test&lt;&#x2F;code&gt;, and &lt;code&gt;build&lt;&#x2F;code&gt;). The rest can move incrementally. I also settled on a few naming conventions that help across projects: use noun-verb names with a consistent noun prefix, like &lt;code&gt;db-reset&lt;&#x2F;code&gt;, &lt;code&gt;cache-clear&lt;&#x2F;code&gt;, &lt;code&gt;test-unit&lt;&#x2F;code&gt;. Consistent naming makes task discovery predictable even before you run &lt;code&gt;mise tasks&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
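&lt;p&gt;In TOML, that convention looks like this (the task bodies are hypothetical examples, not from a specific repo):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;toml&quot; class=&quot;language-toml &quot;&gt;&lt;code class=&quot;language-toml&quot; data-lang=&quot;toml&quot;&gt;[tasks.db-reset]
description = &amp;quot;Drop and recreate the dev database&amp;quot;
run = &amp;quot;mix ecto.reset&amp;quot;

[tasks.test-unit]
description = &amp;quot;Run unit tests only&amp;quot;
run = &amp;quot;mix test --exclude integration&amp;quot;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;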
&lt;h2 id=&quot;the-bottom-line&quot;&gt;The bottom line&lt;&#x2F;h2&gt;
&lt;p&gt;Mise is not a revolutionary tool. It does not do anything that was previously impossible. You could always install the right Rust version, write a Makefile, set up direnv, and maintain a scripts folder. What mise does is collapse all of that into a single file that is readable, portable, and consistent.&lt;&#x2F;p&gt;
&lt;p&gt;The compound effect is what matters. Forty repositories, four organizations, six languages, one pattern. Clone, install, run. No guessing which build system this particular project uses. No debugging a Makefile that works on Linux but breaks on macOS. No explaining to a contributor that they need asdf plus three plugins plus direnv plus GNU make before they can run the tests.&lt;&#x2F;p&gt;
&lt;p&gt;Every new project starts with a &lt;code&gt;mise.toml&lt;&#x2F;code&gt;. Setup takes two commands instead of a page of instructions. Contributors do not message me asking how to run things. They run &lt;code&gt;mise tasks&lt;&#x2F;code&gt; and figure it out.&lt;&#x2F;p&gt;
&lt;p&gt;That is the tool working.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;mise.jdx.dev&#x2F;&quot;&gt;mise&lt;&#x2F;a&gt; - Official documentation and installation guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;jdx&#x2F;mise&quot;&gt;mise source code&lt;&#x2F;a&gt; - GitHub repository and issue tracker&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;asdf-vm.com&#x2F;&quot;&gt;asdf&lt;&#x2F;a&gt; - The version manager mise was originally inspired by&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;nixos.org&#x2F;&quot;&gt;Nix&lt;&#x2F;a&gt; - Reproducible builds and development environments&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.gnu.org&#x2F;software&#x2F;make&#x2F;&quot;&gt;GNU Make&lt;&#x2F;a&gt; - The build tool mise replaces for task automation&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;toml.io&#x2F;&quot;&gt;TOML specification&lt;&#x2F;a&gt; - The configuration format mise uses&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;direnv.net&#x2F;&quot;&gt;direnv&lt;&#x2F;a&gt; - Environment variable manager that mise’s &lt;code&gt;[env]&lt;&#x2F;code&gt; section replaces&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; - Real-world mise configuration referenced in this article&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;mise-hx&quot;&gt;mise-hx&lt;&#x2F;a&gt; - Example of a custom mise plugin (for the hx Haskell toolchain)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Disk space maintenance on Void Linux</title>
          <pubDate>Wed, 01 May 2024 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/disk-space-void-linux-maintenance/</link>
          <guid>https://raskell.io/articles/disk-space-void-linux-maintenance/</guid>
          <description xml:base="https://raskell.io/articles/disk-space-void-linux-maintenance/">&lt;h2 id=&quot;monday-morning-surprise&quot;&gt;Monday morning surprise&lt;&#x2F;h2&gt;
&lt;p&gt;Since I spend most of my time actually using my computer rather than configuring my beloved Linux distribution, Void Linux, I have developed the tendency not to bother with Void at all until something crucial becomes unusable. In the almost two years since switching from Arch to Void, I had never encountered a major problem and felt I had made the right decision.&lt;&#x2F;p&gt;
&lt;p&gt;Out of curiosity whether the 250GB solid-state drive would be enough, I checked my disk usage. And there came the surprise:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;$ df -H
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        8.4G     0  8.4G   0% &amp;#x2F;dev
tmpfs           8.4G  1.9M  8.4G   1% &amp;#x2F;dev&amp;#x2F;shm
tmpfs           8.4G  1.4M  8.4G   1% &amp;#x2F;run
&amp;#x2F;dev&amp;#x2F;nvme0n1p3  138G  117G   21G  85% &amp;#x2F;
efivarfs        158k   85k   69k  56% &amp;#x2F;sys&amp;#x2F;firmware&amp;#x2F;efi&amp;#x2F;efivars
cgroup          8.4G     0  8.4G   0% &amp;#x2F;sys&amp;#x2F;fs&amp;#x2F;cgroup
&amp;#x2F;dev&amp;#x2F;nvme0n1p4  366G   34G  332G  10% &amp;#x2F;home
&amp;#x2F;dev&amp;#x2F;nvme0n1p1  536M  152k  536M   1% &amp;#x2F;boot&amp;#x2F;efi
tmpfs           8.4G   25k  8.4G   1% &amp;#x2F;tmp
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;My root partition was full, way too full in my opinion. Did I miss something? Is Void not what I was looking for after all? I don’t enjoy babysitting my OS &lt;em&gt;du jour&lt;&#x2F;em&gt;.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-painless-solution&quot;&gt;The painless solution&lt;&#x2F;h2&gt;
&lt;p&gt;A quick Brave search turned up what I was looking for. A kind fellow software engineer from China had not shied away from writing a blog post about his journey when he faced the very same problem. Annoyed at having to deal with it at all, I copy-pasted the following three commands as quickly as possible, not minding what side effects I might run into.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;1-cleaning-the-package-cache&quot;&gt;1. Cleaning the package cache&lt;&#x2F;h3&gt;
&lt;p&gt;All the knowledge I was lacking was to be found in the man page of &lt;code&gt;xbps-remove&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# xbps-remove -yO
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.voidlinux.org&#x2F;xbps-remove.1#O,&quot;&gt;man page&lt;&#x2F;a&gt; of &lt;code&gt;xbps-remove&lt;&#x2F;code&gt; tells us the &lt;code&gt;-O&lt;&#x2F;code&gt; parameter takes care of &lt;em&gt;cleaning the cache directory removing obsolete binary packages.&lt;&#x2F;em&gt; Obsolete binary packages? Good riddance! I was surprised to learn that this step alone freed up almost half of the used space on my root partition.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;2-removing-orphaned-packages&quot;&gt;2. Removing orphaned packages&lt;&#x2F;h3&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# xbps-remove -yo
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Here the same &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.voidlinux.org&#x2F;xbps-remove.1#o,&quot;&gt;man page&lt;&#x2F;a&gt; tells us that the &lt;code&gt;-o&lt;&#x2F;code&gt; parameter takes care of &lt;em&gt;removing installed package orphans that were installed automatically (as dependencies) and are not currently dependencies of any installed package.&lt;&#x2F;em&gt; As before, good riddance!&lt;&#x2F;p&gt;
&lt;h3 id=&quot;3-purging-old-unused-kernels&quot;&gt;3. Purging old, unused kernels&lt;&#x2F;h3&gt;
&lt;p&gt;This one is interesting. While I knew that the people behind Void had developed their own package management ecosystem, I hadn’t fully realized that the upstream Void installation ships additional utilities for managing my beloved OS. One of these, apparently, is a &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;void-linux&#x2F;void-packages&#x2F;blob&#x2F;master&#x2F;srcpkgs&#x2F;base-files&#x2F;files&#x2F;vkpurge&quot;&gt;shell script&lt;&#x2F;a&gt; named &lt;code&gt;vkpurge&lt;&#x2F;code&gt;, which I assume is short for &lt;code&gt;Void kernel purge&lt;&#x2F;code&gt;. I like this kind of naming that plainly implies the tool’s function.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# vkpurge rm all
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;It performed as expected. Old kernel files (and modules?) were indeed purged and freed up even more disk space. I should add that this step is optional as it is always useful to have some old kernels at hand when things hit the fan (which for me, they haven’t in a very, very long time).&lt;&#x2F;p&gt;
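&lt;p&gt;If you would rather keep a fallback kernel around, &lt;code&gt;vkpurge&lt;&#x2F;code&gt; can also list removable versions and purge them one at a time (the version numbers below are illustrative):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;# vkpurge list
5.12.14_1
5.13.4_1
# vkpurge rm 5.12.14_1
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;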
&lt;h2 id=&quot;result&quot;&gt;Result&lt;&#x2F;h2&gt;
&lt;p&gt;I couldn’t be happier.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;$ df -H
Filesystem      Size  Used Avail Use% Mounted on
...
&amp;#x2F;dev&amp;#x2F;nvme0n1p3  138G   45G   93G  33% &amp;#x2F;
...
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;renewal-of-faith&quot;&gt;Renewal of faith&lt;&#x2F;h2&gt;
&lt;p&gt;Overall, why am I even writing this if some other fellow engineer already figured it out? Simply because it lets me explain why I have enjoyed my journey with Void as my go-to Linux distribution. It keeps things simple: a handful of well-documented utilities. So simple that a single Brave search suffices to find the answer to my problems.&lt;&#x2F;p&gt;
&lt;p&gt;This very aspect of Void is worth highlighting. I remember more arcane Linux distributions that had me in their grip while I figured things out. Many Google searches were necessary, and even more trial-and-error attempts, to get simple things fixed.&lt;&#x2F;p&gt;
&lt;p&gt;Now back to my Monday morning.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt; - Official project site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;xbps&#x2F;index.html&quot;&gt;Void Linux Handbook: XBPS&lt;&#x2F;a&gt; - Official documentation for the XBPS package manager&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.voidlinux.org&#x2F;xbps-remove.1&quot;&gt;xbps-remove(1) man page&lt;&#x2F;a&gt; - Manual page for package removal and cache cleaning&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;void-linux&#x2F;void-packages&#x2F;blob&#x2F;master&#x2F;srcpkgs&#x2F;base-files&#x2F;files&#x2F;vkpurge&quot;&gt;vkpurge source&lt;&#x2F;a&gt; - Shell script for purging old kernels on Void Linux&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;about&#x2F;faq.html&quot;&gt;Void Linux FAQ&lt;&#x2F;a&gt; - Common questions about running and maintaining Void&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;hr &#x2F;&gt;
&lt;div class=&quot;footnote-definition&quot; id=&quot;1&quot;&gt;&lt;sup class=&quot;footnote-definition-label&quot;&gt;1&lt;&#x2F;sup&gt;
&lt;p&gt;Painting in header image is “Seaside” by Aleksandr Deyneka&lt;&#x2F;p&gt;
&lt;&#x2F;div&gt;
</description>
      </item>
      <item>
          <title>All beginning is Haskell</title>
          <pubDate>Mon, 06 Mar 2023 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/all-beginning-is-haskell/</link>
          <guid>https://raskell.io/articles/all-beginning-is-haskell/</guid>
          <description xml:base="https://raskell.io/articles/all-beginning-is-haskell/">&lt;p&gt;This site is called raskell.io. That is not an accident.&lt;&#x2F;p&gt;
&lt;p&gt;I started learning Haskell because I liked mathematics and someone told me there was a programming language built on top of it. Not “inspired by” in the loose way that every language claims some mathematical foundation. Actually built on lambda calculus, category theory, and type theory, in a way where the math is not decoration but structure.&lt;&#x2F;p&gt;
&lt;p&gt;What I did not expect was how thoroughly it would rewire the way I think about building software. Not because Haskell is the best language for every task. It is not, and I write far more Rust than Haskell these days. But because Haskell teaches you to think about programs in a way that makes you better at everything else.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-haskell-actually-teaches-you&quot;&gt;What Haskell actually teaches you&lt;&#x2F;h2&gt;
&lt;p&gt;Most introductions to Haskell talk about pure functions, immutability, and monads. They are not wrong, but they miss the point. The point is not any single feature. It is how those features combine into a way of thinking about programs as compositions of well-typed transformations.&lt;&#x2F;p&gt;
&lt;p&gt;In an imperative language, you think about sequences of steps. Do this, then that, then check a condition, then loop. The program is a recipe. In Haskell, you think about transformations. What goes in, what comes out, what shape does the data have at each stage. The program is a pipeline.&lt;&#x2F;p&gt;
&lt;p&gt;This sounds abstract until you see it in practice. Suppose you need to process a list of user records: filter out inactive users, extract their email addresses, and normalize them to lowercase.&lt;&#x2F;p&gt;
&lt;p&gt;In an imperative style, you write a loop with conditions and mutations. In Haskell:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;activeEmails :: [User] -&amp;gt; [Email]
activeEmails = map (normalize . email) . filter isActive
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;One line. Read it right to left: filter active users, then map over the result, extracting and normalizing emails. The type signature tells you what goes in (&lt;code&gt;[User]&lt;&#x2F;code&gt;) and what comes out (&lt;code&gt;[Email]&lt;&#x2F;code&gt;). No mutation. No intermediate variables. No place for off-by-one errors or null pointer exceptions.&lt;&#x2F;p&gt;
&lt;p&gt;The type signature is not just documentation. It is a contract enforced by the compiler. If &lt;code&gt;isActive&lt;&#x2F;code&gt; expects a &lt;code&gt;User&lt;&#x2F;code&gt; and you pass it a &lt;code&gt;String&lt;&#x2F;code&gt;, the program will not compile. If &lt;code&gt;normalize&lt;&#x2F;code&gt; returns an &lt;code&gt;Email&lt;&#x2F;code&gt; but you try to use it as a &lt;code&gt;String&lt;&#x2F;code&gt;, the program will not compile. The compiler is your first reviewer, and it is tireless.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;types-as-design-tools&quot;&gt;Types as design tools&lt;&#x2F;h2&gt;
&lt;p&gt;The deeper lesson is that types are not just error catchers. They are design tools.&lt;&#x2F;p&gt;
&lt;p&gt;When I design a system in Haskell, I start with the types. What are the entities? What are the relationships? What transformations are valid? The type system forces you to be precise about these questions before you write any logic. This precision surfaces design problems early, when they are cheap to fix.&lt;&#x2F;p&gt;
&lt;p&gt;Consider modeling a document that can be in one of several states:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;data Document
  = Draft { content :: Text, author :: UserId }
  | UnderReview { content :: Text, author :: UserId, reviewer :: UserId }
  | Published { content :: Text, author :: UserId, publishedAt :: UTCTime }
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This is an algebraic data type. Each variant carries exactly the data that makes sense for that state. A &lt;code&gt;Draft&lt;&#x2F;code&gt; has no reviewer. A &lt;code&gt;Published&lt;&#x2F;code&gt; document has a timestamp. You cannot accidentally access a reviewer on a draft because the type system will not let you. The invalid state is unrepresentable.&lt;&#x2F;p&gt;
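&lt;p&gt;The payoff shows up in state transitions. A hypothetical helper (not from a real codebase) can only produce an &lt;code&gt;UnderReview&lt;&#x2F;code&gt; document from a &lt;code&gt;Draft&lt;&#x2F;code&gt;, and every other case must be handled explicitly:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;-- Only a Draft can move to review; anything else is rejected by construction
submitForReview :: UserId -&amp;gt; Document -&amp;gt; Maybe Document
submitForReview rev (Draft c a) = Just (UnderReview c a rev)
submitForReview _   _           = Nothing
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;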
&lt;p&gt;This pattern, making illegal states unrepresentable, is perhaps the most valuable idea I took from Haskell. I use it in Rust constantly. Rust’s &lt;code&gt;enum&lt;&#x2F;code&gt; with associated data is directly descended from Haskell’s algebraic data types, and the same design principle applies: encode your invariants in the type system and let the compiler enforce them.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-monad-is-not-the-point&quot;&gt;The monad is not the point&lt;&#x2F;h2&gt;
&lt;p&gt;Every Haskell introduction eventually gets to monads, usually with a metaphor involving burritos or boxes. I will skip the metaphor.&lt;&#x2F;p&gt;
&lt;p&gt;A monad is a pattern for sequencing computations that carry some context. The &lt;code&gt;IO&lt;&#x2F;code&gt; monad carries the context of interacting with the outside world. The &lt;code&gt;Maybe&lt;&#x2F;code&gt; monad carries the context of possible failure. The &lt;code&gt;State&lt;&#x2F;code&gt; monad carries the context of mutable state. The pattern is the same in each case: take a value in a context, apply a function that produces a new value in a context, get back a combined context.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;haskell&quot; class=&quot;language-haskell &quot;&gt;&lt;code class=&quot;language-haskell&quot; data-lang=&quot;haskell&quot;&gt;lookupUser :: UserId -&amp;gt; IO (Maybe User)
lookupUser uid = do
  conn &amp;lt;- getConnection
  result &amp;lt;- query conn &amp;quot;SELECT * FROM users WHERE id = ?&amp;quot; [uid]
  return (listToMaybe result)
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The &lt;code&gt;IO&lt;&#x2F;code&gt; monad here sequences database operations. The &lt;code&gt;Maybe&lt;&#x2F;code&gt; handles the case where no user is found. The types tell you both things at a glance: this function does I&#x2F;O and might not return a result.&lt;&#x2F;p&gt;
&lt;p&gt;The point of monads is not that they are clever. The point is that they make effects explicit and composable. In most languages, a function can do I&#x2F;O, throw exceptions, mutate global state, or launch missiles, and you cannot tell from its signature. In Haskell, the type signature tells you exactly what effects a function can have. &lt;code&gt;Int -&amp;gt; Int&lt;&#x2F;code&gt; is pure. &lt;code&gt;Int -&amp;gt; IO Int&lt;&#x2F;code&gt; does I&#x2F;O. &lt;code&gt;Int -&amp;gt; Maybe Int&lt;&#x2F;code&gt; can fail. The information is right there, enforced by the compiler.&lt;&#x2F;p&gt;
&lt;p&gt;This discipline, making effects explicit, changed how I design APIs even in languages that do not enforce it. When I write a Rust function that returns &lt;code&gt;Result&amp;lt;T, E&amp;gt;&lt;&#x2F;code&gt;, I am using the same pattern: making failure explicit in the type rather than hiding it behind an exception. Rust learned this from Haskell, and so did I.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-i-am-still-building-haskell-tooling&quot;&gt;Why I am still building Haskell tooling&lt;&#x2F;h2&gt;
&lt;p&gt;If Haskell taught me so much, why do I mostly write Rust?&lt;&#x2F;p&gt;
&lt;p&gt;The honest answer: Haskell’s ecosystem has gaps. The language itself is excellent. GHC is one of the most sophisticated compilers ever built. The type system is unmatched in its expressiveness among production languages. But the surrounding infrastructure, the package management, the build tooling, the deployment story, has not kept pace.&lt;&#x2F;p&gt;
&lt;p&gt;Dependency management in Haskell is fragmented. Cabal and Stack coexist with overlapping but incompatible approaches. Build times are long. Cross-compilation is painful. Setting up a Haskell development environment from scratch still involves more friction than it should in 2026.&lt;&#x2F;p&gt;
&lt;p&gt;This is why hx exists. hx is a Haskell toolchain CLI that I am building in Rust. The choice of implementation language is deliberate. Haskell’s tooling problems are partly caused by tooling that is itself written in Haskell, creating bootstrap problems and long compile times for the tools themselves. A Rust binary starts instantly, compiles to a single static executable, and cross-compiles trivially. The tool should not have the same dependencies as the thing it manages.&lt;&#x2F;p&gt;
&lt;p&gt;hx is distributed through &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;raskell.io&#x2F;articles&#x2F;mise-ate-my-makefile&#x2F;&quot;&gt;mise&lt;&#x2F;a&gt; (naturally), as well as through AUR, Homebrew, Scoop, and Chocolatey. The goal is that setting up a Haskell project should be as frictionless as setting up a Rust project: one command to install the toolchain, one command to build.&lt;&#x2F;p&gt;
&lt;p&gt;On the other end of the spectrum, bhc (the Basel Haskell Compiler) is an experiment in taking Haskell in a direction GHC was never designed for: compiling Haskell for low-latency runtimes without a garbage collector. The target is workloads like tensor pipelines and real-time systems where GC pauses are not acceptable. bhc is early and ambitious, but it comes from the same conviction: Haskell’s ideas deserve better infrastructure than they currently have.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-haskell-in-my-rust&quot;&gt;The Haskell in my Rust&lt;&#x2F;h2&gt;
&lt;p&gt;I write Rust the way Haskell taught me to think.&lt;&#x2F;p&gt;
&lt;p&gt;Rust’s ownership model is not the same as Haskell’s purity, but it serves a similar purpose: it forces you to think about data flow explicitly. In Haskell, you cannot mutate a value because the language will not let you. In Rust, you can mutate, but the borrow checker forces you to be explicit about who owns the data and who can see it. Both languages make you think before you write.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt;, my CRDT engine, uses algebraic data types for its merge semantics. Each CRDT field type (LwwRegister, GrowOnlySet, ObservedRemoveSet) is an enum variant with associated data, exactly the pattern I described above. The merge function is associative, commutative, and idempotent. These are mathematical properties that I learned to care about from Haskell, where such properties are often expressed as type class laws.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt;, the reverse proxy, uses Rust’s type system to enforce that WAF decisions are handled in the correct pipeline stage. An &lt;code&gt;AgentDecision&lt;&#x2F;code&gt; is either &lt;code&gt;Allow&lt;&#x2F;code&gt;, &lt;code&gt;Block&lt;&#x2F;code&gt;, or &lt;code&gt;Modify&lt;&#x2F;code&gt;, and the proxy’s merge logic ensures that a &lt;code&gt;Block&lt;&#x2F;code&gt; from any agent cannot be overridden. The pattern is a monoid (decisions combine associatively with &lt;code&gt;Block&lt;&#x2F;code&gt; as the absorbing element), though nobody would call it that in the Rust codebase. The concept came from Haskell. The implementation is pure Rust.&lt;&#x2F;p&gt;
&lt;p&gt;Even &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt;, the agentic orchestrator, uses Haskell-influenced patterns. DAG workflows are compositions of typed transformations. Events are algebraic data types with exhaustive pattern matching. The event-sourcing model treats state as a fold over an event stream. &lt;code&gt;foldl&lt;&#x2F;code&gt; in Haskell, &lt;code&gt;Iterator::fold&lt;&#x2F;code&gt; in Rust. Same idea, different syntax.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-raskell&quot;&gt;Why “raskell”&lt;&#x2F;h2&gt;
&lt;p&gt;The name is a portmanteau. Raffael plus Haskell. I chose it because Haskell is where my engineering thinking started to take its current shape. Not the first language I learned, but the first one that changed how I think about all the others.&lt;&#x2F;p&gt;
&lt;p&gt;I do not believe you need to write Haskell to benefit from Haskell. But I believe that learning it, really learning it, not just reading about monads but building something real with algebraic data types and type classes and higher-order functions, will make you a better engineer in whatever language you actually use.&lt;&#x2F;p&gt;
&lt;p&gt;All beginning is Haskell. The rest is implementation.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;learning-haskell&quot;&gt;Learning Haskell&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.haskell.org&#x2F;&quot;&gt;Haskell Language&lt;&#x2F;a&gt; - Official site with documentation and community links&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;learnyouahaskell.com&#x2F;&quot;&gt;Learn You a Haskell for Great Good!&lt;&#x2F;a&gt; - Approachable illustrated introduction&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;book.realworldhaskell.org&#x2F;&quot;&gt;Real World Haskell&lt;&#x2F;a&gt; - Practical Haskell for production use&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;&quot;&gt;Haskell Wiki&lt;&#x2F;a&gt; - Community-maintained reference and tutorials&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;Typeclassopedia&quot;&gt;Typeclassopedia&lt;&#x2F;a&gt; - Comprehensive guide to Haskell’s type class hierarchy&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;type-systems-and-theory&quot;&gt;Type systems and theory&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Haskell_Curry&quot;&gt;Haskell Curry&lt;&#x2F;a&gt; - The logician the language is named after&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Lambda_calculus&quot;&gt;Lambda calculus&lt;&#x2F;a&gt; - Alonzo Church’s formal system underlying Haskell&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Hindley%E2%80%93Milner_type_system&quot;&gt;Hindley-Milner type system&lt;&#x2F;a&gt; - The type inference algorithm at the core of Haskell and ML&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;blog.janestreet.com&#x2F;effective-ml-revisited&#x2F;&quot;&gt;Making illegal states unrepresentable&lt;&#x2F;a&gt; - Yaron Minsky’s influential talk on using types for correctness&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;Algebraic_data_type&quot;&gt;Algebraic data types&lt;&#x2F;a&gt; - Haskell wiki reference on sum and product types&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;monads-and-effects&quot;&gt;Monads and effects&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;homepages.inf.ed.ac.uk&#x2F;wadler&#x2F;papers&#x2F;marktoberdorf&#x2F;baastad.pdf&quot;&gt;Philip Wadler, “Monads for functional programming”&lt;&#x2F;a&gt; - The foundational paper on monads in programming&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;wiki.haskell.org&#x2F;All_About_Monads&quot;&gt;All About Monads&lt;&#x2F;a&gt; - Haskell wiki guide to monadic programming&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;haskell-tooling&quot;&gt;Haskell tooling&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.haskell.org&#x2F;ghc&#x2F;&quot;&gt;GHC&lt;&#x2F;a&gt; - The Glasgow Haskell Compiler&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.haskell.org&#x2F;cabal&#x2F;&quot;&gt;Cabal&lt;&#x2F;a&gt; - Haskell’s build and package system&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.haskellstack.org&#x2F;&quot;&gt;Stack&lt;&#x2F;a&gt; - Alternative build tool with curated package sets&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;mise-hx&quot;&gt;mise-hx&lt;&#x2F;a&gt; - mise plugin for the hx Haskell toolchain CLI&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;projects-referenced&quot;&gt;Projects referenced&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;conflux&quot;&gt;Conflux&lt;&#x2F;a&gt; - CRDT engine using algebraic data types for merge semantics&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;zentinelproxy.io&quot;&gt;Zentinel&lt;&#x2F;a&gt; - Reverse proxy with monoid-based decision merging&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;shiioo&quot;&gt;Shiioo&lt;&#x2F;a&gt; - Agentic orchestrator using event-sourced state folds&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>My OpenBSD journey: Getting it virtualized with libvirt (1)</title>
          <pubDate>Mon, 06 Feb 2023 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/my-openbsd-journey-getting-it-virtualized-with-libvirt-1/</link>
          <guid>https://raskell.io/articles/my-openbsd-journey-getting-it-virtualized-with-libvirt-1/</guid>
          <description xml:base="https://raskell.io/articles/my-openbsd-journey-getting-it-virtualized-with-libvirt-1/">&lt;h2 id=&quot;void-linux-as-my-daily-driver&quot;&gt;Void Linux as my daily driver&lt;&#x2F;h2&gt;
&lt;p&gt;Around six months ago, I decided to ditch the long-in-the-tooth Arch-based setup on my beloved ThinkPad X1 Carbon. I had been loyal over the years, and had almost come to believe that Arch would be a constant in my adult life. While I kept up with emerging technologies, I somehow lost track of the ever-diversifying landscape of Linux distributions. For a while I kept stumbling over references to what seemed to be yet another generically named distribution: &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt;. It kept poking at my curiosity, supposedly feeling like Arch Linux in the old days while sharing substantial DNA with the BSD operating systems. That is a story I might tell another day, but to remain brief: the BSDs, in particular OpenBSD with its famously uncompromising lead developer Theo de Raadt, were always what I considered the endgame. For decades I thought of FreeBSD, NetBSD, and OpenBSD as the holy grail of Unix operating systems, always on my personal radar, and I felt I had to earn the intellectual capacity to put them to proper use one day. Last year, when I made the (almost painless) switch from Arch Linux to Void Linux, the simplicity and especially the barebones experience of Void reignited the fascination and admiration I had always had for the BSD operating systems and their philosophy.&lt;&#x2F;p&gt;
&lt;p&gt;While I could write (and definitely will in the near future) about my journey to Void Linux and how it has gone so far, I preferred to keep a diary of sorts and document every step of how one can approach and ultimately use &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt; in 2023. Big disclaimer: I have yet to install OpenBSD on the bare-metal server I ordered some days ago, so in the meantime I dabbled with virtualization to get it running. That is what brought me to &lt;em&gt;libvirt&lt;&#x2F;em&gt;, and to my surprise I learned that I would not need a full-fledged desktop virtualization solution like VirtualBox or VMware to efficiently run a virtualized OpenBSD machine. So, let’s recap; so far we have the following bill of materials:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Void Linux as the host system&lt;&#x2F;li&gt;
&lt;li&gt;OpenBSD to be virtualized on that host system&lt;&#x2F;li&gt;
&lt;li&gt;libvirt as the glue that makes virtualization feel like black magic&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;from-void-to-openbsd&quot;&gt;From Void to OpenBSD&lt;&#x2F;h3&gt;
&lt;p&gt;Before I tell you more about how things went down while setting up OpenBSD, let me give you some basic notions about both Void Linux (my distribution of choice, standing in here, for the sake of simplicity, for Linux distributions in general) and OpenBSD. As mentioned earlier, Void Linux and OpenBSD are both Unix-like operating systems, but they differ in enough ways to make a comparison worthwhile. Here are a few similarities and differences between the two:&lt;&#x2F;p&gt;
&lt;p&gt;What is similar:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Both are free and open-source operating systems.&lt;&#x2F;li&gt;
&lt;li&gt;Both use a package manager for software management. Void Linux uses XBPS, while OpenBSD uses pkg_add.&lt;&#x2F;li&gt;
&lt;li&gt;Both prioritize security and stability in their development and design.&lt;&#x2F;li&gt;
&lt;li&gt;Both maintain their package build definitions in a version-controlled repository, meaning changes are submitted and reviewed as pull requests from users.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;What is different:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;License: Void Linux is licensed under the MIT License, while OpenBSD’s code is distributed mostly under BSD- and ISC-style licenses.&lt;&#x2F;li&gt;
&lt;li&gt;Philosophy: OpenBSD prioritizes security and privacy, while Void Linux prioritizes simplicity and modularity.&lt;&#x2F;li&gt;
&lt;li&gt;Package Management: Void Linux uses the XBPS binary package manager, while OpenBSD uses pkg_add (also binary). OpenBSD additionally has a ports system for building from source.&lt;&#x2F;li&gt;
&lt;li&gt;Package Repository: Void Linux has a large and diverse repository, while OpenBSD has a smaller and more curated repository.&lt;&#x2F;li&gt;
&lt;li&gt;Init System: Void Linux uses runit as its init system, while OpenBSD uses its traditional rc system. Neither uses systemd.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
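&lt;p&gt;To make that last difference concrete, here is a sketch of day-to-day service management on both systems, using sshd as the example service (on Void, runit service definitions live in &lt;code&gt;&#x2F;etc&#x2F;sv&lt;&#x2F;code&gt; and are enabled by linking them into &lt;code&gt;&#x2F;var&#x2F;service&lt;&#x2F;code&gt;):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# Void Linux (runit): enable, inspect, and restart a service
$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;sshd &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo sv status sshd
$ sudo sv restart sshd

# OpenBSD (rc): the same operations via rcctl(8)
$ doas rcctl enable sshd
$ doas rcctl check sshd
$ doas rcctl restart sshd
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;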
&lt;h2 id=&quot;what-is-openbsd-in-a-nutshell&quot;&gt;What is OpenBSD in a nutshell&lt;&#x2F;h2&gt;
&lt;p&gt;OpenBSD is a free and open-source operating system that focuses on security, standardization, and robustness. It is based on the Berkeley Software Distribution (BSD) Unix operating system and is developed by a global community of volunteers. OpenBSD aims to provide a secure platform for both personal and enterprise use by implementing strong security features, including access control mechanisms, encryption, and auditing.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Theo_de_Raadt&quot;&gt;Theo de Raadt&lt;&#x2F;a&gt; is the founder and lead developer of OpenBSD. His main objective with OpenBSD is to create a secure operating system that is free from backdoors, vulnerabilities, and other security weaknesses. He is committed to auditing the source code of the operating system and third-party software included with it, to identify and remove any potential security risks. De Raadt is also dedicated to improving the overall quality of the codebase and ensuring compatibility with a wide range of hardware and software.&lt;&#x2F;p&gt;
&lt;p&gt;What makes OpenBSD really special and stand out is that it has developed a suite of tools that got adopted by other OSs like Linux, macOS, or even Windows. The most famous instance of such adoption is the now de facto standard OpenSSH suite, which emerged from within the development circle of the OpenBSD project. OpenBSD has also pioneered security mechanisms such as pledge and unveil, which restrict what a running process may do and which parts of the filesystem it may see; with a different spin, they pursue goals that Linux tackles with mechanisms like seccomp and namespaces. Go check out &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;why-openbsd.rocks&#x2F;fact&#x2F;freezero&#x2F;&quot;&gt;Why OpenBSD rocks&lt;&#x2F;a&gt; to get a feel for what makes OpenBSD so unique and interesting.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;virtualization-with-libvirt&quot;&gt;Virtualization with libvirt&lt;&#x2F;h3&gt;
&lt;p&gt;So now, let’s get back to our virtualization endeavour: running OpenBSD on top of a Void Linux installation. If you happen to be using another Linux distribution, most of the individual steps will be very similar. That brings me to the next technology we should explain a bit more here, and that is libvirt.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;libvirt.org&#x2F;&quot;&gt;libvirt&lt;&#x2F;a&gt; is an open-source virtualization management library that provides a simple and unified API for managing virtualization technologies, including KVM, QEMU, Xen, and others. It aims to simplify the process of creating, managing, and migrating virtual machines, storage, and networks, and to make it easier for administrators to manage virtual environments.&lt;&#x2F;p&gt;
&lt;p&gt;To virtualize an operating system like OpenBSD with libvirt, you need to follow these steps:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Install libvirt and the virtualization technology you want to use, such as KVM.&lt;&#x2F;li&gt;
&lt;li&gt;Download the OpenBSD iso file and place it in a location accessible by libvirt.&lt;&#x2F;li&gt;
&lt;li&gt;Create a new virtual machine in libvirt with the OpenBSD ISO as the installation media. This can be done through the command line or using a graphical user interface such as virt-manager.&lt;&#x2F;li&gt;
&lt;li&gt;Configure the virtual machine, including the amount of memory, CPU, and disk space, to meet the requirements of OpenBSD.&lt;&#x2F;li&gt;
&lt;li&gt;Start the virtual machine and install OpenBSD as you would on a physical machine.&lt;&#x2F;li&gt;
&lt;li&gt;Once the installation is complete, you can configure the virtual network, storage, and other settings as required.&lt;&#x2F;li&gt;
&lt;li&gt;Finally, you can use the libvirt API or the command line to manage and control the virtual machine, including starting, stopping, migrating, and snapshotting.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h3 id=&quot;step-by-step-guide&quot;&gt;Step-by-step guide&lt;&#x2F;h3&gt;
&lt;p&gt;Let’s first install the &lt;code&gt;libvirt&lt;&#x2F;code&gt; package along with some related packages we need in order to connect via VNC. VNC will give us access to the graphical console of the running OpenBSD instance.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo xbps-install -S dbus qemu libvirt virt-manager virt-viewer tigervnc
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Now we need to add our user, in my case &lt;code&gt;raskell&lt;&#x2F;code&gt;, to the &lt;code&gt;libvirt&lt;&#x2F;code&gt; group, which was created when libvirt was installed.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo usermod -aG libvirt raskell
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
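&lt;p&gt;One Void-specific step that is easy to forget: installing a package does not enable its service. libvirt needs &lt;code&gt;dbus&lt;&#x2F;code&gt; and its own daemons running, which on runit means linking them into &lt;code&gt;&#x2F;var&#x2F;service&lt;&#x2F;code&gt;. A minimal sketch follows; the service names are those shipped by Void’s packages at the time of writing, and you should log out and back in so the new group membership takes effect:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;dbus &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;libvirtd &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;virtlogd &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo ln -s &amp;#x2F;etc&amp;#x2F;sv&amp;#x2F;virtlockd &amp;#x2F;var&amp;#x2F;service&amp;#x2F;
$ sudo sv status libvirtd
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;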
&lt;p&gt;OpenBSD follows a six-month release cycle, so we will need to upgrade roughly every six months to keep up with the latest packages. During a release’s lifetime, only security and bug fixes are applied to the curated package set. Therefore, in February 2023, we’re using the &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;72.html&quot;&gt;OpenBSD 7.2&lt;&#x2F;a&gt; release. As I live in Switzerland, I chose to pull the iso image from a Swiss mirror, in this case from &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;mirror.ungleich.ch&#x2F;pub&#x2F;OpenBSD&#x2F;7.2&#x2F;&quot;&gt;&lt;code&gt;mirror.ungleich.ch&#x2F;pub&#x2F;OpenBSD&lt;&#x2F;code&gt;&lt;&#x2F;a&gt; (check which mirror is closest to you to get the best download rate).&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ cd &amp;#x2F;var&amp;#x2F;lib&amp;#x2F;libvirt&amp;#x2F;boot&amp;#x2F;
$ sudo wget https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;install72.iso
--2023-01-12 20:48:15--  https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;install72.iso
Resolving mirror.ungleich.ch (mirror.ungleich.ch)... 2a0a:e5c0:2:2:400:c8ff:fe68:bef3, 185.203.114.135
Connecting to mirror.ungleich.ch (mirror.ungleich.ch)|2a0a:e5c0:2:2:400:c8ff:fe68:bef3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 583352320 (556M) [application&amp;#x2F;octet-stream]
Saving to: ‘install72.iso’

install72.iso                        100%[====================================================================&amp;gt;] 556.33M  7.11MB&amp;#x2F;s    in 77s     

2023-01-12 20:49:33 (7.19 MB&amp;#x2F;s) - ‘install72.iso’ saved [583352320&amp;#x2F;583352320]
$ sudo wget https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;SHA256
--2023-01-12 20:47:38--  https:&amp;#x2F;&amp;#x2F;mirror.ungleich.ch&amp;#x2F;pub&amp;#x2F;OpenBSD&amp;#x2F;7.2&amp;#x2F;amd64&amp;#x2F;SHA256
Resolving mirror.ungleich.ch (mirror.ungleich.ch)... 2a0a:e5c0:2:2:400:c8ff:fe68:bef3, 185.203.114.135
Connecting to mirror.ungleich.ch (mirror.ungleich.ch)|2a0a:e5c0:2:2:400:c8ff:fe68:bef3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1992 (1.9K) [application&amp;#x2F;octet-stream]
Saving to: ‘SHA256’

SHA256                               100%[====================================================================&amp;gt;]   1.95K  --.-KB&amp;#x2F;s    in 0s      

2023-01-12 20:47:39 (742 MB&amp;#x2F;s) - ‘SHA256’ saved [1992&amp;#x2F;1992]
$ grep install72.iso SHA256 &amp;gt; &amp;#x2F;tmp&amp;#x2F;x
$ sha256sum -c &amp;#x2F;tmp&amp;#x2F;x
$ rm &amp;#x2F;tmp&amp;#x2F;x
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Before we can start the virtual machine and get our OpenBSD instance running, we need to define the configuration it will be virtualized and ultimately booted with. This is done with &lt;code&gt;virt-install&lt;&#x2F;code&gt;. Noteworthy here: we use &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.qemu.org&#x2F;&quot;&gt;QEMU&lt;&#x2F;a&gt; as our emulation solution of choice, and we allocate up to 4 GB of RAM and 4 CPU cores to the machine (2 GB and 2 cores at boot, expandable up to those maximums).&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo virt-install \
      --name=openbsd \
      --virt-type=qemu \
      --memory=2048,maxmemory=4096 \
      --vcpus=2,maxvcpus=4 \
      --cpu host \
      --os-variant=openbsd7.0 \
      --cdrom=&amp;#x2F;var&amp;#x2F;lib&amp;#x2F;libvirt&amp;#x2F;boot&amp;#x2F;install72.iso \
      --network=bridge=virbr0,model=virtio \
      --graphics=vnc \
      --disk path=&amp;#x2F;var&amp;#x2F;lib&amp;#x2F;libvirt&amp;#x2F;images&amp;#x2F;openbsd.qcow2,size=40,bus=virtio,format=qcow2
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Once it is up and running, we can connect to the machine’s graphical console over VNC. To find out where that console is exposed, I used &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.libvirt.org&#x2F;manpages&#x2F;virsh.html&quot;&gt;virsh&lt;&#x2F;a&gt;. virsh is a command-line interface tool for managing virtualization environments created with libvirt. It allows us to manage virtual machines, storage pools, and network interfaces, as well as other virtualization components, from the command line.&lt;&#x2F;p&gt;
&lt;p&gt;To establish a VNC connection with a running libvirt virtualized OpenBSD instance, you can use the following steps:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Start the virtual machine in libvirt: You can start the virtual machine using the virsh command &lt;strong&gt;&lt;code&gt;virsh start &amp;lt;vm-name&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;, where &lt;strong&gt;&lt;code&gt;&amp;lt;vm-name&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt; is the name of the virtual machine you want to start.&lt;&#x2F;li&gt;
&lt;li&gt;Find the VNC display: Once the virtual machine is running, you can find the VNC display number for the virtual machine using the virsh command &lt;strong&gt;&lt;code&gt;virsh vncdisplay &amp;lt;vm-name&amp;gt;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;Connect to the VNC display: You can connect to the VNC display using a VNC client, such as &lt;strong&gt;&lt;code&gt;vncviewer&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;, and specify the IP address of the host running the virtual machine and the VNC display number. For example, if the host’s IP address is &lt;strong&gt;&lt;code&gt;192.168.0.100&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt; and the VNC display number is &lt;strong&gt;&lt;code&gt;:0&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;, the command to connect would be &lt;strong&gt;&lt;code&gt;vncviewer 192.168.0.100:0&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;Authenticate to the VNC server: You may need to enter a password to authenticate to the VNC server. The password is set when the virtual machine is created in libvirt.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;With these steps, you can establish a VNC connection with a running libvirt virtualized OpenBSD instance and interact with the virtual machine’s graphical user interface.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ virsh dumpxml openbsd | grep vnc
&amp;lt;graphics type=&amp;#x27;vnc&amp;#x27; port=&amp;#x27;5900&amp;#x27; autoport=&amp;#x27;yes&amp;#x27; listen=&amp;#x27;127.0.0.1&amp;#x27;&amp;gt;
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Like me, most of you will want to interact with a graphical interface such as X11. For that we need yet another tool, a so-called VNC viewer. A very simple implementation is &lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;tigervnc.org&#x2F;&quot;&gt;tigervnc&lt;&#x2F;a&gt;, which we already installed alongside libvirt at the beginning (if you skipped that, simply install it with &lt;code&gt;$ sudo xbps-install -S tigervnc&lt;&#x2F;code&gt;).&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo virsh --connect qemu:&amp;#x2F;&amp;#x2F;&amp;#x2F;system start openbsd
Domain &amp;#x27;openbsd&amp;#x27; started
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
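&lt;p&gt;Putting the VNC pieces together, attaching to the console then looks like the following sketch (the display number depends on your machine definition; &lt;code&gt;:0&lt;&#x2F;code&gt; corresponds to port 5900):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo virsh --connect qemu:&amp;#x2F;&amp;#x2F;&amp;#x2F;system vncdisplay openbsd
:0
$ vncviewer 127.0.0.1:0
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;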
&lt;h2 id=&quot;references-and-further-reading&quot;&gt;References and further reading&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;openbsd&quot;&gt;OpenBSD&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt; - Official project site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;faq&#x2F;&quot;&gt;OpenBSD FAQ&lt;&#x2F;a&gt; - Comprehensive installation and usage guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;why-openbsd.rocks&#x2F;&quot;&gt;Why OpenBSD Rocks&lt;&#x2F;a&gt; - Collection of OpenBSD innovations and features&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openssh.com&#x2F;&quot;&gt;OpenSSH&lt;&#x2F;a&gt; - The SSH suite that originated from the OpenBSD project&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Theo_de_Raadt&quot;&gt;Theo de Raadt&lt;&#x2F;a&gt; - OpenBSD founder and lead developer&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.openbsd.org&#x2F;pledge.2&quot;&gt;pledge(2)&lt;&#x2F;a&gt; - OpenBSD’s system call for restricting process capabilities&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;man.openbsd.org&#x2F;unveil.2&quot;&gt;unveil(2)&lt;&#x2F;a&gt; - OpenBSD’s system call for restricting filesystem visibility&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;void-linux&quot;&gt;Void Linux&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt; - Official project site&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;&quot;&gt;Void Linux Handbook&lt;&#x2F;a&gt; - Official documentation&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;docs.voidlinux.org&#x2F;xbps&#x2F;index.html&quot;&gt;XBPS package manager&lt;&#x2F;a&gt; - Void’s binary package management system&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;virtualization&quot;&gt;Virtualization&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;libvirt.org&#x2F;&quot;&gt;libvirt&lt;&#x2F;a&gt; - Virtualization management library and API&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.qemu.org&#x2F;&quot;&gt;QEMU&lt;&#x2F;a&gt; - Open-source machine emulator and virtualizer&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.libvirt.org&#x2F;manpages&#x2F;virsh.html&quot;&gt;virsh(1)&lt;&#x2F;a&gt; - Command-line interface for managing libvirt guests&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;tigervnc.org&#x2F;&quot;&gt;TigerVNC&lt;&#x2F;a&gt; - VNC implementation for remote desktop access&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;guides&quot;&gt;Guides&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.reddit.com&#x2F;r&#x2F;voidlinux&#x2F;comments&#x2F;ghwvv5&#x2F;guide_how_to_setup_qemukvm_emulation_on_void_linux&#x2F;&quot;&gt;[Guide] How to setup QEMU&#x2F;KVM emulation on Void Linux&lt;&#x2F;a&gt; - Community guide on r&#x2F;voidlinux&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.cyberciti.biz&#x2F;faq&#x2F;kvmvirtualization-virt-install-openbsd-unix-guest&#x2F;&quot;&gt;KVM virt-install: Install OpenBSD as Guest Operating System&lt;&#x2F;a&gt; - Step-by-step KVM installation guide&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.skreutz.com&#x2F;posts&#x2F;autoinstall-openbsd-on-qemu&#x2F;&quot;&gt;Auto-install OpenBSD on QEMU&lt;&#x2F;a&gt; - Automated OpenBSD installation on QEMU&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Hello and outlook</title>
          <pubDate>Tue, 20 Sep 2022 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://raskell.io/articles/hello-and-outlook/</link>
          <guid>https://raskell.io/articles/hello-and-outlook/</guid>
          <description xml:base="https://raskell.io/articles/hello-and-outlook/">&lt;h2 id=&quot;hello-world&quot;&gt;Hello world&lt;&#x2F;h2&gt;
&lt;p&gt;Welcome to my tech blog! My name is Raffael, and I am excited to share my musings about the state of tech, open source, and life as a software developer with you.&lt;&#x2F;p&gt;
&lt;p&gt;In this first post, I wanted to introduce myself and explain what you can expect to find on this blog. I have been working in the tech industry for several years, and I have experience with a variety of technologies, programming languages, and operating systems. I have a strong interest in open-source software and the principles of open collaboration and sharing that underpin it. I also have a passion for tech hardware and for restoring discarded equipment to working order. Using technology to make the world a more comfortable place is a big part of what drives me.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;outlook&quot;&gt;Outlook&lt;&#x2F;h2&gt;
&lt;p&gt;On this blog, I will be covering a wide range of topics related to technology in general, open-source, operating systems, programming, and tech hardware. Some of the things you can expect to find include:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;My dabblings into new and bleeding-edge technologies in general&lt;&#x2F;li&gt;
&lt;li&gt;My experience in using Linux, BSDs, and embedded operating systems&lt;&#x2F;li&gt;
&lt;li&gt;Exploration of new and exciting open-source projects&lt;&#x2F;li&gt;
&lt;li&gt;Discussion of the latest technology trends and innovations&lt;&#x2F;li&gt;
&lt;li&gt;Personal musings and insights into the tech industry&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;I believe that technology has the power to change the world, and I am excited to be a small part of that change. I hope that this blog will serve as a valuable resource for anyone interested in technology, open-source, Linux, OpenBSD, and programming. I will be posting new content on a regular basis, so be sure to check back often.&lt;&#x2F;p&gt;
&lt;p&gt;In my next post, I will dive into a specific topic and share my knowledge and experience. I want to make sure that my readers will learn something new every time they visit my blog.&lt;&#x2F;p&gt;
&lt;p&gt;Thank you for visiting, and I look forward to connecting with you on social media or in real life.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;references&quot;&gt;References&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;voidlinux.org&#x2F;&quot;&gt;Void Linux&lt;&#x2F;a&gt; - My Linux distribution of choice at the time of writing&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;www.openbsd.org&#x2F;&quot;&gt;OpenBSD&lt;&#x2F;a&gt; - The security-focused BSD operating system&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a rel=&quot;noopener&quot; target=&quot;_blank&quot; href=&quot;https:&#x2F;&#x2F;github.com&#x2F;raskell-io&#x2F;www.raskell.io&quot;&gt;raskell.io source&lt;&#x2F;a&gt; - This site’s source code&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
    </channel>
</rss>
