Notes from RSAC 2026


I am writing this about two weeks after RSAC 2026 closed. In between, my coworkers and I drove down the West Coast, through Big Sur to LA, then out to Las Vegas for Easter weekend. That was deliberate. Not the route, exactly, but the space. The conference gave me a lot to process, and I have learned over the years that I process better when I am moving, when I am not sitting at a desk trying to force conclusions out of raw impressions.
So here is what I took away. Not a session-by-session recap. Not a vendor roundup. Just the things that are still with me now that the noise has faded and the Pacific Coast Highway is behind me.
Milan Duric and I presented “Self-Learning WAF: Using Generative AI to Tame ModSecurity False Positives” on Wednesday morning, March 25, in Moscone West 3020. We had an 8:30 AM slot, which is either a curse or a blessing depending on your audience. It turned out to be a blessing. The room was full of people who had chosen to be there at that hour, which means they cared about the topic, and the energy reflected that.
The talk went flawlessly. If you have ever presented at a conference of that scale, you know that “flawless” is not something you take for granted. There is always the moment before you start where you wonder if the demo will work, if the projector will behave, if your timing will hold. All of it held. Milan and I had rehearsed enough that the talk felt natural rather than performed, which is the line you want to hit.
I will write a separate post about the content of the talk itself, what we built, what we learned, how the audience responded to specific ideas. This post is about everything else.
This was my third RSA Conference. I attended in 2024, 2025, and now 2026. But it was my first time as a speaker, and that changed the experience in ways I did not fully expect.
When you attend as a participant, you are a consumer of the conference. You pick sessions, you walk the expo floor, you absorb. When you are a speaker, even for just one session, you become part of the fabric. People approach you after the talk. They reference something you said in a hallway conversation two days later. You are on the other side of the dynamic, and it gives you a different relationship with the event.
I have grown to genuinely appreciate the sheer volume and quality of the whole thing. RSAC is not one conference. It is several conferences layered on top of each other. There are deeply technical sessions where people walk you through real implementations, real code, real incident response timelines. There are strategic talks where CISOs and policy architects work through the organizational and regulatory implications of what is changing. And then there are the keynotes, the big voices, people who have shaped the field for decades, sharing what they see on the horizon.
Because it happens in San Francisco, in the heart of the Bay Area, the reach is different from any other security event. You are not just at a conference. You are at the geographic center of the industry that is driving the transformation everyone is trying to understand. The density of talent, capital, and ambition in that city during RSAC week is difficult to describe if you have not experienced it. The only comparable events I can think of are Black Hat and DEF CON, but even those have a different energy. RSAC pulls in a wider spectrum of the industry, from the deeply technical to the deeply strategic, from startup founders to government officials, and puts them all in the same building for a week. That range is what makes it valuable.
Ever since I started attending in 2024, AI has been a substantial part of the conference. That makes sense. The developments in AI over the past few years have had a prominent cybersecurity dimension from the start, and the industry has been working through what that means, both as a threat to defend against and as a capability to harness.
But this year felt qualitatively different from the previous two. In 2024 and 2025, the AI conversation was broad and somewhat exploratory. What can large language models do for security? How do we detect AI-generated phishing? What does the threat landscape look like when attackers have access to the same models we do? Important questions, but still in the “what is possible” phase.
2026 was past that. The conversation had narrowed and deepened. It was specifically about agents. Not AI in general, not language models as a capability, but autonomous agents as a new category of infrastructure. Enterprise-grade agentic systems. Agentic orchestration patterns. Agent-native architectures. The shift from “can we use AI?” to “how do we architect our systems around autonomous agents that are already here?” was palpable in almost every session I attended.
The concept that kept coming up was NHI: non-human identities. This term has existed in the identity and access management world for a while, but at RSAC 2026 it had taken on a new meaning. The old NHI conversation was about service accounts, API keys, machine certificates. The new NHI conversation is about LLM inference backends that operate as something fundamentally different from traditional automated systems. These are entities that do not just execute a fixed pipeline. They reason, they make judgment calls, they interact with systems and data in ways that look more like what a human analyst does than what a cron job does. But they operate at machine speed, they do not sleep, and they do not have the contextual judgment or accountability that comes with a human in the seat.
The trust problem this creates is real, and it is not just a theoretical concern. Human employees were already risk factors before AI entered the picture. Insider threats, social engineering, credential compromise, accidental misconfiguration. These are well-understood attack surfaces. Now add entities that move faster than any human, that can touch more systems in a minute than a human employee touches in a day, and that are harder to audit because their reasoning process is opaque. The attack surface did not just grow. It changed shape in ways that existing security architectures were not designed for.
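One concrete way to shrink that new attack surface is to treat agent credentials differently from traditional service-account credentials: narrowly scoped, short-lived, and fail-closed. The sketch below is illustrative only, not any specific vendor's IAM model; the names (`AgentCredential`, `issue_credential`, the scope strings) are my own invention for the example.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """A short-lived, narrowly scoped credential for a non-human identity."""
    agent_id: str
    scopes: frozenset            # the only actions this identity may take
    issued_at: float
    ttl_seconds: int
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, action: str) -> bool:
        # Fail closed: anything past expiry or outside scope is denied.
        if time.time() > self.issued_at + self.ttl_seconds:
            return False
        return action in self.scopes

def issue_credential(agent_id: str, scopes: set, ttl_seconds: int = 300) -> AgentCredential:
    # Short TTLs bound the blast radius of a compromised agent identity:
    # an entity that acts at machine speed should not hold week-long tokens.
    return AgentCredential(agent_id=agent_id, scopes=frozenset(scopes),
                           issued_at=time.time(), ttl_seconds=ttl_seconds)

cred = issue_credential("triage-agent-7", {"read:alerts", "annotate:alerts"})
print(cred.allows("read:alerts"))    # True: inside scope and TTL
print(cred.allows("delete:alerts"))  # False: never granted, fail closed
```

The point is the shape, not the code: an NHI that can touch more systems in a minute than a human touches in a day needs credentials whose scope and lifetime are sized for that speed.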
This was the most interesting tension I observed across the conference, and I think it is going to be one of the defining questions for cybersecurity in the next few years.
A small number of talks presented approaches that kept a human firmly in the loop. AI assists, human decides. AI flags, human acts. AI generates recommendations, human approves or rejects. These were careful, measured presentations, and some of them were good. The argument is intuitive and appeals to anyone who has been burned by automation gone wrong: keep a human in the critical path because humans have judgment that machines do not.
But the majority of the conference had reached a different consensus, and it was stated with increasing confidence as the week went on: the human in the loop is a bottleneck. Not philosophically. Operationally. In terms of the speed at which threats materialize and the speed at which defenses need to respond.
The argument is straightforward once you lay it out. Adversaries and threat actors are already leveraging AI to accelerate their operations. They are scanning for vulnerabilities at machine speed. They are generating novel attack variations faster than any human analyst can write detection rules. They are using AI to identify and exploit zero-day vulnerabilities in timeframes that make traditional patch cycles look like geological processes. If your defensive response depends on a human reading an alert, understanding the context, making a judgment call, and clicking a button before a countermeasure activates, you have introduced a rate limiter into your defense that your attacker does not have. You are playing at human speed against an adversary operating at machine speed.
I agree with this assessment, and I want to be precise about what I mean by that. I do not think humans are irrelevant to security. They are not. Human judgment, human understanding of organizational context, human ability to reason about novel situations, these remain essential. But I think the role of the human needs to shift fundamentally. The human should be a supervisor, not a gatekeeper. The human should set policy, define constraints, establish acceptable parameters, review outcomes, and intervene when something goes wrong. But the human should not be the bottleneck whose reaction time determines how fast sophisticated defense measures can respond to a threat that is moving at inference speed.
This is an engineering problem, not a philosophical one. How do you design agent-native architectures where the human is still in control, still has full visibility, still sets the rules, but is not the limiting factor in the response loop? That is the challenge of 2026. I did not hear anyone at the conference claim to have fully solved it. But I heard a lot of people working on it seriously, and the framing had matured beyond the naive “just automate everything” takes that dominated the early AI-security conversation.
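A minimal sketch of the supervisor-not-gatekeeper pattern, under my own assumptions (the `Policy`, `Verdict`, and `evaluate` names are hypothetical, not from any talk or product): the human writes the constraints once, ahead of time; the agent acts at machine speed inside those bounds; anything outside them escalates instead of executing.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    AUTO_EXECUTE = "auto_execute"   # inside human-defined bounds: act at machine speed
    ESCALATE = "escalate"           # outside bounds: queue for human review

@dataclass(frozen=True)
class Policy:
    """Constraints set by the human supervisor ahead of time, not per event."""
    allowed_actions: frozenset
    max_blast_radius: int           # e.g. how many hosts one action may touch

def evaluate(policy: Policy, action: str, blast_radius: int) -> Verdict:
    # The human is not in the per-event path; the policy they wrote is.
    if action in policy.allowed_actions and blast_radius <= policy.max_blast_radius:
        return Verdict.AUTO_EXECUTE
    return Verdict.ESCALATE

policy = Policy(allowed_actions=frozenset({"block_ip", "quarantine_host"}),
                max_blast_radius=5)

print(evaluate(policy, "block_ip", 1))          # Verdict.AUTO_EXECUTE
print(evaluate(policy, "wipe_host", 1))         # Verdict.ESCALATE (action not allowed)
print(evaluate(policy, "quarantine_host", 50))  # Verdict.ESCALATE (too wide a blast radius)
```

The hard engineering is everything this toy omits: how bounds are expressed for fuzzy actions, how escalations are triaged, how the agent's reasoning is made auditable after the fact. But the control flow is the point: human reaction time gates the exceptions, not the common case.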
This is also, incidentally, part of why I built Zentinel the way I did. A reverse proxy that sits at the edge, where policy enforcement happens at wire speed, is exactly the kind of system that needs to operate autonomously within human-defined constraints. The agent architecture in Zentinel, where security logic runs in isolated processes with bounded resources and explicit failure modes, is my answer to the question of how you let autonomous systems make real-time decisions while keeping the human in the position of supervisor rather than bottleneck. I wrote about this in more detail in What Zentinel Is Really Optimizing For.
I attended a panel with four former NSA directors and US Cyber Command commanders. Both positions are held by the same person at any given time, so these were people who had sat at the intersection of signals intelligence and military cyber operations at the highest level the United States has. Regardless of how you feel about the NSA or US foreign policy, the caliber of strategic thinking in that room was extraordinary.
Paul Nakasone said something that has stayed with me since. He laid out what he considers the four most important factors when assessing the strategic potential of a nation-state in this era. Not military strength. Not GDP. Four specific things: chips, data, talent, and energy.
Chips, meaning silicon, meaning raw compute power. How many advanced GPUs can you deploy? How advanced are they relative to the frontier? And critically: can you manufacture them domestically, or are you dependent on someone else’s fabrication capacity? Right now, the entire world depends on TSMC in Taiwan for leading-edge chip fabrication, and the geopolitical implications of that single point of dependency are staggering.
Data, meaning access to the raw material that AI systems learn from. Who has it, how much of it, how diverse is it, and under what legal and political constraints can it be used for training and inference?
Talent, meaning the human capital that knows how to build, train, deploy, secure, and govern these systems. Where that talent lives, where it wants to live, and what it takes to attract and retain it. This is not just about researchers at frontier labs. It is about the entire pipeline: the engineers who build the infrastructure, the operators who keep it running, the security professionals who defend it, the policy people who regulate it.
Energy, meaning access to cheap, abundant, reliable power. Because the compute demands of frontier AI are measured in gigawatts now, not megawatts. A single frontier training run can consume more electricity than a small city. The question of whether you can physically power your AI ambitions is no longer abstract.
I could not stop thinking about AI 2027 while listening to Nakasone. The scenario work by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, which I first wrote about in How I Work These Days, makes strikingly similar assessments from a technology trajectory perspective rather than a national security one. Their scenario tracks the distribution of global AI compute (the US holding roughly 70% of frontier capacity through its companies, China around 12%), the geopolitical competition for chip manufacturing, the energy infrastructure required to sustain frontier operations (their projection of global AI datacenter spending reaching the trillion-dollar range by 2026 no longer reads like speculation), and the talent concentration in a handful of US-based labs.
What makes AI 2027 feel prophetic, and I do not use that word casually, is that its core thesis keeps holding up month after month. The idea that automating AI research itself creates a self-reinforcing feedback loop, that the cycle between capability and capability-building is compressing, that the timeline for transformative change is shorter than most institutional planning horizons assume. The specific dates and milestones may shift. The authors themselves have revised some timelines. But the directional assessment, the shape of the curve, still looks right to me as of April 2026. The scenario’s detailed treatment of compute distribution, espionage risks, and the escalatory dynamics between nation states competing for AI dominance maps remarkably well onto what I heard discussed in more guarded terms on the RSAC floor.
Listening to Nakasone lay out those four factors, I kept thinking about Europe. And about Switzerland specifically, since that is where I live and work. Europe is behind on all four. The continent does not manufacture frontier chips. It does not host the leading AI labs. Its regulatory environment, while well-intentioned, has optimized more for constraint than for capability. Its energy infrastructure is in the middle of a complex transition. And its talent pipeline, while strong in research, struggles to retain builders who can turn research into deployed systems at scale, because many of them leave for the Bay Area, London, Singapore, the Gulf states, or other places where the ecosystem is more supportive of what they want to build.
This is part of why I co-founded Die Zukunft, a new Swiss political party focused on structural transformation. The name means “The Future” in German. The party exists because I believe the political infrastructure in Switzerland, and in most European countries, is not designed to respond to the kind of structural shift that Nakasone was describing. The decisions being made right now about compute sovereignty, energy policy, talent retention, and regulatory frameworks will determine whether Europe is a participant in the next decade or a consumer of other people’s technology. Die Zukunft’s platform addresses these questions directly: digital sovereignty defined in infrastructure terms, faster permitting for critical energy and compute projects, open standards as a hard requirement for government systems, and immigration policy designed to attract the talent that builds these systems. It is not a technology party in the narrow sense. It is a party built on the recognition that the structural transformation AI is driving is too consequential to be left to the current pace of European political response.
And this is also why I built Archipelag. If Europe wants digital sovereignty, it needs sovereign compute infrastructure. Not just policy positions about data residency, but actual physical capacity to run AI workloads within European jurisdictions, at competitive cost, without depending on American hyperscalers. Archipelag is a decentralized AI compute network that routes inference jobs to community-operated nodes with jurisdiction-aware routing baked into the infrastructure layer. It is designed so that a European company can run AI workloads with cryptographic guarantees about where their data is processed, using idle GPU capacity that already exists across the continent. It is my direct answer to the infrastructure gap that Nakasone’s four factors expose so clearly for Europe.
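To make "jurisdiction-aware routing baked into the infrastructure layer" concrete, here is a deliberately simplified sketch. This is not Archipelag's actual implementation (which, per the description above, involves cryptographic guarantees this toy entirely omits); the node names and the fail-closed selection logic are my own assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    node_id: str
    jurisdiction: str        # ISO country code of the node's physical location
    free_gpu_mem_gb: float   # idle capacity currently available on the node

def route_job(nodes, allowed_jurisdictions, required_gpu_mem_gb):
    """Pick a node that satisfies both the data-residency and capacity constraints."""
    candidates = [n for n in nodes
                  if n.jurisdiction in allowed_jurisdictions
                  and n.free_gpu_mem_gb >= required_gpu_mem_gb]
    if not candidates:
        # Fail closed: never silently fall back to an out-of-jurisdiction node.
        raise RuntimeError("no eligible node in the allowed jurisdictions")
    # Prefer the node with the most headroom (naive load balancing).
    return max(candidates, key=lambda n: n.free_gpu_mem_gb)

nodes = [
    Node("ch-zrh-01", "CH", 24.0),
    Node("de-fra-02", "DE", 48.0),
    Node("us-sjc-03", "US", 80.0),
]

chosen = route_job(nodes, allowed_jurisdictions={"CH", "DE"}, required_gpu_mem_gb=16.0)
print(chosen.node_id)  # de-fra-02: most headroom among the in-jurisdiction nodes
```

The design choice worth noticing is the `RuntimeError`: a sovereignty guarantee that degrades to "whatever capacity is cheapest" under load is not a guarantee at all.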
This connects to something broader that I have been thinking about since well before RSAC, but that the conference brought into sharper focus.
I work for a large Swiss financial institution. My perspective is shaped by what I see inside that kind of organization every day. But I believe the dynamic applies to enterprises of all sizes and in all sectors, even if the scale and specifics differ.
The biggest threat I see right now is not a specific vulnerability, not a particular attack vector, not a novel exploit technique. It is inertia. It is the widening gap between what is happening at the frontier of AI capability and what most organizations are actually doing about it. Too many companies still think they can outsource their way through this transition. Buy an AI-powered security product from a vendor. Subscribe to a managed detection service that mentions “AI” somewhere in its marketing materials. Check the compliance box and move on to the next quarter.
I do not think that is going to work. Not because those vendors are bad. Some of them are genuinely good at what they do. But because the organizations that will remain competitive and secure over the next few years are the ones that build internal AI capability, not just consume external AI services. That means investing in your own GPU compute. That means building the internal expertise to deploy, fine-tune, and operate models on your own infrastructure. That means treating AI as a core organizational competency, not a procurement line item.
This is not a popular opinion in many boardrooms. It is expensive. It is hard. It requires talent that is difficult to hire and even harder to retain when they can work at a frontier lab or a well-funded startup instead. But the alternative, waiting and relying on service providers to package AI innovation for you at their pace and under their terms, means you are always operating one step behind. You are consuming someone else’s capability with someone else’s priorities, under someone else’s constraints. In a landscape that is moving as fast as this one, that delay compounds.
The companies that understand this and invest now are going to have a structural advantage that widens over time. The ones that wait are going to find themselves trying to close a gap that gets larger with every quarter of inaction. I saw enough at RSAC to believe that some organizations have internalized this. And I saw enough to believe that many more have not.
I want to be fair about this, because I know how the previous section sounds, and I do not want to come across as someone who dismisses the entire vendor ecosystem. I am not easily swayed by big tech vendors and their keynote promises, and I think anyone who works in security should maintain a healthy skepticism toward product demos and polished presentations. That is just professional hygiene.
That said, I genuinely enjoyed many of the keynotes this year. Some of these companies have real long-term vision. They see the shape of what is coming, and their best presenters can communicate that vision with clarity and conviction. I respect that, even when I disagree with their specific approach or their business model.
But enjoying a keynote and trusting that buying a product will translate into sustainable cybersecurity for your organization are very different things. Actual security is hands-on work. It is understanding your own systems, your own architecture, your own threat model, your own failure modes. It is the boring, unglamorous work of knowing what runs where, what talks to what, what happens when something fails, and what your actual attack surface looks like on a Tuesday afternoon. That work cannot be fully offloaded to a vendor. It cannot be outsourced to a dashboard, no matter how sophisticated the analytics behind it.
The vendors that impressed me most this year were the ones that acknowledged this honestly. The ones that positioned their tools as force multipliers for competent teams rather than replacements for the need to have competent teams in the first place. That distinction matters, and the vendors who understand it tend to build better products because they are designing for operators, not for procurement committees.
The last session of the conference was Hugh Jackman in conversation with Hugh Thompson. I was not sure what to expect from a Hollywood actor closing out a cybersecurity conference, and I suspect a lot of people in the audience had the same reservation going in. But it worked. Jackman is funny, self-aware, and surprisingly thoughtful about creativity, discipline, and the craft of doing hard things well. He talked about preparation, about the difference between performing and connecting, about the years of work that go into making something look effortless.
At one point he taught the audience that if you say “rise up lights” in an American accent, you are saying “razor blades” in Australian. The room loved it. It was one of those moments where several thousand cybersecurity professionals all became delighted seven-year-olds for about ten seconds, and it was a good reminder that conferences are also about shared human moments, not just information transfer.
It was the right way to end a dense week. Light enough to let people exhale after five days of intense content, but substantive enough in its own way that it did not feel like filler.
After the conference closed, a few of us stayed on. We rented a car and drove south from San Francisco, down Highway 1 through Big Sur. If you have not done that drive, I do not know how to describe it adequately except to say that it recalibrates your sense of scale. The Pacific is very large and very indifferent, and spending a few hours winding along cliff-edge roads with that water stretching out to the horizon below you is a useful counterweight to a week of thinking about the future of everything.
We spent time in LA. Santa Monica and Venice Beach, walking the boardwalk, eating food that was too expensive and not caring. The kind of aimless, unstructured time that my brain needed after five days of absorbing information at high density. I find that the most useful thinking often happens when you are not trying to think. When you are just watching the ocean or walking on a beach and letting your subconscious do whatever it does with the raw material you fed it.
Then Las Vegas for Easter weekend. Spring break crowds, desert heat starting to build, the particular surreality of the Strip. It was not productive time in any conventional sense, and it was not meant to be. It was decompression. Space for the conference to settle from a collection of impressions into something more like understanding.
Two weeks out, here is what I think is different.
I went to RSAC 2026 with a set of convictions about where things are heading. Agents are going to be the primary operating model for security infrastructure. Human-in-the-loop is going to shift to human-as-supervisor. Organizations that do not build their own AI infrastructure are going to fall behind structurally. Europe needs to wake up to the compute sovereignty problem before it becomes irreversible. These were things I already believed before I got on the plane to San Francisco.
What the conference did was sharpen them. Hearing Nakasone frame national potential in terms of chips, data, talent, and energy gave me a cleaner lens for thinking about the geostrategic dimension. Seeing the breadth and depth of the agentic conversation on the conference floor confirmed that this is not a niche position or an edge case anymore. It is the emerging consensus of the industry. And presenting our own work, standing in front of a room and showing what we actually built, made it more real in a way that writing code in a terminal at midnight does not.
Conferences like RSAC have always had this effect on me. They compress a year’s worth of signals into a week, and then you spend the following weeks unpacking what you heard and figuring out what it means for what you are building. After RSAC 2024, I started thinking seriously about edge security architectures, which eventually became Zentinel. After RSAC 2025, the urgency around AI-native infrastructure solidified into the work that became Archipelag. This year, I expect the sharpened understanding of agentic systems and the geostrategic landscape to feed directly into what I build next.
I also came away with a renewed sense of urgency about the gap between what the frontier looks like and what most organizations are doing about it. That gap, between the leading edge and the institutional mean, is the real risk. Not any single threat actor, not any specific vulnerability class. The systemic inability of large institutions to move at the pace that the situation demands. That is what keeps me up at night, and that is what I am trying to address in my own work, whether it is building infrastructure, writing about it, or working on the political dimension through Die Zukunft.
I am going to write a separate post about the talk itself, about what Milan and I built with self-learning WAFs and what we learned along the way. That is coming soon. For now, these are the notes I wanted to capture while the impressions are still sharp enough to be useful.
I am already looking forward to RSAC 2027.