Friction is a bug
The principle behind the work. Borrowed from engineering, on purpose.
The last two posts were about the same thing from different angles.
The first one was that security creates work, a lot of it, and AI is finally a tool that lowers the barrier to entry enough that anyone on the team can take things off the list instead of adding to it. The second was that we don’t have a system of record for security context, and most of the manual chasing we do every day traces back to that gap.
Different problems. Same instinct underneath: reduce work for the org and reduce work for my team.
I want to dig into the principle behind that, and a few related ones, because they’re the actual foundation for almost every decision I make about what we build, what we say yes to, and what we say no to.
The principle
Friction is a bug.
That’s the language we use on my team. Borrowed loosely from how engineering thinks about defects, even if engineering itself doesn’t always agree on whether friction is a bug or a feature. For us, it’s a bug. Not a tradeoff. Not the price of safety. A thing to be filed, prioritized, and fixed.
Most security thinking treats friction as the cost of doing security. Tolerate the slow approval. Tolerate the extra form. Tolerate the redundant review. The assumption is that friction and security are roughly the same thing, and more of one means more of the other.
My experience says the opposite. Past a certain point, friction creates risk. More friction, less security.
Why I think that
This isn’t theory. It’s 30+ years of trial and error...mostly errors.
Early on, I was on the outside looking in. Security was the roadblock and I had to spend years trying things, building relationships, and proving the approach before I started getting invited into the conversations that mattered. By the time I landed in my first leadership role, that had mostly turned around. In the last couple of roles, I've been brought in proactively. But the pattern hasn't disappeared. I still see plenty of security teams operating as the roadblock at other companies and across my peer network. Same pattern, different angle.
Two things happen when security becomes the team that regularly says no.
People find their own path. The behavior doesn’t stop. It just stops including you.
And you lose visibility. The team that was supposed to know what was happening doesn’t know what’s happening. No oversight. No telemetry. No early warning. The “control” you gained on paper has already disappeared in practice.
Being a roadblock doesn’t always reduce risk. It can just hide it.
The two-question filter
When a request lands, two questions run in sequence.
What are they trying to accomplish?
How do I help them do it securely?
It’s easy to collapse those two questions into “Should I approve this?” That’s a different kind of thinking, and a lot of security functions default there. The two-question version starts from the assumption that the work is going to happen. My job is to help them get to the goal in a way the company can support.
Sometimes that’s a secure version of the exact ask. Sometimes it’s a redirect...”what you’re describing can’t be done safely the way you’ve laid it out, but here’s the company-approved path that gets you to the same outcome.” Either way, the conversation ends with the team having somewhere to go.
This is what we call “Enable, Don’t Block” on my team. It’s the first principle, on purpose. Default isn’t no. Default is “yes, and here’s how to do this securely.” A no, when it lands, is paired with a reason in the team’s terms and an alternative path wherever one exists.
Yes by default doesn’t mean yes to everything. Hard no’s still happen. Unauthorized access to customer data...no exceptions. Non-standard changes that create systemic risk for one person’s convenience...no. But people accept “no” when they understand the reason and have somewhere else to go. They reject “no” when it lands as “we’re not going to help.”
The empathy test
This one is small, but it matters more than people give it credit for.
Before we roll out a process, a request, or a requirement, we run a gut check. Two questions. Would we find this annoying, confusing, or pointless if it were on our plate? And is this something we’d actually do ourselves? If the answer to the first is yes, or the answer to the second is no, we go back to the drawing board.
If we wouldn’t do it ourselves, we have no business pushing it onto someone else.
It sounds obvious. It isn’t. Security teams constantly ship work for other teams to do without ever asking the question. Upload screenshots of evidence. Update a Confluence page whenever a new repo or vendor appears. Triage every vulnerability, false positives included. Create and route their own tickets.
Every minute someone outside security spends on that is a minute they’re not spending building product, driving revenue, or creating market advantage. That cost compounds across the org. We don’t see it because it’s happening in someone else’s area of responsibility.
Automation first
If a human is doing the same thing on any kind of regular cadence, that’s a signal we should be automating it. Not a hard rule. More of a default lean.
This isn’t an efficiency argument for my own team. It’s part of how we keep our reputation intact. Security is already viewed as the team that creates work. Every manual step we eliminate is trust earned back.
This is also where AI changes the math. A lot of work that was “too expensive to automate” two years ago is now within reach. Repo intake. First-pass vulnerability triage. Open source library reviews. Vendor questionnaires. Things we used to either skip, half-do, or put on someone else’s plate. None of this has to be complicated. Most of it is small, scoped, deterministic stuff with AI doing one piece and a human still owning the decision.
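To make that shape concrete, here’s a minimal sketch of the pattern, with a stubbed-out model call standing in for whatever LLM you’d actually use. The names (`llm_first_pass`, `Finding`) and labels are illustrative, not from our stack. The point is the division of labor: the model proposes a label and a rationale, and a human still owns the decision on anything that isn’t an obvious false positive.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    scanner_id: str
    title: str
    raw_detail: str

def llm_first_pass(finding: Finding) -> tuple[str, str]:
    """Placeholder for a real model call. Returns (label, rationale).

    In practice this would prompt an LLM with the finding detail and ask
    for one of: 'likely_false_positive', 'needs_review', 'urgent'.
    """
    # Stubbed for the sketch: a real version calls your model provider here.
    if "test/" in finding.raw_detail:
        return ("likely_false_positive", "Finding is in test code only.")
    return ("needs_review", "Could not rule out exploitability.")

def triage(findings: list[Finding]) -> None:
    for f in findings:
        label, why = llm_first_pass(f)
        if label == "likely_false_positive":
            # AI does the boring piece: queue the FP with its rationale
            # for a quick human skim instead of a full investigation.
            print(f"[skim queue]   {f.title}: {why}")
        else:
            # Everything else goes to a human, who owns the call.
            print(f"[human review] {f.title}: {why}")

if __name__ == "__main__":
    triage([
        Finding("sast-001", "Hardcoded secret", "test/fixtures/fake_key.py"),
        Finding("sast-002", "SQL injection", "app/db/query_builder.py"),
    ])
```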
The bar for automation isn’t “is this the most exciting thing we could build.” It’s “is this still being done by hand.”
Meet teams where they are
We don’t drag people into our tools.
If a team lives in GitHub, we integrate with GitHub. If they work in Slack, we meet them in Slack. Asking someone to context-switch into a security-specific tool to do a thirty-second action adds friction and kills adoption. Even for the people who do comply, you’ve cost them something for no real gain.
This shows up in almost every project we run. The repo intake and inventory tool from the first post is built around it. The Slack intake bot is built around it. The vulnerability triage dashboard is built around it.
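As one example of what meeting a team in Slack can look like, here’s a minimal sketch of a slash-command intake handler using the slack_bolt library. The command name, the environment variables, and the `create_intake_ticket` helper are all hypothetical; the point is that the engineer never leaves the tool they already live in.

```python
import os
from slack_bolt import App  # pip install slack-bolt

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

def create_intake_ticket(user: str, text: str) -> str:
    """Hypothetical helper: file the request in whatever tracker you use
    and return a reference the requester can follow."""
    return f"SEC-1234 (requested by {user}): {text}"

# One slash command, answered where the engineer already works.
@app.command("/security-intake")
def handle_intake(ack, command, respond):
    ack()  # Slack requires an acknowledgement within 3 seconds
    ref = create_intake_ticket(command["user_name"], command["text"])
    respond(f"Got it. Tracking as {ref}. We'll follow up with you here.")

if __name__ == "__main__":
    app.start(port=3000)  # dev server; production sits behind a proper receiver
```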
Honest disclaimer: we’re not all the way there yet. We still have security tools in our stack that require engineers to come into a security-specific UI for certain things. That’s a reality we’re working to reduce, not a finished story. The goal is for anything new we build to default to plugging into existing workflows, and for the things that don’t to either get integrated or replaced over time.
How I know it’s working
Two things compound when you do this.
First, the secure way becomes the easy way.
If secure means extra effort, extra steps, an extra system to log into, an extra ticket to file, you’re asking people to choose between shipping and doing the right thing. Most of the time they’re going to choose shipping. Not because they don’t care...because they have a job to do.
So you flip it. Build the secure path into the templates, the pipelines, the platforms, and the tooling people already use. Make it the default. That’s the paved road. That’s secure-by-default.
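One concrete shape the paved road can take, sketched under the assumption of GitHub Actions as the CI system: a repo scaffolder that writes the security scan into every new repo’s default workflow, so nobody has to opt in. The workflow contents and the `make security-scan` step are illustrative, not our actual pipeline.

```python
from pathlib import Path

# Illustrative default workflow: the security scan ships inside the same
# file as build and test, so "secure" is just part of "ship".
DEFAULT_WORKFLOW = """\
name: ci
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test
        run: make test
      - name: Dependency and secret scan   # baked in, not opt-in
        run: make security-scan
"""

def scaffold_repo(repo_dir: str) -> None:
    """Drop the paved-road workflow into a freshly created repo."""
    workflow = Path(repo_dir) / ".github" / "workflows" / "ci.yml"
    workflow.parent.mkdir(parents=True, exist_ok=True)
    workflow.write_text(DEFAULT_WORKFLOW)

if __name__ == "__main__":
    scaffold_repo("./new-service")
```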
People weren’t trying to be insecure before; they were trying to ship. When secure is just what happens, you don’t have to convince anyone of anything.
Second, teams bring you in on their own. Not because they have to. Because they trust you’re there to help. You get visibility you didn’t have before. You weigh in early instead of after the fact. It’s not just better process. It’s a better relationship. And the relationship is what gives you advance warning when something’s about to change.
Where it goes wrong
Two things to be honest about.
Sometimes we get a call wrong. Something looked fine, and once we dug in, it wasn’t. We gave it less scrutiny than it turned out to need. There’s always risk you can’t see at decision time.
Sometimes the call was right and things just changed. A low-risk vendor adds a new capability. An integration starts handling sensitive data. The use case grows. If it’s filed in everyone’s head as “the security team already cleared that one,” nobody flags the change.
The mitigations are mostly what the last post was about. Better context at decision time means fewer calls made under uncertainty. The relationship loop means people tell you when scope changes. And the source of context I wrote about last time...whatever we end up calling it...is what gives us a shot at catching things early enough to fix them before they become a problem.
The goal isn’t to never get it wrong. The goal is to catch it before it turns into an incident.
Why this matters for what comes next
A lot of what I plan to write about going forward ties back to this. The tools we’ve built. My philosophy on tools more broadly. Why I’ve chosen to build the things I have. And how I think those tools are going to shape our security program over time...and maybe other programs too.
If you only take one thing away from this post...friction is a bug. Treat it like one. File it, prioritize it, fix it. Don’t push work onto other teams that you wouldn’t be willing to do yourself. Don’t confuse the appearance of control with the reality of it.
The point isn’t to be the team that says yes to everything. The point is to be the team that eliminates work for everyone else instead of creating it. The team that doesn’t add friction. The team that makes security the easy way. The team people willingly bring in because they know we’re there to help, not slow them down.

