Security creates work. Let’s stop.
Using AI to eliminate the work, not add more tools.
Security creates work.
A lot of it.
A team spins up a new GitHub repo. Nobody tells us.
We find out later. Maybe much later. That repo might be in scope for compliance.
If it is, and we didn’t know it existed, both teams end up scrambling. Security has to explain why it wasn’t caught. Engineering has to reconstruct context for a repo they might not have touched in months. Under audit pressure, with a tight deadline. None of that is work anyone should be doing.
The engineers aren’t hiding anything. They’re building. They have their own deadlines and their own objectives. The old process asked them to stop what they were doing, switch context, update a Confluence page, and notify security. It’s one more thing on a long list... and it’s easy for it to fall through the cracks.
That’s not their failure. That’s ours. We built a process that depended on them remembering, and then acted surprised when they didn’t.
By design
That’s part of it. Most of what we do is work we create on purpose. We built it, normalized it, and called it security.
We’re the ones telling teams they’ve got vulnerabilities, weaknesses, design flaws. Even when we hand them recommendations, they still have to contextualize them, review them, and figure out how they land in their code.
Then there’s the overhead. Tickets. Reviews. Approvals. Questionnaires. Even when we’re trying to help, we’re slowing things down.
When we become the blocker, teams go around us. They ship without telling us, or they tell us late. Risk goes up, not down.
AI is the first tool we’ve had that can take things off the list, not add to it. That’s how we stop being the bottleneck.
The job isn’t to secure everything
The job is to help the business move forward securely.
It’s easy to lose sight of that. We’re here to help the business move, not to stand in its way. Doing it securely is the how. It’s not the why.
That means reducing friction, not adding more. Guardrails instead of gates. Enablement instead of enforcement.
AI isn’t the fix for this. Broken processes plus AI equals faster broken processes. If the workflow is bad, automating it just makes the bad workflow cheaper to run.
What AI actually is, is a lever.
Work that was too expensive or too time-consuming to automate is now within reach. A lot of what we’ve called “good security” was really “what was possible with the tools at hand.” AI is changing that. The question isn’t whether AI can run your program. It’s where it can take manual work off the table so the team can focus on the parts that matter. Less friction means the business moves faster.
There’s another piece of this we don’t talk about enough. Security teams aren’t staffed to do all the work our programs say needs to get done. We’re small. We were small before AI and we’re still small after. Reducing friction for engineering is the visible win. Reducing friction for our own teams is the quieter one... and it’s what makes any of this sustainable.
What eliminating work actually looks like
A few things my team is running or actively building right now. None of these are agents running wild. Every one keeps a human in the loop on the actual decision.
Intake through Slack
We have a Slack channel where teams ask us for help. “Can I use this tool?” “What do you think of this open source library?” “I need access to X.”
We’re building a bot in that channel that creates a ticket automatically when a request comes in, and in some cases just acts on it directly when the ask is something every employee is already allowed to do.
One thing I think is cool: when someone drops a link to an open source repo and asks what we think, the bot pulls the repo, does a first-pass review, and flags the things that matter. Callbacks. Remote code execution. Anything worth a closer look. The engineer gets that context up front and can make an informed call without waiting on us. The busywork of cloning the repo and looking it over is gone.
Outcome: engineers get answers faster. We spend our time on the calls that actually need judgment.
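The bot’s routing logic can be sketched roughly like this. This is a minimal, hypothetical sketch, not our actual code: the `PRE_APPROVED` set, the ticket names, and the `route_request` function are illustrative stand-ins for the real intake rules.

```python
# Hypothetical sketch of the intake bot's routing logic.
# PRE_APPROVED, ticket names, and the heuristics are illustrative only.

PRE_APPROVED = {"password manager", "company vpn"}  # asks every employee is already allowed

def route_request(text: str) -> dict:
    """Decide what the bot does with an intake message."""
    ask = text.lower()
    if any(tool in ask for tool in PRE_APPROVED):
        # Already allowed for every employee: act directly, no ticket needed.
        return {"action": "auto_approve", "ticket": None}
    if "github.com/" in ask:
        # Open source review request: queue a first-pass repo scan
        # so the engineer gets context (callbacks, RCE risk) up front.
        return {"action": "first_pass_review", "ticket": "SEC-review"}
    # Everything else needs human judgment: file a ticket automatically.
    return {"action": "create_ticket", "ticket": "SEC-intake"}
```

The point of keeping this layer deterministic is that the bot only acts alone on asks that are already pre-approved for everyone; anything ambiguous still lands in front of a human.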
Vulnerability triage
Our AppSec scanners surface a lot of findings. The existing process asks engineering to triage each one, figure out if it’s real, and if so, create a Jira ticket, track it through to completion, and make sure the scanner comes back clean.
What actually happens: findings pile up. Engineers forget to triage, or they triage and forget to create the ticket, or the ticket gets created but nothing moves. We spend a lot of time on the compliance side just reconciling state.
We’re building a dashboard that pulls findings into one place with clear actions (create ticket, mark false positive, defer) and uses AI to do a first pass. AI proposes the triage. The engineer reviews and clicks. A correctly formed Jira ticket gets created automatically, with the right context, for the ones that matter.
Outcome: engineers stop burning cycles on dead ends like a vulnerable dependency that only shows up at build time and never reaches runtime. We stop chasing. The findings that are real move faster.
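The shape of that propose-then-approve flow looks something like the sketch below. The `Finding` fields and the proposal heuristic are assumptions standing in for the AI’s first pass; the key property is that nothing becomes a ticket until an engineer approves it.

```python
# Sketch of the triage flow. The Finding fields and the proposal rules
# are illustrative stand-ins for the AI first-pass, not our real model.
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    reachable_at_runtime: bool  # build-time-only dependencies are False
    severity: str

def propose_triage(finding: Finding) -> str:
    """First-pass proposal (stand-in for the AI step)."""
    if not finding.reachable_at_runtime:
        return "false_positive"   # dead end: never reaches runtime
    if finding.severity in ("critical", "high"):
        return "create_ticket"
    return "defer"

def apply_decision(finding: Finding, proposal: str, approved: bool) -> dict:
    """Only an approved proposal changes state or creates a Jira ticket."""
    if not approved:
        return {"finding": finding.id, "status": "needs_review"}
    return {"finding": finding.id, "status": proposal}
```

The human-in-the-loop guarantee lives in `apply_decision`: the AI can propose all day, but an unapproved proposal goes nowhere.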
New repos, automatic
Back to where this post opened.
We wrote a tool that detects when a new GitHub repo is created, adds it to an inventory, and opens a Jira ticket so it’s tracked and reviewed. Engineers don’t have to remember to update anything. Security doesn’t have to find out by accident.
The next piece: AI does an initial read of the repo and proposes whether it’s in scope for compliance, with a short summary for the reviewer. The human still makes the final call. If it’s in scope, a custom GitHub repo property gets set automatically to mark it. We’ve just removed every step before the decision, and the step right after.
Outcome: we know about every repo. Engineers don’t lose time on notification hygiene. Compliance scope stops being something we reconstruct after the fact.
Start small. Stay deterministic.
This isn’t about building agents to do everything.
To be clear on where my security team is: we haven’t released any AI agents yet. Everything we’ve shipped is deterministic automation. Coding agents helped us build it, but nothing in production has AI making the decisions. AI agents are something we’ve only started developing in the last month or two.
Start with small, scoped automations. Clear inputs, clear outputs, clear failure modes. Layer in intelligence only where it actually adds value.
Over-engineering early is how these projects die. You build something ambitious, it breaks in a weird way, the team loses trust, and you’re back to manual. Ship something small that works. Iterate.
What this Substack will focus on
Real workflows. What actually works versus what sounds good on a slide. Building things, not just talking about them.
Going forward, more on AI and security in practice. What’s working, what isn’t, the tradeoffs worth knowing about. Occasionally a tool summary, an article worth reading, or whatever home lab experiment ends up being relevant.
Some posts will be about security leadership. How I think about running a modern program. Topics I have opinions on. Ones I’ve changed my mind on.
That’s what I’ll be covering.