Learn why software delivery fails in government — and what's required to make shipping possible.
Episode 05
Episode 5 focuses on the problem most government teams face: they can build software, but they can’t get it to production reliably. Bryon explains that a real path to production requires two essential components: a solid foundation and the permission to deploy.
A cloud-native Platform as a Service (PaaS) is the foundation, and a continuous Authorization to Operate (cATO) is the permission to deploy. Together, they turn security and compliance into built-in capabilities—so shipping to production becomes routine, not a special event.
Why software fails inside government—and the real-world consequences when it does.

Rethink success: learn fast, reduce risk, and deliver real mission impact.

Why outcomes only happen in production—and why “it won’t work here” is a myth.

Why government software gets stuck before production—and how to fix it.

Build platforms that help teams ship—not slow them down.

Why product, design, and engineering must work as one team.

Change culture by changing behavior.

Achieve alignment through learning—not endless planning.

See how work actually flows through your organization.

Episode release date:
April 14, 2026

Episode release date:
April 28, 2026

Episode release date:
May 12, 2026

Episode Resources
Transcript
Bryon Kroger (00:04):
In the last episode, we talked about the transformation flywheel, the leader's playbook for scaling influence and creating momentum. But that flywheel doesn't turn with ideas alone. It requires a mechanism to translate vision into reality, to move code from a developer's mind into the hands of a Warfighter. And so today we're paving the path to production. This is without a doubt, the most critical technical component of Mission O/S. You can have the best team and the greatest ideas and the most supportive leadership in the world, but if their code is trapped on a laptop, you have nothing. Your ability to create a reliable, automated and secure path for software to travel from a developer's keyboard to an operational environment is the absolute prerequisite for everything else we've discussed. And that's why when we start a new engagement, we don't start by talking about user features.
(00:54):
We start by asking a question like, how do we get hello world from a developer in the United States to a user on a classified network across the globe? That might sound simple, but in the world of government and large enterprises, that journey is filled with bureaucratic delays and obstacles. And it's a journey that traditionally can take months or even years. Our job is to make it take minutes. To build this path, there are two essential components: a solid foundation and the permission to deploy. The foundation is a cloud native platform as a service, or PaaS. The permission is what we call a continuous authorization to operate, or cATO. Think about the old way. In a traditional enterprise, every single application team is responsible for everything, full stack. They have to procure servers, configure the operating system, set up the networking, manage the databases, and handle all security compliance for that entire stack.
(01:50):
If you have 50 application teams, you have 50 different teams solving the exact same underlying problems, wasting an astronomical amount of time and money on what we call undifferentiated heavy lifting. It's work that has to be done, but it adds no direct value to the end users. A platform as a service flips that model. It says, we're going to solve all of that undifferentiated heavy lifting once, centrally, and provide it to every application team as a service. Now, “as a service” is a term that gets thrown around a lot, so I want to clarify it. Originally it was intended to mean a service call over the network, meaning served up in a self-service, automated way, not just a service being provided by somebody else. Now the platform team's job is to manage everything from the runtime down. The application team's job is to worry about only two things.
(02:47):
Their application code and their data. Their interaction with the platform should be super simple. Onsi Fakhouri put it best in a haiku that I love: “Here is my source code, run it in the cloud for me, I do not care how.” And this abstraction is incredibly powerful for three reasons. First, for the developers, it's a massive accelerator. They get to spend their brain power solving user problems and writing mission-critical features, not wrestling with Kubernetes configurations or firewall rules, and it unlocks their potential to create value. Second, by centralizing operations into a commercial platform, especially one with solid service level agreements, cost is reduced while reliability actually improves. Third, and this is crucial for the bureaucracy, it centralizes security and compliance. So in a traditional model, every application has to answer hundreds of security controls. With a PaaS, the platform layer handles 80 to 90% of those controls.
(03:52):
The security posture is baked in, consistent, and inherited by every single application team that runs on top of it. So you solve the compliance problem once instead of 50 or a hundred or a thousand times. For leadership and stakeholders, it dramatically lowers the cost across the entire portfolio. The cost to develop is lower because teams are more efficient, and the cost to operate is lower because it's all managed centrally. Finally, the cost of compliance is lower because you're not duplicating that work across every single team. And so a PaaS, as I like to say, is a prerequisite for achieving DevOps outcomes in government. It's the engine that drives the path to production. The key to start the engine is the authorization to operate (ATO). And the traditional process to achieve ATO is a transformation killer. The legacy process is a relic of a pre-software world.
(04:48):
It treats software like a building or an airplane, something you design for years, build, and then inspect for safety at the very end. It's a manual, document-driven, bureaucratic nightmare that can take a year, or sometimes two, to navigate. And it's the single greatest impediment to delivering software at the speed of mission need and the speed of relevance. We have to change that process. We replace it with a continuous authorization to operate, cATO. The philosophy behind a cATO is simple. Security is not a gate you pass through at the end. It's a concurrent, automated process that's baked into the software development lifecycle from the start. Instead of writing a thousand-page document explaining our security, we write code. We automate the implementation of security controls, we automate the assessment of those controls to the maximum extent practical, and we automate the continuous monitoring of those controls in production.
(05:46):
Our path to production pipeline becomes a compliance pipeline. Every time a developer commits a line of code, it's automatically scanned for vulnerabilities, runs through a suite of automated tests, and is deployed to a production-like environment for further testing. The evidence from all of these automated checks is collected and made available to security auditors in real time. So compliance becomes a byproduct of good engineering. And the result: your authorization to operate isn't a piece of paper that you get once every three years. It's a state of being; you have it at all times. Your code should always be shippable. And the decision to release a new feature, then, should be a business decision, made by a product manager based on mission value, not a technical decision made by an engineer or a bureaucratic decision made by a security analyst. This is the path to production: a paved, secure route powered by a cloud platform and unlocked by a continuous ATO that enables you to experiment, deliver, and learn. With it, you can make ship happen.
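The flow described here (scan on every commit, run automated tests, deploy to a production-like environment, and collect the evidence for auditors) can be sketched as a small model. This is a hypothetical illustration only: the `CompliancePipeline` class, stage names, and checks are invented for the sketch and are not any real CI tool's API.

```python
# Hypothetical sketch of a compliance pipeline: every commit flows through
# automated gates, and the evidence from each gate is collected for auditors.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Evidence:
    stage: str
    passed: bool
    detail: str

@dataclass
class CompliancePipeline:
    # Each stage is a (name, check) pair; a check takes a commit SHA and
    # returns (passed, detail). All names and checks here are illustrative.
    stages: list = field(default_factory=list)

    def add_stage(self, name: str, check: Callable) -> None:
        self.stages.append((name, check))

    def run(self, commit_sha: str) -> list:
        """Run every gate against a commit, stopping at the first failure."""
        evidence = []
        for name, check in self.stages:
            passed, detail = check(commit_sha)
            evidence.append(Evidence(name, passed, detail))
            if not passed:
                break  # not shippable until the finding is remediated
        return evidence

def is_shippable(evidence: list) -> bool:
    # "Always shippable" means every automated control check passed.
    return bool(evidence) and all(e.passed for e in evidence)

# Illustrative gates standing in for real scanners and test suites.
pipeline = CompliancePipeline()
pipeline.add_stage("vulnerability-scan", lambda sha: (True, "0 critical findings"))
pipeline.add_stage("automated-tests", lambda sha: (True, "412 tests passed"))
pipeline.add_stage("staging-deploy", lambda sha: (True, "deployed to prod-like env"))

evidence = pipeline.run("abc123")
print(is_shippable(evidence))  # → True when every gate passes
```

The point of the sketch is the ordering: release becomes a business decision precisely because the evidence list is always current, so "shippable" is a computed state rather than a periodic paperwork event.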
(07:04):
Before my team and I pioneered continuous ATO at Kessel Run within the Department of Defense, the typical ATO process was painful and very, very manual. Once the development team was hired to develop a piece of software, they would go through a really long design phase, then a development phase, then there would be a code freeze, usually, where no further work was allowed. And that software would go through a testing process and then through the ATO process. And this became really painful because, contractually, since no more work was being done, the original developers often didn't stick around. And so if there were findings from testing, or from the cybersecurity process, RMF, those findings get fed back to the company for remediation. But the same developers aren't there. Or if they are, it might be something they coded a year ago, or maybe even two years ago, or maybe more.
(08:00):
And so the context switching, and the ability to pick up that code and work with it and provide meaningful change, whether it's addressing a test deficiency or a cybersecurity vulnerability, could be really difficult. And so that created this really long feedback loop between development and test, and development and cybersecurity. And that's compounded by a queuing problem: we don't have enough testers, we don't have enough cybersecurity controls assessors, and every time we go back to them, we have to wait to get on their schedule. And so this is what took ATO processes from what should take a month or less, if you're doing everything you're supposed to be doing during the development lifecycle, to taking 12 to sometimes 18 months. And in fact, at Rise8, I've now value stream mapped several organizations' ATO processes, and usually over 80% of the time it takes them to get an ATO is time spent waiting in queue status. That's pretty crazy when you think about it.
(09:11):
So when I first started socializing the idea of a continuous ATO, the reaction from the traditional cybersecurity community was pretty skeptical, and perhaps a little bit hostile at first. But I think that was actually my fault, because the way that I was approaching it originally was really focusing on the cost, and how poorly it was being done, and what that meant. In fact, I would walk around Hanscom Air Force Base and tell people that their ATO process was getting people killed. True. It really creates a sense of urgency; it doesn't necessarily create a lot of friends. And so it took me realizing that these weren't bad people. They were actually great people that wanted to support the mission, with bad management, inside of a bad system. And so we had to figure out how to address that problem.
(10:03):
And the way we won them over was maybe three separate things we focused on. First and foremost, I had to make the way that I talk sound like the way they expect to hear things. And so I steeped myself in NIST 800-37, the Risk Management Framework, and NIST 800-53 and its control overlays, and really understood them. And when I say that, I mean I locked myself and my team in a room for several days to go through our entire stack and go through all of the controls, and make sure I understood them, they understood them, and nothing was getting pencil whipped. Like, this is the best package you have ever seen, and we're going to talk about it in the way that somebody who's steeped in this knowledge expects to hear. That, probably more than anything, was a game changer. I stopped using all of this funny language. And I wish I had actually stopped calling it continuous ATO and instead talked about continuous Risk Management Framework. But everything else we changed, and we started using the language straight from NIST RMF, and using RMF to support what we were doing.
(11:11):
I found things in RMF … people said like, “oh, you can't do continuous ATO,” and it says right in the RMF that you should incorporate or integrate the risk management framework into your software development lifecycle, whatever that is, that you should use automation to the maximum extent practical. The list goes on. There are tons of tips. In fact, there's a whole table of tips that include things like this. And so that really helped. The second thing was treating them like users. Part of the whole movement of the way we were building software was user-centered design. This was a new process we were creating with a whole bunch of new software. And so we had to treat those assessors as users and practice good user-centered design, find their pains, and solve their pain points. And what we thought were the pains as consumers of the ATO process was very different than the pains they had as the drivers of it.
(12:04):
And once we started solving those, it really changed how they thought about this process. It wasn't being imposed on them. It was enabling them to do what they'd always wanted to do, which was support the mission. And then last, and I think this is super important, it's kind of tied into those two, but we gave them unprecedented access. A lot of times in bureaucratic environments, transparency gets weaponized, and I just told my team, we've just got to do it. We've got to rip the bandaid off. It's going to be scary, people might weaponize some of the things, but we've got to show them everything. And so we gave them access to our Git repositories, to our backlogs. We gave them access to our rule sets and our scanning tools, and at one point we actually started handing a lot of that off to them, and they owned the rule sets, they owned the scans, they owned almost everything in that pipeline, end to end. And just like you have PM acceptance, we had cybersecurity acceptance for security chores that were in the teams' backlogs. And so we put those three things together. I think they were probably the most impactful in terms of winning those folks over.
(13:14):
So if you're being told cATO is impossible, there are some small steps that you can take to start and prove that it can be done. First and foremost, you want to build out a common controls authorization package for everything that the app is going to be able to inherit. So if you're using cloud infrastructure, or hopefully the PaaS we talked about, you can build out a common controls inheritance package so that app teams don't have to reassess those controls over and over and over again. Then you can get a dedicated technical assessor team. I recommend getting an independent team to do this and having them integrate with the development teams. When they do that, they're going to be assessing app-level controls in real time, so that when application development is complete, it should just be a simple sign-off and the app can go to production.
(14:09):
What we don't want to do is do a bunch of security controls implementation and wait until the very end to assess it. That's why it takes so long, especially if you do have findings. And then the dev teams have to go back and address those, and it's something they developed three months ago, six months ago, maybe a year ago. And then I really recommend the secure release pipeline pattern. A lot of folks have done interesting, innovative things in this space. It's not just setting up scans; there are all kinds of things that you can trigger in a CI/CD pipeline related to security and release engineering. And so build those patterns into your secure release pipeline. And you don't have to do it all at once. You don't have to get to level 10 on day one. You're going to get the most improvement in speed from having the dedicated technical assessors and having that common controls inheritance model.
(15:05):
The secure release pipeline is going to be the icing on the cake. As you keep iterating on that, you're going to be able to digitize and automate more and more of the manual process. I'm not convinced that you can ever get rid of, or at least fully get rid of, all of the manual processes, but there is a lot of improvement that you can do. And not all of it's related to time. Some of it improves quality outcomes. Some of it reduces error rates. There's a lot of interesting things that you can do. But start with common controls inheritance, dedicated technical assessors, and then building out your own secure release pipeline with patterns that work for you and make sense for your environment.
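The common controls inheritance idea above can be illustrated with a toy calculation: the platform satisfies most of the required control baseline once, centrally, and each app team is only responsible for the remainder. The control IDs below follow the NIST SP 800-53 naming style, but both sets are invented for the example and do not represent any real authorization package.

```python
# Hypothetical sketch of a common controls inheritance package: the platform
# implements and assesses most controls once, and each app team only has to
# address the residual, app-specific controls. All control sets are invented.

PLATFORM_COMMON_CONTROLS = {
    "AC-2", "AU-2", "AU-6", "CM-2", "CP-9", "SC-7", "SC-13", "SI-2",
}  # assessed once, centrally, by the platform team

def app_residual_controls(required: set) -> set:
    """Controls the app team must still implement and have assessed."""
    return required - PLATFORM_COMMON_CONTROLS

# An app's full required baseline (illustrative subset, not a real baseline).
required = {"AC-2", "AU-2", "AU-6", "CM-2", "CP-9", "SC-7", "SC-13", "SI-2",
            "AC-6", "SI-10"}

residual = app_residual_controls(required)
inherited_pct = 100 * (len(required) - len(residual)) / len(required)
print(sorted(residual))                   # only app-specific controls remain
print(f"{inherited_pct:.0f}% inherited")  # 80% in this toy example
```

This is why the dedicated technical assessors can work in real time: with the inherited set already accepted, the per-app assessment surface shrinks to a handful of residual controls instead of the full baseline.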