Learn why software delivery fails in government — and what's required to make shipping possible.
The Goal, the Strategy, and the Results

Episode 2 reframes the goal of digital transformation. The real objective, Bryon explains, is not executing a plan but building the organization that can learn the fastest in order to deliver mission impact.
Bryon introduces Continuous Delivery as the first step toward increasing mission capability and shows how the DORA metrics measure progress. Real examples demonstrate how faster delivery reduces risk and drives outcomes in production.
Why software fails inside government—and the real-world consequences when it does.

Rethinking success: learning fast, reducing risk, and delivering real mission impact.

Why outcomes only happen in production—and why “it won’t work here” is a myth.

Episode release date:
March 3, 2026

Episode release date:
March 17, 2026

Episode release date:
March 31, 2026

Episode release date:
April 14, 2026

Episode release date:
April 28, 2026

Episode release date:
May 12, 2026

Episode Resources
Transcript
Bryon Kroger (00:04):
In the last episode, we talked about why transformation is a life-or-death imperative for critical missions. We established the stakes. Now we get to the hard part. If hacking a bureaucracy is the objective, we need to start by changing the definition of success. We need a new goal, a new strategy, and a new way to measure results. The traditional goal of any large enterprise project is simple. Execute the plan. You spend years in requirements planning. [00:00:30] You get a budget and your job is to deliver what the plan says on time and on budget. Think about it. A five-year plan in the world of software is an eternity. The technology, the threats, the mission needs, they change in months, not years. So that plan, that beautiful, expensive, committee-approved plan, it's a fossil before the ink is dry. You just built a monument to irrelevance.
(00:55):
It's a solution to a problem that no longer exists in its original form. So we [00:01:00] have to change the goal. The goal is no longer to execute a plan. The number one goal of any digital transformation is to build an organization that can learn the fastest. That's the team that has the advantage. And the first step to becoming a learning organization is to establish continuous delivery. This is the first goal on every single project we do. Before we talk about value, before we talk about features, we talk about our ability to ship code into production. Why? Because that's [00:01:30] the only place that real learning happens. Until you ship code to prod, it's all just slide decks and good intentions. Now, this is the most important mental model that you need. Your teams produce outputs: lines of code, new features, reports. Senior leaders, though, care about impact.
(01:48):
The mission succeeding, lives saved, time saved. The bridge between them, the missing link, is outcomes: a measurable change in the behavior of your users. Does the [00:02:00] analyst now complete their task 50% faster? Does the pilot now get the data they need in seconds instead of minutes? Those are the outcomes, and they only happen in production. Our ability to get to production isn't just a technical detail. It is the engine of value creation. This is the most important message you need to deliver to your leadership. Continuous delivery is not an exercise in taking more risk. It's a systematic, data-driven exercise in risk reduction. Let [00:02:30] me show you what I mean. Look at this graph. This is the story of two worlds. In a traditional waterfall program, you build up risk exponentially over time. The longer you go without validating your assumptions with real users, the higher the chance that you are building the wrong thing.
(02:47):
And the risk climbs until you finally deliver years later and find out whether you succeeded or failed. And in a modern digital era, most likely you're going to fail when you're trying to get things right three years [00:03:00] in advance. It's a mountain of compounding risk. With continuous delivery though, we flip that model on its head. Every single time we ship a change to production, even a small one, we get to test our assumptions. We buy down our risk. And if we're right, we double down. If we're wrong, we find out in hours or days, not years. We lose two weeks, not two years and $200 million. Our risk profile looks like a sawtooth, a series of small, [00:03:30] manageable hills. We're not eliminating risk, we're taming it. We're making dozens of small informed bets instead of one giant blind one. Another important thing about that sawtooth is that it intentionally goes above the risk curve initially.
(03:46):
What we're signifying here is that we're able to take bigger bets because we can buy down the risk of doing so faster. This helps us find the genuinely new, instead of doing old things in a new way. [00:04:00] So if that's the strategy, how do we measure the results? How do we know if we're any good at this? Well, luckily there's a science to it. For years, an organization called DORA, DevOps Research and Assessment, has studied thousands of companies, proving that a few key metrics are directly predictive of business and mission success. There are five vital signs. The first two measure throughput. We measure lead time for changes. How long does it take to go from a line of code being committed to it running [00:04:30] in production? This isn't just an engineering metric. It's a measure of your organization's agility. We also measure deployment frequency. How often are application changes deployed into production?
(04:42):
This is your organizational heartbeat. Elite performers do this multiple times per day, but throughput without stability is just a stream of problems. So we also measure stability. Mean time to restore. When a failure happens, how long does it take to fix it? This is your resilience. [00:05:00] And then change fail rate. What percentage of changes cause a failure? This is your quality signal. Finally, we measure reliability. Is the system available and performing as it should? Now, here's the part that breaks a lot of old-school mindsets. For decades, we were taught that there was a trade-off, that you could have speed or you could have quality. The data proves this is false. The very same practices that allow you to go faster (small batch sizes, automated testing, [00:05:30] robust monitoring, and so much more) are the same practices that make your systems more stable. The fastest teams also produce the highest quality software.
(05:39):
It's a virtuous cycle. This isn't theory. I want to tell you about my first time seeing this in practice. By 2019, Kessel Run was deploying code to warfighters overseas every four and a half hours. When something broke, we could restore it in under two hours. Our change fail rate was only 8%. These numbers put Kessel Run [00:06:00] on the edge of being an elite software organization, not just in government, but compared to any tech company in the world. When we showed leaders that we were deploying every four and a half hours, it wasn't just a number. It meant that if a warfighter in the field found a critical issue, we could have it fixed and deployed worldwide before their next shift started. That's not just efficiency. That's a strategic advantage. And the mission impact was staggering. Hundreds of millions in fuel savings and planning timelines cut by [00:06:30] over 80%.
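To make those vital signs concrete, here is a minimal Python sketch of how the throughput and stability metrics might be computed from a deployment log. The log format, field names, and numbers are hypothetical illustrations, not Kessel Run's actual data.

```python
from datetime import timedelta

# Hypothetical deployment log; the fields and numbers are illustrative only.
deployments = [
    {"lead_time": timedelta(hours=4), "failed": False, "restore_time": None},
    {"lead_time": timedelta(hours=6), "failed": True,  "restore_time": timedelta(hours=1.5)},
    {"lead_time": timedelta(hours=3), "failed": False, "restore_time": None},
]
window_days = 1  # observation window for deployment frequency

# Lead time for changes: commit to running in production.
avg_lead_time = sum((d["lead_time"] for d in deployments), timedelta()) / len(deployments)

# Deployment frequency: how often changes reach production.
deploys_per_day = len(deployments) / window_days

# Change fail rate: what share of changes cause a failure.
failures = [d for d in deployments if d["failed"]]
change_fail_rate = len(failures) / len(deployments)

# Mean time to restore: how quickly service recovers after a failure.
mttr = sum((f["restore_time"] for f in failures), timedelta()) / len(failures)

print(f"Average lead time:    {avg_lead_time}")
print(f"Deploys per day:      {deploys_per_day:.1f}")
print(f"Change fail rate:     {change_fail_rate:.0%}")
print(f"Mean time to restore: {mttr}")
```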
(06:31):
When I got out and started Rise8, we helped replicate that success at Kobayashi Maru, deploying a brand-new application to a classified network in just 57 days, from idea on a whiteboard to live code running in production. These weren't lucky breaks. They're the direct result of changing the goal. We stopped trying to execute a detailed five-year plan and started optimizing for learning. We embraced continuous delivery as a strategy for risk reduction, [00:07:00] and we measured what actually mattered. But the results alone aren't enough. Even with proof in hand, the bureaucracy pushes back. One of its favorite weapons? Language. It co-opts your words, like "production" and "value," and turns them into buzzwords. And if we're not crystal clear on what we mean, we lose the narrative. That is the foundation.
(07:30):
[00:07:30] Now, there have been many times that stakeholders and senior leaders have expressed fear or skepticism about the perceived risk of continuous delivery. One time I had a senior leader on the cybersecurity side who was very nervous about the changes we were proposing from a cybersecurity and privacy perspective. You might be familiar, but in the federal government, cybersecurity and privacy [00:08:00] are governed by FISMA, which is a law that states we have to implement the risk management framework governed by NIST. And so with the risk management framework, everybody told us we couldn't do continuous delivery because of RMF and the ATO process. But as we dug into the risk management framework, we found that it actually encouraged what we were trying to do. It just wasn't the way business was being done at the time. And so we went line by line and made sure that our continuous [00:08:30] delivery was congruent with the risk management framework, which is to say we integrated the risk management framework into a DevOps software development lifecycle.
(08:40):
And when we did that, we were actually able to provide them with more transparency than they had ever had. For instance, we paid for dedicated technical assessors in the AO's office. They were still independent third parties, but they were dedicated to our project, which alone [00:09:00] increased the amount of time we were getting from technical assessors on a single problem. In addition, we gave them access to our backlogs, to our code repositories, and to all of our scanning tools and their rule sets. It was providing them real-time data, because typically an assessor would see a large batch release quarterly, or maybe annually, or maybe once every three years. Instead, they were seeing things every single day. And when there were problems, they were able to [00:09:30] go directly to the development teams during the course of work and address issues as they happened. A real shift left in security.
(09:38):
And after we explained all of this, got an initial authorization, and demonstrated continuous monitoring for three months, we were able to get the first-ever continuous ATO in the Department of Defense.
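One way to picture that kind of real-time transparency is a pipeline step that publishes scan evidence after every deployment. The following Python sketch is an assumption-laden illustration: the record fields, scanner names, and workflow are invented for this example, not the actual Kessel Run implementation.

```python
import json
from datetime import datetime, timezone

# Hypothetical continuous-monitoring step: after each deployment, bundle the
# latest scan results into an evidence record that dedicated assessors can
# review the same day, instead of waiting for a quarterly or annual batch.
def build_evidence_record(commit_sha: str, scan_results: dict) -> str:
    record = {
        "commit": commit_sha,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "findings": scan_results,           # counts per scanner; names are placeholders
        "rule_set_version": "example-1.0",  # assessors also see the rule sets themselves
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    evidence = build_evidence_record(
        commit_sha="abc123",
        scan_results={"static_analysis": 0, "dependency_cves": 2, "container": 1},
    )
    # In practice this record would be pushed somewhere the assessors can
    # already see, such as a shared dashboard or repository; here we print it.
    print(evidence)
```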
(09:56):
I'll just use one example of something that [00:10:00] increases speed and also increases quality, which is to say the quality is actually what's increasing the speed. One of the practices that we hold dear is test-driven development, along with pair programming. A pair of programmers is going to write unit tests, integration tests, sometimes even end-to-end tests, before writing a single line of code. And then they write the code to pass those tests. [00:10:30] Some people stop there and they're like, "Oh, that's great. Automated testing." But there's something more profound that happens there. We don't just write tests to have tests. The whole point is to be able to go fast forever. In order to go fast forever, you need to have clean code. And in order to have clean code, you need to be able to refactor your code with confidence.
(10:53):
And in order to be able to refactor with confidence, what do you need? Tests. So we don't just stop at, [00:11:00] "Oh, now we have a bunch of tests." It's, "Oh, now we can refactor with confidence." And then teams need to go and do that. That's a really good practice to get into: refactoring the code until it's very clean before moving on. Once we have clean code, that's what enables us to go fast forever, not just fast at first. A lot of agile teams will hit peak velocity and then drop right back down to an asymptote, same as a waterfall team. A really good agile delivery [00:11:30] team will hit speed maybe a little slower than the teams that don't prioritize writing all of those tests upfront and getting good at quality, but they'll hit a high bar, level off, and go fast forever.
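Here is a minimal Python sketch of that test-first loop; the function, the tests, and their names are invented for illustration, not code from the episode.

```python
import unittest

# Step 1 (red): the pair writes the tests first, before any implementation
# exists, so the suite fails until the behavior is built.
class TestPlanningTime(unittest.TestCase):
    def test_total_planning_minutes(self):
        self.assertEqual(total_planning_minutes([30, 45, 15]), 90)

    def test_empty_plan(self):
        self.assertEqual(total_planning_minutes([]), 0)

# Step 2 (green): write just enough code to make the tests pass.
def total_planning_minutes(task_minutes):
    return sum(task_minutes)

# Step 3 (refactor): with the passing suite as a safety net, the pair can
# rename, restructure, and simplify with confidence, rerunning the tests
# after every change to prove behavior is unchanged.

if __name__ == "__main__":
    unittest.main()
```

The payoff isn't the tests themselves; it's that the suite makes continuous refactoring safe, which is what keeps the code clean enough to go fast forever.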