Learn why software delivery fails in government — and what's required to make shipping possible.
Episode 10
Episode 10 is about understanding reality before trying to change it. Bryon explains why transformation efforts fail when leaders don’t see where work slows down, gets blocked, or quietly dies.
This episode introduces practical ways to grasp the current condition—so improvement efforts focus on real constraints instead of assumptions, org charts, or wishful thinking.
Why software fails inside government—and the real-world consequences when it does.

Rethink success: learn fast, reduce risk, and deliver real mission impact.

Why outcomes only happen in production—and why “it won’t work here” is a myth.

Why government software gets stuck before production—and how to fix it.

Build platforms that help teams ship—not slow them down.

Why product, design, and engineering must work as one team.

Change culture by changing behavior.

Achieve alignment through learning—not endless planning.

See how work actually flows through your organization.

Episode release date:
April 14, 2026

Episode release date:
April 28, 2026

Episode release date:
May 12, 2026

Episode Resources
Transcript
In the last episode, we introduced the Improvement Kata, the structured routine for creating alignment. Now let's zero in on step two of the Improvement Kata: grasping the current condition. This is where most transformations fail before they even begin. Large enterprises are jungles of complexity, a tangled mess of legacy processes, political silos, and decades-old technology. And you can't improve a system that you don't understand; any attempt is just guesswork. So how do we cut through the noise? How do we break down the enterprise and build a shared map of reality that we can actually use to make intelligent decisions? To start, we need to look at the problem through two distinct but interconnected lenses: the business process and the technical architecture. So let's start with the business process. Here our tool is Value Stream Mapping, which we touched on in the last episode.
Now we're going to talk about what it actually looks like in action and why it works. Forget the six-month time-and-motion studies from a team of consultants. In Mission OS, a Value Stream Mapping workshop is a one-day session or, for a very complex domain, up to a one-week sprint. You bring in representatives from every step of the process and map the flow of work physically on the wall with sticky notes and string. The goal is to follow a piece of value from the initial trigger to the final delivery. At the Combined Space Operations Center, for example, the Value Stream is the space tasking cycle. It starts with a request for a space effect and ends with a completed mission. And we trace every tool, every human handoff, and every system that touches that request along the way. But here's where it gets real.
We don't just map the work. We also measure wait time. We obsessively track how long each task sits before someone picks it up: how long it takes to get signatures, how long it waits in queue. That's where transformation usually starts. Not in the work, but in the wasted time between the work. This is a profound and often painful revelation for every Value Stream owner. You might find that a process that takes 30 days involves only a few hours of actual work. The rest, often 99% of it, is waste. It's time spent waiting in queues, waiting for handoffs, waiting for approvals. The map makes the invisible visible. It exposes that the problem is not that people aren't working hard enough, or even that your software isn't good enough. The problem is that the overall system is broken. The problem is the handoffs between the silos. And that shared visual understanding is the catalyst for real change.
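The wait-time arithmetic here is worth making concrete. Below is a minimal sketch of the calculation a Value Stream Map supports, sometimes called flow efficiency. The process steps and hour figures are invented for illustration, not taken from any real mapping session:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    work_hours: float   # hands-on time actually spent doing the task
    wait_hours: float   # time the task sits in a queue around the work

# Hypothetical steps from a mapped approval process (illustrative numbers)
steps = [
    Step("Draft request",     work_hours=2, wait_hours=40),
    Step("Supervisor review", work_hours=1, wait_hours=120),
    Step("Security sign-off", work_hours=3, wait_hours=300),
    Step("Final approval",    work_hours=1, wait_hours=160),
]

work = sum(s.work_hours for s in steps)
wait = sum(s.wait_hours for s in steps)
lead_time = work + wait

# Flow efficiency: fraction of total lead time spent on real work.
# Everything else is the "wasted time between the work."
flow_efficiency = work / lead_time

print(f"Lead time: {lead_time:.0f} h, actual work: {work:.0f} h")
print(f"Flow efficiency: {flow_efficiency:.1%}")
```

With these made-up numbers, 627 hours of lead time contain only 7 hours of work, which is the kind of single-digit flow efficiency the transcript describes.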
It moves the conversation away from blame and towards the collective desire to fix the system. Now, once we have a map of the business process, we need to look beneath it. We need a map of the technology that supports it. If you remember from last time, Domain-Driven Design is how we map the technical reality. So I want to break that down and talk more about how this tool helps us start strategically chipping away at the tangled mess. We already know that most enterprises are built on a giant, tightly coupled mess of monolithic systems. Everything connects to everything else; change one thing and three other things break. It's brittle, fragile, and almost impossible to improve incrementally. Domain-Driven Design gives us a way to untangle that mess. At its core, it's about designing your software to reflect the reality of the business domain it serves. That starts by creating a shared language, a ubiquitous language between business and technical teams, so that they're describing the same problem in the same way.
But similar to Value Stream Mapping, you don't need or want a 60-day study. We want one day to one week, maximum. So we use a tool called Event Storming. We start with the business events from the Value Stream Mapping and we tie them to the technical architecture in a series of workshops. We want a systems view of how data flows through the systems: from target nominated to target approved to mission scheduled to mission executed, as just one example. As we map these events against the architecture, patterns start to emerge. We see logical groupings of related events and concepts, distinct functional areas within the mission. Targeting is separate from mission planning. Intel collection is distinct from asset management. These clusters are called bounded contexts. These bounded contexts are the seams in the system. They're the blueprints for a more modular future.
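The clustering step described above can be sketched in a few lines. The event names and context groupings below are illustrative stand-ins, not an actual mission model:

```python
# Hypothetical domain events captured during an Event Storming session,
# listed in the order they occur in the mission flow (names are invented).
events = [
    "TargetNominated",
    "TargetApproved",
    "CollectionRequested",
    "IntelReportPublished",
    "MissionScheduled",
    "MissionExecuted",
]

# Grouping related events reveals candidate bounded contexts:
# the seams along which a monolith can be decomposed into services.
bounded_contexts = {
    "Targeting":       ["TargetNominated", "TargetApproved"],
    "IntelCollection": ["CollectionRequested", "IntelReportPublished"],
    "MissionPlanning": ["MissionScheduled", "MissionExecuted"],
}

# Sanity check: every event belongs to exactly one context.
assigned = [e for ctx in bounded_contexts.values() for e in ctx]
assert sorted(assigned) == sorted(events)

for name, ctx_events in bounded_contexts.items():
    print(f"{name}: {', '.join(ctx_events)}")
```

Each key in that mapping is a candidate for a service (or set of services) owned by a single team, which is exactly the decomposition path the next paragraph describes.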
Each bounded context can become a service or a set of services owned by a single team that can develop, deploy, and scale independently. Domain-Driven Design gets us out of the trap of trying to replace the whole system at once. Instead, we identify the most painful or high-value areas and modernize them incrementally. We leverage the seams in the system, and build forward from there. And when you combine these two practices, Value Stream Mapping and Domain-Driven Design, you get a powerful multi-layered map of your enterprise. The business process lens shows the operational bottlenecks, and the technical lens reveals the architectural constraints and the opportunities to escape them. But then you still have to account for the real humans involved, and their journey. For this, we utilize User Journey Mapping. How do the users interact with the Value Stream processes and the technology? We rely heavily on research paired with real-world observation and user interviews.
When we put these three artifacts together, we get a very high fidelity service blueprint with very obvious friction points. This is how you grasp the current condition. You're no longer relying on assumptions or legacy documentation. You have a living model of the system co-created by the people who operate and support it. And most importantly, it gives you the evidence you need to take the next step in the Improvement Kata to establish your next specific and achievable target condition. You're not guessing. You're acting on a deep, shared understanding of the system that you're trying to change.
Value Stream Mapping often reveals that wait time is the big enemy. And this can be difficult to explain to leaders who are conditioned to think that the problem is productivity, or sometimes the bureaucracy itself. One example I see often is that people have really come to hate the Risk Management Framework. It takes so long that they assume the Risk Management Framework itself is to blame, and that as a business process it's therefore not value-add. But when we've Value Stream Mapped it (this is not a mission process, but the IT delivery Value Stream), we've found that over 80% of the time to get an ATO, the end goal of the Risk Management Framework process, is spent waiting in queue. It's wait time. And so if I told you, "Well, actually, we could do the RMF 80% faster and get better security outcomes, something that really matters in this modern digital era, wouldn't you want to do it?
Wouldn't any leader want to do that?" So the reason they don't want to do it is because it takes so long, but if most of that time is waste and we can remove all of that waste, then we can get everybody to be conditioned to think, "RMF is valuable. And I want to do it and I want to invest in it." So it fundamentally flips the narrative when we can show how value moves through the system.
So if you're feeling overwhelmed by your organization's complexity, starting with a very high-level Value Stream Map is probably the first small step you can take toward the change you want to see. What I recommend here is that you don't do a deep Value Stream analysis across a really huge, complex domain. You find these in government all the time; almost every government mission is just way too large to Value Stream Map on a reasonable timeframe. So what we want is a top-level study, as high-level as possible. Then, within each area, try to identify where the constraint is and do a subprocess analysis, going only one layer deeper.
Or, if you follow the DoD Architecture Framework, the DoDAF, this would be called an operational view, or OV. OV-1 is the highest level, and the views go all the way down through nine or ten levels of detail. You want to be careful not to create meaningless distinctions between layers; go with your best analysis. The point here isn't to get a perfect answer, it's to get to the level of detail that you need to go build your next thing. Eventually you're going to get there as you build quarter over quarter, year over year; you'll build out to the left and the right of wherever you start.
To get started quickly, you don't want to do a six-month study before you ever write your first line of code. In the course of a day, or maybe up to a week if it's a really complex domain, you want to get down to a subprocess that is mapped in enough detail to understand where the constraint is in this painful process, and what you can do to alleviate that constraint.
A great example of this is actually the term "target." We were working primarily with the fires community who wants to take kinetic action against a target — in this context, something nominated for a kinetic strike. And once we scaled and had become more successful, we were asked to serve the ISR unit, intelligence, surveillance and reconnaissance. They also use the term target, but they're targeting things for collections — they want to collect intelligence. You might think, oh, that's just semantics, but it becomes really problematic in the database when you go to integrate those two systems. And it turns out you have to do that because usually before we strike a target, we conduct a lot of collections against it. We want to make sure we sort out those two different contexts so that we're not treating them as the same when we go to integrate the data layer.
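In code, keeping the two meanings of "target" in separate bounded contexts might look like the sketch below. The types, fields, and translation function are all hypothetical, invented for illustration rather than drawn from any real system:

```python
from dataclasses import dataclass

# "Target" means different things in different bounded contexts.
# Modeling each meaning as its own type keeps the contexts from
# bleeding into each other when the data layers are integrated.

@dataclass(frozen=True)
class StrikeTarget:
    """Fires context: something nominated for kinetic action."""
    target_id: str
    weapon_pairing: str

@dataclass(frozen=True)
class CollectionTarget:
    """ISR context: something nominated for intelligence collection."""
    target_id: str
    sensor_type: str

def nominate_collection_for(strike: StrikeTarget, sensor: str) -> CollectionTarget:
    """Explicit translation at the context boundary: strikes are usually
    preceded by collections, so a fires target can spawn an ISR target,
    but the two records stay distinct types in distinct contexts."""
    return CollectionTarget(target_id=strike.target_id, sensor_type=sensor)

strike = StrikeTarget(target_id="T-1234", weapon_pairing="GBU-39")
collection = nominate_collection_for(strike, sensor="EO/IR")
print(collection)
```

The translation function is the integration point: rather than forcing one shared "target" table on both communities, each context keeps its own model and conversions happen explicitly at the seam.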
This can cause huge problems across different contexts. You see this in commercial as well — there's a famous case study where an airline found that the term "reservation" was being used in 13 different contexts across their systems, which caused major integration problems. A specific term has to be clear and unambiguous to both operators and engineers so that when we're building systems, they work in the field and we don't run into integration challenges.