Join Dominica DeGrandis, a renowned expert in the field of DevOps and digital transformation, at Prodacaity as she explores the intricate journey of companies navigating the challenges of transitioning and transforming their operations to meet modern demands. Drawing from her extensive experience, Dominica presents case studies from three distinct organizations, each facing unique hurdles such as micro workflow fixation, transformation stagnation, and the struggle to balance security risks with technical debt. This insightful talk not only highlights common pitfalls but also offers practical strategies for smoothing out the transformation process, ensuring a more balanced and effective approach to change management.


Dominica DeGrandis (0:13)

I am Dominica DeGrandis. And today, we're going to look at companies in transition, at the actions they're taking to try and meet their goals. And along the way, I want to ask the questions, where are we and where should we be? Because I think the answers to those questions are really good indicators of whether the approach we're using for transforming and transitioning is a good fit for the organization. We're going to look at three different companies, with a soundbite on each one here. The first one is about understanding the span of focus of the value streams they're working with. The second one is, let's take a look at all the work. Let's track all the work, in particular here, security risks and technical debt. And then, the third one is this transformation stagnation, this rather lethargic adoption of new practices during transformation. 


This first example, we call this stuck in storyland. Stuck in storyland is a phrase that our team coined, and it refers to micro workflows. A micro workflow is just a small slice of the end-to-end value stream. So, here, the micro workflow is just dev, build, test, versus the larger, macro workflow, which is the gold standard of gaining visibility, because it starts with a customer and ends with a customer. This organization is in asset management, so $7 billion in annual revenue, 20,000 engineers. And they're on the verge of the great wealth transfer that Forbes has said will be about $72 trillion that baby boomers are going to hand over to their kids. And so, this brings up some interesting challenges, because 30 and 40-year-olds have a lot different expectations of digital services than their parents. For example, I want a paper printout every month that shows all my transactions down to the dime. But my kids would probably be cool with just a thumbs up emoji that indicates that AI didn't see any kind of fraudulent charges and that mom deposited an extra thousand dollars in their account, 'cause their funds are running low. So, this organization has lots of initiatives in their backlog. And at this organization, initiatives break down into capabilities. For them, a capability is a chunk of value that's bounded by a quarter. So, it needs to be delivered in 90 days. And the capabilities get broken down into epics, and epics get broken down into stories. And it's the stories that are delivered at the individual team level. And that's what was getting measured. 


And so, the problem here is this false sense of progress, because the cycle time on stories is great. We're getting stories done rapidly. They come out every two to three weeks. But the capability is what the customer sees as the vessel of value, the artifact, the work item type that's actually flowing across the value stream. And when that gets delivered, now the customer can access their request, their feature change, their update. So, we're looking at a larger view of this macro workflow now. We've moved from storyland, we're expanding to capabilityland. 


And this organization, they started with Agile and they carried on with DevOps, continuing their continuous improvement, and now they're working on value stream management. They're working to manage their value streams. This is the kind of work that our team does day in and day out. We primarily work with Fortune 100 companies, so pretty large organizations. And for them, their capabilities start when the work hits discovery. Prior to that, ideation and triage, that work's already been funded; the capability has already been approved. They're ready to go, essentially. And that's why you see the line of commitment right before discovery. The line of commitment was just added to their vocabulary, by the way. Before that, they didn't think about where we really need to start the clock to show how long it takes to deliver customer value. Are we making this in 90 to 120 days? So, once capabilities are tracked and measured, now we can ask, okay, where are we with these capabilities? Well, only one in six capabilities was getting delivered in that 90-day period as planned. So, that's 16%. We call that a 16% investment efficiency, which they weren't too pleased about. They were investing about $4 million in capabilities that weren't being delivered when they should be. And that may sound like a drop in the bucket to the DoD, but for them, it was a big deal, and they wanted to learn how to reinvest, how to invest that money better. How can we elevate visibility from just the team level up to multiple team levels? There was some resistance at first to measuring capabilities, because things looked so good when they were just measuring stories, but that's chasing the wrong problem. So, that's why we call it stuck in storyland, because it looks good, the metrics seem fine, but is it delivering customer value? No. And it's taking more than 90 to 120 days to deliver that customer value. 
And so, where are we? We're at 16% investment efficiency, and we have to wait a year on some of this stuff to get feedback from our customers. So, where should we be? Well, better than 16% is where they want to be. They're not setting a specific target at this point, because they recognize that if they do that, oh, we should be at 85%, this is going to be an iterative change for them. So, where they're starting is by smoothing out their funnel. 
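As a rough sketch of the arithmetic behind that number: investment efficiency here is just the share of capabilities delivered within their planned window. The capability records and dates below are hypothetical, purely to illustrate the calculation, not the organization's actual data.

```python
from datetime import date

# Hypothetical capability records: clock starts at the line of commitment,
# stops at delivery. Dates are made up for illustration.
capabilities = [
    {"committed": date(2023, 1, 9), "delivered": date(2023, 3, 20)},   # 70 days
    {"committed": date(2023, 1, 9), "delivered": date(2023, 6, 1)},    # 143 days
    {"committed": date(2023, 2, 1), "delivered": date(2023, 7, 15)},   # 164 days
    {"committed": date(2023, 2, 1), "delivered": date(2023, 8, 30)},   # 210 days
    {"committed": date(2023, 3, 1), "delivered": date(2023, 9, 1)},    # 184 days
    {"committed": date(2023, 3, 1), "delivered": date(2023, 10, 1)},   # 214 days
]

TARGET_DAYS = 90  # a capability is "on plan" if delivered within a quarter

on_time = sum(
    1 for c in capabilities
    if (c["delivered"] - c["committed"]).days <= TARGET_DAYS
)
investment_efficiency = on_time / len(capabilities)
print(f"Investment efficiency: {investment_efficiency:.0%}")
```

With one of six capabilities on time, this prints 17%; the talk rounds the same 1-in-6 ratio down to 16%.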


If you heard Paul Gaffney's talk on Monday, he talked about quitting trying to do more, and smooth is fast. And if you remember that from his talk on Monday, smooth is fast. And so, you can see how much less work is in progress. There are much fewer capabilities here. Can I go in reverse on here, Bap? Okay, so where they were, you see this big huge funnel, so much work that they're trying to get done. More work, we need to get more done. But do they have the capacity to do that? No. They can only get a small sliver of that out, 16%. So, the goal is to flatten this out, smooth it out, so that the arrival rate of new capabilities coming into their workflow is ideally equal to the departure rate of those capabilities. And that's what we'll call a more stable system. Another way to say it is that you balance demand against capacity. 'Cause when you keep adding more and more and more items into your funnel, the shape of the curve just keeps growing and growing and growing. And if you can only deliver this much of it, then it's going to keep growing and things are going to take much longer to do. And you get all the things associated with the five thieves of time: conflicting priorities, unplanned work, unknown dependencies, more cognitive load, too much WIP. Very problematic. So, they're trying to smooth out this capability funnel. They're trying to balance the arrival rate with the departure rate of work. And now, they're asking, can we take some of this $4 million, maybe $2.5 million of it, and invest that in other areas, so we can accelerate our improvements, accelerate some modernization? 
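A tiny simulation, with made-up rates, shows why balancing arrival and departure matters: when arrivals outpace departures, work-in-progress grows week over week without bound, and lead times grow with it; when the rates match, the system stays stable.

```python
# Toy model of the capability funnel (illustrative rates, not the company's data).
# Each week, new capabilities arrive and at most `departure_per_week` finish.
def simulate_wip(arrival_per_week: float, departure_per_week: float, weeks: int) -> float:
    wip = 0.0
    for _ in range(weeks):
        wip += arrival_per_week              # new work enters the funnel
        wip -= min(wip, departure_per_week)  # can't finish more than is in progress
    return wip

unbalanced = simulate_wip(arrival_per_week=10, departure_per_week=6, weeks=12)
balanced = simulate_wip(arrival_per_week=6, departure_per_week=6, weeks=12)

print(f"Unbalanced funnel WIP after 12 weeks: {unbalanced}")  # grows by 4 per week
print(f"Balanced funnel WIP after 12 weeks:   {balanced}")    # stays at zero
```

The unbalanced run ends at 48 items of WIP and still climbing; the balanced run stays flat, which is the "stable system" the talk describes.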


The idea is like, why invest in an area that is turning into waste? Imagine working at Domino's Pizza and you have people up front who are folding pizza boxes, and then there's people cutting out vegetables and grating cheese and making dough and cooking it in the oven. And then, on the other side, the pizza comes out and it goes into one of the boxes. Well, if you're investing all this money up front into folding up pizza boxes, but you're only delivering 16% of what you could fill, then let's not continue to make more pizza boxes. Let's put the money where it's needed. Maybe we need a bigger cheese grater or something. So, I might come back to this if we have time. 


I'm going to move on to the second one here, which is show me the KRs, as in OKRs, objectives and key results. This company is in health insurance, 300,000 employees, $340 billion in annual revenue. And they are working through a project-to-product transformation. They have OKRs assigned to them. And the first objective is, they have to complete 30% of all their defects by the end of their next planning iteration, which is about 10 weeks long. And their boss is asking, okay, how are you going to meet this OKR? So, they recognize they need to measure their defects and make them visible. Their defects are tracked in their tool set, they're using Rally, so it's pretty easy to see the defects. They're delivering 146 here over a period of time. And now, all they have to do is add two things. One is they need to add their timelines: when did the period start, when did the period end. And the second thing they're adding is, they have to determine the percentage of defects that were delivered over the total. So, just do the math, divide their throughput by the total defects, and they get 66%, and their OKR was 30%. So, so far, so good. But it's not their only OKR. They also have to show that out of all the work that's been completed, 10% was for technical debt and another 10% was for security risks, things like vulnerabilities and auditing and patches and whatnot. So, this was much harder for them to get visibility on, because they didn't have those work item types articulated in their tool. They had defects, but not debt and not security. 
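The defect OKR math itself is simple division. In this sketch, the delivered count of 146 comes from the talk; the total of 220 is an assumption chosen so the result lands near the 66% she mentions.

```python
# Defect OKR check for one ~10-week planning iteration.
defects_delivered = 146  # throughput over the period (from the talk)
defects_total = 220      # assumed total tracked defects; the talk gives only the ~66% result

completion_rate = defects_delivered / defects_total
okr_target = 0.30        # "complete 30% of all defects" objective

print(f"Completed {completion_rate:.0%} of defects (target {okr_target:.0%})")
print("OKR met" if completion_rate >= okr_target else "OKR missed")
```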


So, now, we have to have the conversation on what type of work flows through a value stream. When you're working to manage your value stream, what kind of work flows through there? And we talk about four main work item types: features, defects, risks, and debt. Risks are primarily security. It's a big huge risk to the business, not a risk like, oh, we're going to miss our SLA on something or we're going to miss the due date on this project. Significant risk to the business. So, no work item types for debt or security. Lots of reasons for this. It's a common problem. We work with many large organizations, and I don't know one that cannot bring visibility to their feature work, the work that's generating revenue, the work that customers are asking for. And defects, problems in production, everybody can usually get visibility on that. But the debt work and the security work tend to be hidden inside of other stories. So, it can be difficult. Tool admins push back, because there's a lot of effort involved, or there are policies that make it hard, or they just don't see the value in doing it. So, hard to track, hard to measure. This organization had demand for all four types, but they could only see the features and defects for the first five months of this transformation. So, where are we? We aren't able to show OKRs here for tech debt and security. All we can see is this metric here called flow distribution. It's the ratio of the four types of work, and there are a lot more defects here than feature work. Red is defects, green is features. So, they couldn't specify how much tech debt relative to the features and the defects. But eventually, they found an available field on their work items to add values to for debt and security. And they negotiated with the tool admin to help them out and make it happen. I think it involved some dinner. 
And now, developers can select from a pre-populated dropdown whether an item is debt or security. Before, they couldn't report on the distribution of all four types; at the beginning of July, now they can. And now, the real debate starts to happen, because they're discussing, we only deliver 2% security risks, what do we need to do to bump that up to 10%? Are we going to take away from our investment in features? Are we going to take away from our capacity for working on debt or defects? Where should we be? That's the question. Is this even a good OKR to begin with? Why 30% defects? 
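Once debt and security become selectable work item types, flow distribution is just each type's share of completed items. The counts below are hypothetical, picked to echo the shares discussed (around 2% risk against a 10% target).

```python
from collections import Counter

# Hypothetical completed work items for one period, tagged with the four flow item types.
completed = ["feature"] * 46 + ["defect"] * 40 + ["debt"] * 12 + ["risk"] * 2

counts = Counter(completed)
total = len(completed)
flow_distribution = {t: counts[t] / total for t in ("feature", "defect", "risk", "debt")}

for work_type, share in flow_distribution.items():
    print(f"{work_type:>8}: {share:.0%}")

# The shares always sum to 100%, which is exactly the trade-off conversation:
# raising risk from 2% to 10% has to take capacity from the other three types.
```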


Maybe instead of incentivizing our teams to fix more defects faster, we should look at discovering what behavior is driving such a high defect rate, where 40% of their work is defects. What behavior is driving all these production defects? So, what should our flow distribution be? This is a conversation about prioritization at the highest level with leadership, and it's a conversation about trade-offs. Because there's only 100%: if we're going to do so many defects, then we're not going to do as many features. We won't have capacity to do as much security and technical debt.

All right, this third one is making practices visible. So, this is getting back to the slow adoption, I think what Gene calls in his new book slowification, the slowing down. Again, the theme of slow is smooth and smooth is fast. But why are transitions and adoptions considered way too slow, much too slow? 


This organization is in retail, 100,000 employees, $46 billion in annual revenue. They're doing a flow metrics implementation across all of their engineering and platform teams. And the last two years in retail have been just a brutal landscape: intense competition, supply chain disruption, customers who just want to buy online and have it delivered in 48 hours. I admit I fall into that category. And as if this isn't hard enough, their problems include these. And one of the big ones here that I want to talk about is mismatched expectations. So, a senior director who was implementing the flow metrics confessed to me that if we had known then what we know now, we would've needed 20x more support, more effort, in order to pull off this transition. 20x effort. Because they've got to train and support 3,000 developers and all of their stakeholders, and it's not a one-person job. We see this a lot, where an executive will get really excited about doing a transformation, and then it gets delegated to other people and it's not their full-time job, or they don't have the experience or the budget or the know-how to make it happen. And so, one reason these leaders found themselves overwhelmed with the level of effort required was what was presented upfront at the initial kickoff. And it's similar to what Dr. Andre Martin describes in his brand new book, where he talks about how companies resort to presenting the most idealistic, best-case scenario upfront. The book is called "Wrong Fit, Right Fit", just published by IT Revolution. And he says it's like a dating app where the online profile does not accurately reflect the person when you meet them. And he goes on to talk about three versions that people experience at their organization. And the first version is this ideal scenario upfront that is often described before or during the kickoff of a new transition. 
The second version is the carefully curated materials that are used for onboarding or training. And the third version is how it is for most people day-to-day, the everyday reality of how teams work, of how organizations work. Now, Andre Martin is talking about recruiting talent here, but I see similar patterns in the early days of transitions. There's all this rah, rah, rah upfront, we can do this, and then it can just peter out if people aren't set up for success. I think there's a need on both sides. It's not just for hiring talent, but also for talent interviewing and looking for work. So, it's a great book for understanding, how do I find out what it's really like to work at that company? Because oftentimes, people will get into their new job and know within a week that it's not a good fit. So, we need some honest, realistic messaging upfront. Set expectations, so people know what they're getting themselves into when they go and do a transition. There are dozens of questions in the book. I just picked these three, how do we communicate, how do we collaborate, and how are decisions made, and used them as a lens to look at the practices that this organization was using. 


So, this is the mission that was communicated at the kickoff to the teams. Objectives, pretty common. Let's improve time-to-value. We need to measure how we're doing on our delivery effort. We need to make bottlenecks and work-in-progress visible. And we need to increase our throughput, increase our velocity from idea to execution, and have fun doing it. Fun is not something I usually see on a kickoff list of things to do. But luckily... Oh, and then, three months later, the executive leaves. So, the economic buyer leaves. And now, there are a few engineering directors who are left to carry the baton and roll out this implementation. Now, they have a few things in their favor. One is that they've been there a while, they're experienced, they're senior, they know who to contact to get things done. They also are trusted by their peers, their leaders, and their stakeholders. So, they have that trust. So, where should we be? We should be able to do these things and have fun. Where are we? 


This is an example from a typical working session with one of the groups. We're looking at a picture pasted into a mural board to collaborate on what we're going to do about all these dependencies. Just the sheer scope of the challenge. First of all, the teams are organized around technology instead of arranged around product. So, it's like Conway's law here, driving many dependencies. They've got over 300 unique Jira types, over 1,000 different statuses, and 1,400 different Jira workflows to deal with. It's madness. And they're deliberating here about what to do about it. And with this particular coach, this would just happen all the time. Every other week, we'd have a coaching session, and it would be a new dilemma, a new picture in a mural board, a new direction, a new priority, just constant reprioritization. Over time, I got to understand that, at least for this coach, this was the way they practiced. This was the way they worked. They try to get visibility on tough problems and have discussions with a bunch of smart people to figure out how to set some Jira standards. But then there's the nonstop reprioritization and the constant change in direction and leadership and reorgs and people leaving. 


Adam Furtado's talk on Monday talked about how to teach and coach some of these middle managers, because of all the problems that the DoD is struggling with right now. And I wanted to let him know that the DoD doesn't have a monopoly on these issues of not enough talent and people leaving and reorgs and changes in direction or reprioritizing. It's ubiquitous, it happens everywhere. And by everywhere, I mean finance, health, gaming, insurance, automotive, it's everywhere. So, this is the day-to-day reality for this coach. And now I can understand why others are saying this is taking way too long. I think the coach's mindset was, we need to figure all of this out upfront before we can take any action, move forward, and solve the problem. 


Meanwhile... Oh, so where are we? We're in a hamster wheel here with high cognitive load, and we're not making a lot of progress. Meanwhile, same company, different coach, different group of teams. I spent just one training session with this particular coach, and a few months later, this email gets forwarded to me. The essence of this email is, an engineer walks in and says, "Can we add scope to this epic to get it out the door?" And they look at their flow time metrics to determine, is it even possible, is there a possibility that we can make that happen? And the result of that is, no, we don't think so, but let's see what we can do. So, then they dig into the data, they look at their metrics, here are their metrics here, go back here just a moment. And they take that data to their leadership, and they find a way to shuffle resources around, and anyway, crisis averted. So, completely different approach: a coach willing to start where they are and see what they can do in their situation. Granted, it's not as big a problem as all those dependencies, but they're still looking at the practices they're using, the way they're communicating, collaborating, and making decisions, by focusing on the where-should-we-be items that their executive had pointed out from the beginning, bringing visibility to that, showing exactly how long it was taking to do this work, showing that they had lost some headcount and would like to reshuffle other talent around. And when they do that, their throughput increases here. The middle graph there is throughput; we call that flow velocity. And the one on the very right is flow time, and it's showing how flow time is improving, because it's decreasing over that period of time. 
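The two metrics that coach leaned on can be sketched in a few lines: flow time is finish date minus start date per item, and flow velocity is the count of items finished in a period. All item dates below are made up for illustration.

```python
from datetime import date

# Hypothetical completed work items with start and finish dates.
items = [
    {"start": date(2023, 9, 1),  "done": date(2023, 9, 15)},
    {"start": date(2023, 9, 5),  "done": date(2023, 9, 26)},
    {"start": date(2023, 9, 10), "done": date(2023, 10, 2)},
]

# Flow time: elapsed days from start to finish for each item.
flow_times = [(i["done"] - i["start"]).days for i in items]
avg_flow_time = sum(flow_times) / len(flow_times)

# Flow velocity: items finished within a given period (September here).
september_velocity = sum(1 for i in items if i["done"].month == 9)

print(f"Average flow time: {avg_flow_time:.1f} days")   # (14 + 21 + 22) / 3 = 19.0
print(f"September flow velocity: {september_velocity} items")
```

A falling average flow time alongside a rising velocity is exactly the improvement shown in the talk's right-hand and middle graphs.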


And so, this coach... But then, they ended up leaving, which is what happens. But knowing what I know now, I do think that paying attention to the practices that are being used, and here are just three examples, can be really useful. How do we communicate ideas? Are you at an organization that communicates its ideas in a PowerPoint with a few vague bullets? Or is it an organization where a two-page brief is expected, with full sentences and verbs, we need verbs. Or how do we collaborate? Synchronously, asynchronously? What are the expectations? What are the practices? When it comes to making decisions, is it just top-down, or are bottom-up decisions respected and allowed? My first job out of university, I worked at Boeing. And I remember a very senior project manager at the time told me, they'll be in the boardroom or in discussions and a question gets asked and everybody looks around to see who's wearing the nicest shoes, and that's who makes the decision. It was like whoever was dressed the nicest. I went out and bought me some nice shoes after that. And so, I think making practices visible is a huge indicator of how you can work with an organization when they're stumbling with their transformation. What kind of practices work at this organization? And if we can maybe use some of these questions and self-assess, then we can start to bond and build trust on where are we, so that we can improve practices on where should we be. 


If you want to get in touch with me, LinkedIn is probably the best way to do that. There aren't too many Dominica DeGrandises on LinkedIn. Or else, email me. I've got some stickers on the back table if you want any of those. And if you want a copy of my book, "Making Work Visible," there's a QR code for you here.