Mar 27, 2026

Upskill or Fall Behind: Building Government Software Teams for the AI Era

Description

Software teams are under more pressure than ever—AI is accelerating delivery expectations, methodologies are being stress-tested, and the organizations that invested in their people and practices are pulling ahead fast.

This conversation is for leaders asking: how do we build teams and practices that can actually keep up?

Rise8's Adam Furtado (VP of Enablement), Mike Gehard (Software Engineering & AI Lead), and Matt Pacione (Platform Engineering Enablement Lead) discuss:

  • Why continuous learning and upskilling are non-negotiable in a rapidly shifting software and AI landscape — and what happens to teams that skip it
  • How the best organizations are moving beyond project-based delivery toward genuine outcomes — and what that shift actually looks like in practice
  • What separates organizations that are thriving right now from those that are falling behind, and the signals to watch for in your own environment
  • How to evaluate whether your current methodology (or lack of one) is an asset or a liability as AI accelerates everything

Watch the full livestream, and then check out Bryon’s free course on shipping outcomes in government.

Transcript

Adam Furtado (00:28):

All right. Welcome everybody to our next iteration of Mission OS Live. My name's Adam Furtado and I will be your host today. We are here today to talk about AI's implications within Mission OS. Rise8 CEO Bryon Kroger has been releasing a Mission OS masterclass, and episode eight just dropped. You can go watch that right now on our website, rise8.us, or on our YouTube. Mission OS is a video series that breaks down why most organizations really struggle to ship software in the government space, and it talks about the conditions and the execution required to make that possible. These livestreams extend those conversations with people who are really pushing the edge in the govtech space, exploring what it really takes to ship across people, process, and technology. The first two episodes of this series are also live and are worth your time. The first features a legend in the govtech space, Jen Pahlka.

(01:24):

And the second was a really compelling conversation between Bryon and John Cutler on operating models themselves. So I highly recommend going to check those out. This is episode three in this series, and as we go, feel free to drop questions into the chat; we'll try to get to them at the end of the conversation if we have time, and feed them in as we go. I'm the vice president at Rise8, where I lead our implementation of Mission OS here within the company. And as someone who has spent his entire career in this space implementing product and engineering practices in and around government, today's conversation is a bit of a peek into what my day-to-day conversations at work feel like right now, and with the people I generally have those conversations with, too, which is pretty neat. I have the privilege of working alongside the best in the business in this space, and I'm thrilled to have two of them here with you today.

(02:14):

Mike Gehard is Rise8's AI lead, and he's really at the forefront of pushing how we're advancing our AI practices and what that means for the teams we work with. We also have Matt Pacione here, who's Rise8's tech enablement lead, where he focuses on equipping our engineering teams with the platforms, practices, and capabilities they need to deliver consistently and confidently at scale. These two have been driving Rise8's AI journey aggressively and thoughtfully, and I'm really glad to have you both here today to talk through it.

Matt Pacione (02:46):

Thanks. Good to be here.

Adam Furtado (02:48):

All right. We're going to spend the next 15 or so minutes on something that I think a lot of leaders are feeling, but maybe not quite naming specifically just yet. And it's this gap between where organizations are today and where they need to be to actually leverage AI meaningfully on the things that they really want to leverage it on. Before we dive into that, a little bit about how we're thinking about AI here at Rise8: this isn't a pivot or a new direction for us. We've always measured our success on mission outcomes that we can put into production, and our investment into AI has really just been in service of doing that better and faster. We're going to talk a little bit about that as we get into this as well, but we'll get right into it here. So Mike, I want to start with you.

(03:35):

From what you're seeing on the ground day-to-day, what is the shift that gov tech leaders are most caught off guard by right now, do you think?

Mike Gehard (03:45):

I think it's just the speed of change. I mean, this is the first time in my 25-year career, which spans the introduction of the internet, where it's really hard to keep up with change. And I think a lot of that comes from a lack of abstractions. Foundation models are, what, five-ish years old? I started playing with them about a year and a half ago. It's like being back in the CGI days of web development, when we were all slinging Perl scripts as fast as we could to build e-commerce sites: the Rails and Springs of the world haven't shown up yet. So every change that happens affects your teams, because there's no abstraction there to hide behind. And I think the hard part of this is keeping up with that change. The way leaders need to think about this is that you need a human on your team whose job it is to literally keep up with this stuff, but then take what they're learning and apply it directly to the problems that you're having right now on your teams, which requires you to understand what problems you're having on your teams right now.

(04:51):

So it's interesting. I need to know where my problems are. I need to know how AI fixes those problems. And you're not just slapping an AI sticker on everything and saying, "AI will fix all things."

Matt Pacione (05:02):

That does seem to be what's happening, right? People just say, "Oh, AI this, LLM that." You just throw it in there and it's going to fix everything. And that's not what's going to happen. You can't just tack a new methodology or new idea onto what you've been doing throughout history. You've got to see how it will fundamentally change everything that you're doing.

Adam Furtado (05:24):

Yeah. What does falling behind actually look like inside of an organization? It's the conversation we're having all day long; there's just this general urgency around it. You mentioned having somebody to keep up with all the changes that are happening. Well, what does it look like, do you think? How would somebody who's watching this know that they're falling behind right now and that they should bring more urgency to some change?

Mike Gehard (05:48):

If you aren't using AI daily, you're already behind. I think that's the hard part: everybody's looking for the cavalry to ride over the hill. I heard this somewhere the other day, this idea of, here's the solution, the practice that you're going to use going forward. That cavalry's not coming. It's probably not coming for a while. So I think you need to be playing with it daily. It's a skill. It's taught me how crappy I am at communicating with other humans. Literally, if I can't explain it to the machine, I'm probably bad at explaining it to other humans. So if you're not playing with it on a daily basis, you need to start. And then again: I have problems. How do I apply this to my problems, not somebody else's problems? Because that's the other problem with using somebody else's abstraction: that abstraction was created to solve their problems, but does it solve your problems?

Matt Pacione (06:36):

And this was not something most of us chose to do. This is something that's kind of been forced upon us, for us to go out and say, "Okay, I'm going to actually make this change and do this today." Most of us are going to have to say, "I will go and learn this new technology. I'm actually going to have to dedicate time. It's like going back to school." We didn't all like to sit in lectures in school when we were growing up, but now it's, I need to make that change. I actually have to go out and be proactive about this. I can't just sit back and watch the show.

Mike Gehard (07:07):

Yeah. Because if you're waiting for someone to come, the people who do come will be light years ahead of you. And I think it gets interesting. Most of my career was in startups, and in govtech there is a barrier to entry. There's a moat there. Nobody's going to disrupt the US government; maybe another government might. But you still have to have that urgency, because you will fall behind. In a startup, there's someone knocking on your door already, and that person's probably using AI. You're hearing this idea of one-person unicorn companies. From the people I'm talking to, those companies are coming. Now, how do you translate that urgency into the govtech world?

Adam Furtado (07:49):

Yeah, I think there's urgency, certainly, just in a different way; it's maybe more visible in the place that you're describing. In our company and in our careers, we've always been pretty opinionated about our methodologies, the how behind how we work. In fact, the Mission OS video series is an example of that in a lot of ways. Mike, you said something previously that I keep coming back to when I think about this sort of thing. It's this idea that AI is a stress test on every methodology in use today. Could you walk the audience through that logic?

Mike Gehard (08:22):

Yeah. I could talk for days on this one. This one's been a big topic. So I've been doing extreme programming for 18-ish years now. I worked at Pivotal Labs, so I'm familiar with the practice: pairing daily, test-driving daily. And I think what's happening now is that the bottleneck in our processes used to be human fingers on keyboards typing in code. That was usually the bottleneck. A story comes in, and it takes a team of engineers X number of weeks. This is why everybody wanted estimates, and we always sucked at estimates because estimates are hard. That no longer exists. Whether or not you agree that an LLM generates good code, for some definition of good, it is generating code faster than any human ever could. Matt and I are dealing with this daily: it's slinging hundreds of lines of code that would take me hours to type. So that bottleneck moves.

(09:21):

You are no longer constrained by human fingers typing at a keyboard. That sits between two other processes: the downstream process of, does that code actually do what I want it to do? And the upstream process of, what am I building and why am I building it? So that bottleneck has shifted, and now you've got to go looking for the next bottleneck. My hypothesis is it goes either one direction or the other, or both. At that point, every practice you have needs to be reevaluated from first principles, because the one bottleneck has changed. And I think that's what we're all not ready for. We're all like, "Well, I'm just going to keep doing XP and I'm going to do TDD." I mean, I was listening to a podcast Kent Beck put out the other day. He's not sure if TDD is still relevant in this world.

(10:07):

And to many of us, that's blasphemy. So I think this is where it starts to challenge you, and you have to go back to first principles. You can't just say, "Well, we've always done this. Let's keep doing this."

Matt Pacione (10:19):

" Yeah. You really have to question how much emphasis have you been putting on the how in all that we're doing and all that we're creating most of the time as engineers, we enjoy the build, we enjoy what we're creating and what we're doing, as opposed to the why of like, who am I building this for and what do they need? What is the problem that I'm trying to solve for them? We tend to just sit behind the keyboard and screen and go do our own thing. And we want to focus on the how. And we really need to focus on the end user and the stakeholder.

Adam Furtado (10:53):

I love that. I love that that message is coming from you two in particular, the two folks in our company who are the most in it, in the AI and the technology, every day. The fact that you guys are singing the message about users and the mission outcomes needed, I think, is a real testament to the way that we work and the way that our industry can work. But I want to pull in a question that was pre-submitted that fits right into this. And the question was: how are teams measuring whether AI is actually improving outcomes and not just speed?

Mike Gehard (11:26):

Oh, this is going to be spicy. I love this question. My typical answer to every one of these is: how are you doing this with humans? You have to be really honest with yourself and say, okay, am I measuring this with humans? Do I have a process that says, when a story comes in, how quickly am I in production? DORA talks about this, and production is the only place outcomes matter. If you're not doing that with humans, why are you going to ask AI to do anything better? So I think the first thing you have to ask yourself is, do I have a reliable mechanism to measure this now? Because I need a baseline. What is my baseline for all of my DORA metrics? And then just run the experiment. We had a team, our Tracer team, run this experiment.

(12:11):

They actually took the same story, and they called it the John Henry experiment. One pair took the story and implemented it with two humans, and one pair took it and implemented it with an LLM, and they measured both. Now, that's kind of imprecise, but I think that's the key: you've got to be measuring it now. If you're not measuring it now, you can't measure the post-condition.
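
To make that baseline idea concrete, here's a minimal sketch of the kind of measurement Mike is describing: record when a story starts and when it reaches production, then compare the same DORA-style lead time across the two groups. The field names and sample data are hypothetical.

```python
# Minimal sketch: compare DORA-style lead time between human-only and
# LLM-assisted stories. All field names and sample timestamps are made up.
from datetime import datetime
from statistics import median

def lead_time_hours(started: str, deployed: str) -> float:
    """Hours from story start to production deploy (ISO 8601 timestamps)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deployed, fmt) - datetime.strptime(started, fmt)
    return delta.total_seconds() / 3600

# One record per story: how the work was done, when it started, when it shipped.
stories = [
    {"mode": "human_pair",   "started": "2026-03-02T09:00:00", "deployed": "2026-03-04T16:30:00"},
    {"mode": "human_pair",   "started": "2026-03-05T10:00:00", "deployed": "2026-03-09T11:00:00"},
    {"mode": "llm_assisted", "started": "2026-03-02T09:00:00", "deployed": "2026-03-03T13:15:00"},
    {"mode": "llm_assisted", "started": "2026-03-05T10:00:00", "deployed": "2026-03-06T09:45:00"},
]

for mode in ("human_pair", "llm_assisted"):
    times = [lead_time_hours(s["started"], s["deployed"])
             for s in stories if s["mode"] == mode]
    print(f"{mode}: median lead time {median(times):.1f}h across {len(times)} stories")
```

The point isn't the arithmetic; it's that without timestamps like these captured before the experiment, there's no baseline to compare against.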

Matt Pacione (12:35):

I would think, with one person especially, we're learning so fast right now. I can't tell you how much I've learned in the last six months to a year. We're just going crazy in that sense. So you can even look at yourself anecdotally and say, "What would I have been able to do, or how long would this process have taken me, a year ago?" The things that I've built in the last three months would've taken me, as one person, probably twice if not three times as long. I know that from my own experience, right?

Adam Furtado (13:08):

Yeah. Setting the technology aside for a second, I think this is such an interesting evolution of the place we've gone in all of our careers, from this real push to agile principles where we were really focused on the how. We got, I think, a little bit obsessed in our industry with the specific implementations of the way that we worked, always in service of getting the outcomes we want. But I actually think some of the things that you both are describing are forcing us to really think about the outcomes, because we can build the wrong thing so quickly and so often that it almost forces you to answer the right questions in some cases. So I think it's sort of fascinating. The title of this conversation was about upskilling and training and education, because I think that's the place where, as a leader at least, I feel the most pressure.

(13:54):

Traditionally, we had an approach to this, right? There'd be an industry shift happening. You could read about it and learn about it, figure out if it made sense for your business. You could find an expert, figure out how to build a curriculum and how to implement it. And you could test it with parts of your organization, see if it worked, iterate on it, and invest in that thing for the next six years. Clearly, things are changing so fast that this traditional way of teaching people things and learning from an organizational perspective might be thrown out the window. So that leaves me, I think, with two options. One of them is to find a new way to teach people and to learn within an organization. The second is to be paralyzed by it and just hope that when the carousel stops spinning, you're not too far behind to catch up.

(14:42):

Matt, an interesting thing about you: you come from an education background. You've been a technology educator in various forms for most of your career. I'm curious how you're thinking about this. As somebody who's learning in real time yourself and doing the digging in that you talked about, how do we think about this within our organizations, to spread this across big swaths of people?

Matt Pacione (15:02):

Yeah, for sure. I think the first thing that comes to mind is: how do you learn, and do you know how to learn? That is one of the biggest questions you have to ask yourself. We can create platforms that will help us learn the same way we've been doing it for the last five to 15 years, but is that the best way? With AI, with the tools that we have now, are there better ways to learn? So it's teaching and giving people exposure to all the tools that are out there and the processes. And bottom line, you have to think through: what do you want to learn? Where are your gaps? You have to come to the table and say, "I don't know something." If someone is going to learn something, they have to admit, "I don't know it." And that's humbling, right?

(15:46):

That experience of sitting before somebody and saying, "I don't know something, please teach me." And sitting before an AI to have it teach you something is also humbling: to know that there is this magician who can teach me these things, and I have to trust, or verify and validate, what's happening there. And when we think through what we're doing now, if the how really doesn't matter and the why is so important, we have to think through: how do I learn about who I'm actually impacting? Do you care about who you are influencing and who you are building something for? What do they need? Where are their problems and their bottlenecks? Are you challenging the ways that they've been doing things over and over again? So bottom line, it comes down to choice. Are you choosing to reach out, to learn where you're weak and where you need to grow, as well as what the users need?

(16:44):

What are they learning? Learn about them; put yourself in their shoes. It's like teaching students in a lecture format where they just sit there and, for the most part, don't want to be there, right? You need to figure out what they want to learn, figure out where they are, connect to the background knowledge they might've brought in, and get them to the point of saying, "Hey, I want to learn this": teaching them, giving them that passion to learn.

Adam Furtado (17:10):

Cool. And then Mike, I want to give you a chance to answer this as well. So internal to our company here at Rise8, we made a huge push to raise the skill levels with AI: both to raise the ceiling for the people who are really diving into this, and also to raise the floor of our organization. We actually implemented a four-week training program where all of our PMs and marketers and salespeople and administrative support, everybody, went through a training that you built, Mike, that really got them comfortable using these tools to automate their own processes and figure out how to be more efficient and effective. What have you learned from that experiment about trying to train these things in real time while everything is changing around you?

Mike Gehard (17:52):

Yeah. I think it comes back to defining what you want and who you're building it for. Engineers like myself who've always been product engineers have always wanted to ... I was always the crappiest technologist. I hated learning languages, but I always wanted to focus on the user to solve problems. I have a bachelor's in chemical engineering, so I'm used to using engineering mindsets to solve problems. When you're going into a marketing department, it's shifting their mentality that they have a problem to solve. They are the customer. How do they define their problem and then create a measure of success? What is good enough? And it's teaching them that feedback loop, because they traditionally don't think in that way, but our world is feedback loops. I want something, I do something, and then I have to validate: did the thing that the machine built actually do what I wanted?

(18:45):

So I think that's a big mindset shift for the people who don't tend to be technical, because if you look at the way these AIs work, they want feedback loops. They want to know when they're done. And if you can give them a measure of done, and this is where you hear about Claude going off for eight hours and building a C compiler, the definition of done is important, and the model will iterate toward it. But if you can't tell it when it's done, it's going to make stuff up. It's not going to build the right thing. And that's hard for some people.
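
Here's a minimal sketch of the "definition of done" loop Mike is describing, assuming a passing test suite as the machine-checkable done condition. The `ask_model` function is a stub standing in for whatever LLM your environment allows; everything else is hypothetical scaffolding.

```python
# Minimal sketch: iterate an agent until a machine-checkable definition of
# done (here, a green test suite) is met, or the iteration budget runs out.
import subprocess

MAX_ITERATIONS = 5

def definition_of_done() -> tuple[bool, str]:
    """Done means the test suite passes; failing output becomes feedback."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def ask_model(feedback: str) -> None:
    """Stub: send the failing test output to your model and apply its edits."""
    raise NotImplementedError("wire this to the LLM your environment allows")

for attempt in range(MAX_ITERATIONS):
    done, feedback = definition_of_done()
    if done:
        print(f"Done after {attempt} iteration(s).")
        break
    ask_model(feedback)  # the model revises the code using the failing output
else:
    print("Budget exhausted without meeting the definition of done.")
```

Without the `definition_of_done` check, the loop has no way to stop on success, which is exactly the "it's going to make stuff up" failure mode Mike warns about.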

Adam Furtado (19:16):

Yeah, that's great. I mean, I'm excited. Mike, you're doing a talk next week here at Ship Summit (I'll talk about that in a second) with Jason Frazier, somebody who's been in the product space for a long time. I'm fascinated to see how it goes, but I know you're talking about how user stories, and the way that we wrote them as product managers, have great application to prompting: taking the skills that we've used as product people and utilizing them. One thing that I found interesting, and I'd love to get both of your opinions on it: if we rewind six months, nine months maybe, I remember having conversations with product managers and designers and folks who were technical-adjacent who were like, "Oh my God, what does this look like for me going forward? This is like an engineering world now.

(20:01):

Do I even have a place here?" And the conversations I'm having today are much different, where I'm talking to engineers and they're like, "Oh my God, do I even have a place here?" It feels like the skills that we learned on the product side are clearly really valuable for communicating with AI, and the engineers are adapting the way they used to work. So I'm just curious how you all have thought about the evolution we're seeing in real time, from the way that we have worked for so long to how people will be most effective going forward.

Matt Pacione (20:36):

Yeah. The biggest thing I think of is that you're putting power into the hands of people who haven't had that power before. When we learned software for the first time, it felt very powerful. I could take nothing and put words in a computer and create something. That power is contagious. And when people who've not been in that position before get that sense, that I actually can be a creator and not just a consumer of somebody else's tech, that I don't need someone else to do it for me, I can actually do that: that is super powerful.

Mike Gehard (21:11):

Yeah. And with great power comes great responsibility, and we haven't practiced that. I think we've entered the age of personal software, where people can build software for themselves. A lot of the people I'm following are building software for themselves. Where we're going to find the interesting transition is: what does this look like when it becomes a team sport, with traditional team sizes? Back to your comment, Adam: what happens when a product owner can ship code to production? Do you still need engineers? Do engineers' roles change? I have a hypothesis that the engineering role actually shifts to enabling the whole team to deploy to production. The engineers build the factory; they're not building the software anymore, and you have to put guardrails in. So again, I think every team's got to figure it out based on their current process.

Adam Furtado (22:03):

Actually, I'm going to shift to questions from the audience here. So for the folks who are listening in, if you have any questions, please get those in. JJ Homan submitted one via LinkedIn. It's a little long, but I want to read it because I think the message and the question behind it are really important for our folks on the government side of things. He wrote, "When the coders are constrained by the platforms and tooling that the government is allowed to use, how do those who use AI to develop do it efficiently and as part of an overall delivery? For simplicity, if some code is developed with AI in a different place and somehow needs to be brought over into the main platform to integrate, how does the government enable that better?" I think the question behind the question here is really: if I'm a government person, all the things we were just talking about sound great.

(22:50):

Can I even do any of that? I don't know. What's it going to look like for me to be able to utilize some of these practices and have them be effective within the constraints that I live under? Have you all thought about that application at all?

Mike Gehard (23:03):

Yeah. I mean, we built a container that'll run in CUI environments behind firewalls. I think you just have to get creative. And I understand our government partners are a little more limited in the LLMs they can use, but yes, all of this does assume free access to foundation models. People will say, "Well, what about local models, and what about open-weight models?" Those are maybe six or eight months behind, and I think we'll eventually get to where I'm running a model locally and not shipping my code off to an LLM that lives in Anthropic's cloud. But we're using Bedrock right now: you can run Sonnet 4.5 on AWS Bedrock, which is IL4/IL5 compliant. So I think you've got to go out and find those things, and then convince somebody to let you use them. And if you can't, that's a hard place to be, because you are going to fall behind, and how do you keep up?
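
For the curious, here's a minimal sketch of what calling a Claude model through AWS Bedrock looks like, so requests stay inside your AWS boundary rather than going to a public endpoint. The region and model ID are assumptions; check which models are actually enabled and authorized in your own account and at your impact level.

```python
# Minimal sketch: invoke a Claude model via the Bedrock runtime Converse API.
# Region and model ID are assumptions; verify what's enabled in your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = bedrock.converse(
    modelId="anthropic.claude-sonnet-4-5-20250929-v1:0",  # hypothetical ID; check your model catalog
    messages=[{
        "role": "user",
        "content": [{"text": "Review this deployment script for obvious security issues."}],
    }],
    inferenceConfig={"maxTokens": 512},
)

print(response["output"]["message"]["content"][0]["text"])
```

In an IL4/IL5 setting, the same call would go through whichever GovCloud region and endpoint your authorization actually covers.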

Matt Pacione (24:00):

Yeah, and you have to put those guardrails in place. You have to have the security in place as well. Just like what we've done in the past: if a contractor has created something in their environment and then brought it over to the government, there have been checks and balances, there have been security checks. All of the same sorts of things happen. It's just that now, wherever the contractor has created it, where are you sending that information? Are you sending it to some API that's not guarded? You have to be careful of all of that, and you have to have those boundaries and that security in place. I think it's very similar to what we've done in the past.

Adam Furtado (24:35):

Sure. Yeah. In the conversations we've been in, there's just some exciting stuff we're able to do with government partners today, stuff I think we're at the forefront of, and it's getting everybody really fired up about the outcomes we're going to be able to put into production with those partners. Last question here. Matt, you mentioned earlier that what you've been doing would've taken you three times as long in the past. That implies the same output was done quicker, but the question is, are there also things you wouldn't have been able to do at all? Are there languages you didn't know, parts of the stack you weren't an expert in? Are there things you can do now with these tools that have upskilled you, I guess, in what you're able to accomplish?

Matt Pacione (25:27):

Yeah, for sure. I mean, have I learned something over the last three months, six months? Most definitely. As we're building, there's always this give and take we have to figure out in our minds: I would love to do these 10 things, but with the time constraint I have, I can only do these two or four. With agents and agentic workflows under your control, you can do way more. It's like having a team, whether that's five engineers working with me or five agents doing the same sort of thing. So I'm able to do a whole lot more, because one, yes, I am learning, but two, I have to know where I'm going. I have to have that vision, guide the agents, and show them where I want to go.

Mike Gehard (26:17):

Yeah. I mean, I'm doing platform work, and I always avoided platform work like the plague. I was like, "Let me just focus on the user." I call it Claude TV: you sit and watch the agent work and you're like, "Oh, I didn't know that Git command existed," or, "Oh, that's a new way to do that," or, "Ooh, that's a new way to think about that." So it's not just production; one of the tricks I've been using is having the LLM ask me questions. It's kind of interesting when it interrogates you and interviews you. You're like, "Oh, I didn't even think about that thing. Oh, there's an edge case that I didn't think about." So I think there's a give and a take here. It's leveraging it both as a tool to effect action in the world, but also, as Matt said earlier, as a way to say: I don't know.

(27:03):

Help me think through the process that I need to go through to solve the problem I'm trying to solve. But again, you have to be very clear: what problem am I trying to solve?
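
As one illustration of the "have the LLM interview you" trick Mike mentions, a prompt along these lines works with any chat model. The wording is just an example of the pattern, not a prescribed recipe.

```python
# Minimal sketch: a reusable prompt that asks the model to interview you
# before proposing a solution. The wording is illustrative, not prescriptive.
INTERVIEW_PROMPT = """\
Before writing any code, interview me about this task:
{task}

Ask one question at a time about requirements, edge cases, failure modes,
and the definition of done. Do not propose a solution until I say the
interview is finished.
"""

print(INTERVIEW_PROMPT.format(task="Add audit logging to the deploy pipeline"))
```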

Adam Furtado (27:13):

Yeah. So it's not really just speed; you're not just pumping up the Reeboks and running faster. It's an expansion of everything you've been able to do, which I think is really great. We're at time here. Thank you both for joining. This was a great conversation, and a lot of fun. Everything Mike and Matt just walked through, the methodology gap, the learning problem, the outcome shifts: these aren't just abstract ideas for us, and Rise8's doing something about it right now. I'm actually here in Park City, Utah, where we're prepping for Ship Summit, which we're hosting next week. It's a first-of-its-kind event and a real investment in our people and the broader govtech community, to learn these types of skills in real time. Day one, on Tuesday, is what I genuinely believe is one of the best single-day collections of technology and govtech talent you'll find anywhere in the country this year.

(28:00):

Mike mentioned Kent Beck earlier. He'll be on stage talking about extreme programming and its evolution with AI. Gene Kim is on stage three times with workshops and keynotes, plus John Cutler and Jason Frazier, alongside practitioners who have delivered real solutions inside government, including something we're calling Ship Stories, a series of govtech impact stories curated by former US Digital Service administrator Mina Hsiang. And then on day two, we're moving into an impact lab where we take everything we just learned and put it to work. We're building real products in partnership with the Utah Avalanche Center. So we're not just talking about AI skills, we're applying them. Our capacity's just about full, but if anybody wants to make a last-minute run out to Park City, we'd love to have you. If not, we'll share everything that we learn and hope to see you next year.

(28:48):

But this is Rise8, putting our money where our mouth is and we're really excited to see where this goes. Finally, last plug here. Join us back here in two weeks on April 10th. Barry O'Reilly will be joining us to talk about his new book released just this week called Artificial Organizations. It's kind of the next step to this conversation. How do you apply this within your organization and organizational design? Go buy the book, review it on Amazon. It's going to be a great one. We hope to see you there. And thanks everybody for joining us.