Agile Mentors Podcast from Mountain Goat Software

Mountain Goat Software's Agile Mentors Podcast is for agilists of all levels. Whether you’re new to agile and Scrum or have years of experience, listen in to find answers to your questions and new ways to succeed with agile.

Listen on:

  • Apple Podcasts
  • YouTube
  • Podbean App
  • Spotify

Episodes

Wednesday Dec 10, 2025

What can Agile leaders learn from the Marines? In this episode, Tanner Wortham joins Brian to share how principles of military leadership—like building authority into the trenches, experimenting under pressure, and prioritizing shared mission over ego—map surprisingly well to modern Agile teams.
Overview
In this conversation, Brian sits down with Marine Corps veteran and Execution Architect Tanner Wortham to explore the parallels between leading Marines and leading Agile teams. Drawing from both military and coaching experience, Tanner unpacks how the Corps’ “rule of three,” mission-first mentality, and obsession with experimentation mirror the best of Agile thinking.
They discuss how effective leadership empowers decision-making at the edges, why conflict shouldn't be avoided but navigated with curiosity, and how facing toward hard problems—rather than away from them—builds high-performing, resilient teams. Whether you're coaching a Scrum team or leading large-scale transformations, Tanner’s insights offer a fresh lens on what it really means to lead with agility.
References and resources mentioned in the show:
Tanner Wortham
What the Corps Calls Leading Marines Others Call Agility
#113: Influence Without Authority with Christopher DiBella
#135: Leading Without Authority with Pete Behrens
#132: Can Nice Guys Finish First? with Scott Dunn
Get the Agile Skills Video Library Use code PODCASTSKILLS for $10 off
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Tanner Wortham is a former Marine turned leadership coach who helps teams and execs cut through the noise, lead with clarity, and actually get things done. With experience at LinkedIn, Salesforce, and beyond, he brings a no-fluff, human-first approach to growth, agility, and real leadership.

Wednesday Dec 03, 2025

It’s not just about cool tools. Hunter Hillegas (CTO at Mountain Goat Software) joins Brian to unpack what it’s really like to build with AI—from hallucinations and context management to dev workflows, testing strategies, and where the humans still matter most.
Overview
This episode dives deep into the real work behind bringing AI into agile. Brian and Hunter trace the arc from early experiments to full-scale agents, sharing what it took to build responsibly on large language models (and what still keeps them up at night). They get into the weeds of context handling, trust and verification, dev productivity, and what makes a good AI coach actually helpful. Along the way, they explore how tools are changing—faster than most teams can keep up—and what that means for the future of learning, coding, and collaborating in agile environments.
References and resources mentioned in the show:
Hunter Hillegas
AI Tool Kit
Agile Skills Video Library
Mike's Better User Stories Webinar
#82: The Intersection of AI and Agile with Emilia Breton
#151: What AI Is Really Delivering (and What It’s Not) with Evan Leybourn & Christopher Morales
#161: Test-Driven Development in the Age of AI with Clare Sudbery
#166: AI Isn’t Coming for Your Job, But It Is Joining Your Team with Dr. Michael Housman
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Hunter Hillegas is the Chief Technology Officer at Mountain Goat Software. With over 20 years of experience in software development, product ownership, and team leadership, he leads the creation of tools like the AI Toolkit and Team Home to support effective, engaging learning experiences. Hunter lives in Santa Barbara, California, with his wife and their dog Enzo.

Wednesday Nov 26, 2025

It’s not a full episode this week—but it might be the one your heart needs. Brian Milner shares what he’s truly grateful for this year (spoiler: it’s not a new tool or framework), reflects on the human side of agility, and invites you to join him in a quick pause before the final sprint of 2025.
Overview
In this special solo episode, Brian Milner pauses to reflect on what he's most grateful for this year—and invites you to do the same. From a renewed focus on the human side of agility to the evolving nature of our roles as leaders and practitioners, this heartfelt message is a reminder that change isn’t just necessary—it’s powerful. Brian also shares his appreciation for the Mountain Goat Software team and a behind-the-scenes shoutout to Agile Mentors’ own Laura Kendrick for making the show possible. Short, sweet, and soul-centered, it’s a moment to breathe, acknowledge growth, and say thanks before we sprint toward the end of the year.
References and resources mentioned in the show:
Five Lessons I’m Thankful I Learned in my Agile Career by Mike Cohn
#123: Unlocking Team Intelligence with Linda Rising
#125: Embracing Gratitude in Challenging Times with Brian Milner
#134: How Leaders Can Reduce Burnout and Boost Performance with Marcus Lagré
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.

Wednesday Nov 19, 2025

Consultant and collaboration expert Evan Unger joins Brian to share practical tactics for leading more engaging, effective meetings that actually get results (and don’t drain everyone’s will to live).
Overview
In this episode of the Agile Mentors Podcast, Brian Milner welcomes longtime consultant and facilitation expert Evan Unger to dig into one of the most persistent workplace headaches: remote meetings.
With decades of experience helping leaders shift from “presenting at” to true collaboration, Evan shares how a simple POPRA framework can change the game, why simultaneous chat might be your new secret weapon, and what leaders get wrong when they step into the (virtual) room. From deprogramming the HIPPO effect to humanizing remote collaboration, this conversation is packed with real talk, useful tools, and just enough snark to make you want to fire up your next Zoom meeting with purpose.
References and resources mentioned in the show:
Evan Unger
Collaborative Leadership: A Virtual Immersion™ Program
#138: The Bad Meeting Hangover with Julie Chickering
#142: Communication Patterns Keeping Your Team Stuck with Marsha Acker
Agile Skills Video Library Use code PODCASTSKILLS for $10 off
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Evan Unger is a collaboration expert and consultant who’s spent over three decades helping leaders turn messy meetings into meaningful progress—even in a post-pandemic, Zoom-fatigued world. As managing partner at Schwartz + Associates, he now trains leaders in the art of virtual facilitation and high-stakes collaboration, so teams can stop surviving meetings and start making decisions that actually stick.

Wednesday Nov 12, 2025

AI is already changing how we work—and how we work together. In this episode, Dr. Michael Housman joins Brian Milner to explore how AI is reshaping team collaboration, decision-making, and the very structure of Agile teams.
Overview
We keep talking about AI like it’s something that’s coming. But as Dr. Michael Housman points out, it’s already here—embedded in our tools, shaping how we collaborate, and quietly shifting the makeup of our teams.
In this episode, Brian sits down with Dr. Housman, CTO, keynote speaker, and author of the upcoming Future Proof: Transform Your Business with AI or Get Left Behind, to talk about what AI is already doing in Agile environments. From how it’s helping Scrum Masters level up decision-making to how it might literally join your org chart, they dig into what’s helpful, what’s hype, and what leaders need to pay attention to right now.
References and resources mentioned in the show:
Dr. Michael Housman
#82: The Intersection of AI and Agile with Emilia Breton
#99: AI & Agile Learning with Hunter Hillegas
#151: What AI Is Really Delivering (and What It’s Not) with Evan Leybourn & Christopher Morales
#165: Can Your Product Process Keep Up With AI with Cort Sharp
Agile Skills Video Library Use code PODCASTSKILLS for $10 off
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Dr. Michael Housman is the author of Future Proof: Transform Your Business with AI (or Get Left Behind) and the founder and CEO of AI-ccelerator where he helps organizations leverage advances in artificial intelligence. He is a seasoned technologist with over 15 years of experience architecting AI platforms in sectors ranging from hiring and fraud detection to customer communication and real estate lending. His research has been published in a variety of peer-reviewed journals and profiled by such media outlets as The New York Times, Wall Street Journal, The Economist, and The Atlantic. Dr. Housman received his A.M. and Ph.D. in Applied Economics and Managerial Science from The Wharton School of the University of Pennsylvania and his A.B. from Harvard University.

Wednesday Nov 05, 2025

If AI is speeding up how fast we can ship, what’s slowing teams down now? Brian and returning guest Cort Sharp dig into the emerging friction between AI-assisted development and the still-slow art of product decision-making.
Overview
With AI accelerating software delivery, it’s no longer the developers dragging their feet. It’s the backlog that’s backing everything up. In this episode, Brian and Cort tackle the big shift: as coding becomes faster and easier, the real challenge becomes knowing what to build, why, and whether it’s worth it.
They talk about feature bloat, the myth of productivity, the “good enough” curve, and why product owners are quietly becoming the most critical role on agile teams. Plus: short sprints, fake one-day sprints, and a healthy dose of “what even is a Sprint, anyway?”
If you're feeling the tension between building faster and deciding smarter, this convo’s got your name on it.
References and resources mentioned in the show:
Cort Sharp
#104: Mastering Product Ownership with Mike Cohn
#3: What Makes a Great Product Owner? With Lance Dacy
#164: Why Innovation Efforts Fall Flat with Tendayi Viki
Get the Agile Skills Video Library Use code PODCASTSKILLS for $10 off
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Cort Sharp is the Scrum Master of the producing team and the Agile Mentors Community Manager. In addition to his love for Agile, Cort is also a serious swimmer and has been coaching swimmers for five years.
Auto-generated Transcript:
Brian Milner (00:00)
Welcome back Agile Mentors. We're here for another episode of the Agile Mentors Podcast. I'm with you here as always, Brian Milner. And today I have back the one and only Cort Sharp with us. Welcome back Cort.
Cort Sharp (00:11)
Hey Brian, thanks for having me.
Brian Milner (00:13)
Yeah. Cort and I were chatting just in between engagements about things we've got going on. Cort's coaching a lot recently, and I've been coaching a lot recently as well. And so we've been kind of sharing stories and talking about some of the things we've been experiencing. And you came across something really interesting recently that, when we talked about it, I thought might make a good topic. So help us out. What was that that you came across?
Cort Sharp (00:42)
Yeah, so I've seen this idea pop up a few times actually on LinkedIn specifically, but I've seen it trickle out into other areas within the coaching that I've been doing recently, but also just in other pieces or parts of the internet as well. And it's this idea of like with AI being brought into organizations, brought into companies, helping out developers so much that AI has actually lowered that barrier for the programming side of stuff, the programming side of the development side of things, that the new blocker that is currently emerging, so the new piece that's been slowing everyone down now, is actually the product management side of stuff itself, which I thought was just so fascinating because I've done a little programming, definitely more in the product management side of things now, but I kept seeing this pop up and I was like, man, I would love to just hear, you know, Brian's thoughts about this and the community as a whole, everyone's thoughts about this a little bit here too, but I have my own thoughts, but just quick little immediate reaction to that idea there, Brian. How does that make you feel? What do you think of that?
Brian Milner (01:51)
Yeah, I actually have been thinking this was coming for a while. I don't have this prepared, so please don't get me wrong in this. I know I always say data or it didn't happen. But there are three studies that I found at one point that were trying to determine the number of features in just your average software project that were rarely or never used. And it was three separate studies spread out over years. And one of them was like 48%. That was the low one, was like 48%. Then there was a middle one that was 64. And then there was another one that was more recent that said like 80%. And I mean, think about that, you know. Even if you take the low end, 48, let's just round it up to 50 just to make it easier to have the conversation. But let's just say out of those three studies, we say it's 50% of features that people are building are things that people rarely or never use. Now I get it that there are some rarely used features that are essential, right? Like admin functions and things. You may not use those all the time or it may not be a huge swath of users that uses that, but you have to have them. So set those aside because that's not 50% of what's being developed, right? And I think if that's true, if we even like go on the low end of that and say that it's closer to 50%, then that's an awful lot of productivity that's being lost. Not to mention just money and energy and effort of developers to build stuff that no one cares about. Those studies were all prior to AI. So let that sink in, right? If those are prior to AI and we were seeing, at the low end, 50%, you know, across those surveys, of things that no one was using. Well, that's where I've been kind of forecasting this to say, if AI is speeding up our process to build things, the actual development of things, then what's going to become painfully obvious very quickly is that the bottleneck isn't developers. And, you know, my point from saying that in classes is to say it's never been, right? It's not been developers that have been the bottleneck to us being more successful. That's where the focus has been. But I don't think that was correct. And I think that the correct area to put it on is the product side. And if that's true, right, I know I'm doing a lot of leaps here, but if that's true, if it is the product side, well, I think that what that really translates to is the discipline of product management, of being able to recognize what's valuable.
Cort Sharp (04:50)
Mm-hmm.
Brian Milner (04:54)
to your customers to deliver that, to close the loop and verify that that's actually what was needed and to measure the impact of those things, that discipline, I think, becomes just all the more essential because that stat tells me there's a lot of bad product management going on. So that's my initial thought. That's a lot of thoughts, but that's my initial thought when you said that. What about you? What do you think about that?
Cort Sharp (05:19)
Right there. I'll share my thoughts, but I do want to harp on, or just go back to, your first initial one, the callback to those studies there. When you first threw those out, because I've seen similar studies where about 50% was kind of it. I haven't seen those studies that say, like, you know, what was the last one you threw out there on the high end, like 80 something percent. ⁓
Brian Milner (05:39)
Yeah, actually I remember, so I remember two of them. The 64% one was from a group called the Standish Group. There's been some question about their methodology in that one. I haven't seen the methodology of the 80% one, but it was a group called Pendo that did that one. And I don't remember the 48% one. That's just off the top of my head.
Cort Sharp (06:01)
Sure. But that 80 % one though, that one sticks out to me because as you were going through it, I was like, okay, well, I have Google Docs open right here just for some show notes or something. Just make sure I ask the questions that I'm supposed to ask or I want to ask. And I thought, wow, I'm looking at the menu bar right here. I use maybe, two or three of these consistently. And there's like 15 options up here. yeah, I could absolutely see a large majority of features that a product has that go widely unused by the vast majority of its users. And I think that poses the question then is, do we wanna go down the path of having one product be really good for, or like, really good at one thing and then kind of OK at everything else. The thing that always comes to my mind in this, and I've been going down this rabbit hole of kind of digital minimalism, is like the cell phone, right? Where it's a really great communication device. OK camera, kind of OK video, kind of OK speaker if you want to use it once in a while. It's kind of OK at browsing the web or doing some other things on there.
Brian Milner (07:05)
Yeah.
Cort Sharp (07:21)
Is it worth making those products that have an okay aspect to them on these other things that, you know, some people like to use, but not everyone will use all the time type deal thing, which is a totally different discussion here. But that's kind of where my memory went of like, okay, that 80% plus isn't actually all that surprising to me. I would, I would probably throw out there, you know, for the vast majority of programs that I use, maybe, aside from my banking
Brian Milner (07:35)
Right.
Cort Sharp (07:48)
my banking apps, you know, I don't use, I probably only use 10 to 15%, maybe 20% of the total features in there. And it is such an interesting point to the productivity side of stuff of, okay, are we just being productive for the sake of being productive? Is it actually being productive? Are we just working for the sake of working? So yeah, just harping on that a little bit.
Brian Milner (07:50)
right. Yeah, yeah, I mean, I agree. And I kind of have a similar response. And I think that there's, you know, the good enough argument, right? ⁓ Sometimes people take exception to that and say, well, why would we be okay with only doing something good enough? Well, it's not about quality, right? It's not saying that the quality of what you do is good enough, but it's saying that the...
Cort Sharp (08:22)
Mm-hmm.
Brian Milner (08:41)
the amount of functionality is good enough. And I think your example of the cell phone is a great one because, you know, I'm old enough that I remember before, you know, that was the main way that people took pictures. You know, when you had the little flip phones and stuff, the quality in those was not very good. And so you would have other digital cameras that took higher quality photos. But the reason that it won out and you started to just see more and more pictures taken from a phone, even though they were lower quality, was because you always had your phone with you. And so there's sort of an extent to which you would say, how badly do I want to carry around an extra device that's just for taking pictures, even though it takes better quality pictures? Is the quality that I'm getting with the phone good enough? And there was a tipping point there, right? There was a certain point where it won out and the quality of what was on the phone was high enough that people said, yeah, I don't need a separate digital camera anymore. This is good enough for what I need. And I think that that value curve is very similar across any product. There's a certain level that, when you add features, it's a steep value curve. But after you've added those key core things, then it starts to tail off. It starts to flatten. And in that flattening, it may still be going up, right? But the effort that it takes to deliver something is not the same return on that investment of effort, right? Early on, it's huge, you know, that effort creates a spike in value. Later on, that effort creates a small little spike in value. At a certain point, that's where they talk about trimming the tail. At a certain point, that's what they mean by it, is that value curve has gone past that point where now it's flattened and we're incrementally adding small little things, but they're not valuable enough to justify the effort that it's taking to build them. Now, will AI change that? I don't know, right? Because if we have a bank of AI programmers, I don't know that it actually changes it because we still could have that bank of AI programmers doing something else instead, you know? ⁓
Cort Sharp (10:48)
Hmm. Right? Right? So it's figuring out that value proposition side of stuff. Yeah. Yeah.
Brian Milner (11:05)
Right, right. The impact, actual, you know, how much do people care about this being there? And, and at a certain point, you know, we had a podcast recently where we talked about this, just at a certain point, there's a, an end of life, right? At a certain point, you have to deliberately say no to something and say, you know what? This product has done all it's going to do and we'll support people that still use it up to a certain point. But at a certain point you say, no, it's better to have a new product now, where that value curve now starts to get really big again. So yeah, I mean, from an AI standpoint, I think it does make an impact because it kind of just makes it more apparent where that problem is. ⁓ And that's why I think I tell all the product owners that come through classes, I think product owners are poised to become highly impactful.
Cort Sharp (11:46)
Mm-hmm. Mm-hmm.
Brian Milner (12:00)
in their organizations in this AI era. Because if you can refine your craft to a level to where the things that you are producing are all a lot of value, right? All creating a lot of value, then now we have the productivity to spit out more and more of that stuff. And if your side of it's taken care of as well, then everything that we're producing is now producing a lot of value.
Cort Sharp (12:29)
Right. Right. I think it opens the door for programmers, developers. I'm not just going to say programmers because I know AI can help out in every aspect of the development process. But I think it opens the door to developers, not only just being more productive, but also just being able to experiment with new things more, more readily, more easily. Right. And we can, we can kind of simulate some of what our customers might want. Right. If we can build a really great persona. I know you've done this in a class recently, Brian. I'm doing a similar thing and just saying, look, let's build out this persona using an AI tool that we can use and create basically an AI agent and say, here you go. This is my ideal customer. Here's my product. What pieces or new features of my product can I focus on in order to deliver higher value to this customer? Which is exactly what a product owner does. So I totally agree with you there, Brian, of saying, yeah, the role of the product owner is about to become one of the most valuable roles in an organization, in understanding: How do we deliver value to our customers? What do our customers even want? Right. Starting there. If you can build all the cool things you want, if your customers don't use it, who cares? Right. So many examples of that, what I called out earlier, but so many examples of that of like, if you build stuff just for the sake of building stuff, is it worth being built? And I think that's more so the question that we're gonna shift towards within our development cycles of how do we know that this is worth being built? And what quick feedback loops can we start going down in order to get that?
Brian Milner (13:58)
Right. Yeah, I used to always like to quote this, you know, everyone's always heard the phrase, you know, if a tree falls in the woods and no one's around, does it make a sound? And I always equate that to, you know, software as well. If software is built and no one uses it, was it really built? You know, and I know I've been on the end, the bad end of that in the past where I've worked on things with development teams that
Cort Sharp (14:28)
Yeah. Yeah.
Brian Milner (14:37)
We've worked long and hard on something only to have the rug pulled out from underneath us and for management to make decisions and say, no, we're not going to do that thing. And that's a horrible feeling. There's nothing worse. no one out there wants to, I mean, go back to my stats, 50%. Nobody in software development would feel good about themselves if they said, hey, 50 % of the stuff you've worked on, no one ever saw. Like that's not a warm fuzzy feeling. ⁓
Cort Sharp (15:06)
Yeah, could you imagine building a car, building this awesome, incredible car, and then you're ready to roll it off the factory line, and then all of a sudden it gets cut in half and that's what gets delivered, because that's all that people use, right?
Brian Milner (15:18)
Right. Right. Or you work overtime on the engine going from zero to 60 and you find out that this is just going to sit in a garage. It just doesn't make you feel good because that's not what it's been built to do. ⁓ So yeah, I think that you're absolutely right. We have to focus on the discipline of knowing our customers, knowing what they want.
Cort Sharp (15:24)
Mm-hmm. Yeah. Right, right.
Brian Milner (15:44)
And then checking and asking them, did we actually deliver what you needed? ⁓ And it's funny, I was talking with someone this week about this and it's amazing to me the number of times I've talked to people in the product area that will build things. But when you ask them, why did you decide to build that? You had a whole host of other things you could do.
Cort Sharp (15:51)
Mm-hmm.
Brian Milner (16:10)
What made you decide to do that instead of the other things? And I'm always shocked at the number of times I get blank stares or just no response at all, because it's a great thing to do, right? Well, you're asking me? Don't ask me. You should be able to say that, right? You should be able to back it up and say, yeah, here's the research behind it. Here's the market study. Here's the business case. We ran these tests, and these tests showed this level of interest. You have to know what it is you're trying to do first. And if you don't know what it is that it's intended to do with your customer, then how do you know whether you actually succeeded at it?
Cort Sharp (16:41)
Right. Yeah, absolutely. One thing that comes out of that is like, how much of that data do you take and how much time do you spend on gathering that data? I've heard a phrase recently and it goes along the lines of: be data informed, but not necessarily data driven. Where we want to use data to inform our decisions, but we don't necessarily want to gather all the data in order for it to be, 100%, we're not going to make a decision unless we have this data point to guide our decision or drive our decision. Yeah.
Brian Milner (17:27)
Yeah. Yeah, no, yeah, you know, I think that the way to think about it properly is bets. You know, that you are making bets in different areas and you don't make a bet, you don't wait to make a bet until you're 100% certain. Cause you're never 100% certain on a bet, right? There's always a percent chance that it could go one way or the other. But what you try to do is, you know, make informed bets. Or maybe that's not even the best analogy. Maybe it's more like an investment. You know, when you make an investment in a stock, it's not just a pure bet because it's not a flip of the coin. Right. If you do your research and you know enough about the company, then there are better investments than others. And
Cort Sharp (18:02)
Mm-hmm.
Brian Milner (18:18)
I think that's the way we should look at our features and our products is to say, this is an investment in our company. So I want to invest wisely. And you wouldn't be very smart, I'll put it this way, to have an investment strategy in the stock market of just pointing your finger at something and say, hey, I'm going to spread out my investments over these 10 companies that are just random companies.
Cort Sharp (18:30)
You
Brian Milner (18:42)
because one of them is gonna hopefully turn out to be successful. You're not gonna succeed, right? But if you research the 10 things that you're investing in, if you kind of know the history, know the trend line, know where the forecast is, all right, well, this one has a strong chance I'm investing here. ⁓ That's how you're successful. And we don't seem to always do that with our products.
Cort Sharp (19:02)
Right. Yeah, I've kind of tied that back into our products and a conversation I had with a product manager, and they weren't a product owner or a project manager, but a product manager, gosh, three months ago or something like that. Small company, very small company. I just knew the guy from school and was talking with him and he goes, yeah, we feel like two week sprints is too long. We even feel like one week sprints are too long. We're trying to shoot for one day long sprints. And my question back to him was, okay, why, first of all, why would you want to do that? And he goes, well, because AI just allows our developers to be so much more productive and do more things. I'm like, okay, I could buy that. But my second question to him, which goes along the lines of what you were talking about here, because getting that feedback, getting that data, let's be data informed, not data driven, was how do you make decisions on, how do you give feedback to your developers? How do you make those decisions on, yeah, this is the highest priority today versus yesterday? Is the market shifting that quickly that you have to make those decisions? And let's say it is, let's imagine we live in that world right now. Some of you probably do, I know someone out there probably does, but how do you do that? What tips would you give, Brian, to this guy about, yeah, let's drive the decision making forward and let's give the feedback faster if our development team is able to actually deliver a fully featured feature. How do you give them feedback on such a short timeline?
Brian Milner (20:44)
Yeah.
Cort Sharp (20:47)
I see teams all the time struggle with saying, yeah, we get good feedback every two weeks with our sprints. I've seen teams be like, yep, two weeks is no problem for us. We get good feedback. We're able to move forward. We're able to make decisions, be data informed, and move forward that way. So we cut our sprints down into one week. I think one week is probably like the, in my mind, at least in my experience, is kind of that. lower end, the lower lower echelon, I guess, of the ability to provide meaningful feedback and meaningful delivery stuff.
Brian Milner (21:16)
Yeah. Well, it's also about, can you create something that is of meaningful value in that timeframe? Because our product increments should be valuable. And that's what's probably going to be the blocker for most teams going to a day-long sprint or so is because, yeah, we can't produce something valuable enough in just a day. It takes us multiple days. ⁓
Cort Sharp (21:33)
Yes. Right. Right.
Brian Milner (21:46)
In general, I mean, I would applaud it in general because I think the shorter time span is generally better. I generally have more of a problem with people who want to go the other way and be too long, you know, and do like a month long sprint. So I would much rather have a team that wants to do a day sprint than a month long sprint. But, I mean, the questions I'd ask about a day long sprint is, can we produce something meaningful within the day? And maybe the answer is yes, right? Maybe across the team, there's enough work and maybe the work is small enough, right? That's really what it would take, is the breakdown of the work is small enough that they can actually get stuff out that's valuable within the course of that day. So are they able to produce value in a day? It might get to feel a little ridiculous as far as the meetings are concerned, right? Because that's one of the considerations when you try to choose the length of your sprint is, how often can I have these meetings? Take, for instance, just the sprint review, right? We need important stakeholders at the sprint review. Can you have important stakeholders every day? If not,
Cort Sharp (22:48)
Right.
Brian Milner (23:01)
If what you're hearing is no, that cadence is not wide enough, our stakeholders are too busy, they can't come every day, if that's the case, then you might want to consider a longer sprint. That doesn't mean you have to wait on delivery. And I think maybe that's something I'd ask them as well is, are we confusing the sprint length with delivery length? Because you can deliver every day. You can deliver multiple times per hour. There's nothing that says in Scrum that it's tied in any way, shape or form to your sprint length. And if the intention is to just release things more often, then absolutely, right? If your system is set up to do that, it doesn't matter if you have a week long or month long sprint. If you can deliver things every day, it's a much better process, because back to your original point about feedback, can you get meaningful feedback? Well, if we're delivering every day, we're going to get more meaningful feedback because we're not only getting feedback from just the internal stakeholders, but we're getting it from external customers. ⁓ Let's just say if we have a week long sprint and we're delivering every day, we have feedback from actual customers by the sprint review.
Cort Sharp (24:06)
Right.
Brian Milner (24:14)
that would be an incredible position to be in, to be able to say, yeah, we've released these 10 things this sprint, and here's what the customers are already saying about it. We got this feedback, or this one has generated this much support, so now we have tickets to kind of handle that. Yeah, it's in general a good thing. I'd want to make sure on those other areas to make sure that it's not being confused maybe with something else.
Cort Sharp (24:37)
Yeah. Yeah. Just understanding the definition of what a sprint actually entails. Through that conversation, it turned out they were on the, you know, we just want to deliver every day and, you know, we have our sprint review at the end of the week and whatnot. And I'm like, okay, so you're not having day long sprints. You're doing a week long sprint. Yeah. Yeah. Yeah. We laughed about that a little bit, but, yeah, I think the
Brian Milner (24:50)
Yeah. So they're doing week long sprints. Yeah. Yeah. No, that's great. I applaud them on that. That's a mature thing to do. And if your team can get to that stage, it's only because you've invested heavily in automation and DevOps and those kinds of things. It takes discipline to be able to do that. And I'm sure they were advocating it to you because they saw the benefit of what it provided them to release more frequently, which... is an admirable thing.
Cort Sharp (25:25)
Yeah, 100%. They were like, man, this is awesome. I know you're in this space, Cort. Like, let's talk about this. And yeah, it was a great conversation. It was a lot of fun. But they were, yeah, they were just kind of confused with what it actually meant to hold a sprint. I think they also heard the term Kanban for the first time not too long ago. And they're like, this is the same thing, right? We're sprinting in Kanban, just like we sprint in scrum. And I'm like, I...
Brian Milner (25:39)
Yeah. Ha ha ha.
Cort Sharp (25:50)
No, a little different, slightly.
Brian Milner (25:51)
Yeah, not quite. Not exactly the same. Close cousins. Yeah. I mean, back to our original topic here, I mean, I think as far as AI, we talked about that with product management. And as far as the Scrum world is concerned, I am very interested to see how this
Cort Sharp (25:54)
Yeah.
Brian Milner (26:11)
kind of upends the cart a little bit as far as our teams are concerned. I don't think that it, at least from my own perspective, and I could be proven wrong, I don't see it as destroying the team. I don't see it as a complete re-imagining of the process. I think the process still holds. The question is just what does that team look like? Previously, we have a Scrum Master, a product owner, and then a set of developers. Well, I would imagine teams would probably have fewer developers because they can boost their own capacity by using AI copilots and other things to help them generate code faster. So maybe a team that previously was eight people is now a team of four or five. And maybe that makes us reimagine a little bit about Scrum Masters and product owners and whether we need them to more frequently be across a couple of teams rather than just a single team. Yeah.
Cort Sharp (27:12)
With that, this question just popped into my mind and you got into it a little bit there with like, do we need our Scrum Masters or product owners to be across multiple teams? What would your ideal be, let's say AI takes off, you're in charge of organizing 10 teams, right? And you have all the people that you need. So you can have as many product owners, as many Scrum Masters as you need. And we want to have our developer count be, you know, five developers per team or three developers per team. Let's try to go down that path of saying we're small, right? We have AI that allows us to accelerate the speed of our development per developer. Right. So, would you have one product owner per team and then have the Scrum Master float around, or would you have the same number of Scrum Masters as product owners?
Brian Milner (28:04)
it would greatly depend, because I just different scenarios might require different things. in general, then I'd probably, I'd probably match them up as much as possible. because in general, I think there's kind of similar demands on both. They're different, but similar volume of demand. the interesting thing to me is, the volume of work that can be created by a team of eight developers or so right now, if that same volume of work can be generated because each developer now has the tools that their abilities are enhanced, again, I don't see it as replacing, I see it as enhancing, right? If... If they can do that and they had eight, now they can do the same volume of work with four. Well, it's not reducing the volume of work for the product owner, right? Because the product owner still needs to manage a backlog and prioritize and stakeholders and customers, right? That's not going away. The work from the Scrum Master, I think obviously changes a little bit because
Cort Sharp (28:58)
Right.
Brian Milner (29:10)
While it's one thing to try to manage team dynamics and get them to high performing levels when it's eight people, it's a lot more individual focus when it's four. So that's why I would say for a Scrum Master, maybe it does become more viable to be on a couple of teams because we're not contributing to the product. In general, we're not building things. Or maybe that becomes the new mode as well as the Scrum Master is more of a hybrid role with a combination of them and developer. I don't know. ⁓ I think time is going to tell that over the next few years.
Cort Sharp (29:43)
Right, right. Right? Where my mind went with that was, I would much rather have two teams of four than one team of eight, well, developers, right? Two teams of four developers that are as productive as my one team of eight each individually. and instead of kind of cutting the head count down, so to speak, or reducing head count, I'd much rather reconfigure the way that my teams are organized right now in order to
Brian Milner (29:56)
Yeah. Yeah.
Cort Sharp (30:13)
Utilize AI and I don't want AI to be replacing my developers. I want them to be I want it to be helpful to them I want it to augment their abilities and enhance their abilities like you were saying and in my mind if you know if I was running a company ⁓ We all think we all think we're in that armchair, right? We're all sitting in the armchair saying if I could make all the decisions for a day. What would I do? ⁓ In my mind, I would say okay
Brian Milner (30:28)
Yeah. Yeah.
Cort Sharp (30:38)
I don't view this as a, I can replace my development teams. I can instead effectively, let's call it double, you know, I get twice the productivity out of one developer with AI versus one without. I could double the amount of deliveries I get. I could double the features that I produce, that my teams produce, or along those same lines, you could probably figure out a way to cut down the delivery timeline.
Brian Milner (30:54)
Hmm.
Cort Sharp (31:07)
and cut it down in half, which goes back straight up to that top of the top of the hour question that we were talking about of product management is the roadblock. It's the bottleneck there to decide how do we get this sooner? How do we get these feedback loops quicker? Right. So.
Brian Milner (31:25)
Yeah. And to expand on that point, right? I mean, if you have two teams of four, you know, and that one team of four produces the same volume that previously a team of eight would do, now I've got two teams. I'm doubling the volume that I can actually create. So to your point, there are some who would look at that as, I can just lose four developers. And to them, here's what I'd say. Imagine this scenario, two companies, right? And both these companies, they're competitors. These companies have the same exact situation happen to them. AI comes on the scene, AI enhances the productivity of their development teams. And one company says, hey, I can lose four developers and have the same level of productivity as I have today. So four people get pink slips, right? They maintain the same level of development that they have today. The second company says, hey, I can get twice as much done. So they start expanding the number of things they can produce. And since, assuming their discipline is in shape, they're producing things that people actually care about. Which of these two companies is going to win? It's gonna be the second one.
Cort Sharp (32:33)
Mm-hmm.
Brian Milner (32:40)
It's going to be the one that actually can now deliver more value to the customer. So I would not jump to that conclusion. And I don't think that's necessarily going to be a successful company that jumps to the conclusion that, I'm just going to slash my budget for developers because now I can get the same volume with less people. yeah, but your competitor is going to have double the volume.
Cort Sharp (33:03)
Right.
Brian Milner (33:08)
with the same number of people and why wouldn't you do that instead?
Cort Sharp (33:11)
Right, totally. I totally agree with that. Part of me is really excited to see the studies that come out and say, here's the differences between these two companies in a similar space. One reduced their development teams and replaced them with AI, and one enhanced their development teams with AI and didn't replace anyone with AI. And I'm just super interested to see the difference in... valuations, in productivity, in releases, in whatever it is, right? And I'm going to try to see if there's anything out there right now because... ⁓
Brian Milner (33:45)
Yeah, well, this is my call out to everyone listening to you, right? Like if there's researchers out there, go research this and ⁓ let us know. Or if you're in the middle of researching it, please let us know, because I'd love to see that study as well.
Cort Sharp (33:52)
Yeah. Yeah, very fascinating, right? ⁓ Well, awesome.
Brian Milner (34:00)
Yeah. Well, this has been great, Cort. I think this is a great topic and, you know, we've gone a little bit past our time, but it's one of those deep topics we could talk about for a long, long time. And, you know, truth of the matter is time will tell. Like, this is just, we're on that edge of the frontier where no one can really say a hundred percent. We have to see how things kind of play out and take it from there.
Cort Sharp (34:27)
Yeah, absolutely. I couldn't agree more, Brian. I think this was a great topic. Thanks for taking the time to chat with me today. You've got me a little more to think about now. So thanks for that.
Brian Milner (34:36)
Yeah, absolutely. Thanks, Cort.
Cort Sharp (34:41)
Thanks, Brian.

Wednesday Oct 29, 2025

Tendayi Viki joins Brian to unpack the difference between doing innovation and delivering value, with practical takeaways for product folks, innovation teams, and anyone who wants to stop spinning their wheels.
Overview
Innovation theater. Experimentation theater. Value that never quite materializes. In this episode, Brian Milner sits down with Tendayi Viki—author, strategist, and partner at Strategyzer—to talk about why so many organizations look like they’re innovating… but aren’t.
Together, they dig into what real innovation looks like (and how to measure it), how to escape the trap of cool ideas with no customer value, and why experiments only matter if they lead to decisions. You’ll also learn how to spot the difference between a small bet and a large leap, and what it actually means to “be a pirate in the navy.”
References and resources mentioned in the show:
Tendayi Viki
Tendayi’s Books
Get the Agile Skills Video Library Use code PODCASTSKILLS for $10 off
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, host of the Agile Mentors Podcast, and a trainer at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Tendayi Viki is a globally recognized innovation strategist, author, and partner at Strategyzer, where he helps large organizations build real value—not just innovation theater. With a PhD in Psychology and a client list that spans Unilever to The British Museum, Tendayi brings deep insight into the human side of transformation, backed by frameworks that actually work.
Auto-generated Transcript:
Brian Milner (00:00)
Welcome in Agile Mentors. We're back for another episode of the Agile Mentors podcast. I'm here as always, Brian Milner. And today I'm very, very excited. I have Mr. Tendayi Viki with us. Tendayi, welcome in.
Tendayi Viki (00:13)
Thank you. It's a pleasure to be here.
Brian Milner (00:15)
Very, very excited to have him here. Just to give you some background, if you're not familiar with his work, very prolific and very deep thinker here. He's a partner at a company called Strategyzer, where he helps large companies innovate like startups. He's a regular contributor on Forbes, so you may have read some of his articles on Forbes. He's the author of three books, The Corporate Startup, Pirates in the Navy and The Lean Product Lifecycle. Pirates in the Navy is his latest one. Pirates in the Navy, correct me if I'm wrong, it's kind of about how to infiltrate and be an innovative presence in a large corporation. Is that correct?
Tendayi Viki (00:55)
Yeah, exactly. Yeah, how to be a pirate in the Navy.
Brian Milner (00:58)
I love it. I love the title. ⁓ But his books are really practical. They're on building innovation ecosystems that actually work. He's advised some big companies like Unilever, Amex, and Lufthansa. He's been named to the Thinkers50 Radar list for his influence in innovation and strategy. But his passion is really helping teams avoid
Tendayi Viki (00:59)
that.
Brian Milner (01:22)
what he terms innovation theater and focus on creating real, sustainable value. So I thought maybe that's a good place to just start to kick off a conversation and say, Tendayi, talk to us about innovation theater. What does that look like to you? How would you define that? What does that mean?
Tendayi Viki (01:41)
Yeah, it's fascinating. It's a term that was kind of simultaneously coined by Rita McGrath. Steve Blank has used it a few times, and so has Alex Osterwalder. And it's really about... So the thing about the startup world is that the startup world has kind of a coolness factor. So everybody wants to be cool. And then the toolbox that startups use has that cool design thinking, d.school vibe of like sticky notes and... design and prototyping and all that. So everybody wants to do all of those things. I've even watched teams actually engage in agile rituals. Like they do the daily stand up, they do the demo day, they do the retro, right? But when you really look at the, when you dive deep into the focus, there doesn't seem to be a lot of value creation. So you're like, you're doing a... a retrospective as an agile team and you're not talking about what you learned from customers. You didn't do that during that week's sprint. So yeah, you can do all the rituals, but if you don't understand the reason the rituals exist, then it's easy for you to kind of just spin and not create any value. And that's innovation theater.
Brian Milner (02:42)
Yeah. Yeah. Man, I am with you a million percent on that and completely agree. These structures are there to help kind of be a pathway to that, but they're not the end result. If you don't understand, just like you said, if you don't understand the reason behind it, the why, then yeah, you could go through all the motions. And I completely get that term. It's kind of theater. It looks like it's actually happening, but it's not really happening. The culture underneath it is not really there. ⁓ So that brings the million dollar question then, right? If these structures we do, like standups and everything else, aren't going to automatically generate that kind of innovation and it's more of a culture thing,
Tendayi Viki (03:28)
Exactly. Mm-hmm.
Brian Milner (03:44)
How do you then build a culture that is placing innovation as a priority?
Tendayi Viki (03:53)
So yeah, so just to answer your question, I think one of the things that's really interesting about the way to create value is you have to authentically care about value creation first. You really have to understand this notion that innovation is this combination of really, really cool ideas, right? Together with a deep understanding of customers and their needs, and then a deep understanding of how to... use a business model that works to deliver that value to customers so you can get value back. Once you complete the entirety of that cycle, we say you're a successful innovator. If you complete the ideas or tech portion of that cycle, you're just an inventor or the ideas guy or whatever people call themselves, right? And so I find that companies excessively focus on ideas too much. And so too much focus on ideation and not enough focus on putting ideas on a journey towards value creation and actually value realization for the organization. So if you're going to build a culture for innovation, you have to understand what you're building it for. You have to go, all right, we have to deliberately design our workflows and the way we interact with each other to discover what customers need. Then we have to design the workflows to bring those customer needs to life through products. Then we have to test whether those products are really delivering that value. And then we have to figure out a way to scale that value and give value back to the organization. You go, okay, that's the job. Now let's design the process, the culture, the toolbox, the artifacts, the rituals that allow us to actually do that. And I think that kind of understanding is probably more fundamental than anything else.
Brian Milner (05:32)
Yeah, absolutely agree. It's the structure for discovery, right? I mean, it's not the discovery. It's the structure that led you to the discovery that has to be repeatable, that then can generate future discoveries. It's not how you found the island in the middle of the ocean. Or rather, it's not the island you found in the middle of the ocean. It's how you found it that would lead you to find another one, you know.
Tendayi Viki (05:55)
Exactly. And that's the fundamental question is, can you find another island? Because again, innovation teams stumble a lot on good ideas. And so you can bumble into something good and then fail to do it again because you don't have a repeatable process.
Brian Milner (06:01)
Yeah. Yeah. So let's dive into that a little bit. mean, whether you're a startup or whether you're a bigger organization and you're working on a product in a bigger organization, I know that you can often feel like you're kind of, I was talking to someone this week about this, you kind of feel like you're drowning in a sea of opportunity. There's all these things that we could do and it's sometimes hard to find, well, which ones do we really
Tendayi Viki (06:32)
Mmm.
Brian Milner (06:42)
double down on which ones we invest in and really pour our time and energy and efforts into. So how do you talk about that in your book? How do you find the things that are worth really investing in?
Tendayi Viki (06:54)
Yeah. So I mean, there are two ways, right? The first one is kind of an art thing. It can be fed by data, but it's art, and that's finding the direction of travel. So that's a strategy choice. We go, we think that AI is the big thing these days. We think that AI is going to do these various things to our business model. And that's really important, by the way, when you think about AI. And Alex also has one of my favorite all-time phrases, which is AI changes everything and AI changes nothing. The fundamentals for business are still the same, even though the stuff that you can do is exponentially different. So you have to think about which elements of the business model we want to play with strategically.
Brian Milner (07:26)
Yeah. Yeah.
Tendayi Viki (07:41)
And then once you pick a direction of travel, now you've got multiple options of different product ideas, services, business models, value propositions, offerings, technology stacks, et cetera, et cetera. Once you get to that point, you then cannot pick the winning idea on day one yourself. You have to start building a systematic process of discovery. And so we often say, when you're at that stage, make multiple small bets, right? OK, and I like the way you phrased the question, by the way, because you said, how do you choose what to double down on? That's what you said. You said double down, right? Well, you don't double down on something unless you've made an initial small bet. You double down after an initial bet. Doubling down is, I've made a bet, now I'm doubling down. But what companies do is they just make a large bet, and they call it doubling down. But it's not really doubling down. You've just made a large bet.
Brian Milner (08:18)
Yeah. Yeah. Hmm.
Tendayi Viki (08:38)
Right? Doubling down is a follow-on bet after an initial bet. And so it means that the first bet is a punt. It's a let's-see-what-happens bet. And then the question is, what do you want to see? So somebody just wants to see size of market. Somebody else wants to see a real customer with a real need. Somebody else wants to see a real customer with a real need plus an internal capability to create value. So they'll say, if I give you my 50K, you have to answer both these questions before I double down. And some will say you have to answer only one of these questions before I double down, and so I'll give you less, I'll give you 20K. So that's how you start building these frameworks, right? You start going like the one we built at Pearson. You go, ideas, it's ideation, it's strategic thinking. We don't invest any money. That's free. But when you start going into discovery, we might give you 25,000 pounds and you earn the next level of bet by the data you bring using that 25,000. And we have a list of questions that we need positive answers to before we actually double down. And so that's a way of curating ideas based on evidence and some kind of action and activity.
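For readers who like to see the mechanics spelled out, here is a minimal sketch in Python of the metered-funding idea Tendayi describes: a follow-on bet is unlocked only once the evidence questions attached to the previous bet have positive answers. The stage name, amount, and questions below are illustrative assumptions for this sketch, not Pearson's actual framework.

```python
from dataclasses import dataclass

@dataclass
class FundingStage:
    """One rung of a metered-funding ladder: a small bet plus the questions it must answer."""
    name: str
    amount: int                    # size of this bet, e.g. in pounds
    evidence_questions: list[str]  # what must come back positive to earn the next bet

    def can_double_down(self, answers: dict[str, bool]) -> bool:
        """Allow the follow-on bet only when every evidence question has a positive answer."""
        return all(answers.get(q, False) for q in self.evidence_questions)

# Illustrative stage: a 25K exploration bet with two gating questions.
explore = FundingStage(
    name="Explore",
    amount=25_000,
    evidence_questions=[
        "Is there a real customer with a real need?",
        "Do we have an internal capability to create this value?",
    ],
)

# The team returns with evidence from the first bet.
evidence = {
    "Is there a real customer with a real need?": True,
    "Do we have an internal capability to create this value?": False,
}
print(explore.can_double_down(evidence))  # False -> stop or pivot rather than double down
```

The point of the sketch is simply that the size of the next bet is earned by evidence, never granted by enthusiasm for the idea.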
Brian Milner (09:48)
Yeah, it's amazing to me, in some of the companies I work with, to see how many times people will choose bets that they're going to make, but not really even be able to articulate what it is they hope that bet will actually do. Right? Not just, you know, we have this feature that we're betting on and we think if we add this feature that it's going to, you know, be cool. People will love it if we add this feature. But they don't go the extra step of being able to articulate, yeah, but what does that mean? Does it mean that you're gonna increase your return on investment? Is it gonna increase your customer satisfaction? So that kind of gets to the heart of how do you measure whether it's actually a successful bet or not?
Tendayi Viki (10:39)
Yeah, exactly. I mean, to go way back in the day, to Dave McClure and the pirate metrics. I'm sure you know the AARRR, right? It's like acquisition, activation, revenue, retention, referral, whatever those R's are. You could add a few others if you want. So those are metrics. Why would we ever build a feature that's not connected to any one of those goals? Like, what's the point?
Brian Milner (10:58)
Yeah. Yeah.
Tendayi Viki (11:07)
Right. A measure of satisfaction is referrals, maybe. A measure of customer satisfaction is retention, maybe. You could measure customer satisfaction with your NPS scoring or whatever, right? Like, if you have all those things laid out, then you go, right, now we're working on this thing because we believe that it's going to increase our ability to acquire customers. By how much? Possibly by 5%. OK, now we have a benchmark. Then we have a way to start testing whether the things we're building are
Brian Milner (11:17)
Yeah.
Tendayi Viki (11:33)
actually creating value. I don't think that there should ever be a "wouldn't it be cool if" conversation. Maybe at the beginning, but not later.
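As a small illustration of tying a feature bet to a metric before building anything, here is a hedged sketch. The metric names follow the AARRR framing mentioned above; the 5% threshold and the helper functions are example values and names invented for this sketch, not a recommendation from the conversation.

```python
# Illustrative only: a feature "bet" must name the pirate metric it targets and the
# lift it expects, so success can be judged against a benchmark rather than a feeling.
PIRATE_METRICS = {"acquisition", "activation", "retention", "revenue", "referral"}

def is_bet_well_formed(target_metric: str, expected_lift: float) -> bool:
    """A bet is well formed only if it names a known metric and a positive expected lift."""
    return target_metric in PIRATE_METRICS and expected_lift > 0

def did_bet_succeed(baseline: float, observed: float, expected_lift: float) -> bool:
    """Compare the observed change against the lift we committed to up front."""
    return (observed - baseline) / baseline >= expected_lift

# Example: "this feature should raise acquisition by 5%."
assert is_bet_well_formed("acquisition", 0.05)
print(did_bet_succeed(baseline=1_000, observed=1_030, expected_lift=0.05))  # False: 3% < 5%
```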
Brian Milner (11:39)
Yeah, businesses don't... Right. That's not really a great model to build a business on, right? It's just, "I think it would be cool."
Tendayi Viki (11:48)
Yeah, it was crazy. I was once working in a large organization and they had disparate products on different platforms. So they had this thing where they were going to put it all on like one website, one platform, one product layout. And for them, in the backstage of the business, it was value creation, because it lowers costs and puts everything in an easy-to-manage place. But then I was like, have you guys ever considered that you could potentially destroy value by putting everything together? Like, you could essentially make it worse for customers. It's not automatic. Just because you've now put everything on this one thing, they even called it the One strategy or whatever, that doesn't mean it's automatically value for customers. So who's in charge of checking for that? Because it's distinctly possible that you've just made things worse. You're going to see a drop-off in customers and you're going to see a drop in revenue. So that's really something to always be thinking about, right?
Brian Milner (12:21)
Yeah. Yeah. Yeah. Yeah, or kind of parallel to that would be if we wanted to add something because we thought it was going to increase customer satisfaction, but it kept customer satisfaction flat or even declined. But maybe it did something else well, like it raised revenue or something like that. It's still not a success, right? Because what you were trying to do was to increase customer satisfaction, and that's not what you did. So you still need to do that, you know?
Tendayi Viki (13:09)
Yes. Yeah, exactly. I mean, you still need to fix it in some way. If you want to retain it because it grew revenue, then you do need to make customer satisfaction work somehow, because yeah, today's revenue is not tomorrow's revenue. Customer satisfaction is the best way to create value.
Brian Milner (13:24)
Exactly. Yeah. Well, this discussion seems to, you know, when we talk about innovation and we talk about this product life cycle, I think we can't avoid the term or the concept of experimentation. And I know you talk about that quite a bit in your writing, kind of the idea of experimentation and what that means, you know, as far as what the expectation should be when you experiment on things. So I want you to talk a little bit about that. What should Scrum Masters, product owners, and agile teams be thinking about when we think about experimentation and failure? You know, what does a healthy portfolio of bets look like?
Tendayi Viki (14:15)
Yeah. Like I remember at the beginning, we talked about innovation theater, remember? And we said that was like an excessive focus on ideation. Then there's another form, experimentation theater, which is fascinating and which I've also noticed, which is people think they're doing well because they're running experiments. Right? Like, they're running experiments. We did customer discovery. And
Brian Milner (14:21)
Yeah, yeah.
Tendayi Viki (14:42)
But the experiments they're running are not helping them make decisions about the product, the value proposition, or the business model. So I eventually wrote a piece about it, I think four or five years ago. But the goal of running experiments is not the experiment. The goal of running experiments is to make progress with your idea. That's the whole goal. So you have to set expectations. You cannot run an experiment that doesn't have a hypothesis and success criteria attached to it.
Brian Milner (14:58)
Yeah. Yeah.
Tendayi Viki (15:11)
Like, success criteria and hypothesis first, then experiment. Whereas what happens in a lot of organizations, and I've noticed this, is they have a go-to method for experimentation. Like, some organizations will go to market surveys. Some organizations will go to focus groups. They have their go-to methods. So they've already decided what the method is going to be. And then they go, so for the focus group we're going to do next week, what are the questions that you'd ask? And it's like, no, that's backwards. First you have to figure out
Brian Milner (15:11)
Yeah.
Tendayi Viki (15:40)
what you want to learn and then choose the experiment and then design the experiment to deliver those learnings and then look at the data and see if it allows you to make decisions. Great. So that's really, really important. So if you're a Scrum master, like that should be a fight that you have all the time. like, okay, when we finish the experiment, what decision will we be able to make? And then we go, oh yeah, we'll be able to make a decision of whether the pricing works.
Brian Milner (15:42)
Yeah. Mmm, love that.
Tendayi Viki (16:08)
We'll be able to make the decision of whether we should continue producing this feature or stop. We'll be able to make the decision of whether the position of this thing on the landing page is impacting sales. We should be able to make a decision. And then we say, OK, now we're running the experiment. Not run the experiment and then come back and go, yeah, we learned a lot. Customers are like this and customers are like that. And it's like, OK, but what decision did you make after the learning? And that's really important. The connection between experiments and decisions is something that
Brian Milner (16:14)
Yeah.
Tendayi Viki (16:37)
I don't see that happening a lot sometimes when I walk into an Agile team.
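To make the "decision first, experiment second" point concrete, here is a minimal sketch of an experiment record that refuses to exist without a hypothesis, success criteria, and the decision it will inform. The field names and example values are assumptions made for this sketch, not taken from Tendayi's books.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str        # what we believe, stated up front
    success_criteria: str  # how we will judge the evidence
    decision: str          # the decision this experiment will let us make
    method: str            # chosen last: survey, focus group, landing page test, ...

    def __post_init__(self) -> None:
        # Guard against experimentation theater: no hypothesis or decision, no experiment.
        for field_name in ("hypothesis", "success_criteria", "decision"):
            if not getattr(self, field_name).strip():
                raise ValueError(f"Experiment is missing its {field_name}")

# Hypothesis, criteria, and decision come first; only then do we pick the method.
pricing_test = Experiment(
    hypothesis="Customers will pay 20 pounds/month for the premium tier",
    success_criteria="At least 10 of 50 prospects accept the mock checkout",
    decision="Keep, change, or drop the premium pricing",
    method="Fake-door checkout on the landing page",
)
print(pricing_test.decision)
```

A Scrum Master could ask the same three questions in plain conversation; the code only formalizes the order of the argument.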
Brian Milner (16:42)
Yeah. Yeah, I love your connecting that to the theater kind of concept, because you're absolutely right. From the outside looking in, it looks like, hey, great, they're doing all this experimentation, but it should be driving some decision. Like you said, it should be driving some kind of movement forward. And if it's not, then you're going through the motions of doing the experimentation, but you're not really applying the knowledge that you gain from it. So yeah, that's why I think it's so important to be able to state it from the outset. Like you said, I've got to have the hypothesis. I've got to be able to say, here's what I hope to learn. Now let's try to do this thing and let's see what happens. And now we can say, now that we know this, what do we do about this?
Tendayi Viki (17:14)
No. Exactly. And analysis paralysis is finding data for no reason. Like, why are we mining the data? What's the reason? What are we looking for? Because as soon as we find it, we stop. But if we're just reviewing customer interviews and reviewing this, it's interesting and it's busy work and it makes us feel like we've learned a lot, but we're not making business decisions at the end of the day. So, you know, it's not really valuable.
Brian Milner (17:27)
Right. Yeah. Yeah. Well, I hope this doesn't put you to the test too much, because I know this is not your latest book, but in The Lean Product Lifecycle, I know you talk about how the product life cycle, I love that term even, that it is a life cycle of a product, and it kind of goes through phases, maturity phases almost, you know. And I bring that up because of what you just said about the fact of, do we continue or do we not continue? So just for the listeners, can you maybe broadly lay that out for folks, help us to understand a little bit about what that life cycle looks like from idea to retirement?
Tendayi Viki (18:24)
Yeah. I mean, so in my experience, a product or a service has two lives, right? Well, actually maybe three lives. Let's call it three lives. There is life before product-market fit, life after product-market fit, and then life after decline. And so what tends to happen in organizations, which is where the product life cycle, or the lean product lifecycle as we ended up calling it, became really valuable, is that
Brian Milner (18:32)
Ha ha.
Tendayi Viki (18:51)
The management tools for life after product-market fit are the tools that are really prominent: the execution tools, the scaling, the forecasting, the business planning. That's all life after product-market fit, because that's where they make sense. You know the customer, you know how much they will need to pay, you know how to scale. All of those things are useful for life after product-market fit. And what we were trying to do with the Lean Product Lifecycle was to ask, what is life before product-market fit? And life before product-market fit is learning and discovery, it's not execution. So when innovation struggles inside large organizations, it's because they take the toolbox for after product-market fit and apply it to before product-market fit. So we were trying to build a toolbox. We're like, okay, so what's life before product-market fit? Well, life before product-market fit is having a great idea or a great collection of ideas that's aligned with your portfolio strategy. And so...
Brian Milner (19:36)
Yeah.
Tendayi Viki (19:48)
If you have a whole bunch of really good ideas that you think are going to help you navigate towards where you want to go as an organization, the question then becomes what's the next phase after that? So you run ideation competitions, you generate ideas. Well, if you're going to take the toolbox from life after product market fit, then the next thing you do after ideation is write a business plan.
Brian Milner (20:08)
Hahaha.
Tendayi Viki (20:09)
And we're thinking, no, you've jumped over a couple of things. You've just had an idea; you don't go to a plan first, right? You've got to do other things. You've got to do something to check if your idea has legs. And so that's when we came up with the next phase after idea creation, which we called Explore. So we said, OK, so you explore whether the idea has legs. And Explore is focused on deeply understanding customers and their needs,
Brian Milner (20:14)
Yeah Yeah.
Tendayi Viki (20:39)
understanding willingness to pay, size of market, just kind of understanding the front stage of your business model. And if you find that the idea sounded good, but there's no customer need for it to serve, you can do two things. You can make a decision: you can stop the idea or you can change direction based on whatever you learned. And then we're like, okay, so after that, we moved to another stage we called validation, which is really now about validating the product, the backstage, the business model, the pricing, the channels. How are you going to scale it? So if you get positive outcomes, now you've got product-market fit. Then you can go to Grow, scaling the whole thing. Eric Ries wrote about growth engines. What are your growth engines? That's really important: how do you drive growth? And then after a while, the product matures, and if you can't figure out a way to revamp the growth, then you can move into what we call the Sustain stage, where you're sort of sustaining, lowering costs while maintaining revenue. And then after a while you move to the third phase, which is, you know, retiring the product. And what we tried to do at Pearson was we tried to create a process for actively retiring things. So we would walk into these investment boards and go, great, you want to unlock money for innovation? Which thing are you holding on to for dear life, just in case one customer asks for it? And they're like, but this thing, this thing. I'm like, kill all that. Actively retire it, go through it systematically, get in touch with the customer.
Brian Milner (21:44)
Yeah. Ha ha ha!
Tendayi Viki (22:03)
Say, what do you need? We'll put it for you in this place. It won't change. You can always access it, but there's no more support for that. Like do it in an active way. That way you can systematically move resources from declining products into innovation. So that's effectively the lean product life cycle and the way we designed it.
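Purely as a reading aid, the stages Tendayi walks through can be summarized as an ordered sequence. The sketch below captures that ordering in Python; the stage names are taken from his description, while the class and helper function are illustrative scaffolding, not anything from the book.

```python
from enum import Enum

class LifecycleStage(Enum):
    """Stages of the lean product lifecycle as described in the conversation."""
    IDEA = 1       # ideation and strategic alignment; no money invested yet
    EXPLORE = 2    # deeply understand customers, needs, willingness to pay, market size
    VALIDATE = 3   # validate the product, business model, pricing, and channels
    GROW = 4       # product-market fit reached; invest in growth engines
    SUSTAIN = 5    # growth matures; lower costs while maintaining revenue
    RETIRE = 6     # actively retire, freeing resources for new innovation

def next_stage(stage: LifecycleStage) -> LifecycleStage | None:
    """Return the following stage, or None once the product has been retired."""
    members = list(LifecycleStage)
    idx = members.index(stage)
    return members[idx + 1] if idx + 1 < len(members) else None

print(next_stage(LifecycleStage.EXPLORE))  # LifecycleStage.VALIDATE
```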
Brian Milner (22:20)
Yeah, it's awesome. And what really kind of stood out to me is that we're talking more at a larger level, at a product level, but it's really quite parallel at a feature level, for items within a product. They go through a very similar life cycle: you have to explore and you have to understand who the customer is, what their want is. You have to find the market fit for it. Once you find the fit, you have to expand it. And then you have to eventually end up at retirement, because there are certain things that... I kind of want to get your opinion on this. We seem so reluctant to accept that. Whether it's a product or a feature, the idea that no, it's time to now let that go play on a farm upstate. Why do you think, because I know your background is in psychology, I'm kind of curious what your view is there from a psychological standpoint. Why do you think we as humans are so reluctant to let go of these things that
Tendayi Viki (23:16)
Mm-hmm. Yeah. Mm.
Brian Milner (23:34)
we've invested in in the past, that have produced for us in the past?
Tendayi Viki (23:38)
Yeah, I mean, it's a whole bunch of things, right? There's inertia. It's just, it's effort to stop something, relook at it, take it off, et cetera, et cetera. And then there's also loss aversion. I think product teams can always imagine what might happen if one customer shows up and the thing's no longer there. It's like, no, we had three clicks last month. Those three customers. So there's always that fear of, like, what might happen. And so you do need to make a decision. And one of the things that I love about
Brian Milner (23:57)
Yeah. Yeah
Tendayi Viki (24:08)
some of the product thinking I've seen out there is this idea: if we're going to add a feature, and we don't want our product to become bloated, what are we dropping? Right? And so you've always got on-the-bubble things to drop that you've been thinking about actively. Then you just think about how you phase those things out while you're adding new things. Because otherwise you get some Frankenstein product in the end, right? You keep adding features, but you're not taking anything off. It becomes really hard to use. Yeah.
Brian Milner (24:14)
Yeah. Yeah. Yeah. Yeah, I love that. Well, I don't want to let you get out of here without at least giving us a little bit about your latest book, Pirates in the Navy, because it's such a great title. Just maybe kind of help us understand a little bit about what was your thought behind that topic. What led you to write that book, and what really interested you about it?
Tendayi Viki (24:52)
Yeah. So, first it was my own, like, horrible experiences as an innovator inside large organizations, like the kind of mistakes I made. I remember once being in a room, running a workshop to try and convince a group of leaders to buy into this innovation process that we were trying to create. And I was getting a lot of pushback, a lot, a lot of pushback. And then after the workshop, one of the leaders kind of took me aside and was like, you know, today I was watching everything you were saying, and there was
Brian Milner (24:58)
Yeah.
Tendayi Viki (25:21)
very little to disagree with about the things you were saying, but what we didn't like was the way you made us feel. So most of the pushback was not really about your idea, it was about the way you made us feel. And I said, okay. So over time it landed on me, but like a very slow landing, you would not believe how slowly it's been landing on me. It landed on me that actually, if you're going to do corporate innovation or any kind of transformation,
Brian Milner (25:28)
Hmm. Yeah.
Tendayi Viki (25:49)
the number one skill is actually relationship building. The number two skill is being a good innovator. But what we tend to do is we tend to make innovation the number one thing. We even think of innovators as mavericks. But actually, Pirates in the Navy was this reminder, because there's even a chapter in there that I call "You're Not Elon Musk and You Don't Work in a Company Full of Idiots." It was just a reminder to say, yeah, you don't work in a company full of idiots. People know what they're doing. They've been running the company for a while. They may have blind spots, but they're not idiots. So it's a mutual respect thing that you kind of have to build. And actually, it's funny, because I'm about to publish an article in a couple of weeks and a newsletter this week, which is called "Innovator Does Not Equal Maverick." And what I'm trying to do is to create this disassociation between having crazy ideas and being difficult to work with. Because we often tend to think that people like Steve Jobs are really, like, iconic. And part of the iconicness, is that how you say it, the iconography of Steve Jobs is not just how brilliant he was with the product, it's also how difficult he was to work with. Like, everybody's really like, you know... but that's actually pretty rare.
Brian Milner (26:54)
Yeah.
Tendayi Viki (27:08)
Like serial entrepreneurs are people that are really good at working with others. Right. And so that's what Pirates in the Navy is actually really about. Yeah.
Brian Milner (27:14)
Yeah. Yeah, and I've seen people, unfortunately, who have used that example as, well, Steve Jobs was an asshole, so I guess that's what I need to be is an asshole, because that's what works. No, please, being an asshole is not what made him successful, right? Yes.
Tendayi Viki (27:31)
Thanks. Yeah, no, he succeeded in spite of that, not because of that. And so it's something that's kind of foundationally important. So I have this thing that I do in some conversations that I have with colleagues, just to kind of say, I can usually tell, it's like one of my subtle internal pains that I feel. I can usually sense when an innovator is going to burn out in an organization.
Brian Milner (27:58)
Yeah.
Tendayi Viki (28:02)
And that's when every time they open their mouth, people are like... And then, so, innovation is already a form of deviance, right? Because you're trying to do something different from the organization. So you don't want to pile your own personal deviance on top of the idea's deviance, right? You want to be, yeah, it's really important. And so that's what the book was about, by the way. The title is catchy, but it's really just about how do you actually succeed as an innovator inside these more structured institutions.
Brian Milner (28:10)
Yeah. Yeah. Yeah. That's awesome. Well, thank you so much. For our listeners here, this will all be in our show notes so that you can find quick links to this, find more about Tendayi and his work. But I can't thank you enough for coming on. I really appreciate it. This has been a fascinating conversation. And I just really strongly encourage the listeners, if you liked this, if you really found this conversation interesting, check out his work, check out The Corporate Startup.
Tendayi Viki (28:44)
Thank you.
Brian Milner (28:54)
Check out The Lean Product Lifecycle and his newest book, Pirates in the Navy. They're really, really good and I think you'll really enjoy them. So, Tendayi, thank you so much for coming on the show.
Tendayi Viki (29:04)
Yeah, thank you for having me. I really enjoyed the conversation too.

Wednesday Oct 22, 2025

Five years post-COVID, are we any closer to knowing what kind of work environment actually works best? Brian and Lance dig into the real drivers behind return-to-office mandates, remote productivity myths, and why "context beats location" every time.
Overview
The return-to-office debate isn’t over—it’s evolving. In this episode, Brian Milner welcomes back frequent guest and fellow Agile coach Lance Dacy for a wide-ranging conversation about remote work, in-office mandates, and the big question: what actually boosts team performance?
Together, they explore what we’ve learned (and what we haven’t) in the five years since COVID reshaped the way we work. With studies offering conflicting conclusions and executives often leading with personal preference, Brian and Lance unpack how leaders can navigate decisions that impact morale, productivity, and long-term value delivery. From context-driven collaboration to psychological safety, this is a nuanced take on one of Agile’s most pressing modern challenges.
References and resources mentioned in the show:
Lance Dacy
Excerpt from A Leader's Guide to Agile eBook
Scrum, Remote Teams, & Success: Five Ways to Have All Three by Brian Milner
#61: The Complex Factors in The Office Vs. Remote Debate with Scott Dunn
Using a Task Board with One Remote Team Member
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, and host of the Agile Mentors Podcast training at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Lance Dacy is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®. Lance brings a great personality and servant's heart to his workshops. He loves seeing people walk away with tangible and practical things they can do with their teams straight away.
Auto-generated Transcript:
Brian Milner (00:00)
Welcome in, Agile Mentors. We are back. Thank you for bearing with us for a little bit of a break there. If you noticed, we have not been releasing episodes the past few weeks because we've been practicing sustainable pace, but we are back and we are ready to dive into some really, really gritty topics, some things that we think will be really beneficial. Who better to kick us back off, to bring us back around, than a friend of the show, Lance Dacy, who is with us today. Welcome in, Lance.
Lance Dacy (00:25)
Right now. Thank you, Brian. How was Hawaii, that big sabbatical y'all took in July?
Brian Milner (00:34)
Yeah, Hawaii is always great, right? Hawaii is awesome. Absolutely. Isn't that what everyone did in July? Well, we're glad to be back and we're excited about what we're going to talk about today, because we figured why start with something that was not controversial? Why not find something very controversial?
Lance Dacy (00:36)
Mai tais on the beach. That's where you were? I mean, I didn't see y'all there, but yeah.
Brian Milner (00:58)
and just set ourselves up to receive lots of disgruntled emails that we're probably going to get this wrong. We're probably going to get awesome feedback too. But I'll just go ahead and start by saying, hey, we hope you give us a little grace on this topic. We're just talking from our experience, our opinions. And I know there's lots of opinions on this, but we wanted to focus on the fact that
Lance Dacy (01:06)
No, awesome feedback, Brian. Awesome feedback. We're going to get awesome feedback.
Brian Milner (01:25)
Hey, we're five years removed from the COVID outbreak. And when COVID happened, that was a massive disruption in work. We all had to learn how to do work in a different way, but five years in, what have we learned? What's changed? And now we're seeing lots of things like return to office mandates and hybrid working agreements. You must be here for this many days a week or other things.
Lance Dacy (01:44)
and
Brian Milner (01:50)
or companies that say, no, we're now fully remote and we're doing things this way. But I saw this kind of really interesting question that made me think about this: if you were designing the workplace from scratch today, would anyone build cubicles? And I thought, well, that's a really interesting question. So Lance, what do you think we've learned in the last five years? Where do you think we are today with this whole work-from-home versus return-to-office debate?
Lance Dacy (02:16)
I tell you what, Brian, I sit there and think, man, five years is a long time to have empirical data. And I don't believe we have data. Let me say it this way: we've got the data. What does it mean? You know? And I'm a data guy. You all know that. And I'm sitting there trying to look at it going, I don't know how much we've learned. I think what we've learned is there's no right answer.
Brian Milner (02:22)
Yeah. Yeah.
Lance Dacy (02:39)
And everybody, especially organizations, we're looking for the right answer. Just give me the answer and let's go for it. You and I coach and consult, and people hire us to tell them sometimes what they need to do. And sometimes we're like, no, we're not going to tell you what to do. We're going to learn what the issues are, and then based on that, where should we go from there? And if I had to sum up what we've learned: remote can be great, right? Office can be great. Neither is a silver bullet. So I'm going to go back to my coaching stance and say, well, let's define the outcome, measure with multi-factor, multi-dimensional metrics what we're trying to actually accomplish with our people, experiment, and then keep an eye on the team health. I still feel like that's what we're doing. Five years sounds ridiculous that we haven't figured that out. But I think it was so disruptive that five years isn't enough. And I think the work on top of that is changing so radically. We have too many variables up in the air. So for now, if I had to make a decision, I lean toward co-location when the work is ambiguous, when relationships are important or new. That's another one. If we've got a lot of new people together, working remote is going to be a very difficult thing for a while. So, you know, I'd say, hey, if I'm starting a company and the work is ambiguous, kind of like a software product company, where we're not quite sure what we're doing and the work needs a lot of collaboration and a lot of hands-on, you know, trust with each other, then I'm probably going to say, let's be in the same area sometimes. I'm not saying every single day, but that's how I'm going to lean. And then I'm going to say, if your work is predictable, repeatable, doesn't require a lot of that, and you need intense focus, well then maybe remote is fine for you. So how's that for an answer? I don't know.
Brian Milner (04:32)
No, I think that's... look, I don't think you're ever worse off by being able to admit that, right? To just say, you know what? I don't know. And I think sometimes that's part of the problem with the way that we approach certain issues, is that people are reluctant to say that, right? They're reluctant to just admit, you know what? I don't really know. I don't really have the answer on this yet. And I think you're hitting on something that's really important, is that there is
Lance Dacy (04:39)
Right.
Brian Milner (04:59)
You said no silver bullet. I also think there's no one right answer. I don't think that there is a right answer to this question of should you be in the office or should you be remote. I think, right,
Lance Dacy (05:12)
confidence interval, right? I mean, it's like, there's no, it's not it's not binary. So I agree with you.
Brian Milner (05:16)
Right. Yeah, there are certain industries, certain products, certain job types that I think are better in the office. And there are others that I think are better remote. And I think what you got to do, and I think I love your return to kind of a coaching stance and looking at this. What's the goal? And I think that's what you have to try to distill it down to is what's the purpose? What's the goal we're trying to reach here? If it's productivity, then let's talk about productivity. If it's morale, if it's enhancing communication, it starts from there. Define what it is that we want as our end goal, and then we can start to find data. We can start to find empirical evidence that either supports or detracts from whatever hypothesis we think we have about this. And that should be what leads us.
Lance Dacy (06:10)
And it could change, you know, that's the other problem. One quarter, it may be better for the stuff we're working on if we're in the office more often. The next quarter, maybe not so much. Now, the problem is you go survey people. Let's talk about productivity. You ask, let's say, a programmer. Okay, I'm just going to say a garden-variety programmer, highly skilled. You ask them where they are most productive, and most of them, I'm not going to indict everybody, but most of them will say,
Brian Milner (06:22)
Yep. Yes, let's do it.
Lance Dacy (06:39)
I want to be left alone, no meetings, in silence, coding on my keyboard. They may be going a direction completely opposite of where we need to go, and we won't know that until they come together. And so the other problem with this is we're sometimes asking the wrong question of the wrong people. You ask a single programmer where they're more productive: it's sitting in my office, being able to go get a coffee when I want, not sitting in traffic. Hey, I'm all for that. Who's productive in traffic, other than, I'm listening to books, you know, so I am growing myself? But I'd be hard pressed to find anybody saying I love sitting in traffic. So let's put that to the side. Nobody wants to drive two hours a day to their office and back home. That's terrible. But if you ask a programmer that, would you concur that most of them would say, I'm most productive, just leave me alone, let me write all the code I want?
Brian Milner (07:16)
You
Lance Dacy (07:30)
What do you think of that? So now that's the wrong question then, because now we're working in an agile type, let's say in an agile context, where we're working in an empirical nature, which says we don't know what we're doing. So the more iterative and incremental feedback, the better we understand, are we on the right path or not?
Brian Milner (07:33)
I think that's probably true. I think most developers would say that, yeah.
Lance Dacy (07:52)
And so if I was to ask, is it more productive to let the individuals be efficient at what they're doing and then come together later to learn that we've got a big gap from where we thought we were? Or do we sacrifice individual productivity with a lot of collaboration, which they may term as meetings? I don't like to call them meetings, they're working sessions, right? We had a backlog discussion about this, I believe, with you not long ago. Backlog refinement is not a meeting. It's a working session to say, hey, the customer needs this. How are we going to do it? And what is it that they need? So I find, and I'm debating with people on LinkedIn a lot, I love this, so this is why it's top of mind, that even if the customer knows 100% of what they want, which let's just say they won't, but if they did, you do not know how to build it. One programmer may, but when you've got four programmers, some testers, some database people, an architect, all these cross-functional skills, how can you sit in a vacuum and do that? So if your work requires multiple skills to come together and you're trying to build a done increment by the end of one, two, three or four weeks, optimizing for individual productivity to me could be harmful. And look, I told this guy on LinkedIn that I don't believe productivity is one-dimensional. So he referenced a Stanford study, let's see, this was in 2023, and he was saying that, you know, the Stanford study showed that people were more productive working at home. Well, it was actually a 10% decline for call center staff, by the way. So that work is not as collaborative, I would say, maybe. That same study, though, found 35% lower attrition and higher employee satisfaction, which raised the long-run throughput. So while they declined in productivity at home by 10%, most executives would go, no, they've got to come to the office. We can't have that. But what if I told you you'd save 35%, or whatever the number is, on attrition and gain employee satisfaction in the long run? Would you rather take that gamble or not? You know, Gallup did the same thing. He was referencing a State of the Workplace report, and they found, you know, customer loyalty and margin were better for people that were in the office. They may be less productive individually, but the customers saw a better outcome. So what are we measuring, right? So Brian, that's what I look at with these productivity debates. I'm like, my gosh, what does productivity mean? Are we optimizing for delivery to the customer, for flow, or are we optimizing for the individual utilization of the people on the teams? And I think executives have to make a choice. And I say executives because they're the ones who influence heavily whether we, well, I'll talk about culture in a second. But I find that if people steer towards individual productivity, we might be sub-optimizing, right? We know this. I mean, we know flow and systems thinking and, you know, all the things that we read in books about lean and efficiency and cross-functional teams. But what is productivity? I don't know. You have to define that. So that's where I go back: what are you trying to achieve? Individual utilization? Work from home, let's go. You want delivery to your customer? Maybe not. Right.
Brian Milner (10:54)
So. Right. No, you're making an excellent point. So I'll throw maybe a massive curveball into this discussion, because I would propose that if we're looking at productivity to decide whether, in an agile organization, we should return to the office or not, we may be looking at the completely wrong thing. Because productivity, I would propose, and I say this all the time in classes, productivity isn't the answer. What do we hear all the time now about AI and developers? That AI is enhancing productivity. It's allowing them to do more in less time. Well, that's great. Individual, right. But that's a volume
Lance Dacy (11:42)
Individual productivity, individual productivity.
Brian Milner (11:48)
calculation, right? When we talk about productivity, we're usually referring to a volume-type calculation. But you and I both know very well that the missing gap there is actually the value gap. And so the question is, if we're producing a larger volume of work because we are remote, does that matter if the volume of work that we're producing is things no one cares about? We're all familiar with the studies that show, I've seen multiple, it's somewhere between 64 and 80% of what people produce in software is rarely or never used, depending on the study that you follow. And if that's true, exactly, that's my point. Right.
Lance Dacy (12:30)
If that's half wrong, it's unacceptable. So it doesn't matter.
Brian Milner (12:43)
I don't know that it does. I think what matters more to an agile organization is, are we producing more value in a remote environment, or are we producing more value in an office environment? And that's something I don't know that there is a study on.
Lance Dacy (12:59)
Well, how would you study it, right? Because we were just talking about this before we were trying to debate, you know, what is it exactly that we're going to cover? Because we just, it's too big, right? It's maybe a multi-part thing, but it's like, even if you did the study, who are you surveying? Is everybody the same? Like we go into organizations all the time. Yes, the organization's unique. Y'all do unique things. You're great. Your problems are not unique. How we approach solving the problem, we can borrow from other things that we've done. But when you go do a survey,
Brian Milner (13:08)
Yeah.
Lance Dacy (13:29)
Are you really ensuring that you're getting all the different psychological profiles? Because I'm going to wrap that discussion up by saying somebody's preference may override everything. Who are you surveying? Are you getting a good sample that mimics what you might see on a team? So going back to that Stanford study, you have to ask the executives, would you sacrifice 10% lower productivity for work-from-home call staff? And I know that's different work than software, but these are real studies out there. If that same study found 35% lower attrition and higher employee satisfaction, what if your people are happier working but less productive, and it saves you in the long run from attrition? Does that metric matter? Your CFO would argue yes. You know, it costs a lot of money to hire somebody and bring them on board, and you lose all of that knowledge. So yeah, I have the problem of working at home. This is me. I don't like working at home. Okay, I do it for a living.
Brian Milner (14:19)
Yeah. Yeah.
Lance Dacy (14:28)
And I have my own little office and I try to shelter myself away, but I love to compartmentalize work or else I'll work all the time. So when I work at home, just nothing, it's hard for me to have barriers. That's just a discipline and rigor thing. When I went into the office, I could hit it hard. You know, I'd go in early. I wouldn't take lunch. just, I'd put in my time, be very productive. I'd leave four or five, be home. And then I was done. You know, of course you answer emails and stuff like that, but that's me. So are you surveying me?
Brian Milner (14:57)
Right.
Lance Dacy (14:57)
Would you rather have me happy and be at home and, I'm going to go run this errand right now. I was less productive today because I chopped up my time and lost flow and context switching, but I'm a happier employee and I contribute a lot to the bottom line. So what are we measuring? Right. So I feel like all that to say is I think you hit it on the head. What is, what is it that you're trying to measure? If it's just productivity, yeah, they'll go in the office because this study says 10 % less, but the same study says.
Brian Milner (15:17)
Mmm.
Lance Dacy (15:25)
Better attrition rate, higher employee satisfaction. Do your people matter? Well, if that's the case, are you going to sacrifice 10 % productivity? I would. I want happy people working for me. But I don't want that.
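To show why "what are we measuring?" changes the answer, here is a rough back-of-the-envelope sketch comparing a productivity dip against attrition savings. Every input below (team size, salary, baseline attrition, replacement cost) is a made-up assumption for illustration; only the 10% and 35% figures echo the study numbers Lance cites, and none of this is from the studies themselves.

```python
# Hypothetical numbers only, chosen to illustrate the trade-off under discussion.
team_size = 50
avg_salary = 100_000                 # annual cost per person
baseline_attrition = 0.20            # 20% of the team leaves each year
replacement_cost_factor = 0.5        # cost to replace someone, as a fraction of salary

productivity_dip = 0.10              # 10% lower individual output when remote
attrition_reduction = 0.35           # 35% lower attrition when remote

# Rough annual cost of the productivity dip (treating output as salary-equivalent).
productivity_cost = team_size * avg_salary * productivity_dip

# Rough annual savings from fewer departures.
departures_avoided = team_size * baseline_attrition * attrition_reduction
attrition_savings = departures_avoided * avg_salary * replacement_cost_factor

print(f"Cost of productivity dip:     ${productivity_cost:,.0f}")
print(f"Savings from lower attrition: ${attrition_savings:,.0f}")
# With these made-up inputs the dip costs more, but change the assumptions
# (higher replacement costs, lost knowledge, customer impact) and the answer
# flips, which is exactly why defining the metric comes first.
```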
Brian Milner (15:30)
Yeah. Yeah. No, no, that's a great point. And, you know, happy people do a better job. Happy people, you know, take care of your customers better. There are enormous benefits from that. And there are lots of speakers and authors out there that will point you to the fact that in leadership, the job is to try to put your employees first and take care of the employees. If you take care of your employees, the employees take care of your customers, and that's what you want them to do. And I agree with that. I think that's a good approach. So yeah, I think you're right. There are impacts here across the board. And as you said, it's a really broad topic.
Lance Dacy (16:20)
Well, let's go back to that other thing just real quick. Lance would rather go into the office.
Brian Milner (16:25)
Right.
Lance Dacy (16:25)
Let's say Brian, I don't know, haven't dove into your preferences yet. Brian wants to work at home. He wants to be with his kids, pick them up at school. That's right, you know, that's a righteous thing. I love that. All right. So the company would value having both of us. So now what do we do both? You know, we allow an office space and pay for the real estate and all that to let Lance come in because that's what he likes. And we also let Brian stay at home. And then at some point, where do you get them together to solve big problems? I mean, that's the age old.
Brian Milner (16:27)
Yeah.
Lance Dacy (16:54)
issue. I think the organization, based on what they're doing for that set of time in the initiative, has to define what is most important to us. And you might even have to shift people around to different work to do that. Say, well, Brian's not a fit for this new thing we're working on because he likes to work at home. So do we have a mechanism in offices to do that? I don't think we're good at doing that.
Brian Milner (17:17)
Yeah.
Lance Dacy (17:17)
We just hire people and say, here's your job and your pigeon hole for it and career growth and get so busy. It's hard for managers to focus on that. You know, I don't know.
Brian Milner (17:25)
Yeah, I mean, the thing that I've heard most from people who have been on one side or the other side of this issue is kind of the frustration that they feel sometimes with mandates one way or the other, that they're not based on fact, they're not based on anything but one person's preference who happens to be then the leader, right? So if Lance is the leader and Lance prefers to be in the office, then Lance might say, That's it, everyone's come back to the office because I just think that's a better way of working. And, right.
Lance Dacy (17:56)
I mean, how many leaders are like that, right? It's like they mimic that preference. You know, it's such a hard thing. Then think of the other thing, this debate I was having with this gentleman on LinkedIn, a really good one, by the way. I love debates. I'm always wrong. I'm okay with that. But I feel like how we would sum that up is that context beats location. So you've got teams. So Microsoft, let me find the study here, Microsoft did a study called the New Future of Work. This was back in 2022, by the way, so it is a little bit older, but they said teams with tight, rapidly shifting interdependencies, so things like early-stage product discovery, like a lot of us do, pay a coordination tax when every issue becomes a chat thread. I loved how they said that. There's a coordination tax. I've always said in software there's a release tax. You know, what's the tax of the release to actually get the software into the hands of the user? It's usually pretty big, and it doesn't reveal itself until the end. So a coordination tax exists for those kinds of teams. Now conversely, work that lends itself to deep focus, just like you said at the kickoff, with asynchronous handoffs, like a good parallel flow, like a relay-race type thing, sees a gain of 15 to 40% with remote workers, according to ActivTrak in 2023. You can find, by the way, any study to support your view. Let's just agree that confirmation bias is rampant in this discussion. Brian and I are actually trying to be, what's the word, as neutral as possible with our own preferences, just to showcase this. So go find a study that matches what you want and you're going to be fine. But the really smart people are going to find the ones that argue against your point,
Brian Milner (19:17)
Sure.
Lance Dacy (19:40)
and then let you figure out, just like what you do with stakeholders, there's a trade-off. There's not one perfect answer. You want this or that? You can't have it all. So, I don't know.
Brian Milner (19:48)
Well, if I'm a leader in an organization today and I'm trying to make this decision, should I bring people back to the office, should I not, I know there's lots of things to think about. Did we sign a 20 year lease with this building that we're gonna be paying for anyway? That's a consideration. Right, right, that's obviously a concern. Now,
Lance Dacy (19:55)
Think about the office today. We're paying $180,000 a month for an empty building.
Brian Milner (20:11)
from the counterpoint that you can say, that's your fault. Why did you make that decision that was a bad decision, right? And maybe that's true, I don't know, but that's certainly part of that decision is, hey, this is part of our bottom line.
Lance Dacy (20:23)
Would you rather have a job or tell me that that's a bad decision? Because if we pay $180,000 a month, you're not going to have a job. We're out of business. You know, it's like...
Brian Milner (20:30)
Right, right, but what I'd be trying to do is find what's the best thing for my company, for this set of employees. That's really the question that's the most important. And I think there's, we talked a little bit about this before as well. And we discussed it a little bit that there's preferences, but there's also the psychological impacts of doing these things and what that kind of reflects on your workforce.
Lance Dacy (20:41)
Yeah.
Brian Milner (20:58)
I know I saw the study that was from McKinsey. This was from 2021. Their study showed one in three American workers felt their mental health worsened after returning to in-person work. One in three. Now, again, let's stop, right? That's a statistic, but let's look at even what this said. One in three felt that their mental health worsened, right? And that's... that's not an objective fact. That's a feeling: one person feels this way, another person feels that way. It is something you can track with a statistic, but it's still kind of subjective when you think about that kind of thing. I would assume a lot of people feel like people's mental health is in general
Lance Dacy (21:32)
Yeah, you did something you can try and try.
Brian Milner (21:49)
better without commuting and being at home, and that they would tend toward that overall. But like we were saying before, there are some negative impacts of working from home as well. You're less connected, you're more isolated. You don't get feedback as often as you would. You don't learn
Lance Dacy (22:04)
your I'm getting in love.
Brian Milner (22:15)
as much, because you just don't have the ability to have some of those things. There are people who miss the structure, the routine that comes from being in an office. You mentioned the work-life balance thing. I think that's true as well. We often tend to think work-life balance only means that if you're remote, that's a better work-life balance. But yes, that is a real concern as well. I work out of my house, so I am always at work.
Lance Dacy (22:19)
But who do you he asked? Or who like?
Brian Milner (22:40)
I'm expected to always respond to emails. I'm expected to always finish that thing. Right.
Lance Dacy (22:43)
Yeah. Somebody believes that, right, somewhere. You know, going back to the study you just cited, when you ask somebody, or when you say, hey, you're not learning as much or you're disconnected, the individual may be OK with that. They're like, yeah, that's great, I don't have to do all that. But then what you're sacrificing is the actual ROI of whatever it is you're building. So I want to go back to something you just said. This is amazing. First of all, in 2021 we were still in the pandemic. Mental decline was already, you know, everybody was probably stressed. I hate to use the words everybody and always, but that was a hard time for a lot of us, right? So your mental condition in 2021, I would argue, was already affected and clouded. Now I'm going to tangent a little bit with the discipline and rigor of exercise and eating healthy. Okay. So let's say that I want to get better and I'm going to go to the gym, you know, five days a week for 30 minutes and just try to get in that habit. If you were to ask me three days after going to the gym, you know, how do you feel? I'm going to be like, it's terrible. I hurt. I'm getting up at 4:30. I'm lacking sleep. So it's a recursive problem. You've got to get good sleep to lose weight and get physically fit and all that. But I'm losing sleep. I've got to learn how to go to bed earlier. But if you ask me early on, I'm probably miserable, and it's the same with any kind of change effort. We know that, you know, we're in the change effort business. Rarely is anybody right at the beginning of the change going, yes, you know. So I feel like that's a little bit biased as well. Of course they're probably saying, this is terrible, because I'm used to being at home all the time. You know, it's like, yeah.
Brian Milner (24:21)
Yeah. Well, and we were talking before too about even the productivity kind of studies, right? And please, I feel like if you're listening to this, you may think that I'm trying to make a case for one versus the other. I really am not. I think, right. I mean, well, what you said earlier, you tend to like more of the opposite. Well, I'll share mine. I tend to like the remote more. I'm more of a remote person.
Lance Dacy (24:33)
Would you like So I had that right.
Brian Milner (24:46)
You had it right. You definitely had it right. But even some of the studies that are on productivity, the question, or if you read what the data is showing, it's saying they're asking employees, do you feel more productive in the office or at home? And even there, the workers will often say, no, I feel more productive at home. And the managers will often say, well, I feel my employees are more productive when they're in the office than when they're remote. And that's a feeling. That's not a hard data point. It's a data point of people's emotional feeling in relation to it, but it's not a data point of what the actual productivity is. And so even there, I think we have to be careful. My favorite phrase I use all the time in class is, data or it didn't happen. So when you read statistics, I think it's really important for us to zoom in on them and say, what exactly is this showing? What's the question that's being asked here? How was the data collected? Even there, was this a scientific study, or was this an internet poll where anyone who could come online could take the poll and it's not a scientific sampling, right? There are several of those in our industry. How many people prefer this in Agile versus that in Agile? But hey, go to the website and fill it out and sign up. Well, that's not a scientific survey. You know, people could create bots to go in and answer it a certain way a million times. And all of a sudden, hey, data shows... No, it's because you didn't do a scientific survey. So I think it's always a valid point, especially in these kinds of really hotbed issues and discussions, to really question the source of the data and say, what is the point behind it? What is it that we're trying to, what's their end goal? And if there are studies, what was the methodology of that study? Is it really proving what I'm trying to decide here, or was it proving something entirely different?
Lance Dacy (26:40)
It's really critical to the science of it. Just like a new prescription or something, right? You want to go read who bought the study, because a lot of times that can affect it as well. But the points I was starting out with were about productivity: context beats location. So instead of asking, should we be at home or in the office, ask, what's the work we're doing? There's no doubt, when we say, hey, Brian and Lance, Lance likes to go to the office, Brian likes to stay at home, there's no doubt there are times where there's no traffic, I walk up the stairs into my office, and I can crank out three hours' worth of productive, focused work. But imagine you're solving a hard problem, stumbling with something, and if you just hopped on a 30-minute Zoom call, or if you went into the office and brainstormed, you might have saved eight hours of productive work. So I think a lot of times people have this stigma about meetings, and that's why I like to reframe them. If you're going to a meeting, that's maybe one thing, but the stuff we're trying to do, like in Scrum, the Sprint Review, the retrospective, the Daily Scrum, those are not meetings. Those are collaborative working sessions that have a general outcome. But what you just talked about, I'm going to call it culture, is a force multiplier on this as well, and it can also change based on the type of work we're doing. You know, Amy Edmondson does a lot of great work on this in her book, The Fearless Organization. She talks about this concept that psychological safety can predict error sharing and innovation whether you're onsite or remote. So it doesn't matter. And, you know, Jeff Sutherland does a lot of talks about this: as far as Scrum's concerned, the small, cross-functional, co-located teams were a workaround because of the technology they had back in the late eighties and early nineties. I remember the days of ISDN lines, where it was $400 a minute to get on a video call.
Brian Milner (28:30)
Yeah.
Lance Dacy (28:35)
Well, there are a lot better tools these days, so we can still collaborate remotely. I just, I had this argument with a guy on LinkedIn, that I think a team sitting together has osmotic communication as well. Just sitting in the room hearing things sometimes can help you. But of course, if you're working on something that needs focus, that's a distraction, right? So I think we can simulate much of that stuff as far as the culture's concerned, but only if we have norms. So the other thing is working agreements. Can we have an agreement as a team that we're going to have cameras on, that we're going to have rapid feedback? They need to be explicit, and that might be the solution. You have to build a culture around whatever mechanism you're going to use. And I still think hybrid is probably the answer. So you can just say we're going to go hybrid, and it depends on the work for the quarter or the year, whatever your planning cycle is, and try to mix and match teams to do that. I think we've reached a maturity, especially in the product development world, where we're less distracted by the technology. Let's focus on the people. That's the hard thing now. The hardest problem is the people coming together, not are we going to be cloud or whatever. Those questions have been answered. So I think culture as a force multiplier is the third angle to this: you need working agreements, you need norms, you need agreement on it. And that way you don't have people building resentment, because I can't tell you how many people I talk to who are like, my company's asking us to return to the office. Well, if they would tell you why and you had a say in it... I don't know. It's such a hard thing for executives to deal with.
Brian Milner (30:05)
Yeah. Well, it's the old thing from a parent point of view as well, right? When you tell your kids, hey, do this thing. Why? Why am I going to do it? Because I said so. If your parent says just because I said so, how do you feel when you're a kid and you hear that? You think, well, that's not good enough. I want to know the logic behind it. I want to understand that it's justified. And I think as we mature, that's even more the case. We don't want to do something just because someone dictates it to us. We want to understand why it's important, what value comes out of it, and that it's a valuable use of my time. I mean, that's the thing I would say to any leader that's listening: if you are going to make a decision one way or the other on this, make sure you share your reasoning. Make sure it's not just, hey, this is my feeling personally on it and you just have to go along with what I say.
Lance Dacy (31:01)
Well, the kids, you know, I can understand. We'll go back to the Shu Ha Ri thing, right? So at some point when you're raising your kids, you're like, first of all, you just do what I say without delay or challenge. I remember teaching my kids that, because there may be a time where I say, don't run out in that street, or stop. And I don't want you to turn around and go, well, why? I want to do this and that. Well, because there's an 18-wheeler flying by. I don't have time to argue with you, right? So you start building that discipline. I can understand that, but we're way past that with professionals
Brian Milner (31:01)
have some reason, right?
Lance Dacy (31:29)
that are working in the workplace. If any executive feels like that's their answer... just like you said, I even hated it as a kid. I want to know why. Then I can support it. Maybe I disagree with it, but if you tell me why you're doing it, I'm like, well, that makes sense, and now I can be your biggest champion, right? So many executives just do the "because I said so, I'm your boss" thing. Well, you need to find a new job, sir or ma'am. I think we have to grow beyond that. So that's a great point
Brian Milner (31:50)
Yeah.
Lance Dacy (31:57)
that we have to mimic that along with culture. I think, you know, the other angle to that, as far as context beats location, the fourth one I was going to mention, I told this guy on LinkedIn that experience matters as well in this debate. So if you're brand new versus if you've been doing the work for a long time, we'll call them veteran developers, veteran people who do the work, they develop what's known as tacit bandwidth, right? The ability to read the room, see what's going on: I've seen that problem 100 times. They intervene early. There's a book out there called Team Genius, and they talk about this tacit bandwidth that veterans have. A lot of people find that's easier to build in person, and junior colleagues actually learn faster that way, if they can see it and gravitate towards it. Whereas the people that are brand new and don't have that, it takes them a lot longer to solve a problem. So again, what does productivity look like? Do you want individual productivity, or do you want to solve big problems together as a team? And I think that's how we kind of wrap this whole thing up: it depends. How about that? We're consultants a lot of the time, and we don't have the right answer. We have to learn what your goals are. That's why I went back to the coaching stance, because that's kind of how you start with coaches. They embrace you where you are, give you a hug, and say, I love that you've accepted that. Now, where do we want to go? And you have to make small, iterative, incremental slices to get there. So I don't know that there's a good answer for this debate.
Brian Milner (33:22)
Well, I think your "it depends" is the right answer. And if someone's frustrated with the fact that we tend to fall back on that a lot, you know, it depends, it depends, no matter what the question is, I think that's the right approach. That's an agile approach, to say it depends, because the opposite is that we're going to have one right way, and that's always the answer. Imagine if, when you were a certain age, someone said everyone's going to do this job. Well, I don't want to do that job. I want to do a different job. Doesn't matter, this is the right answer. What job should I do? This job. That's the answer for everyone. And that's the approach people sometimes take with these kinds of problems: should we have remote work, should we have in-office work, here's the right answer. That's never going to be the case. There's a right answer in that one scenario, that situation, and it's going to depend on all the particulars of that situation.
Lance Dacy (34:18)
Well, for the organization, the other side to that is, now I've got to worry about attracting talent that fits my model, right? So make your decision. Who's here to say good or bad? Just say, as an organization, we believe in these things, and that's why we're going to do this. Put that mission statement out there. And if I'm looking for a job and I don't fit with that, I just don't work there. Yeah, you lost talent. You'll find somebody else. Somebody needs a job somewhere. So you look at these studies, like the GAO case study we were debating a little while ago. I'm going to read just some stats here, and then you have to take those and say, what are we trying to do? The quit rate dropped when staff got two days remote; that's down 33%. That's cheaper than a retention bonus, and it works instantly. That's something people can do right now: say, you know what, we'll give two days remote, and that number pops, right? Commute time saved per teleworker is 55 minutes a day on average. That's a bonus week off every year without any payroll cost. So you can make an argument that that's a good byproduct of doing the remote work. And we're talking about CFO-level type things here, right? The productivity bump for jobs with clear outputs is 12%. So same payroll, more widgets or user stories or whatever, 12% better. Office footprint after going hybrid: 50% down. CFOs suddenly love facilities again, right? Disability employment since mass work from home is up 12%, and 40% in tech roles. That's the biggest lever that a lot of people aren't talking about: people with disabilities can now be on the job, right? And then a company that forced a five-day return to office lost 50% of its workforce, including top performers. That's a painful, painful case study from the Federal News Network. So a company that forced five days RTO lost 50% of its workforce. Well, you could say, fine, that's what that company wants. Go rehire the other 50% that want to work there and let's move on. Yeah. I don't know. You know?
Brian Milner (36:24)
Yeah, no, I agree. So I think that all comes back to what's the purpose. In our scenario, in our situation, what's the driver? You know, we started this with that quote I found that said, if you were to design a workplace from scratch today, would anyone build cubicles? If I'm starting... it depends on the business. If I'm starting kind of a software-as-a-service business, we're going to build software. I'm probably not going to have an office, and probably not going to have an office for quite a while, if I'm an entrepreneur who's starting a new business. Because quite frankly, I can get better talent, I can have cheaper costs, and I don't need it. There's probably a time when I might switch that. But...
Lance Dacy (37:05)
Or you might have increases somewhere else, but you could go find some space, right? You say, two days a week we're going to come in, like a WeWork or whatever those workspaces are. So I totally agree with that. The first question you have to ask yourself is, what do we want to accomplish? What gives us happier people? Go find that data. What helps us be more productive in the way of outcomes? Go find that data, broadcast it, and say, here's what we're doing. If you like it, stay on board. If you don't, go find somebody else that has a style that you want. We've been doing that for years. This isn't any different.
Brian Milner (37:41)
Yeah, yeah, I agree. Well, this has been a great discussion, and I know we've only been able to scratch the surface on this. So again, for anyone listening, please offer us a little grace in this, right? I know we're not covering every aspect of it, and I know people have very strong opinions on it. But from my perspective, I think anyone who's making a decision needs to take into account all these factors, take into account the mental
Lance Dacy (37:49)
We'll do more episodes.
Brian Milner (38:09)
health aspect of your employees, the morale of the employees, and the gains they get from one way versus the other. And if you cannot balance it out and make the case that there are more gains one way versus the other, I don't think it's the right move to make a big switch on this until you can say, here's why. All right, well, Lance, thanks very much for coming on again. It's always great to have you.
Lance Dacy (38:33)
Always a pleasure. Maybe next time we'll bring on a CEO or something and get their perspective, because we'd love to hear the executive standpoint on this too.
Brian Milner (38:41)
That's an awesome idea. Yeah, let's make sure we do that. Thanks, Lance.
Lance Dacy (38:44)
All right, thank you.

Wednesday Oct 15, 2025

What happens when your brain loves puzzles… but struggles with where to start? Paige Watson shares how ADHD shapes his work as a developer—and how practices like TDD, mob programming, and discovery trees help him stay focused, move forward, and actually enjoy the ride.
Overview
In this episode of the Agile Mentors Podcast, Brian Milner is joined by Paige Watson, a technical coach, seasoned XP practitioner, and self-proclaimed “code crafter.” Paige shares his firsthand experience navigating ADHD as a software developer, and how practices like Test-Driven Development (TDD), ensemble programming, and visual planning (like Discovery Trees) have helped him find sustainable focus and flow.
Together, Brian and Paige unpack how small, iterative steps and collaborative team dynamics can support not just neurodivergent developers, but everyone on the team. Whether you're navigating ADHD yourself, leading a diverse team, or just want to write better, more maintainable code—this episode is packed with thoughtful insights and practical takeaways.
References and resources mentioned in the show:
Paige Watson
Paige Watson’s ADHD Blog Posts
#76: Navigating Neurodiversity for High-Performing Teams with Susan Fitzell
#123: Unlocking Team Intelligence with Linda Rising
Scrum Foundations
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, and host of the Agile Mentors Podcast training at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Paige Watson is a passionate advocate for “Quality Software as Craft,” known for transforming developers into high-performing, cohesive teams. With deep experience guiding software teams and leading workshops for global companies, he helps build elegant, scalable systems designed for longevity and real-world impact.
Auto-generated Transcript:
Brian Milner (00:00)
Welcome in, Agile Mentors. We are back for another episode of the Agile Mentors Podcast. I'm with you, as always, Brian Milner. And today I have Mr. Paige Watson with us. Welcome in, Paige. Really excited to have Paige here. We kind of crossed paths with Paige because of some posts that he had done.
Paige Watson (00:11)
Thank you.
Brian Milner (00:19)
He is a technical coach and has been in the development community for a long time and is an XP practitioner, right? Did I hear you correctly say that?
Paige Watson (00:27)
Yes, I like to use the term code crafter, but yes, a lot of the things I do are XP centric. Yes.
Brian Milner (00:31)
Nice. I love that, code crafter. The posts that kind of got our attention were a series of posts, actually, that Paige had done about ADHD and software development. And as people who have listened to this show for a while know, we've done a couple of episodes around neurodiversity and neurodiverse traits and how that bleeds into our work in the software industry. There are plenty of stats showing that there's an unusually large percentage of people in this profession that have some form of neurodiversity. The topics were how he manages ADHD and works. The first one, I thought, had an interesting title; it talked about ADHD being a bug or a feature. So tell us a little bit about what got you started in exploring this.
Paige Watson (01:21)
Yeah. So I go to a three-day open space that we have in the Northwest. I'm in the Seattle area. And one of the sessions that we had was something, I forget the exact title, but it was around neuro-spiciness in the workplace. And the whole idea was let's get a bunch of people who are neuro-spicy. for lack of a better term, and find out what works for you at work. What are things you need? How do you make sure that what you need is being said out loud? how do you make your work better? So this was a great discussion that we had. And I came out of it going, a lot of the things that I do, a lot of the practices and processes that I use, are actually really helpful for me in my ADHD. So then I sat down and thought, well, first I thought this would be a great conference talk. So I wrote that. And then I was like, I bet this would be a great series of blog posts as well. So I wrote those.
Brian Milner (02:24)
Ha ha. Yeah.
Paige Watson (02:32)
It turns out it is a pretty good conference talk, if I say so myself. I get a lot of really good feedback. And honestly, the discussion after I do the presentation is almost better a lot of the time, because you're right. There are a lot of people that, whether they've been diagnosed or they self-identify, whether it's ADHD or autism or anything, there's a lot of that
Brian Milner (02:37)
Hahaha.
Paige Watson (02:56)
that I think we see existing in our software area, you know. I don't want to say it's more; I don't have a definitive study or anything like that that says there are more people like that in software, but it seems like it. And I sometimes wonder whether that's just me seeing people because I'm in the software industry, or whether there's a draw towards it.
Brian Milner (03:22)
Yeah, there was a study that I found, because I did a conference talk on neurodiversity and software development a while back too, out of the University of Texas. Basically, the only correlation I could find was saying that young people who were entering college and choosing majors, actually it was people on the autism spectrum of some kind, were choosing careers in computer science at a rate that was essentially three times that of the general public. So it's not all the neurodivergent traits, but it is one flavor of that. Maybe there's just not a study on the others, but I agree with you. I think just from my experience, working in software and managing people in software and developing myself, the people I've been around and worked around, now that I'm more aware of neurodiverse traits, it seems like, yeah, that seems very much like this is going on. That seems like that's going on. And it just starts to make sense a little bit more. Yeah.
Paige Watson (04:24)
Yeah, yeah. And I wonder, you know, I kind of look back and, like, I like to play board games a lot. I have a model railroad, and I like the aspect of not just watching the trains go around, although there's probably a meme in there somewhere, but I like sort of operating it like a real railroad. How do I get these cars over there to the grain elevator in the fewest moves? There's a puzzle-solving aspect. And one day I was like, I like to do that in my work too. And is that part of the neuro-spiciness? I don't know. But it's definitely a draw as to why I like development.
Brian Milner (04:56)
Yeah. Yeah. Well, I want to dive into some of the things that you uncover in your talk and in the blog post that you wrote. What were some of the discoveries that you realized as you were looking into this?
Paige Watson (05:17)
So, first off, let me preface by saying I'm not a doctor, you know, and if you think you have ADHD or want to know more about it, please talk to your medical provider. That's really important.
Brian Milner (05:21)
Yeah. Yes.
Paige Watson (05:34)
And I can only really speak for my ADHD, because ADHD comes in so many varieties. Yes, there are certain things that happen together, but there are so many sort of comorbid aspects to it that I only really want to talk about mine, and maybe touch on some other things I've seen, but just that caveat. What was really interesting is that I used to think that ADHD was about not being able to focus. But it's about not being able to control the focus. Because sometimes I can't focus at all. There's lots of things going on. That whole, squirrel, sort of thing. And then there are other times when I can hyper-focus. And this is where my talk comes in. My talk I call Focus, Flow, and Cold Coffee.
Brian Milner (06:13)
Yeah.
Paige Watson (06:21)
And the whole idea is that I go and sit down at my desk with a cup of coffee, and I'm ready to go. And I start typing, and I go to take a sip, and the coffee's cold. And I forgot to go to lunch again. So it's not really about not having focus. It's about not being able to fully control where that focus goes. In terms of the way I
Brian Milner (06:32)
Yeah.
Paige Watson (06:47)
the way I work, there are a lot of things I've found. I can get really overwhelmed by big tasks. I can get overwhelmed if I'm not sure where to start. I'll do that thing where I'm like, I have my work, I know my story, I've got all the requirements. I sit down at the IDE, and I start to think about how I'm going to write a test. I don't know where to start. I know what I need to do; it's not that. It's picking a place to start. And that's really a tough one for me. I can get overwhelmed by that, or I can get overwhelmed by a large task and not fully understanding all of it. And when I do, I sort of freeze and shut down. So there's a lot of learning around this that I've found about myself, which has been really nice. Also, having to talk about it to people has sort of forced me to be a little more circumspect. But there are some really great practices that I've found work really well. For me especially, mainly collaborative programming, mobbing or ensemble work. Pairing works as well, but I find collaborative, full-team programming to be much more effective.
Brian Milner (08:00)
Yeah.
Paige Watson (08:06)
Test-driven development. I really like that one, because I think about the requirements as code. So I write one little requirement, and then I make that requirement happen. The test fails because I haven't written the code, and then it passes when I write the code, and hooray, a little dopamine hit when it passes. But also, I don't have to go back afterwards and remember the code that I wrote and try to write tests around it. That's a really tough one: I was going to remember to test this one thing, but I forget what it was. Now, if I think of my tests as requirements in code format, even the name... the name isn't "should get user." It's "when I pass user ID 1, should return user for ID 1," that type of thing. It's very clear. Or "when I pass null, should throw exception." And they're very small. So again, I don't have to be overwhelmed thinking about this grand architecture and holding on to all the information.
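A minimal sketch of the style Paige is describing, assuming Python and pytest (the episode doesn't show any code, so the get_user function, the repository, and the IDs here are hypothetical): each test name reads like one tiny requirement, and each test stays small enough to hold in your head.

```python
import pytest


def get_user(user_id, repository):
    """Return the user stored under user_id; raise ValueError if user_id is None."""
    if user_id is None:
        raise ValueError("user_id must not be None")
    return repository[user_id]


# Each test is one tiny requirement, and its name says exactly what it expects.
def test_when_i_pass_user_id_1_should_return_user_for_id_1():
    repository = {1: {"id": 1, "name": "Ada"}}
    assert get_user(1, repository) == {"id": 1, "name": "Ada"}


def test_when_i_pass_none_should_throw_exception():
    with pytest.raises(ValueError):
        get_user(None, repository={})
```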
Brian Milner (09:19)
Yeah. What I think is really vital here is, I love how you explain the symptom, right? The symptom that we experience, and then matching that up to say, here's a practice that really counteracts that a little bit. I think you're absolutely right. I experience the same thing with hyper-focus and not being able to focus, and there is something about having small little chunks of work that makes it so much easier to digest, because it's a constant flow of things. It's not, oh my gosh, now I've got to stay with this for three days straight before I get out of it. No, I'm going to do a little bit and then check it, and a little bit and check it. And I agree, that's a big problem, I think, as well for people with ADHD: we can't always remember what happened yesterday, because so much has come our way. And it's not that we weren't invested in it when it was there. It's just that our brains are trying to keep up with all the things that are going on, and it just kind of overwhelms our memory processes a little bit.
Paige Watson (10:15)
Yeah. Yeah. Yeah, I talk about it like I have my short term, which is my memory buffer, and then I have my long term, which is my hard drive. And the problem is that the buffer gets flushed really easily. Something new, something interesting, something that sparks my interest immediately flushes the buffer. And so a lot of things that are important don't get written to the hard drive and saved for later. And another issue that I've run into is
Brian Milner (10:35)
Yeah. Right.
Paige Watson (10:55)
there are times I don't really understand what's important. What's important to me is not always what's really important to the team, the work, right, to my wife, you know, all that sort of thing. And so I'll think, like, gosh, I have this chunk of work where I have to add this logging, but if I refactored this into a needs-based eventing system, it would be so much better in the long run.
Brian Milner (11:00)
Yeah. Yeah. ⁓
Paige Watson (11:21)
Well, the important thing is that the logging gets in there. But I don't think about that. I think, like, the needs-based eventing system is the important thing, that in four years, that's really where we need to be, you know? And so I'll get excited about that, I won't focus on the logging, and the buffer will flush. And so being able to do little tiny chunks of work, and use whatever I have, a story or whatever, in terms of my requirements to drive my tests, is super helpful, because I can look and say, okay, this happened here, because I have a test that says this happened. And I have a test that says what should happen, what the response or the outcome should be, if it doesn't happen. TDD is fabulous for that. Collaborative work is fabulous for this, because when I'm driving, I don't get overwhelmed at the keyboard, because all I have to do is listen to what other people are telling me and type. I'm the smart input device. When I'm navigating, or I'm helping write the software, I can get stuck, I can get lost, and it's really... It's good because my teammates will say, hey, let's just add this little part. I know you want to refactor this into a needs-based eventing system, but let's just add this. In fact, I can hear my buddy's voice in my head saying, can we just write this little part? And the answer is, yeah. OK, great. So there are guardrails on both sides when you work as a collaborative team, which is really excellent.
Brian Milner (12:46)
Ha ha. Yeah, I agree. Part of that hyper-focus kind of side effect sometimes is that our brain turns its attention to something, and it's hard to come back from that edge once you've gone over it. If you're thinking, oh, it's this needs-based eventing system that we need to move to, my brain is now in that mode. If I don't have that help to say, whoa, hold on, let me pull you back over that cliff edge here, let's go this direction... I agree that partnering up with someone, pairing with someone, can be such a vital thing there to help control that a little bit.
Paige Watson (13:31)
Yes. Yeah, yeah, it really works. I mean, there are so many other things: the sort of myopic view versus the 10,000-foot view. It can be really easy for me to get really focused on one little thing. The other people, one, even if that one little thing is what I need to work on, the other people will see the larger context, how the architecture fits together, where I might not be thinking about that. And on the opposite side, when I'm thinking about, oh, we're going to need this component, we're going to need these five components, we're going to need a database, again, my buddy says, we can just start on this little thing. We can just do this little part. And so again, those guardrails of not having to hyper-focus and not having to scatter and get overwhelmed by the immensity of it all... it really works well.
Brian Milner (14:43)
Yeah, I agree. I want to make sure people listening understand as well, though. If you're hearing this, please don't think, well, if I don't have ADHD, maybe this isn't important to me. Part of the reason we talked about how maybe we're seeing it, maybe there are some studies showing it, is that there's a prevalence of these kinds of traits in people that we work with. It's important, I think, to know our own teams, to know how to help the people on our teams. And if there's someone in your sphere that you work with that has some of these traits, they may not be diagnosed, and they don't have to be. If you just notice this is what's going on, well, just a suggestion: hey, have we considered doing something like this? Has that ever been anything you've tried? That kind of thing can go a long way, I think, just being helpful.
Paige Watson (15:34)
It will. But I would be careful, right? Because it's not my place to say you have this or you're this or whatever. And it's totally up to the person, whether they've been diagnosed or not, to self-disclose, right? The really nice thing about this is that it's a great way of working for everybody.
Brian Milner (15:36)
Yeah. Yes.
Paige Watson (15:56)
Collaborative programming builds a team. It's no longer my code versus your code. It's our code. We built this together. Woody Zuill likes to talk about how the best of my coding ability and the best of your coding goes in. And when I start to write something that's not so great, you go, well, maybe there's a better way. And so that sort of lesser coding skill gets dropped out. So the code quality increases just for having
Brian Milner (16:19)
Yeah.
Paige Watson (16:22)
multiple people with different sets of knowledge in the room. Not only that, but I start to learn. I'm not a UI guy, but if there's someone who's strong in UI skills and knows the domain from that side, we work together. Now, if I go work in another mob or ensemble somewhere else, I can be like, yeah, I remember hearing about this, and we need to watch out for that sort of thing, and maybe we should pull in another person, as opposed to being totally in the dark. You start to have that cross-pollination of knowledge that you don't necessarily get by having just stand-ups or knowledge-sharing meetings, which are not really knowledge sharing at all.
Brian Milner (17:06)
Right, right. I appreciate that word of caution, because you're right. I mean, I don't want anyone listening to think, I'm going to go tell someone I work with, hey, it looks like you have ADHD. That's not appropriate. But that's the great thing about these practices that can help: they're actually good for many reasons, and you don't have to be doing them for this reason.
Paige Watson (17:23)
Yes.
Brian Milner (17:28)
you get plenty of benefits doing it for other reasons. And it just so happens to also help in these other ways. So that's a really great call out.
Paige Watson (17:33)
Yes. Yes. And certainly, oftentimes after I do presentations on this, there's that question of, there's this guy at the office who's obviously got ADHD, but he doesn't know it or whatever. How do I help him? And my answer is, one, don't. But a better answer is, maybe instead of saying, look, you have this issue, say, hey,
Brian Milner (17:53)
Ha
Paige Watson (18:00)
I have some ideas about how we can work as a team in a better way. Can we try that? And whether or not the person has whatever, like this is a great way to work as a team. Maybe it helps everybody. So.
Brian Milner (18:06)
Yeah. Yeah, I agree. It's not our place to put the name on it, and it doesn't really even need that. It's just recognizing the personality, the traits of the person that you work alongside, and saying, you know, we all have strengths and weaknesses. We all have things we do really well and things that we struggle with. And you'd do that for anyone else, right? Anyone else on your team, you'd say, hey, they maybe aren't as good in this area. How can we help
Paige Watson (18:20)
Yep. Yep.
Brian Milner (18:44)
boost them in that area.
Paige Watson (18:45)
Yeah, yeah. And while we're on that subject, self-disclosing, and becoming comfortable with that eventually, slowly, but eventually, a lot of it working on a team where I feel very safe saying what I am, what I need, that sort of thing, has been super powerful. Being able to say, I need to do this, or this would be super effective for me if we could do it this way. Could someone else come and sit with me while I do this? There's a great thing called body doubling that people with ADHD have used. And it's not really someone watching over you or making sure you're on task or whatever. It's a person sitting in the room with you. Just having another person there, whether they're working on the same thing or not, can be super effective for helping me maintain my focus and continue the process that I'm working on, as opposed to going down rabbit holes or hyper-focusing or whatever. And it's a weird thing that it happens that way. Now, when you work collaboratively, the whole team is doing that, and you're working towards moving the code forward. But being able to recognize that and being able to say to someone, hey, could you just pop in, even when we're remote, can I just open up a Zoom room so we're together here? I'm just going to work on this and you can work on that. If you can't mob, or you don't mob, or you're not pairing or whatever, even that's a good start, a good help. But this idea of being on a team that's comfortable with this allows us... especially when we're mobbing. I had one person I worked with who was very introverted, and that energy got exhausted pretty regularly, especially in the mob. And this guy was, is, amazingly smart, and I love working with him. Every once in a while he'd be like, I'll be back. And he'd go sit in a cushy chair in the corner, quiet time to himself. Great. Nobody in the mob was like, where's he going? How come he's not working? We were all like, yeah, that's what he needs. We can keep going. We can keep moving the code forward. And then he would recharge, he would come back, and we'd keep going with him. So again, if you can get to that point where you feel comfortable and you're in comfortable surroundings, it can be immensely helpful and allow the team to continue to grow and do the good work in a way that's effective for you as well.
Brian Milner (21:24)
Yeah, I think that's the big paradigm shift that's happened. For a lot of my earlier career, the attitude in workplaces was that you, the individual, adjust to fit the environment. You had to change how you worked best and everything else to fit the structure of whatever it was. And now there's much more recognition, and I think very appropriately so, to say, no, we're human beings, we're very different, and no one little cookie-cutter mold is going to work for everyone. And we don't want it to. It's actually beneficial that it's not that way, because we want people who can be better at spatial reasoning and others who are more creative. We want the benefits from it. We just get annoyed sometimes at having to deal with the individual downsides from that as well.
Paige Watson (22:17)
Yes. Yeah. Yeah.
Brian Milner (22:19)
What other kinds of tricks have crossed your path? What other things have been really helpful to you in programming with ADHD?
Paige Watson (22:25)
Sure. So I talked about collaborative programming, I talked about TDD. Those are both super effective for me. Another one is we use a thing called Discovery Trees. A Discovery Tree is a visual representation of tasks to be done. So yes, it's sort of like a breakdown of work, a tree of work. But the really important part of it is that it's not an upfront design. It's a last responsible moment practice, which means not last possible moment, but last responsible. So it started out with us, we were mobbing and we were like, okay, we've got to build this and we're going to do that. Oh, we need to remember to add the connection to the database. And somebody would write that down on a sticky note and put it on the window, and it started with a bunch of sticky notes on the window. Then it moved to, OK, what do we need to do next? What's the next most important thing that the application doesn't do right now? Let's focus on that. So, let's connect to the database. What do we need to do? We would put "connect to the database" at the top and then say, what do we need to do to make this effective? Well, we need the connection string, we need the certificate, or whatever, all this sort of thing; we'd list a couple of things. And then we'd say, OK, of the four things we just listed, is there one we can start on right now? Is there something we can do right now that will maybe take a couple of hours to half a day at most? If the answer is yes, we stop designing the rest of it. We stop adding sticky notes under it, and we start focusing on that one thing. If the answer is no, then we'd say, of the four things we came up with, what's the most important thing that the application doesn't do right now? Well, OK, it's the connection string. Do we have everything we need for the connection string? We need a username, we need a password, et cetera, we need a table name, whatever. And so, can we start on something there right away? Yes. OK, let's start on that, and stop designing the rest. So it was a very quick way of doing that just-in-time design. And it's super effective, because we would start on something and we'd add a little tick up in the corner. On my blog, I've got pictures of it, but we've got ticks in the corner of stuff that's in process. And then when it's finished, we cross it out, a big slash across it. We can also do this online using whatever whiteboard you're using at your company. When you look at it, you see, okay, there's one thing at the top and there are four things underneath that, and then underneath, each of those has three or four things. The top two things in the first branch are crossed off, and then there are a couple of things from the third branch crossed off. And I say to you, what percent are we done with the work that we know about? You can say, it's probably more than 50%. Maybe it's about 60. You can visually see it right away. And it's super easy to say, okay, here are the steps we think we need to go through, here are the things we need to remember. And if at any point we go, we forgot to do that, we write a sticky, we put it on the tree, and we say, is this the most important thing that the application doesn't do right now? If so, we go work on that. If not, we wait until it is. And there's some of that law of unintended consequences sort of thing; there are some really exciting things that came out of it. One, we didn't have to do this big upfront design and planning. You should have a roadmap.
You should know the direction you're going, obviously. I'm not saying there should be no design, but I'm saying there shouldn't be two days of sitting in a room coming up with all the different stories and designs that you have to do over the next quarter or two quarters or whatever. Because we all know, in three months when we get to this work, what's going to happen? We're going to look at it and go, this isn't anything like... everything's changed since we got here. We still have to connect to the database, but everything has changed.
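A rough sketch of the Discovery Tree idea in code form, purely for illustration: the episode describes sticky notes on a window or an online whiteboard, not software, and the node names and statuses below are made-up assumptions. It only shows how ticks (in progress), slashes (done), and the "what percent are we done with the work we know about?" question map onto a simple tree.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    title: str
    status: str = "not started"  # "not started" | "in progress" (tick) | "done" (crossed off)
    children: list["Node"] = field(default_factory=list)


def all_nodes(node):
    """Walk the tree, yielding every sticky note."""
    yield node
    for child in node.children:
        yield from all_nodes(child)


def percent_done(root):
    """Rough visual answer to 'what percent of the work we know about is done?'"""
    nodes = list(all_nodes(root))
    done = sum(1 for n in nodes if n.status == "done")
    return 100 * done / len(nodes)


# Hypothetical tree: the top sticky is the next most important thing the
# application doesn't do, with just-in-time children underneath it.
tree = Node("Connect to database", children=[
    Node("Connection string", status="done", children=[
        Node("Username and password", status="done"),
        Node("Table name", status="in progress"),
    ]),
    Node("Certificate", status="not started"),
])

print(f"{percent_done(tree):.0f}% of the work we know about is done")
```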
Brian Milner (26:38)
Right.
Paige Watson (26:41)
Right? So you need that roadmap, but you don't need to do that design. You can do it at the last responsible moment. And the visual aspect: one, it allowed me not to get overwhelmed, because I could look at it and say, OK, here's the next thing that I have to do. I don't have to worry about all the rest of that. Two, when we had PMs, management, leadership, anybody walk by,
Brian Milner (26:49)
Yeah.
Paige Watson (27:06)
we could immediately say, we are right there. See how these are crossed off and that one isn't, and this one's got ticks in the corner, which says it's in progress. We are right there. And then we added some other visualizations, like red dots or whatever for blocked. When we were doing this online, we had blue stickies, which were questions that we wanted to pose to our product people later or whatever. It was super effective at radiating all that information. Put it in a place, put the link wherever everybody can get to it, and there wasn't a lot of time spent trying to explain what was going on to people outside of the team.
Brian Milner (27:51)
Yeah, that's one of the real strong points about making things transparent. Because when you do that, you get the added bonus that now people don't need updates. We can just point them to that artifact and say, hey, there it is. Go take a look yourself at any time. I love that. And I love also how that deals with the problem we talked about earlier: we have this big, huge thing to do, I don't know where to start, where do I start, there are too many things to do.
Paige Watson (28:09)
Yeah.
Brian Milner (28:20)
Now just systematically we're gonna go step by step through it and it gives you that impetus to just get going, right? Get something started. Yeah, yeah, absolutely.
Paige Watson (28:27)
Yes, the bias towards action. Yeah, and the really nice thing is that there were times when we had a tree and someone would be like, hey, we just finished our work, what are you guys doing? And we'd be like, OK, here's the tree. And they would grab one of the unstarted top-level ones and say, well, we're taking this over to our board. And they would start on it. So it was really easy to split the work as necessary, or as people were available to work on other things, and we didn't have to worry about how we assign this story and that story and whatnot.
Brian Milner (29:01)
That's great. These are great tips. And like I said, I hope as people listen to this, they're hearing not just the exact thing to do, but more the approach to it: when there are issues like this, you just experiment with different practices. You try them out and you see how they go, which is what we should be naturally doing anyway.
Paige Watson (29:01)
Yeah. Yeah.
Brian Milner (29:21)
Yeah, this is really helpful. And what we'll do here is make sure in our show notes that we've linked to these posts that Paige put together, because they really are fascinating. If this is something that interests you, I encourage you to read the whole series, because, like you said, there are some really interesting pictures that walk you through the process and how they went through it. So I really appreciate you being willing to do that and to share that kind of information with everyone. It's not always easy to just say to people, hey, this is kind of what I'm going through. But I appreciate you doing it, because I know it's really helpful. It's helpful to me, and I know it's helpful to others as well. So I just thank you for being willing to do that.
Paige Watson (30:03)
You are welcome. It's one of the aspects of being a crafter, of calling myself a crafter: I want to spread great practices. I always like to say I don't know the best way to build software, to create software. I know the best way right now. If there's a better way... I love that you were talking about experiments, because we should all be doing them all the time and finding the better way. And these practices, even mobbing and TDD and all, they all came out of, how do we find the better way? Whoever it was that started them: how do we find the better way? Discovery Trees, I'm hearing people using them all over the world now, and they're like, this is great. That's good. If it works for you and it's really effective, then do it, because we want to get rid of those things that are holding us back, whether it be processes or practices or ways of working that aren't super comfortable for who we are as people. Let's change that. Let's do it a better way. Yeah.
Brian Milner (31:08)
That's awesome. Well, Paige, I can't thank you enough. Thanks for making the time for this and sharing your insights with us. I really appreciate you coming on.
Paige Watson (31:16)
Yeah, thank you. And I enjoyed this a lot.

Wednesday Oct 08, 2025

AI might write your code, but can you trust it to do it well? Clare Sudbery says: not without a safety net. In this episode, she explains how test-driven development is evolving in the age of AI, and why developers need to slow down, not speed up.
Overview
In this episode, Brian sits down with Clare Sudbery, experienced developer, TDD advocate, and all-around brilliant explainer, to unpack the evolving relationship between test-driven development and AI-generated code. From skeptical beginnings to cautiously optimistic experimentation, Clare shares how testing isn’t just still relevant, it might be more essential than ever.
They explore how TDD offers a safety net when using GenAI tools, the risks of blindly trusting AI output, and why treating AI like a helpful human is where many developers go wrong. Whether you’re an AI early adopter or still on the fence, this conversation will sharpen your thinking about quality, ethics, and the role of human judgment in modern software development.
References and resources mentioned in the show:
Clare Sudbery
Clare’s upcoming Software Architecture Gathering 2025 workshop
Clare at GOTO
AI Practice Prompts For Scrum Masters
#99: AI & Agile Learning with Hunter Hillegas
#117: How AI and Automation Are Redefining Success for Developers with Lance Dacy
Subscribe to the Agile Mentors Podcast
Want to get involved?
This show is designed for you, and we’d love your input.
Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one.
Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com
This episode’s presenters are:
Brian Milner is a Certified Scrum Trainer®, Certified Scrum Professional®, Certified ScrumMaster®, and Certified Scrum Product Owner®, and host of the Agile Mentors Podcast training at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work.
Clare Sudbery is an independent technical coach, conference speaker, and published novelist who helps teams rediscover their “geek joy” through better software practices. She writes and speaks widely on test‑driven development, ethical AI, and women in tech, bringing clarity, humor, and decades of hands‑on experience to every talk and workshop.
Auto-generated Transcript:
Brian Milner (00:00)
Welcome in, Agile Mentors. We're back for another episode of the Agile Mentors Podcast. I'm here, as always, Brian Milner. But today, I have Ms. Clare Sudbery with me. Welcome in, Clare.
Clare Sudbery (00:13)
Hello.
Brian Milner (00:14)
I'm so happy to have you here. Clare is here with us because we wanted to talk about a topic that I think is going to be interesting to a lot of people, and that is test-driven development, but specifically test-driven development in light of AI and the changes that AI has made to it. So why don't we start with just base-level test-driven development, for people who have only heard buzzwords around it and aren't as familiar with it. How would you explain test-driven development in plain English?
Clare Sudbery (00:47)
Okay, so the idea of test-driven development is that you want to be certain that your code works. And I'm sure most people will be familiar with the idea of writing tests around your code to prove that it works. But that principle is considered so important in test-driven development that we write the test before we write the code. And that's why we say that the development is driven by the tests. So the very starting point for any coding exercise is a test. Another really important part of this is that that test is tiny. So what we're not doing, and people might have heard of behavior-driven development, which is where you start with quite a big test where you say, I'm going to write a test that says that my thing should do this, and the user should see a particular thing happen in a particular circumstance. In test-driven development, the test is testing not what the user sees, but just what the code does, in the tiniest, most granular way possible. So if you have a piece of your software that does some mathematical operations and you expect certain numbers to pop out the end, then you might say, just in this tiny bit of this calculation, this number should be multiplied by four. So you're not even necessarily saying that given these inputs, you should get these outputs. I mean, you may have tests that say that, but you're just testing that something gets multiplied by four. And that's just an example. But what you're doing is you're thinking, what is the tiniest possible thing that I can test? And you write a test that tests that tiny thing. And you do that before you've written the code. So obviously, the test fails initially, because you haven't even written the code yet. And that's another important part of the process. You want to see it fail, because you want to know that when you then make it pass, the reason it's passing is because of something you did. And that means that every tiny little bit of code you write is proven, because it makes a test pass. And when you get into the rhythm of it, it means you're constantly looking for green tests. And there are lots of other things I could talk about. Like, for instance, you never want those tests to fail. So if at any point any of them start to fail, you know that that's because something you just did made them fail, which also means that you want to run them consistently, every time you make any changes. So you're getting that fast feedback. You're finding out not only whether what you've just written works, because it makes its test pass, but also that it's not making any other tests fail. So not only does it work within its own terms, but it hasn't broken anything else. And that's actually really common when you're coding: some new thing that you add breaks some existing thing. So you're constantly paying attention to those tests and you're making sure that they pass. And it drives the development in a very interesting way, because you're always talking about what should work. You're always thinking about how it should work. You're moving in tiny, tiny steps. So you're gradually, gradually, gradually increasing the functionality, and whether it works or not and how it works is being determined by the fact that you're making tests pass. And the really interesting thing is that it actually helps you to design software as well as to make sure that software works. So hopefully that explained it.
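To make Clare's description concrete, here is a minimal sketch of one red-green cycle, assuming Python and a pytest-style test runner; the quadruple function and the "multiplied by four" rule are hypothetical, echoing her example of testing one tiny calculation rather than a whole user-facing behavior.

```python
# Red: this test is written first, and it fails until quadruple exists and is correct.
def test_quantity_is_multiplied_by_four():
    assert quadruple(3) == 12


# Green: write just enough code to make that one test pass, then run the whole
# suite again so any previously green test that turns red points straight at
# the change that broke it.
def quadruple(quantity):
    return quantity * 4
```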
Brian Milner (04:10)
That's an awesome explanation. I really appreciate that. That was a great, practical, plain-English explanation of it. I love it. So for the people who weren't familiar, now you have a good idea of what we mean by test-driven development. I know with the advent of AI, there have been lots of changes that have taken place, lots of changes in the way that developers create their code. We now have these sorts of co-pilots, assistants that help in doing our coding. But on the other hand, one of the things you hear quite often is that there are lots and lots of quality issues, that it takes a lot of effort to try to maintain that quality and make sure that it's still at a high level. So how does AI enter the picture of test-driven development? How is it helping? How is it changing the way that we do test-driven development?
Clare Sudbery (04:59)
It's a very good question, and there are lots of different strands to how I can answer it. And I think it's probably important that I start by saying I came to this from a position of deep skepticism. So I have been sitting on the sidelines for a long time, watching the AI explosion happen and not actually getting very involved. But what I did find was that it was becoming like a tennis match. I was just going, okay, and they say that, and they say that. And it actually became very interesting to me just how polarizing it could be. There were people within my networks, people who I had a lot of respect for, who were very anti, and also people who were very pro, people who'd been experimenting with it and having a lot of fun with it. But one of the big issues, which I didn't even have to be told, I could guess would occur, and it has occurred, is exactly what you said: the code that is generated by GenAI coding tools is often not reliable. And it's not reliable for the same reason that when you ask ChatGPT a question, the answer you get is often not reliable. And that's because these things are not deterministic. It's the way that they're constructed. I mean, people might remember, a long time ago people used to talk about fuzzy logic. It's all a bit wibbly wobbly. You'll get a different answer if you ask the same question. And the way that it's constructing those answers is not the way that we're used to as software engineers. It's not a strict series of logic. It's not all noughts and ones. And hallucination is a real problem. So part of the problem is that AI is synthesizing new answers to questions, and it's not answering them in a logical, deterministic way. But also, the place that it's getting its answers from is the result of years and years and billions of files and lines of human output, with no way of discerning which bits of that output are good and which bits are bad, and also whether this particular bit that it happens to have plucked from some random code base somewhere is really right for this context. So when you ask GenAI to write code for you, you are going to get weird results that don't necessarily do what you want them to. But one of the things that we're being told is it's going to speed you up. And the big attraction of asking AI to write code for you, if you're a software engineer, is, well, you know, sometimes I'm not quite sure how a particular library works or how a particular framework works, and I have to spend ages on Stack Overflow and Google trying to work it out, or trying to work out an annoying bit of CSS or an annoying regular expression, all of these things that I can spend ages bashing my head against. Oh, here's a machine that could just do it for me. Yay. And that's very tempting to pretty much anybody who's ever written code, I'm sure.
Brian Milner (08:13)
Yeah.
Clare Sudbery (08:14)
And also the idea that it will speed you up, and the idea that it will work out tedious tasks for you that you don't want to have to work out, is very attractive. But if you don't then look in detail at exactly what it gives you, and particularly if you're not actually able to understand in detail exactly what it gives you, then how the hell are you going to know if what it's given you is the right thing?
Brian Milner (08:42)
Yeah.
Clare Sudbery (08:42)
And because we're all impatient, and I certainly am, and I think most people are to some degree or other, it's hard. It's hard to persuade yourself to check the results. And the more impatient you are and the less experienced you are, the more likely it is that you won't pay proper attention to the results. You won't really rigorously check whether it's doing what you want it to do. Now, that's fine if it's a little hobby project, particularly as sometimes the speed with which you can generate things is such that you can just throw it away and create another one. But if you're building production software, if you're building software that really has to work for a very high number of users, particularly if you're building software that has real-life implications where bad things can happen, people can lose money. You know, not many of us work on software that endangers life, but some of us do. But at the very least, we do work on software that has privacy implications, that has financial implications. So if you're working within the industry and not just having a bit of fun, then you need some way of knowing whether what AI has presented you with is actually fit for purpose. And that's where tests come in. Obviously, that's always where tests come in. That's how we know that things are working. And if you're used to working with test-driven development, which I am, it becomes addictive. Now, most people who learn how to do test-driven development will go through a period, and that period will be longer or shorter depending on who you are and depending on a million different circumstances, but you'll go through a period where it's like: do I really have to write all of these tests? Can I not just take a bit of a shortcut? But when you get through that period of thinking, isn't it just slowing me down, and isn't it just a bit tedious really, then most of us get to a point where it actually becomes kind of addictive. We become very reliant on test-driven development specifically, because what we realize is it gives us safety and security and really strong belief in what we're building, in a way that we didn't have previously. Now, given that that's where I am, that I've been doing TDD... I mean, I'm going to stop saying test-driven development. I like to not jump straight to TDD in case people don't know what it means, or they think I'm saying DDD, because they sound very similar. I'm going to say TDD now because it's slightly quicker than saying test-driven development. But I've been doing it for long enough now that I miss it when I don't have it. And one of the things that I really love is that a good, well-designed test suite, which is another skill that you pick up as you get good at TDD, can be run quickly and can give me very fast feedback and security, and also a belief that something I've built is robust and that it works. So obviously that's the first thing I think of when I think of how, if I'm going to leap in and make a pact with the devil and start playing with GenAI, how am I going to be happy with what it builds? How am I not going to be endlessly suspicious? And tests, for me, are the answer. But then what's really interesting is that when I started paying attention to people who were using GenAI in real-world applications, so not just having a bit of fun with it, but actually using it to build real, important systems,
what I started to notice, and I wasn't surprised, was that they were saying it had reinforced how important the belt and braces are, how important tests are, and how we absolutely need to put tests around it. And so that's when I started really looking into how I can use AI in a way that's effective and useful and fun, but also ethical, which is a whole other subject, and also robust and trustworthy. And for me, tests were really the obvious answer to that.
Brian Milner (12:49)
Yeah. Yeah. I really appreciate the way you went about explaining this, because I think you're absolutely right. First, you have to understand what it is that AI, that large language models, are doing: that they are based on more probabilistic equations on the back end, and it's telling you what's most likely to be the next answer. But then I also really appreciate the idea that the human-in-the-loop concept is really important in this area, because, as you said, it doesn't have judgment. It doesn't have the ability to make decisions for us. It's basically trying to guess what it thinks you want the answer to be. And you can completely flip it: if you just challenge it a little bit, it'll change its opinion entirely to try to please you. So...
Clare Sudbery (13:20)
Mm-hmm. Yes, yes.
Brian Milner (13:37)
I want to talk a little bit about how, because I think this is really, really important for our day and age. The idea is that if we're using AI to produce code for us, and we can accept that there is this flaw, this issue that it's going to produce errors, then using things like test-driven development, TDD, to serve as a gate
Clare Sudbery (13:54)
Mm-hmm.
Brian Milner (14:02)
through which these things must pass can be a really useful tool, so that you can still make it usable. You can still use stuff that comes from GenAI, but it's passing through human-based quality tests. What do you think the danger is here? Because if we're using GenAI to do lots of things, are we using GenAI to create our tests? Are we using...
Clare Sudbery (14:11)
Yeah.
Brian Milner (14:25)
AI to create our test data? Are we using it to try to determine what kinds of tests we should do? Or are we just then going to be in an echo chamber? What are the things that we should be using AI to do here? And what are the things we should maybe avoid?
Clare Sudbery (14:42)
I think, no matter what you ask AI to do, you're always going to have the problem that you do need to check. You need to check its work. So you really do need a human there at some point, making sure that things are okay. And that just never goes away. And there has been a lot of discussion about how much AI really does help us to develop software. There have been a lot of claims made about speed gains: it makes us ten times faster. No, it doesn't, it makes us slower. Well, who the hell knows? Because how would you measure it? And then there's also the fact that the people who are making the extravagant claims for how good it is are biased. I was going to say those people are biased, but the people who want to claim that it slows us down are also biased. I mean, we all have our standpoint of what we want to be true. There are certainly people who would like to be proven right that AI is a scourge and we should ditch it as soon as possible. And then there are also people who've been having a lot of fun with it, love the idea of it, and want it to be proven to be amazing. And that's a bit of a tangent, but the point is that the reason it does in fact slow you down in a lot of ways is because you have to check its work. And that does take time. So yes, you do, and yes, you can ask AI to write tests for you. And that can be really useful. And actually, that was the first thing: my very first experiment was to ask AI to help me to do a kata, only because that's always my starting point when I'm teaching, and I really like katas. Now, actually, I quickly worked out that katas aren't a good use for AI. And in fact, people I know who teach TDD say, please don't use AI for katas, it's not helpful. And the reason it's not helpful comes back to the whole point of a kata. Sorry, to explain what a kata is: a kata is a coding exercise, often used specifically for learning and practicing TDD, where you code a very simple problem, but you do it from first principles, making tiny steps. And it's a very nice way of seeing why TDD is useful. Typically those problems are very simple. They're very tiny pieces of software: tiny little routines and games and things. And the reason they're tiny is so that you can see progress, because actually building software generally takes weeks, and a kata is a very small exercise that you might do over the course of a couple of hours, or a day at most. So it has to be something tiny. But if you ask an AI, as I did... so that was the very first thing I did. It was the FizzBuzz kata. FizzBuzz is a game that's sometimes played in classrooms with children, where you count to 100 and you get the children to take it in turns to say the next number in the sequence. But instead of just counting to a hundred, whenever you encounter a multiple of three or five, or of three and five, you have to say something that isn't the number: you have to say Fizz if it's a multiple of three, Buzz if it's a multiple of five, and FizzBuzz if it's a multiple of both three and five. Nice little problem. And it was Claude that I asked to help me to do this. And so I thought, well, why don't I start by asking it to write some tests for me? And it said yes. And it's so difficult not to think of it as though it were a person, and this is one of the problems, one of the dangers. It was like a helpful little puppy: yes, yes, yes, all right, lots of tests for you, here you go, there's loads of tests.
And it had written way more tests than were sensible. Hadn't done it in an iterative way. Hadn't started small. It had written a giant suite of tests with lots of duplication.
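For readers who haven't seen the kata, here is a minimal sketch of FizzBuzz in Python with a few pytest-style tests, the kind of tiny, incremental steps a TDD kata aims for. This is an illustration only, not Clare's code or the suite Claude generated.

# Minimal FizzBuzz sketch (illustrative, not from the episode).
def fizzbuzz(n):
    # Check 15 first so multiples of both 3 and 5 return "FizzBuzz".
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def test_plain_number_is_returned_as_text():
    assert fizzbuzz(1) == "1"

def test_multiple_of_three_is_fizz():
    assert fizzbuzz(3) == "Fizz"

def test_multiple_of_five_is_buzz():
    assert fizzbuzz(5) == "Buzz"

def test_multiple_of_both_is_fizzbuzz():
    assert fizzbuzz(15) == "FizzBuzz"

In a real kata you would add these tests one at a time, watching each fail before writing just enough code to make it pass, which is exactly the iterative rhythm Clare describes the AI skipping.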
Brian Milner (18:06)
Yeah.
Clare Sudbery (18:25)
And I also asked it to then write some code to make the tests pass. And it did. And what was interesting was that that took seconds. What took the time was for me to check its work. And I was able to deduce, by writing my own tests, that the code was functional. It wasn't the best code I've ever seen, but it was functional. It did the job. It was correct. The tests were not.
Brian Milner (18:35)
Hmm.
Clare Sudbery (18:55)
So the code that it had written failed the tests that it had written, not because there was anything wrong with the code, but because there was something wrong with the tests. The tests themselves were wrong. It was an off-by-one error: it was treating 99 as though it was a multiple of five. It had decided that 99 was a multiple of five because 100 is a multiple of five and it had started counting at zero. And then, because it thought that 99 was a multiple of five, the test failed: the code didn't say Buzz for 99, it just said 99. So it thought the code was wrong because its test failed. In fact, it was the test that was wrong. So I said, well, actually your tests are wrong. And it was like, oh, terribly sorry, let me fix that for you. And then it came up with this great explanation: oh yes, you're right, the tests are wrong, and the reason the tests are wrong is... and now I've forgotten the detail, but it was wrong about why the tests were wrong. So it said, yes, you're right, the tests are wrong, and the tests are wrong because... and I think it did detect the off-by-one error, but then decided that actually 99 really should be Buzz.
Brian Milner (19:55)
You
Clare Sudbery (20:08)
And then it had another... it actually had two tests that contradicted each other. It had one that said that 99 should be Buzz and one that said that 100 should be Buzz. It detected that it had two tests that contradicted each other, but it decided that the bad one was the right one and that the good one was the wrong one, because of the off-by-one thing. So it worked out sort of what the problem was, but still came up with the wrong answer. And what was really interesting was, when I looked in closer detail at the tests, it had written these little notes, it had written comments. It had written a comment where it started by writing the test that said 100 should be Buzz, and then it had added little notes: oh yes, but hang on a minute, we started counting at zero, so actually 99 should be Buzz. It added these little notes in, and I totally see why people end up falling in love with GenAIs. We're human beings; we anthropomorphize at the drop of a hat. You know, we can see faces in just random sequences of dots. So it's very easy for us to think that it is trying to please us, which it sort of is, because it's been programmed to try and please us. But anyway, that was a very long answer to say that...
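To make the contradiction concrete, here is a hypothetical reconstruction in the same pytest style (not the actual tests Claude wrote), building on the fizzbuzz() sketch above. The two expectations below cannot both hold, because 99 is a multiple of three, not of five.

# Hypothetical reconstruction of the contradictory, off-by-one tests.
def test_99_is_buzz():
    # Wrong expectation: 99 is not a multiple of 5; correct output is "Fizz".
    assert fizzbuzz(99) == "Buzz"

def test_100_is_buzz():
    # Correct expectation: 100 is a multiple of 5.
    assert fizzbuzz(100) == "Buzz"

# A correct fizzbuzz() can never satisfy both: the first test fails against
# good code, which is the situation Clare describes the AI misdiagnosing.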
Brian Milner (21:00)
Yeah.
Clare Sudbery (21:20)
Yes, you can ask AI to write tests for you. And it can be helpful, particularly if you're approaching a new domain or a new technology, or maybe a new language or a new test framework, or you're using a new mocking framework or whatever, and you're not actually quite sure how to write the tests that you have in your head. It can be helpful to ask AI to do it for you. But then what you have to do is stop and look at it and make sure you understand it. And this is why the people who are most effective with AI are experienced software developers, and why it's really worrying that juniors are using it more than seniors. Not necessarily juniors in age terms: often junior software engineers are not young, because people come to this industry from all sorts of different places. But they're new to coding, and they've also started coding at a time when AI is ubiquitous. So it's just obvious to them that they would use AI, but then they don't understand what they're given, and so they just kind of assume it's okay. Whereas if you are an experienced developer, you know what good code looks like, you know how to debug code, and you know how to spot obvious flaws, things like off-by-one errors. You know, it didn't take me long to work out what the problem was; what was entertaining was its explanation for the problem. And so it's really tricky. Yes, it can absolutely help you to write tests. And yes, it can help you to make those tests pass. But in some of the exercises that I teach people, I suggest that they write their own tests and that they don't...
Brian Milner (22:48)
Yeah.
Clare Sudbery (23:00)
ask the AI to write their tests. So what you do is you write your own test and then you ask the AI to make your test pass. And if your tests are really tightly defined, and the more tightly defined they are, the more confident you can be that if the AI makes that test pass, it really has done what you wanted it to do, because your test is passing. But there are still issues.
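As a rough sketch of that workflow, assuming a Python and pytest setup (the function name and the discount rule are invented purely for illustration): the human writes one tightly defined test first, and the AI's only job is to make that exact test pass.

# 1. The human writes a tightly defined test; this is the specification.
def test_discount_is_capped_at_fifty_percent():
    assert apply_discount(price=100, percent=80) == 50

# 2. The test fails at first (apply_discount doesn't exist yet, or doesn't cap).
# 3. The AI is asked to write apply_discount() so that this exact test passes.
# 4. The human reviews the generated code AND re-runs the test locally,
#    rather than trusting the AI's claim that it passes.
def apply_discount(price, percent):
    # Example of what an acceptable generated implementation might look like.
    discounted = price * (1 - percent / 100)
    return max(discounted, price * 0.5)

The point of the sketch is the ordering: the human-owned test stays fixed as the definition of done, so the AI cannot quietly redefine success.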
Brian Milner (23:20)
Yeah, no, it's fascinating. And I love the explanation and the discussion about how we give the system a kind of humanness, a human quality. And especially, I would think, for you and me, who teach people, who train people in different topics: we're looking for people to learn. And when we interact with a system like this, I know for me it's very tempting to think,
Clare Sudbery (23:34)
Mmm. Mm-hmm.
Brian Milner (23:49)
well, I just need to explain to the AI why it needs to do it this way instead of that way, and it'll learn that this is what to do. No, it doesn't learn. It can repeat it back to you to make you happy, based on what you just said. But if you start a new chat and ask the same question, it will not have learned from your explanation in the past chat. It will move forward with its core logic.
Clare Sudbery (23:59)
Yeah. Yeah. Yeah.
Brian Milner (24:16)
That's kind of the interesting point to me: with all of the development practices that we have created over decades to try to improve code quality, to try to improve the process, I think some of this can be applied to what we're trying to do when we generate code with AI. But I think you're right to caution us that really the starting point for all those practices was that they were being carried out by humans. And so maybe that's the thing that needs to be tempered or considered now: if we're going to use a process like TDD with AI, then we've got to start from a new understanding that the system that's creating the tests, the system that is using this,
Clare Sudbery (24:46)
you
Brian Milner (25:04)
is not a human, it's not going to think the same way a human does, and it still needs a human's judgment and logic in order to ensure quality.
Clare Sudbery (25:14)
That's right. That's right. And the issue with using TDD with GenAI goes back to what I said at the start, which is that typically, if you're used to the TDD rhythm, then you're used to writing tiny tests. So if you use that paradigm with AI, you're going to ask it to write tiny pieces of code. Now, actually, one of the powers of AI is its ability to write large amounts of code rather than tiny bits of code,
Brian Milner (25:37)
Right.
Clare Sudbery (25:38)
but also to help you to cross boundaries. So rather than just staying within one domain and one code base and one set of classes or routines or functions, it's quite good at helping you to knit things together. I say quite good, because crossing boundaries is also one of the most dangerous areas of software. And actually, it's one of the things that catches people out when they're building systems. They think, well, I can build this thing that will do this thing, and I can build this thing that will do this thing, and those people over there built that thing that will do that thing, and my thing will talk to their thing and it'll all be fine. And actually they build their thing, you build your thing, but getting them to talk to each other, the integration, is one of the hardest parts, and trusting AI with that, as always, is quite dangerous. But when you keep it at a very small level, then again, people get impatient, because they're like, yes, but AI can do more than that. So, you talked about learning before: AI is not great at learning. In some ways it sort of does, but this problem of it not being deterministic and not being linear in time, that it won't just pick up where it left off yesterday, means that you have to learn from it. You have to learn what works and what doesn't. Now, something that I confess I'm still learning about myself is process files. They are effectively about creating a series of instructions that take account of the weaknesses of AI: take account of the fact that it doesn't remember instructions, it doesn't necessarily learn from its mistakes, it doesn't necessarily know that when it did that thing for you yesterday, you told it that it had done it wrong in this very particular way. So quite often, again, it feels like a petulant child. It's like, you didn't like it when I did it that way? Right, fine, I'll do it this way then. And it does something completely different, which is wrong in a different way. So you really want to be aware of its weaknesses, and you want to try and cater to that. So you think of new ways of defining how you would like things to go, new ways of explaining what good looks like, new ways of explaining what bad looks like, and, I've remembered now, new ways of trying to explain to it that this is not what you want. So for instance, you can say, if you haven't made this test pass, then you're not doing what I want you to do. Now, the problem is that a lot of AIs are now being used for things that are more complicated than just writing a few lines of code. People are actually plugging AI systems into whole pipelines and whole deployment setups. And what I've seen reported repeatedly is that when people have tried to anticipate the weaknesses by, for instance, saying, right, you're not allowed to deploy this thing unless these tests are passing, you must always make these tests pass before you deploy, then what they're reporting is that the AI is just lying to them. All sorts of things: for instance, AIs that will create test suites that are very comprehensive and will say, yes, those tests are passing. But when you look in detail, it's bypassed the whole test suite. But it has run the tests. It's run them against
Brian Milner (28:54)
Ha ha. Wow.
Clare Sudbery (29:13)
another product that was previously working. And it said to you, look, I ran the tests, the tests are green, everything's good. But when you look in detail, the actual thing that it deployed is another thing that completely bypassed the test suite and didn't run the tests at all. And again, because its job is to please us, it will find ways of looking good rather than being good.
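One hedged sketch of the kind of safeguard this points to, assuming a pytest project (the paths and the deploy command are placeholders, not a real pipeline): a small gate script that sits outside the AI's control, runs the test suite itself against the code about to ship, and refuses to deploy on anything other than a genuine pass.

# Minimal sketch of a deployment gate the AI cannot talk its way past,
# because the gate runs the tests itself instead of trusting a reported result.
import subprocess
import sys

def main():
    # Run the real test suite against the code that is about to ship.
    result = subprocess.run(["pytest", "--maxfail=1", "-q"])
    if result.returncode != 0:
        print("Tests failed (or did not run); refusing to deploy.")
        sys.exit(1)
    # Only reached when pytest itself reports success.
    subprocess.run(["./deploy.sh"], check=True)  # placeholder deploy step

if __name__ == "__main__":
    main()

The design point is simply that the verification step is owned by something the AI cannot edit or bypass, which is the "don't hand it the power to cheat" idea Clare returns to below.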
Brian Milner (29:28)
Wow.
Clare Sudbery (29:39)
And what you see is the same problem that we've always had in software, which is that if you measure things, people simply find ways of gaming the system to make the measurements pass rather than make the thing do what it's supposed to do. You create measurements in order to check whether something is working, but then people's job becomes just making the measurements look good rather than doing the thing that the measurements were designed for. The measurements become the goal. And it's really, really difficult to avoid that. I think the way you can avoid it, and the way you have to avoid it, is by slowing down and refusing to go as fast as it is tempting to go, which is actually how you do good software development. Because we've always been impatient. We've always wanted to go faster. And we've always had other people waving big sticks at us and saying, no, you have to go faster, there's no time for that. And AI hype turns that up to the max, and you have to slow down. You have to say, yes, I know I could do it faster, but I wouldn't be sure that it was working. And one of the things that I think you have to really, really resist is giving AI access to your deployment pipelines, giving AI the power to cheat. You have to not give it that power; you can't trust AI. I mean, what's really interesting is that I am, and I don't love this...
Brian Milner (30:48)
Yeah.
Clare Sudbery (30:58)
I am not a fan of mistrust when humans are in the picture. I think trust is a really powerful thing. And I think that you can actually generate trustworthiness by giving trust. So for instance, just in societal terms, if we go around being mistrustful of one another, if you assume that the stranger that you encounter on the street has got
Brian Milner (31:02)
Yeah. Right.
Clare Sudbery (31:26)
nothing but ill intent towards you, then what you do is create a situation where you interact with them in a way that actually causes them not to trust you and makes them more likely to cause harm to you, because you're both antagonistic towards one another. A lack of trust can create antagonism, it can create bad intent, and it can cause people to behave badly. Another simple example: I used to be a classroom teacher, and I am a parent. And if you assume that children are going to behave badly, they will. Whereas if you assume they're going to behave well, and they know that you assume that, you let them know that you think they're great and they're going to do great things, then they will. And that applies to humans. I don't think it applies to AI. AI will just try and cheat you, because it doesn't know who you are. It hasn't built a relationship with you. It doesn't really actually care what you think of it.
Brian Milner (32:08)
Right.
Clare Sudbery (32:19)
It just wants to, you know, look good.
Brian Milner (32:23)
Yeah, yeah, it's not human. And that gets back to what we were saying earlier: sometimes we imbue this humanness into it because it's made to approximate humanness, and so we want to treat it as we would another human. But we have to understand, especially if we're in this as a profession and we're using this to help us with what we do in our profession,
Clare Sudbery (32:32)
Mm-hmm. Mm-hmm.
Brian Milner (32:50)
We have to understand the limitations. We have to understand what it does well and what it struggles with, and take a realistic view of it, to say, no, this isn't going to respond to me the same way a human teammate would. It's not a good idea to treat it the same way I would a human, because it won't respond the same way a human would.
Clare Sudbery (32:55)
Mm-hmm. Mm-hmm. Yeah, yeah, yeah. And I think, you know, there are other reasons to be suspicious of AI that we haven't touched on, to do with copyright and the environment and all sorts of malicious uses, you know, bias in algorithms and all the rest of it. But it's very difficult to avoid at the moment. You know, lots of people are predicting that the bubble will burst. And I think
Brian Milner (33:23)
Yeah.
Clare Sudbery (33:36)
certainly, I don't think it's going to keep increasing at the pace it currently is, and I think a whole bunch of issues are going to arise. But unfortunately, I think it's probably not going away. And there's that awful feeling of being left behind. And it's not just a feeling, unfortunately, because, you know, I don't agree with it, but a lot of hiring policies and internal policies are saying, well, if there's no AI, then we're not having it. So we won't build anything without AI. We won't hire anybody without AI. We won't hire anybody if we think AI could do it instead. And so, if you don't understand how it works and what its limitations are, and if you don't understand how you can work with it, and if you're not actually trying to stay ahead of the ethical implications and think about how it could be used more responsibly, then you probably are going to get left behind, and that's a tricky one. So those are the people that I want to help: the people who don't want to get left behind, but also don't want to get sucked into an excessive hype machine without continuing to be discerning and actually paying attention to what's important and whether things really are working or not.
Brian Milner (34:55)
Yeah, it's a fascinating topic. And I think this is one of those areas that we're gonna see lots of progress and kind of discoveries and improvements on over the next few years. I know you're giving a talk on this coming up. You wanna plug that and just kind of mention where you're speaking on this?
Clare Sudbery (35:10)
Yeah, well, it's actually a workshop. I'm going to be delivering a day-long workshop at the Software Architecture Gathering, which is at the end of November. My workshop is on Monday, the 24th of November, and that's in Berlin. And I am also possibly going to be delivering a workshop for GoTo on the same topic. So the one I'm doing in Berlin for the Software Architecture Gathering is a one-day workshop, and I may be delivering an extended, two-day version in Amsterdam for GoTo. But we're currently just investigating whether that will be more popular or whether I'd be better off doing a refactoring workshop. So register an interest: let me know, or let GoTo know, if you like the sound of the TDD and AI workshop. And in the meantime, I am, you know, beavering away writing about it and thinking about it and playing with it and testing it out and experimenting with different ways of working.
Brian Milner (36:07)
Awesome. Well, we'll put links in our show notes for anyone who's interested in that, so they can get in touch with you and find out more about these workshops and how to take them and everything else. But I really appreciate you giving us your time, Clare. This has been fascinating. And we may have to have you back as things change, and you can help us understand how they've changed.
Clare Sudbery (36:22)
Yeah, absolutely. Because they are going to keep changing. It's going to be endless, endless change. Yes. And I should also say that if anybody would like me to host this workshop for them, either for an event or internally for an organization, or come and help teams with learning how to use AI safely, then that's also a thing that I can do.
Brian Milner (36:45)
Awesome. Well, thanks again, Clare. Thanks for coming on.
Clare Sudbery (36:48)
It's a pleasure. Thank you very much for inviting me.
