
Tech Leader Chats: How to use DORA metrics to improve software productivity with Nathen Harvey

[Image: title of the talk with a photo of the speaker]

The DORA metrics have become the industry standard for measuring the performance of software development teams.

The research has been around for over a decade now, starting as a collaboration between Puppet and researchers including Dr. Nicole Forsgren, Gene Kim, and Jez Humble. It later spun out and ran as its own entity before being acquired by Google Cloud, where it sits today.

Since the 2018 publication of Accelerate, the 4 key metrics referenced in the book have become known as the DORA metrics and gained traction as the standard for measuring success in engineering. Every year, the DORA team at Google Cloud refreshes the research, identifying new emerging trends.

This talk provided an intro to DORA and tips on how we can use these metrics to improve.

About the speaker

We were fortunate to hear from Nathen Harvey, who leads the DORA team at Google Cloud. Nathen has been involved with DORA since nearly the beginning! Nathen has learned and shared lessons from some incredible organizations, teams, and open source communities and regularly supports companies who are putting insights from DORA research to work.

How to use DORA metrics to improve

See below for:

  • Key takeaways from the talk
  • The recording & slides
  • Links to resources mentioned in the talk
  • A transcript of the talk
Key takeaways

  • 4 key reminders on using DORA
  • It comes down to learning!
  • Throughput and stability are key
  • 3 things to do when using DORA
  • Insights from 2023 DORA research

Recording and slides

You can view the slides from the talk here - and see below for the full recording.

Bonus

Nathen very generously followed up with replies to the questions we didn't have time to get to in the talk – see below for those:

  1. When do you think DORA metrics shouldn’t be used?
  2. What are some strategies to speed up code reviews?
  3. What are ways to reduce the cycle time?

Resources mentioned in the talk

Transcript

Lauren Peate  0:00  
All right. So now that we've gotten that started, welcome to our Tech Leader Chat. We are really excited about this one, and I will introduce our speaker in just a moment. But first I wanted to share a little bit about the purpose of this group. This is a space for human-centric tech leaders to learn and grow together. Our goal with these chats is to bring you some really amazing speakers (I'm really excited about our speaker lineup) and then also give you a chance in the second half to connect with each other and share the things that are most top of mind for you. As I mentioned before, we do have a code of conduct; we want to make sure this is a space where we're treating each other with respect and supporting everyone here. My name is Lauren. I'm the CEO and founder of Multitudes. We're an AI agent for product delivery, and we care a lot about how to think about metrics in human-centric ways, so this topic is near and dear to my heart. Before I introduce the speaker, let me just run through the plan for the day. The first half is where we'll hear from Nathen and give you a chance for Q&A. We've also brought in the questions that you shared beforehand through the meetup RSVP, so we'll weave those in. This part will be recorded, and we'll share it afterwards; I know some folks are waiting for the recording, so that'll come in the next day or two. In the second half we'll turn off the recording and move into small breakout groups. That's a chance to meet some peers and talk about what's top of mind for you, maybe some challenges you didn't want to raise in the broader Q&A. Nathen is going to stick around and jump between a couple of groups, so you'll have a chance to ask him more questions if you'd like to and if he's able to join your group. And as I mentioned, put your questions in the chat; we've got Jenny here from the Multitudes team who's going to help monitor the chat and make sure we get to your questions. So that's all the admin. Let's dive into the exciting part. I knew Nathen's work before I met him; I've actually been a member of the DORA community for quite some time, and I'm sure Nathen will talk more about that in a moment. Then we had the chance to meet in person last year at a conference on AI and had some great chats. Something I really appreciate about Nathen is how he brings such a human-focused approach to the metrics: the metrics are always in service of people. That's something I can really see in how he thinks about it and how he approaches it, so I'm really excited to learn from him and his work in this space. I should mention that he leads the DORA team at Google Cloud, and he has been involved with DORA since the very beginning; basically, he hired Dr. Nicole Forsgren, one of the DORA researchers, for a job. If we have time for it, he also has lots and lots of stories about how DORA has evolved and changed over the years. So with that, I'm going to stop talking and stop sharing, and I'll hand it over to you, Nathen. Welcome.

Nathen Harvey  3:01  
All right. Thank you so much. Thanks, Lauren, and thanks, everyone, for having me. I'm really excited to be here today. I do want to make one minor correction there: I didn't actually hire Nicole, though I may have helped her get hired. I was her manager for a little bit of time at a company that we both worked at, obviously before Google. Dr. Nicole Forsgren is certainly the one who really brought all of the science to DORA. She then founded a company called DORA, that company got acquired by Google Cloud, and I've been working on it ever since, for sure. It's super exciting. Nicole has moved on from Google Cloud and continues researching in this space (sorry, no pun intended, if you're following her work; and if you aren't, then that wasn't even funny). But she continues her work in this space, and I'm really thankful that we're still very close friends and we get to chat and collaborate on all of this stuff. So, let me pull up my slides. Oh, no, these aren't the slides. These aren't the slides we were looking for. What I wanted to start off with today is inspired by some of the questions that I see over here in the chat. I wanted to share this resource that we recently built. It's called conversations.dora.dev, and I think that word "conversations" is so powerful when you think about metrics, metrics of any kind. I truly believe that metrics are there to help us start a conversation, and it is in using those metrics that we can really drive conversations about how we want to change as an organization, as a team, et cetera. So I think it's really important. On conversations.dora.dev you'll see some provocative questions, and if something there inspires you, I would encourage you, maybe in your next retrospective, maybe in your next team meeting, to pull up conversations.dora.dev, see what question pops up, and just spend five or 10 minutes as a group thinking about that question. Oh, and, I mean, what kind of meetup would this be if I didn't give you something to do? If you click on this little question mark here, you can actually suggest a question, so you can contribute questions right back to the rest of the community and help us all drive great conversations about how we get better at getting better. Okay, now off to my slides. Let's see, here they are; I just have to navigate the desktop. Well, Lauren, thanks again for the great introduction. I'm so excited to be here. So let's talk about DORA really quickly. First: hi, I'm Nathen. Look, I have one shirt. It's always the same, it's always the avocados. That's just my jam, if you will. So first, why DORA? Why do we even care about this stuff? Why does DORA exist as a program? Well, one of the things that we know for certain: I don't know exactly where you work. I know where one of you works, because you mentioned it, and I know where the folks at Multitudes work, but I don't know much about the rest of your companies. Except I do know one thing: all of your companies are software companies. It doesn't matter what the business of your business is, you are in a software company, and getting better at software is important to the goals of your business. It's important to the daily work that you do.
And it's probably important to your own well-being; it's probably something that you enjoy doing, I sure hope so. We know that technology enables great digital experiences and great user experiences. It brings real value not just to our businesses but to our users, and maybe not even just to our users: maybe there's a loftier goal, maybe we're helping improve the world. So what is this all about? The DORA research program has been around for over a decade now. It's kind of hard to believe there are still things for us to learn, but we get excited every single year to learn new insights and new ideas. A few years back, Dr. Nicole Forsgren got together with Jez Humble and Gene Kim and wrote this book, Accelerate. Maybe you've read it; if you haven't, I highly recommend it. This book is a summation of the first few years of the research, and it really talks about the research methods that are used and some of the initial findings. It's all about what we need to do as technology organizations. There's all of this pressure on us to accelerate: we need to deliver faster, we need to engage better, we need to anticipate things that are changing, we need to respond to security threats faster and faster and faster, always faster. But faster isn't the only thing that matters, and it's certainly not the entire focus of DORA. So what do you actually need? Because it is about more than just going as fast as you can. Maybe this is the question we need to answer: how can your organization optimize value delivery from an investment in technology and technologists, in you and me? We are the technologists. In order to get there, one of the things you're going to need is a way to assess how you are doing today. What's happening today? How are we performing as a team, as an organization? Once you've done that, you're next going to need a way to identify areas to invest and make improvements. The truth is, if you stop for five minutes and think about areas where you could improve your team, your organization, your company, you're going to come up with tens, maybe a hundred, different ideas. Well, that's too many to fix tomorrow. So how do you prioritize those? How do you focus in on the one thing, or the two things, that you need to change right now? And along with that prioritization, as you're making changes, you're going to need a way to understand how those changes are impacting things; you're going to need that feedback loop to understand and help you learn how you're progressing. Because the truth is, as we make improvements, sometimes we take steps forward, and sometimes our improvements become, in scare quotes, "improvements" that take a step backwards. That's okay, because we learned along the way. And speaking of learning, this is something that you're really going to need: that organizational muscle and institutional culture to repeat all of this over time, to take those lessons and understand that learning and failing is no different from learning and succeeding; it's the learning that matters. And so really, what you need is a way for you and your team to get better at getting better. That one statement, I think, really summarizes the ethos, the philosophy behind DORA. It's all about continuous improvement and continuous learning: how do we as a team get better at getting better?
So we want to take one small step today and another small step tomorrow. Sometimes, like I said, those steps will be sideways, sometimes they'll be backwards. That's okay, as long as we keep learning. Now, many people know DORA for the DORA metrics. You've probably heard of them; if you haven't, I'm going to give you a quick intro and share my thoughts on how these metrics work. From the very beginning, we in the DORA research program have had these four metrics, and you can actually take the four and boil them down to two. What we're measuring when we think about software delivery performance is throughput and stability: the throughput of your system and the stability of your system. Specifically with DORA, we focus on software delivery performance; that's one of the areas we focus on, and when we talk about these four keys, that's what we're talking about, software delivery performance. Why software delivery performance? Why not the full flow from idea all the way through to customers? Why did we decide as a research group to focus in on software delivery performance? One of the primary reasons is that when we look across the industry, we saw at the beginning of this research program, and we continue to see today, that delivering software, delivering change, is a constraint, a bottleneck within many, many organizations. Maybe in your organization you have an innovation center or an innovation lab, or maybe you have innovation days where you get together and you innovate. And sometimes that innovation yields great things.

Products, tools, things that you built that you can put your hands on. And then sometimes you struggle with getting those products and ideas out of the lab and into the hands of your customers. That's a software delivery problem. How do we solve that? We also, of course, want a way to measure it so we can assess how we're doing. So these are the four keys. Like I said, we think about throughput and stability, and I like to boil these down to four basic questions. How long does it take for a change to go from committed to the version control system, or changed on the developer's workstation, all the way through to production? How frequently are you updating production? When you do update production, what is your change fail rate? That's kind of the scientific name for this metric; I have a different name for it. I call it the "oh, expletive" rate, because it's what happens when you push to production and someone shouts out an expletive: "Oh, expletive, we'd better roll that change back," or "Oh, expletive, something went wrong with this change, we have to push forward a hotfix." What you can't do is wait until the next release; you have to take immediate action. And when you have to take immediate action like that, how immediate are you? How immediate are the results of those actions? That's your failed deployment recovery time: when you have a bad deployment, how quickly can you recover from it and get your customers back to a place where they're happy? Now, a couple of other things about these four metrics. The first and most important thing is that you have to look at all of them, and you have to have shared accountability for all of them. We aren't working in the days of yore, where we would turn to our developers and say, "Hey, those throughput metrics, those are yours, you have accountability for those," and then turn to our operators and say, "Those stability metrics, they're yours, you control those." When we work like that, we know what happens. I used to be an operator, a system administrator; I was responsible for all of the production systems. How do I keep my production system stable? Easy: I accept no changes. If you want to change something about the system, no, you can't do that, because you're going to mess with my bonus, you're going to mess with my potential to get promoted, you're going to mess with my metrics. That's a bad way of working; we're fighting against each other. We have to have shared accountability and joint ownership of these. The second thing I'll say is that these metrics are really meant to be used at the level of an application or service. I don't ask, for example, "How frequently does Google deploy code to production?" Well, the answer is high. It's a very large number, and I don't know what it is, and even if I did, I couldn't use it to do anything; it wouldn't inform any decisions for me. Instead, we look at this at an application or service level. So perhaps you work at an old bank. This bank has been around for 150 years; we're not one of these newfangled startups that just gets to move fast all the time. I'm working in a bank with 1,000 different applications. If you want to roll DORA out across the entirety of the bank, you'll probably have 1,000 different measurements; well, I guess that makes it 4,000 different measurements. But think about it: if you consider the mainframe that runs the back end of the bank, can you measure these four metrics? Absolutely. And they probably matter.
But in the bank you probably also have a retail banking site and a mobile application, and we would never expect these four metrics to be the same across all of those different types of applications. Still, these measures can help us understand how we're doing today so that we can start to prioritize where we want to make investments. You can use them for any type of technology you're delivering, and they should sit at the application or service level; I was about to say the team level, and the reason I don't say team is because oftentimes an application is owned, managed, or worked on by a cross-functional team, with many different teams coming together to do that.
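To make those four questions concrete, here is a minimal sketch of how a team might compute the four key metrics from its own deployment records. The record layout and field names are hypothetical, not anything DORA prescribes; the point is simply that each metric falls out of timestamps most teams already have.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment log for one application: one record per production deployment.
deployments = [
    # commit_time: when the change was committed to version control
    # deploy_time: when it reached production
    # failed: did it trigger a rollback or hotfix (the "oh, expletive" case)?
    # restored_time: when service was restored, if it failed
    {"commit_time": datetime(2024, 3, 1, 9, 0), "deploy_time": datetime(2024, 3, 2, 15, 0),
     "failed": False, "restored_time": None},
    {"commit_time": datetime(2024, 3, 3, 11, 0), "deploy_time": datetime(2024, 3, 5, 10, 0),
     "failed": True, "restored_time": datetime(2024, 3, 5, 10, 45)},
    {"commit_time": datetime(2024, 3, 6, 14, 0), "deploy_time": datetime(2024, 3, 8, 9, 30),
     "failed": False, "restored_time": None},
]

window_days = 7  # period covered by this log

# 1. Lead time for changes: commit to production (median is a common summary).
lead_time = median(d["deploy_time"] - d["commit_time"] for d in deployments)

# 2. Deployment frequency: production deployments per day over the window.
deploy_frequency = len(deployments) / window_days

# 3. Change fail rate: share of deployments needing a rollback or hotfix.
change_fail_rate = sum(d["failed"] for d in deployments) / len(deployments)

# 4. Failed deployment recovery time: how long until service was restored.
recoveries = [d["restored_time"] - d["deploy_time"] for d in deployments if d["failed"]]
recovery_time = median(recoveries) if recoveries else timedelta(0)

print(f"Lead time (median):     {lead_time}")
print(f"Deployments per day:    {deploy_frequency:.2f}")
print(f"Change fail rate:       {change_fail_rate:.0%}")
print(f"Recovery time (median): {recovery_time}")
```

However you compute them, the point above stands: look at all four together, with shared ownership, and measure per application or service rather than across the whole organization.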

So, use them all, with shared, joint accountability, on any type of application, at the application or service level. All right, I want to take us all over to dora.dev for a minute, because I want to show you a couple of resources that you'll be able to take away after this afternoon's, this morning's, this evening's call, wherever you are in the world. So here I am over on dora.dev, and I just want to confirm that you can see it says "get better at getting better" right at the top. All right, great, I've got a thumbs up there. I am ready to get better. So, how are you doing today? One of the things I want to show you first is the DORA Quick Check. You can come right over here with your team, and this is how I recommend you do it: get your team together at your next retrospective or planning session, or whenever your team is next all together, throw this up on a wall or a projector, and have a conversation. Hey, what is our lead time? How long does it take for a code change to go from the developer's workstation all the way through to production? And of course, "from the developer's workstation" means committed into your version control system, because you're all using version control, right? Because if the answer to that is no, we don't have any other questions for you; we should all pause and go get some Git training, or some Mercurial, whatever. Just put all of your code and configuration in version control, please. I'm sure you're already doing that; sorry for the soapbox. Okay. So, lead time: how long does it take from committed until it lands in production? On a team I worked on years ago, we were agile, we worked in two-week sprints, and at the end of every two weeks we pushed code to production. So my answer there would be one week to one month. Your deployment frequency: how frequently are you deploying? Again, we were deploying every two weeks, so between once per week and once per month. Next up, the expletive rate: when we pushed out a change, how often did we have to roll back? I'm not proud to admit it, but the truth is it was about 30% of the time that we had to roll back those changes. And then, how long did it take us to recover when we pushed a bad change? Usually less than an hour, but sometimes it took a little more than an hour, if I'm being honest, so I'm going to say less than one day. All right, I've answered these four questions. It was easy for me because I didn't have to discuss the answers with anyone, but when you do this together as a team, you might actually have some differences of opinion. Why? Because you have different lived experiences depending on where you sit in the organization, what you do with the application, and so forth. Now that I've done that, I can view the results, and this shows me roughly how this team lands relative to all of the teams we've talked to in our research program. We get a score, in this case a 5.8. Is that good or is that bad? I don't know. What was it last week? Or more importantly, last month or last quarter? How has it changed over time? You see, this one number is kind of meaningless, except that it gives you a good place to start. Okay. But knowing where we are is only the first step. The next step is: how do we prioritize what we want to do to improve? How do I get better at my throughput and my stability?
Well, if I come over here to the Research tab really quickly, this shows me the DORA research model. I'm just going to zoom in a little bit, because I want to talk about how the research actually works. Each year we go out and run a survey; we collect a bunch of data from organizations of every shape and size, we analyze all of that data, and we learn a lot from it. One of the coolest things we can do with it is build a predictive model. What we can say through our research is that software delivery performance, those four metrics, predicts better organizational performance and better well-being for the members of your team. So that's good. But how do you get better at software delivery performance? I'll give you a hint: it's not by mashing the deploy button over and over and over again. I mean, that will increase your deployment frequency, but it's not actually doing anything. Instead, you have to think about and look at the capabilities that enable your team to achieve good software delivery performance. The capabilities that we look at in our research, and this is where we go pretty deep, are technical capabilities, things like continuous integration, but there are also things beyond technical capabilities. There are process capabilities: what does it take for a change to go from committed, to approved, and then into production? A change approval process that's heavyweight and opaque is not good for your software delivery performance. And probably most important is culture. How are your teams working together? Are you truly embracing learning as a foundational principle for your organization? Are you okay with taking a step backwards from time to time, as long as you're learning as you go? Now, as I said earlier, when you have a minute to think about all of the things you could improve, you're going to come up with lots, and you can't improve lots of things all at once. This diagram is lots of things, and this diagram also represents only a subset of the things we've investigated over the years. So we're already trying to tighten up the scope, but it's not tight enough; there are still too many things. If I go back to the Quick Check, though, beyond this 5.8 we also help you by asking questions about three of those capabilities. It's really important that, as researchers, we can't just ask you, "Do you do continuous integration?" That's kind of like asking, "Do you do DevOps?" Well, what does DevOps mean? I don't know. Or I do know, but my definition is definitely different from yours, and it's different from Lauren's, and it's ridiculous, right? So instead of asking "Do you do continuous integration," we ask about the characteristics of that particular capability: code commits result in an automated build of the software. Is that true? Do you strongly agree, disagree, et cetera? So we put a bunch of statements in front of you and ask how you respond to them. Oh, except I want to change these really quickly, because I was on the wrong side of the scale there; we were not so good with the continuous integration. Next up we have loosely coupled architecture. There's a thing about loosely coupled architecture: we call it architecture, so you think systems, but it's really about the teams. Are they loosely coupled teams?
And if you read those statements, which I'm not going to do for you now, you'll see what I mean there. Then finally we get to culture: is information actively sought? Are responsibilities truly shared? Are you learning from mistakes? And so forth. Okay. Now, with all of these randomly selected, not-at-all-considered answers given (you should not do that; you should consider them all with your team), we can get a stack rank of how these capabilities line up. What does this tell us? Just by looking at this, I can tell you your culture is probably in pretty good shape; I wouldn't invest in making it a little bit better, because it's probably not the thing holding you back. This is about the theory of constraints: find your bottleneck, find what's holding you back. In this case, based on these answers, I would say it's continuous integration. So now, continuing on dora.dev, I can learn more about that particular capability. We have these capability articles, and their format is basically the same; the details are important, but not for right now, so I'm going to give you just a quick pass through them. We describe what that thing is and how to implement it. We talk a little bit about some of the objections you're going to run into when you try to implement it, and some of the common pitfalls you want to watch out for, and then, maybe more importantly, ways to measure it. Oh boy, more metrics. DORA is far more than four metrics; there are all kinds of measures and metrics around, so pick the ones that are going to be meaningful for you. But now as a team, maybe we say: hey, for the next two iterations, sprints, whatever, however you manage your work, for the next quarter, we're going to focus on improving our continuous integration capability. And we expect that as we improve our continuous integration capability (remember, that's over here as one of the technical capabilities), it's going to help us get better at continuous delivery, which is going to help us improve the software delivery performance metrics, which is going to help us get better organizational performance and have happier humans on our team. We're all going to win when this happens. Except, just remember, this is improvement work. It's hard work, and it's work that you have to be committed to. It's worth it. Sometimes it's going to work and sometimes it's going to fail; just learn as you go along the journey. Okay, back to my slides. I want to make sure we have plenty of time for questions and so forth.
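That stack-rank step is simple enough to sketch in code: average the team's agreement with each capability's statements, then sort with the lowest score first to surface the likely constraint. The capability names, statements, and scores below are made up for illustration; this is not the actual Quick Check scoring model.

```python
from statistics import mean

# Hypothetical responses: each capability has a few statements rated by the
# team on a 1 (strongly disagree) to 5 (strongly agree) agreement scale.
responses = {
    "continuous integration": [2, 1, 2],  # e.g. "commits trigger an automated build"
    "loosely coupled teams":  [3, 4, 3],  # e.g. "we can deploy without waiting on other teams"
    "generative culture":     [4, 5, 4],  # e.g. "we learn from mistakes"
}

# Average each capability, then sort ascending: the lowest score is the
# likely bottleneck and the best candidate for the next improvement effort.
ranked = sorted((mean(scores), name) for name, scores in responses.items())

print("Capability stack rank (lowest = likely constraint):")
for score, name in ranked:
    print(f"  {score:.1f}  {name}")

print(f"\nSuggested focus for the next quarter: {ranked[0][1]}")
```

In practice the arithmetic matters less than the conversation: the team debates the answers together, then commits to improving one capability and re-checks later.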

Each year, as part of our research, we publish a report, and I want to share some of the findings from the 2023 report. These are just the high-level points, a quick pass; it's a nice hefty report, and I absolutely recommend you download it and read it. I'll show you exactly where, and maybe you've already found it on dora.dev. But here we go, lightning round. First, healthy culture. Without a healthy culture, it doesn't matter how good your continuous integration is. If your teams blame each other, if they fight with each other, if the incentives are all messed up, it's not going to matter. Healthy culture is key to this. The second big finding from 2023 is that focusing on your users is super important. In fact, we found that teams and organizations that focus on their users and the needs of their users get about 30% better organizational performance. What does it mean to focus on your users? Certainly this helps with software delivery performance, but it also helps with things like reliability. How reliable do your users need your system to be? They need it to be reliable enough that when they use the system, they get from it what they expect, that it's able to meet the promises you've made to them. Does that mean five nines? I don't know. It might mean nine fives; probably not nine fives, which would be about 55.6% uptime, but somewhere between nine fives and five nines is probably the reliability that you need. It truly is based on the needs of your users, so really thinking about and focusing on the users is important. Next up, quality documentation. I know you've heard this before: we should write better documentation. We have data that backs this up. The data shows us that teams with stronger documentation are better able to implement those technical capabilities, and beyond that, documentation enhances them so that they have an even bigger impact on the overall performance of the organization. Next up, flexible infrastructure. Using the cloud is awesome, unless what you're doing is using the cloud in the exact same way you used to use your data center and you have to open a ticket in order to get resources. Think about the flexibility that cloud offers and utilize it. Next up, underrepresented groups. This is always an interesting thing for me to talk about; I am clearly an overrepresented member of our community, middle-aged white dude here, hello. But the findings we have here are fascinating. One of the things we found, as an example, is that individuals who take on more toil, who do more toilsome work, tend to have higher levels of burnout. That's not a surprise. But what we also find is that underrepresented members of our community tend to take on more toilsome work; maybe that's just how work gets distributed on their team. As leaders, maybe that's something we can watch out for, so that we can distribute the work evenly and equitably, so that everyone has a chance to shine and everyone carries their weight. And then the sixth finding I'll share is code review speed. This one was also fascinating: teams with faster code reviews have 50% higher software delivery performance. Fifty percent higher delivery performance with faster code reviews. This is fascinating for a bunch of reasons. First and foremost, when you think about code reviews, it's a tools problem. Oh, but it's also a process problem. Oh, and it's also a people problem. All three things kind of come together.
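For teams that want to baseline code review speed before trying to improve it, here is a minimal sketch of one way to measure review turnaround from pull request timestamps. The data shape is hypothetical; in practice you would pull these fields from your code review tool.

```python
from datetime import datetime
from statistics import median

# Hypothetical pull request records: when review was requested,
# when the first review arrived, and when the change was approved.
pull_requests = [
    {"opened": datetime(2024, 3, 4, 10, 0),
     "first_review": datetime(2024, 3, 4, 15, 30),
     "approved": datetime(2024, 3, 5, 9, 0)},
    {"opened": datetime(2024, 3, 6, 9, 0),
     "first_review": datetime(2024, 3, 7, 16, 0),
     "approved": datetime(2024, 3, 8, 11, 0)},
]

# Two useful views of "code review speed": time until someone first looks
# at the change, and total time until the change is approved.
time_to_first_review = median(pr["first_review"] - pr["opened"] for pr in pull_requests)
time_to_approval = median(pr["approved"] - pr["opened"] for pr in pull_requests)

print(f"Median time to first review: {time_to_first_review}")
print(f"Median time to approval:     {time_to_approval}")
```

Either number can then feed the same loop as the four keys: baseline it, pick one improvement, and re-measure.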
When you think about how code reviews actually work, all three come into play. I think it's also fascinating to consider: if your organization is having problems, if code review is a bottleneck for your team, and you're thinking, "AI is here, it's going to save us, we're going to write a whole lot more code," well, if code reviews are your bottleneck and the AI starts pumping out even more code, that's not helping that bottleneck at all. So maybe this whole idea of setting a baseline, finding your bottleneck, and then trying to improve there can also guide some of the use cases for where we should use artificial intelligence. All right, there, I've done my bit to mention artificial intelligence; it is 2024, you can't not mention it. Let's jump into some of the questions. What questions do you have? I'm going to start with one that came in first, because I've pre-loaded a couple of these. So, the first question...

Lauren Peate  28:53  
Let's do maybe a couple of minutes of the questions that you pre-loaded; we have some more in the chat that I want us to get to. So maybe five minutes.

Nathen Harvey  29:02  
Great, I'm going to do one of these, and then we'll go to the ones in the chat, because I really would rather talk to the people who are here. What's changing in the DORA reporting? Are we adding a new metric to measure start to finish, I mean, idea to operation? I talked a little bit about this. Flow metrics are out there; that's a real thing, and maybe that's a place to look when software delivery performance is no longer the bottleneck. It's a good opportunity to look elsewhere. Or if software delivery performance is not something you have direct control over, maybe that's a good place to look as well. And the truth is, we're always changing something about our research and the metrics that we have. All right, let's take a question from the folks who are here.

Speaker 1  29:41  
Yeah, I've been compiling them in the background, and there's some overlap between the questions being asked now and the ones people sent in beforehand. A big thing that came out is how to get started, and it's all the human side of things: how do you get buy-in from leadership? How do you reduce defensiveness, or worries that people are going to be micromanaged, when you start introducing metrics for the first time? And how do you get that adoption throughout the team and organization?

Nathen Harvey  30:07  
Yeah, that is a great question, and I just want to come back to this diagram here. As we think about those four metrics, there are a couple of things to know. Unfortunately, and I see this happening all the time, they can be misused. The truth is, they can be misused by leaders. I've heard leaders say, "Every one of my teams has to be an elite performing team by the end of the quarter," or "by the end of the year." What happens then is that it forces people to just try to meet that metric; the metric has now become the goal. That's not a good, healthy way to do this. The flip side is that sometimes those metrics are misused by the practitioners: "Oh, my leader wants us to up the deployment frequency? Great, I'm going to automate clicking the deploy button every two hours." Now I'm deploying every two hours and nothing's changing, but I'm clicking the deploy button every two hours. And honestly, you know what? I kind of like that you're misusing it that way, because what you're actually doing is exercising your deployment pipeline every two hours and seeing that it works. Maybe that's not so bad. So yes, sometimes they're misused. We try to build these metrics so that when you think of all four together, there's some tension between them, and they help reinforce each other. But the way to get started, how do I bring a leader on board? Well, you bring in someone like Nathen Harvey, or Lauren, and have them talk to your leaders, and we'll help with that. Maybe. But the truth is, I think it's important that you have a conversation with your leaders, and maybe the thing your leader doesn't want to hear is, "Hey boss, we're going to go improve our software delivery performance." Maybe the thing your leader would rather hear is, "Hey boss, what's keeping you up at night? What concerns you about our current engineering excellence, our current engineering productivity, our current engineering practices?" Start to understand where they are and build that empathy. Then you can start talking about how software delivery performance can predict better organizational performance, better well-being on the team, and so forth. And then just follow the process I outlined at the beginning: set a baseline, identify something you want to fix, go fix that thing, retest your baseline. Did this change help or did it hurt? The final thing I'll say here, sorry, such a long answer, is beware of the J curve. Do you know the J curve? I'll explain it. When you go to fix a thing, here's what's going to happen; I can virtually guarantee it, and we see it in the data all the time. You go to fix a thing, and you'll see some immediate gains; things are getting better. But over time, as you continue to work on that thing, your productivity enhancements are going to drop off, and you're going to drop into that sort of trough of disillusionment. Your leader needs to know that in advance, and you need to know that in advance, because you need the patience for that J to come back up and then really start to see that performance pay off. That is true for reliability practices, it's true for continuous integration, and it's so early that we can't yet say it's true for AI, but I'm pretty sure it's going to be true for AI. So just beware. All right, sorry. Next question, let's go.

Speaker 1  33:24  
No, that's great. I'm actually going to make that question a bit longer, because there were literally eight people who asked similar questions. What about the flip side? Rather than getting buy-in from the higher-ups, what about getting buy-in from the teams themselves, the individual contributors who might have been burned in the past with metrics, or have some baggage, and are just worried about being micromanaged?

Nathen Harvey  33:44  
I think the best way to help your teammates, the practitioners, get on board with these metrics is to talk about how software delivery performance helps organizational performance, but also to talk about how it improves well-being, and to talk about how the way we're going to measure these metrics is not with a dashboard. We're going to start by measuring these metrics with one of my favorite tools, one that's readily accessible to almost everyone: a post-it note. Just write down, what were your software delivery metrics over the last iteration? All right, as a team, let's have a conversation: what could we do to improve that? It gives us a nice common language and grounding for how to have these conversations and where to make those investments. As a team, if we stay committed to improving, it doesn't really matter how our software delivery metrics are going; they're going to take care of themselves. Let's focus on where the pain is across the team. And hey, wouldn't you like to alleviate some of that pain? The truth is that most teams know what hurts. Most teams have an idea about how they can alleviate some of that pain and some of that bottleneck. Unfortunately, too many teams simply don't have the time, the capacity, the cognitive availability to go focus on those things, and that's where, as a team, you have to commit to it. Dave Farley, in his book Modern Software Engineering, has a quote that goes along these lines, and I'm going to paraphrase him: why is it that we as software engineers are one of the few professions where we have to ask to be allowed to do a good job? Right? We know what's right; we should just decide to do the right thing.

Speaker 1  35:37  
Awesome. I have two questions about documentation and quality that could sort of be merged. One was, are there any practices at Google to make sure that documentation is always up to date? And the other was, do you have any recommendations for quality documentation, especially for teams that are using manual testers and have low automated test coverage? How can they use documentation to help?

Nathen Harvey  36:02  
Great. So really quickly, I'm on that Research page, and I'm going to click on 2023, which is where you can find the research from 2023, and then I'm going to go to Questions. These are the questions that we asked in the survey, and from there they're organized by topic and then alphabetically. I'm just going to scroll down to the documentation questions. The reason I do this is that it's really important to understand how we measure documentation, and these are the statements, or the questions, that we ask of folks. You'll see here that we ask: can you rely on that technical documentation? Is it easy to find? Is it updated as changes are made? So this is how we assess it, and again, we're focusing here on internal documentation. These are great questions to ask of your team. Now, inside of Google, do we have processes to help keep our documentation up to date? Yes, we absolutely do. All of our docs have a freshness date, and once that freshness date hits some threshold, like 12 months, bugs are automatically opened in the system and assigned to engineers to go refresh, or at least review and update the freshness date of, that documentation. That's one way. But probably the more important way we keep our documentation up to date is in performance reviews: we talk about and reward the documentation work that engineers do. It's engineers who get to do the documentation work, and we use that as part of their annual review process, in one-on-ones, and so forth. It's not "I'm a software engineer, and those are the tech writers, it's their job to write the docs, I just do the engineering." It is a collaboration, and we reward that work.
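The freshness-date idea is easy to approximate outside Google. Below is a minimal sketch assuming each doc carries a "last_reviewed" date somewhere in its text; the folder name, field name, and 12-month threshold are illustrative, and filing a bug is stood in for by a print statement.

```python
from datetime import date, timedelta
from pathlib import Path
import re

DOCS_DIR = Path("docs")            # hypothetical docs folder
MAX_AGE = timedelta(days=365)      # freshness threshold, e.g. 12 months
PATTERN = re.compile(r"last_reviewed:\s*(\d{4})-(\d{2})-(\d{2})")

def last_reviewed(path: Path) -> date | None:
    """Pull a 'last_reviewed: YYYY-MM-DD' line out of a doc, if present."""
    match = PATTERN.search(path.read_text(encoding="utf-8"))
    if match:
        year, month, day = map(int, match.groups())
        return date(year, month, day)
    return None

def stale_docs() -> list[Path]:
    """Return docs whose last review is older than the freshness threshold."""
    today = date.today()
    stale = []
    for doc in DOCS_DIR.glob("**/*.md"):
        reviewed = last_reviewed(doc)
        if reviewed is None or today - reviewed > MAX_AGE:
            stale.append(doc)
    return stale

if __name__ == "__main__":
    for doc in stale_docs():
        # In a real setup this would file a ticket and assign an owner.
        print(f"Stale documentation, please review: {doc}")
```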

Speaker 1  37:45  
Love that. Yeah, making it a first-class citizen and making sure that toil is recognized as the real work; that's great. Cool, I think that's all the time we have. I know there are a couple of other questions, but maybe the breakout rooms would be a good time to brainstorm on some of those. There was one about strategies for speeding up code reviews, and one about the use of feature flags in deployment and deployment frequency and things like that. So I'll pass over to Lauren for the breakout rooms.

Lauren Peate  38:11  
Yeah, so we're going to shift gears here from this larger group session to the smaller group discussions. But first, I just want to say a big thank you, Nathen, for making the time and sharing your insights. Two big things always stand out to me. One, it always comes back to culture and how we collaborate, so I really love seeing the continued research on why we should keep focusing on that and on how we work. And then I loved what you said about how it doesn't matter if we learned and failed or if we learned and succeeded; the question is, did we learn? I think that might be something else for the breakout rooms: how do we build that learning culture? Because it's still uncomfortable to fail, and we know that. So thank you so, so much. I'm about to turn off the recording, so thanks to the folks in the future who are watching this. Just to note, our next talk is also going to be a really good one. It's on ethics and AI; AI is a big topic, but the bigger question is how we use it responsibly. So check out the meetup group for that. And with that, I'll turn off the recording.

Nathen Harvey  39:16  
I have 30 more seconds! Yes, sorry, before we end the recording. All right, before we end the recording, I have two quotes from Rick Rubin, who is one of my heroes. The first one: there are no shortcuts, you have to do the work. Don't forget that. The second one, and this is important for everyone sticking around for the group discussions: whether you're listening or speaking, either way you are participating in the conversation. So we're really happy that you're here to participate with us. And then finally, finally, finally, I don't have a slide for this: please also, while you're over at dora.dev, make sure you go to the top of the page, click the Community link, and join the DORA community, where we can continue this conversation as well. All right, Lauren, now we can stop the recording. Thanks.

Lauren Peate  39:55  
All right, I'll stop that.

Transcribed by https://otter.ai

Contributor
James Dong
Operations