What's the point of DORA metrics?
The core objective of DORA metrics is to assess software development team performance, help leaders prioritize improvements, and validate progress — as highlighted by DORA Leader Nathen Harvey.
Capital One, a top 10 bank in the US, shared a case study where implementing DORA metrics and DevOps practices led to significant improvements, including a 20x increase in release frequency without production incidents. Similarly, Octopus Deploy shipped 47% more PRs by leveraging insights from the Multitudes platform, which included analysis of their DORA metrics performance.
Let's unpack how DORA can help you improve your development processes, and even your entire organizational culture.
The core objective of DORA metrics is to understand the capabilities that determine the performance of a software development team and help leaders identify what actions can be taken to improve it.
DORA seeks to help teams achieve 3 key objectives:
These metrics serve as a compass, guiding engineering teams towards practices that not only improve their development processes but also contribute to overall organizational success. In fact, Broadcom found that companies in the "elite" tier for DORA are 2x as likely to exceed profitability goals and achieved 50% higher market growth over three years compared to less mature organizations.
Additionally, DORA's 10+ years of research and the book Accelerate consistently show a direct link between high-performing tech teams, psychological safety, and financial performance. This highlights the importance of creating a trust-based environment to implement initiatives with cooperation and buy-in from the team.
Let's examine each of these objectives in detail:
The primary goal of using DORA metrics is to measure your team's software delivery performance. By measuring the 4 key areas, you can identify bottlenecks, streamline processes, and ultimately deliver better software faster. According to the 2023 Accelerate State of DevOps Report, teams with strong software delivery performance have 30% higher organizational performance. That's not just a win for engineering; it's a win for the overall business.
DORA metrics aren't just about hitting targets. They're about fostering a culture of continuous improvement: getting just 1% better every day leaves you 37x better by the end of the year. They provide a framework for teams to regularly assess their performance and find ways to level up their game. Atlassian has found that teams who embrace continuous improvement practices often see substantial benefits in performance and efficiency. It's not about reaching a specific number and stopping; it's about a culture of always getting better.
DORA metrics help bridge the gap between technical performance and business outcomes. They provide a common language for tech and business teams to discuss progress and impact, facilitating better communication across the organization. DORA's 2019 research found that improving software delivery speed and quality plays a role in improving profitability, productivity, and customer satisfaction. For example, the "Change Lead Time" metric directly reflects the organization's ability to respond to customer needs and requests. A shorter lead time means faster delivery of new features or bug fixes to the customer.
The ultimate goal isn't just about increasing development speed – it's about delivering value efficiently and effectively to end-users and stakeholders.
DORA metrics, when first introduced by Google, focused on 4 key metrics ("the four keys") that are strong indicators of software delivery performance. Over time, these metrics have evolved, leading to updates and the introduction of a fifth metric:
DORA groups these metrics into two key dimensions that describe software delivery performance:
DORA metrics focus on performance, and they correlate with customer value creation and the financial performance of companies. Tracking these four key metrics helps teams pinpoint areas for improvement by benchmarking against the industry standards below.
Here's a summary of the latest 2024 DORA metrics benchmarks:
Deployment Frequency tracks how often an organization deploys code to production or releases it to end users. This metric is a key indicator of your team's ability to deliver value continuously and, more importantly, it shows how often your customers get new value from your development work.
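To make the metric concrete, here is a minimal sketch of how Deployment Frequency could be computed from a deployment log. The dates and the weekly normalization are illustrative assumptions, not a prescribed method:

```python
from datetime import date

# Hypothetical deployment log: one entry per production deployment.
deploys = [
    date(2024, 6, 3), date(2024, 6, 3), date(2024, 6, 5),
    date(2024, 6, 10), date(2024, 6, 12), date(2024, 6, 12),
]

# Deployment Frequency: deployments per week over the observed window.
window_days = (max(deploys) - min(deploys)).days + 1
per_week = len(deploys) / (window_days / 7)
print(f"{per_week:.1f} deployments per week")  # 4.2 deployments per week
```

In practice you would pull these timestamps from your CI/CD system or deployment tooling rather than hard-coding them.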
Top teams respond quickly to customer needs and rapidly iterate their products, as shown by the Deployment Frequency benchmarks:
Change Lead Time measures the time it takes from first commit to code successfully running in production, representing one of the most controllable stages by the engineering team. It also shows how quickly you can get features into the hands of customers, which is when value is truly delivered.
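As a sketch, Change Lead Time can be derived from pairs of first-commit and deployment timestamps. The timestamps below are hypothetical; the median is used because it is less sensitive to the occasional long-running change:

```python
from datetime import datetime
from statistics import median

# Hypothetical (first_commit, deployed_to_production) timestamp pairs.
changes = [
    (datetime(2024, 6, 1, 9, 0), datetime(2024, 6, 2, 9, 0)),   # 24h
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 15, 0)),  # 6h
    (datetime(2024, 6, 4, 9, 0), datetime(2024, 6, 7, 9, 0)),   # 72h
]

# Lead time in hours for each change, from first commit to production.
lead_times_h = [(deployed - committed).total_seconds() / 3600
                for committed, deployed in changes]
print(f"median lead time: {median(lead_times_h):.0f}h")  # median lead time: 24h
```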
Benchmarks suggest that a meaningful goal for Change Lead Time may be:
Change Failure Rate represents the percentage of changes that result in degraded service or require remediation (e.g., lead to service impairment or outage, and require a hotfix, rollback, fix forward, or patch). This metric reveals how often teams can't deliver new value for customers due to a failure, and it indicates the quality of your software delivery process.
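The calculation itself is simple: failed changes divided by total changes. A minimal sketch, with a made-up record of which deployments needed remediation:

```python
# Hypothetical deployment records: True means the change degraded service
# and needed a hotfix, rollback, fix forward, or patch.
needed_remediation = [False, False, True, False, False,
                      False, False, True, False, False]

# Change Failure Rate: remediated changes as a share of all changes.
change_failure_rate = sum(needed_remediation) / len(needed_remediation)
print(f"Change Failure Rate: {change_failure_rate:.0%}")  # Change Failure Rate: 20%
```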
Change Failure Rates for top-performing teams, based on DORA benchmarks, are:
Failed Deployment Recovery Time measures the average time it takes to restore service when a software change causes an outage or service failure in production. It's important because it shows how long your customers are unable to experience the full value of the app because of incidents. A low Failed Deployment Recovery Time indicates high efficiency in problem-solving and the ability to take risks with new features.
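As with the other metrics, the computation is straightforward once you have incident timestamps. This sketch uses hypothetical outage start and restore times:

```python
from datetime import datetime

# Hypothetical incidents caused by failed deployments:
# (service degraded at, service restored at).
incidents = [
    (datetime(2024, 6, 2, 10, 0), datetime(2024, 6, 2, 10, 45)),  # 45 min
    (datetime(2024, 6, 9, 14, 0), datetime(2024, 6, 9, 16, 0)),   # 120 min
]

# Recovery time in minutes for each incident, then the average.
recovery_mins = [(restored - failed).total_seconds() / 60
                 for failed, restored in incidents]
avg_recovery = sum(recovery_mins) / len(recovery_mins)
print(f"average recovery time: {avg_recovery:.1f} minutes")
```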
Based on DORA benchmarks, recovery times from failed deployments for top-performing teams are:
However, research on the Verica Open Incident Database (VOID) highlights potential issues with averaging incident data: incident durations show high variability and positively skewed distributions, which can make the average an unreliable metric. As a result, supplementary measures for incident response data are becoming more popular. One example is Mean Time to Acknowledge (MTTA), which measures the average time it takes someone to acknowledge a new incident in production; we include it in Multitudes.
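The skew problem is easy to see with a small example. In the hypothetical incident durations below, a single long outage drags the mean well above what a typical incident looks like, while the median stays representative:

```python
from statistics import mean, median

# Hypothetical incident durations in minutes: mostly short incidents plus
# one long outlier, i.e. the positively skewed shape the VOID research describes.
durations = [12, 15, 18, 20, 25, 30, 480]

print(f"mean:   {mean(durations):.1f} min")  # dragged up by the 480-min outlier
print(f"median: {median(durations)} min")    # closer to a typical incident
```

This is why reporting only an average recovery time can mislead, and why pairing it with medians, percentiles, or measures like MTTA gives a fuller picture.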
At Multitudes, we believe in the metaphor of “putting some spinach in your fruit smoothies”: there isn't a single metric to rule them all, so we should present data in multiple ways to illustrate the full picture.
Now that we understand the DORA metrics, the big question is how we can implement strategies to improve them – that's where real value is created. Let's explore practical approaches to enhance each metric, followed by some organization-wide improvements that can support all of them.
Deployment Frequency is a key indicator of your team's ability to deliver value to customers rapidly. Improving this metric involves streamlining your deployment pipeline and adopting practices that enable more frequent, reliable releases.
To enhance Deployment Frequency:
Remember, the goal is to balance increased deployment frequency with system stability. Automated testing is crucial in maintaining this balance, enabling faster and more reliable releases.
Change Lead Time reflects your team's agility in responding to new requirements or market changes. Reducing this metric involves optimizing every stage of your development process, from code creation to deployment.
Strategies to reduce Change Lead Time include:
By focusing on these areas, teams can enhance their responsiveness and flexibility, enabling faster delivery of code changes and improving overall performance.
Failed Deployment Recovery Time (formerly Mean Time to Recovery) is crucial for maintaining high service availability and reliability. Minimizing this metric involves both proactive measures to prevent failures and reactive strategies to address issues quickly when they occur.
To improve Failed Deployment Recovery Time:
Regular reviews and updates to incident response plans are essential to ensure they remain effective in addressing potential issues.
Change Failure Rate is a critical indicator of your team's ability to deliver high-quality changes. Lowering this metric involves implementing practices that enhance code quality and reduce the likelihood of deployment failures.
Strategies to lower Change Failure Rate include:
By focusing on these areas, teams can significantly reduce deployment failures and enhance overall software delivery quality.
In addition to metric-specific improvements, the 2023 Accelerate State of DevOps Report identified 5 organization-wide strategies that drive performance (all findings and stats below are from this report):
Implementing these strategies requires ongoing commitment and refinement. By consistently measuring, improving, and refining your processes, you can significantly enhance your team's performance across all DORA metrics.
Overemphasizing speed can harm stability metrics, leading to negative consequences for software quality; conversely, focusing solely on stability can reduce development speed. Remember, you're not just improving numbers on a dashboard. You're creating a more efficient, effective, and enjoyable development process for your team. And ultimately, that is in service of your customers and creating business value.
To effectively track and analyze DORA metrics, teams can use Multitudes.
Multitudes is an engineering insights platform for sustainable delivery. It integrates with your existing development tools, such as GitHub and Jira, to provide insights into your team's productivity and collaboration patterns.
With Multitudes, you can:
By leveraging Multitudes, you can improve your DORA metrics while giving your teams more time to act on insights, enhancing their productivity and satisfaction.
Ready to unlock happier, higher-performing teams?