
DORA Metrics Core Objectives: What’s the point?


The core objective of DORA metrics is to assess software development team performance, help leaders prioritize improvements, and validate progress — as highlighted by DORA Leader Nathen Harvey.

Capital One, a top 10 bank in the US, shared a case study where implementing DORA Metrics and DevOps practices led to significant improvements, including a 20x increase in release frequency without production incidents. Similarly, Octopus Deploy shipped 47% more PRs by leveraging insights from the Multitudes platform, which included analysis of their DORA metrics performance.

Let's unpack how DORA can help you improve your development processes, and even your entire organizational culture.

Sections:

  1. Core Objectives of DORA Metrics
  2. Understanding DORA Metrics
  3. Strategies to Improve DORA Metrics and Boost Developer Productivity
  4. The Big Picture: Balancing Speed and Stability

1. Core Objectives of DORA Metrics

The core objective of DORA metrics is to understand the capabilities that determine the performance of a software development team and help leaders identify what actions can be taken to improve it.

DORA seeks to help teams achieve 3 key objectives:

  1. Measure outcomes of the software delivery process
  2. Improve software delivery performance through continuous improvement
  3. Align technical practices with business outcomes

These metrics serve as a compass, guiding engineering teams towards practices that not only improve their development processes but also contribute to overall organizational success. In fact, Broadcom found that companies in the "elite" tier for DORA are 2x as likely to exceed profitability goals and achieved 50% higher market growth over three years compared to less mature organizations.

Additionally, DORA's 10+ years of research and the book Accelerate consistently show a direct link between high-performing tech teams, psychological safety, and financial performance. This highlights the importance of creating a trust-based environment to implement initiatives with cooperation and buy-in from the team.

Let's examine each of these objectives in detail:

Measuring Outcomes of the Software Delivery Process

The primary goal of using DORA metrics is to measure your team's software delivery performance. By measuring the 4 key areas, you can identify bottlenecks, streamline processes, and ultimately deliver better software faster. According to the 2023 Accelerate State of DevOps Report, teams with strong software delivery performance have 30% higher organizational performance. That's not just a win for engineering – it's a win for the overall business.

Promoting Continuous Improvement

DORA metrics aren't just about hitting targets. They're about fostering a culture of continuous improvement — getting just 1% better every day leaves you 37x better by the end of the year. They provide a framework for teams to regularly assess their performance and find ways to level up their game. Atlassian has found that teams who embrace continuous improvement practices often see substantial benefits in performance and efficiency. It's not about reaching a specific number and stopping – it's about a culture of always getting better.

Aligning with Business Goals

DORA metrics help bridge the gap between technical performance and business outcomes. They provide a common language for tech and business teams to discuss progress and impact, facilitating better communication across the organization. DORA's 2019 research found that improving software delivery speed and quality plays a role in improving profitability, productivity, and customer satisfaction. For example, the “Change Lead Time” metric directly reflects the organization's ability to respond to customer needs and requests. A shorter lead time means faster delivery of new features or bug fixes to the customer.

The ultimate goal isn't just about increasing development speed – it's about delivering value efficiently and effectively to end-users and stakeholders.

2. Understanding DORA Metrics

DORA metrics, when first introduced by the DORA research team (now part of Google Cloud), focused on 4 key metrics (“the four keys”) that are strong indicators of software delivery performance. Over time, these metrics have evolved, leading to updates and the introduction of a fifth metric:

  • Change Lead Time: The time it takes to go from first commit to code successfully running in production.
  • Deployment Frequency: How often an organization deploys code to production or releases it to end users.
  • Failed Deployment Recovery Time (Formerly Mean Time to Recovery): The time it takes to restore service when a software change causes an outage or service failure in production.
  • Change Failure Rate: The percentage of changes that result in degraded service or require remediation (e.g., that lead to service impairment or outage, and require a hotfix, rollback, fix forward, or patch).
  • Rework Rate: This fifth metric was introduced in 2024 and, together with Change Failure Rate, provides an indicator of software delivery stability. Since it's a newer addition, there aren't yet established quantifiable benchmarks, so this metric tends to receive less focus.

DORA groups these metrics into two key dimensions that describe software delivery performance:

  • Software delivery throughput: the speed of making updates of any kind, both normal changes and changes in response to a failure. Metrics: Change Lead Time, Deployment Frequency, Failed Deployment Recovery Time.
  • Software delivery stability: the likelihood that deployments unintentionally lead to immediate, additional work. Metrics: Change Failure Rate, Rework Rate.

DORA metrics focus on performance, and they correlate with customer value creation and the financial performance of companies. Tracking these four key metrics helps teams pinpoint areas for improvement by benchmarking against the industry standards below.

Here's a summary of the latest 2024 DORA metrics benchmarks:

[Image: benchmark table from the DORA 2024 Report, page 13]

Deployment Frequency

Deployment Frequency tracks how often an organization deploys code to production or releases it to end users. This metric is a key indicator of your team's ability to deliver value continuously, and more importantly, it shows how often your customers get new value from your development work.

Top teams respond quickly to customer needs and rapidly iterate their products, as shown by the Deployment Frequency benchmarks:

  • “Elite” level — On demand (deploying multiple times per day)
  • “High” level — At least once per week
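
If you want to sanity-check where your team sits, here's a minimal Python sketch, assuming you can export production deployment timestamps from your CI/CD tooling (the sample data and tier cutoffs are a rough, hypothetical mapping of the benchmarks above):

```python
from datetime import datetime

# Hypothetical export: timestamps of production deployments in a 30-day window.
deploys = [
    datetime(2024, 6, 3, 10, 15),
    datetime(2024, 6, 3, 16, 40),
    datetime(2024, 6, 10, 9, 5),
]

WINDOW_DAYS = 30
per_day = len(deploys) / WINDOW_DAYS

# Rough mapping onto the benchmark tiers above.
if per_day >= 1:
    tier = "Elite (on demand, multiple deploys per day)"
elif per_day * 7 >= 1:
    tier = "High (at least once per week)"
else:
    tier = "Below the High benchmark"

print(f"{per_day:.2f} deploys/day -> {tier}")
```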

Change Lead Time (Formerly Lead Time for Changes)

Change Lead Time measures the time it takes from first commit to code successfully running in production, one of the stages of delivery most directly controllable by the engineering team. It also shows how quickly you can get features into the hands of customers, which is when value is truly delivered.

Benchmarks suggest that a meaningful goal for Change Lead Time may be:

  • “Elite” level — Less than one hour
  • “High” level — Less than one week
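
As a rough illustration, here's a minimal Python sketch of computing Change Lead Time, assuming you can pair each change's first-commit timestamp with the timestamp of the deploy that shipped it (sample data is hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical pairs: (first_commit_time, deployed_to_production_time).
changes = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 15, 30)),
    (datetime(2024, 6, 4, 11, 0), datetime(2024, 6, 6, 10, 0)),
]

lead_times_hours = [
    (deployed - committed).total_seconds() / 3600
    for committed, deployed in changes
]

# Median is more robust than the mean on skewed delivery data.
print(f"Median change lead time: {median(lead_times_hours):.1f} hours")
```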

Change Failure Rate

Change Failure Rate represents the percentage of changes that result in degraded service or require remediation (e.g., lead to service impairment or outage, and require a hotfix, rollback, fix forward, or patch). This metric reveals how often teams can’t deliver new value for customers due to a failure, and indicates the quality of your software delivery process.

Change Failure Rate benchmarks for top-performing teams, based on DORA's research, are:

  • “Elite” level — Less than 5%
  • “High” level — Less than 20%
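
The calculation itself is simple; the hard part is reliably labelling which changes needed remediation. A minimal sketch with hypothetical data:

```python
# Hypothetical records: one per production change, flagged if it required
# remediation (hotfix, rollback, fix forward, or patch).
changes = [
    {"id": "c1", "required_remediation": False},
    {"id": "c2", "required_remediation": True},
    {"id": "c3", "required_remediation": False},
    {"id": "c4", "required_remediation": False},
]

failures = sum(1 for c in changes if c["required_remediation"])
cfr = 100 * failures / len(changes)

tier = "Elite" if cfr < 5 else "High" if cfr < 20 else "Below the High benchmark"
print(f"Change failure rate: {cfr:.1f}% ({tier})")
```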

Failed Deployment Recovery Time (Formerly Mean Time to Recovery)

Failed Deployment Recovery Time measures the average time it takes to restore service when a software change causes an outage or service failure in production. It's important because it shows how long your customers are unable to experience the full value of the app because of incidents. A low Failed Deployment Recovery Time indicates high efficiency in problem-solving and the ability to take risks with new features.

Based on DORA benchmarks, top-performing teams recover from failed deployments in:

  • “Elite” level — Less than 1 hour
  • “High” level — Less than 1 day
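
Here's a minimal sketch of the computation, assuming you log when each failed deployment was detected and when service was restored (sample data is hypothetical):

```python
from datetime import datetime

# Hypothetical pairs: (failure_detected, service_restored) per failed deployment.
incidents = [
    (datetime(2024, 6, 3, 14, 0), datetime(2024, 6, 3, 14, 35)),
    (datetime(2024, 6, 9, 22, 10), datetime(2024, 6, 10, 1, 0)),
]

recovery_hours = [
    (restored - detected).total_seconds() / 3600
    for detected, restored in incidents
]

mean_recovery = sum(recovery_hours) / len(recovery_hours)
print(f"Mean failed deployment recovery time: {mean_recovery:.1f} hours")
```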

However, research on the Verica Open Incident Database (VOID) highlights that there may be issues with taking averages of incident data, such as high variability and positively skewed distributions, which can make it an unreliable metric at times. As a result, supplementary measures for incident response data are becoming more popular. One example is Mean Time to Acknowledge (MTTA), which measures the average time it takes someone to acknowledge a new incident in production; we include MTTA in Multitudes.
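
To see why averages can mislead here, compare the mean against the median and a high percentile on a hypothetical, skewed set of incident durations:

```python
from statistics import median, quantiles

# Hypothetical incident durations in hours; one long outlier, as is common
# in positively skewed incident data.
durations = [0.2, 0.3, 0.5, 0.6, 0.8, 12.0]

mean = sum(durations) / len(durations)
print(f"Mean:   {mean:.2f} h")                               # dragged up to 2.40 h by the single outlier
print(f"Median: {median(durations):.2f} h")                  # 0.55 h, closer to the typical incident
print(f"P90:    {quantiles(durations, n=10)[-1]:.2f} h")     # tail behavior, reported separately
```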

At Multitudes, we believe in the metaphor of “putting some spinach in your fruit smoothies”: there isn't a single metric to rule them all, and we should present data in multiple ways to illustrate the full picture.

3. Strategies to Improve DORA Metrics and Boost Developer Productivity

Now that we understand the DORA metrics, the big question is how we can implement strategies to improve them – that's where real value is created. Let's explore practical approaches to enhance each metric, followed by some organization-wide improvements that can support all of them.

Improving Deployment Frequency

Deployment Frequency is a key indicator of your team's ability to deliver value to customers rapidly. Improving this metric involves streamlining your deployment pipeline and adopting practices that enable more frequent, reliable releases.

To enhance Deployment Frequency:

  • Break updates into smaller, manageable pieces: This approach facilitates easier movement through the delivery process and aids in recovering from failures. Google research shows that writing smaller Change Lists (CLs) boosts Deployment Frequency, with guidance that the right size for a CL is one self-contained change. There are no hard-and-fast rules, but a Cisco study found that reviews should cover no more than 400 lines of code at a time; beyond that, the brain's ability to find defects diminishes.
  • Implement automation: Utilize CI/CD pipelines to streamline testing and deployment processes, increasing efficiency and reliability. GitLab research showed that having CI/CD pipelines led to superior code quality, with a case study from global financial firm Goldman Sachs increasing its code builds from 1 per fortnight to over 1,000 builds per day.
  • Employ feature flags: This technique allows for the decoupling of deployments from releases, providing greater control over feature rollouts (see the sketch after this list). LaunchDarkly's 2024 research shows that teams using their feature management deploy code 84% more frequently than non-users, with 59% deploying several times a week or more.
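
To make the decoupling concrete, here's a minimal, framework-free Python sketch; the flag store, flag name, and checkout functions are all hypothetical, and in practice you'd fetch flags from a service like LaunchDarkly at runtime:

```python
# The new code path ships to production "dark"; releasing it later is just a
# flag flip, with no redeploy required.
FLAGS = {"new_checkout_flow": False}  # hypothetical in-memory flag store

def is_enabled(flag_name: str) -> bool:
    return FLAGS.get(flag_name, False)

def legacy_checkout(cart: list) -> str:
    return f"legacy checkout for {len(cart)} items"

def new_checkout(cart: list) -> str:
    return f"new checkout for {len(cart)} items"

def checkout(cart: list) -> str:
    # Deploying and releasing are now separate events.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))   # legacy path
FLAGS["new_checkout_flow"] = True  # "release" without deploying
print(checkout(["book", "pen"]))   # new path
```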

Remember, the goal is to balance increased deployment frequency with system stability. Automated testing is crucial in maintaining this balance, enabling faster and more reliable releases.

Reducing Change Lead Time

Change Lead Time reflects your team's agility in responding to new requirements or market changes. Reducing this metric involves optimizing every stage of your development process, from code creation to deployment.

Strategies to reduce Change Lead Time include:

  • Streamline code review processes: Speeding up code reviews is one of the most effective paths to improving software delivery performance (see the measurement sketch after this list). Google's 2023 Accelerate State of DevOps Report found that teams with faster code reviews have 50% higher software delivery performance.
  • Optimize workflows: Identify and eliminate bottlenecks in your development pipeline to speed up the entire process. A popular approach for optimizing workflows is the Flow Framework, developed by Dr. Mik Kersten, which is grounded in Value Stream Management (VSM) principles.
  • Break work into smaller chunks: Smaller, more manageable pieces of work can move through the pipeline more quickly. Google's research (also referenced earlier) highlights that faster reviews are enabled by smaller changes.
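
To act on the first strategy, it helps to measure review turnaround before trying to improve it. Here's a minimal sketch, assuming you can export when each PR was opened and when it received its first review (e.g. via the GitHub API; sample data is hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical pairs: (pr_opened, first_review_submitted) per pull request.
prs = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 11, 30)),
    (datetime(2024, 6, 4, 14, 0), datetime(2024, 6, 5, 10, 0)),
    (datetime(2024, 6, 5, 8, 0), datetime(2024, 6, 5, 9, 15)),
]

wait_hours = [
    (first_review - opened).total_seconds() / 3600
    for opened, first_review in prs
]

print(f"Median time to first review: {median(wait_hours):.1f} hours")
```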

By focusing on these areas, teams can enhance their responsiveness and flexibility, enabling faster delivery of code changes and improving overall performance.

Minimizing Failed Deployment Recovery Time

Failed Deployment Recovery Time (formerly Mean Time to Recovery) is crucial for maintaining high service availability and reliability. Minimizing this metric involves both proactive measures to prevent failures and reactive strategies to address issues quickly when they occur.

To improve Failed Deployment Recovery Time:

  • Enhance monitoring and observability: Implement robust monitoring tools that provide real-time insights and alerts for rapid issue detection and resolution. Datadog’s 2023 Annual Observability Report found that teams using advanced monitoring and observability tools are able to achieve a 40% reduction in Failed Deployment Recovery Time.
  • Develop effective incident response plans: Prepare for worst-case scenarios and ensure teams are ready to respond quickly and efficiently. S&P Global has found that less than 50% of companies have incident response plans in place.
  • Deploy smaller code changes: This approach simplifies testing and recovery processes, reducing the complexity of potential failures. Google’s research (also referenced earlier) highlights that smaller changes also reduce the likelihood of bugs.

Regular reviews and updates to incident response plans are essential to ensure they remain effective in addressing potential issues.

Lowering Change Failure Rate

Change Failure Rate is a critical indicator of your team's ability to deliver high-quality changes. Lowering this metric involves implementing practices that enhance code quality and reduce the likelihood of deployment failures.

Strategies to lower Change Failure Rate include:

  • Invest in automated testing: Integrate automated tests into your CI/CD pipeline to validate code changes quickly and reliably (a minimal example follows this list). Microsoft found that the transition from ad hoc to automated testing led to a 20% decrease in test defects.
  • Implement rigorous testing: Ensure thorough testing of code changes before deployment to minimize the risk of failures; rigorous testing practices also contribute to better code maintainability. Research has found that rigorous testing early in the process, by catching defects sooner, can lead to cost savings of up to 30 times compared to fixing defects post-release.
  • Monitor and analyze failures: Use each failure as a learning opportunity to improve processes and prevent similar issues in the future.
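
As a trivial illustration of the kind of fast, automated check worth running on every change, here's a minimal pytest example (the pricing helper and test names are hypothetical):

```python
# test_pricing.py -- runs on every change in CI, catching regressions
# before they can become production failures.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical pricing helper under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 15) == 85.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```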

By focusing on these areas, teams can significantly reduce deployment failures and enhance overall software delivery quality.

Organization-Wide Strategies

In addition to metric-specific improvements, the 2023 Accelerate State of DevOps Report identified 5 organization-wide strategies that drive performance (all findings and stats below are from this report):

  1. Establish a trust-based environment: To encourage collaboration, learning, and continuous improvement, as opposed to simply gaming the metrics, a trust-based culture is required to provide psychological safety and buy-in from the team.
  2. Build with users in mind: Prioritize user-centric development to ensure your efforts align with customer needs and expectations. Google found that teams that focus on the user have 40% higher organizational performance.
  3. Invest in quality documentation: Develop and maintain high-quality documentation to amplify the impact of technical capabilities. The report found that when high-quality documentation is in place, practices such as trunk-based development deliver significantly better organizational performance.
  4. Utilize cloud infrastructure: Adopting public cloud services can result in a 22% increase in infrastructure flexibility compared to non-cloud solutions. This flexibility can lead to a 30% improvement in organizational performance compared to organizations with less adaptable infrastructures.
  5. Ensure equitable workload distribution: Make sure work is fairly distributed to reduce burnout and boost overall team productivity. Underrepresented groups, such as women and minorities in tech, experience 24% more burnout compared to their counterparts, highlighting the need for equitable workload distribution.

Implementing these strategies requires ongoing commitment and refinement. By consistently measuring, improving, and refining your processes, you can significantly enhance your team's performance across all DORA metrics.

4. The Big Picture: Balancing Speed and Stability

Overemphasizing speed can harm stability metrics, leading to negative consequences for software quality; conversely, focusing solely on stability can reduce development speed. Remember, you're not just improving numbers on a dashboard. You're creating a more efficient, effective, and enjoyable development process for your team. And ultimately, that is in service of your customers and creating business value.

To effectively track and analyze DORA metrics, teams can use Multitudes.

Multitudes is an engineering insights platform for sustainable delivery. Multitudes integrates with your existing development tools, such as GitHub and Jira, to provide insights into your team's productivity and collaboration patterns.

With Multitudes, you can:

  • Automatically track DORA metrics like Change Lead Time and Deployment Frequency
  • Get visibility into work patterns and types of work, such as feature development vs. bug fixing
  • Identify collaboration patterns and potential silos within your team

By leveraging Multitudes, you can improve your DORA metrics while giving your teams more time to act on insights, enhancing their productivity and satisfaction.

Ready to unlock happier, higher-performing teams?

Try our product today!
