People & Process

Using DORA to Elevate Developer Productivity: A Breakdown

Developer productivity goes beyond individual output – it's about how teams collaborate, how efficiently they work, and whether they're delivering meaningful value.

DORA metrics offer a thoughtful approach to measuring team performance. By tracking 4 metrics that capture software delivery throughput and stability, teams get a comprehensive view of how they're delivering.

In this article, we'll explore what developer productivity means and how DORA can be used to improve it.

Sections:

  1. Defining developer productivity
  2. Using DORA to improve developer productivity
  3. Implementing DORA metrics in your development process
  4. Short case study: Octopus Deploy shipped 47% more PRs using DORA
  5. Software to measure developer productivity

1. Defining developer productivity

Bill Gates once said: “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”

The idea of developer productivity has evolved significantly. Early attempts to measure it by lines of code (LOC), such as Apple's in 1982, have largely died away. There is now an appreciation that developer productivity is nuanced and is best viewed more broadly, at the level of entire teams.

While there is no one universal definition, we've found that most organizations are seeking a common goal: to efficiently and effectively deliver value to customers.

It's a concept that includes many factors like:

  • The ability to deliver features that meet user needs
  • Efficiency in problem-solving and debugging
  • Collaboration and knowledge sharing within teams
  • Ability to adapt to new technologies and methodologies
  • Code quality and reliability of systems

2. Using DORA to improve developer productivity

What are DORA metrics?

DORA focuses on 4 key quantitative metrics ("the four keys") that are strong indicators of software delivery performance.

  • Change Lead Time: The time it takes to go from first commit to code successfully running in production.
  • Deployment Frequency: How often an organization deploys code to production or releases it to end users.
  • Failed Deployment Recovery Time (formerly Mean Time to Recovery): The time it takes to restore service when a deployment causes an outage or service failure in production. (Previously, Mean Time to Recovery also included uncontrollable failure events, such as an earthquake disrupting service.)
  • Change Failure Rate: The percentage of changes that result in degraded service or require remediation (e.g., that lead to service impairment or outage, and require a hotfix, rollback, fix forward, or patch).

You can read more about DORA metrics here.
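
To make the four keys concrete, here's a minimal sketch of how they might be computed from deployment and incident records. The data shapes and field names are illustrative assumptions rather than a prescribed schema – in practice, these events would come from your git tooling, CI/CD pipelines, and incident management system.

```python
from datetime import datetime, timedelta

# Illustrative records – real data would come from your CI/CD and incident tooling.
deployments = [
    # (first_commit_time, deploy_time, caused_failure)
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 4, 15, 0), False),
    (datetime(2024, 6, 5, 10, 0), datetime(2024, 6, 6, 11, 0), True),
    (datetime(2024, 6, 9, 8, 0), datetime(2024, 6, 10, 9, 0), False),
]
# (failure_start, service_restored) for the deployment-caused failure above
recoveries = [
    (datetime(2024, 6, 6, 11, 30), datetime(2024, 6, 6, 14, 30)),
]
days_observed = 7

# Change Lead Time: first commit -> code running in production
lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deployments per day over the observation window
deploys_per_day = len(deployments) / days_observed

# Change Failure Rate: share of deployments that needed remediation
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)

# Failed Deployment Recovery Time: time to restore service after a failed deploy
avg_recovery_time = sum((end - start for start, end in recoveries), timedelta()) / len(recoveries)

print(f"Change Lead Time: {avg_lead_time}")
print(f"Deployment Frequency: {deploys_per_day:.2f} per day")
print(f"Change Failure Rate: {change_failure_rate:.0%}")
print(f"Failed Deployment Recovery Time: {avg_recovery_time}")
```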

How does DORA improve developer productivity?

DORA metrics offer a thoughtful way to measure what really matters in software development.

Instead of focusing on individual metrics that don't tell the whole story, DORA looks at 4 key indicators that measure both speed and quality, and therefore sit in natural tension with each other.

These metrics work together to paint a picture of sustainable delivery. For example:

  • If your Change Lead Time drops, it means your team can get changes out faster.
  • When your Change Failure Rate stays low, it shows you're maintaining quality while picking up speed.

DORA metrics have grown into an accepted industry standard. The evidence behind them keeps mounting: 10+ years of DORA research, along with the related book Accelerate, consistently show a direct link between high performance on DORA metrics and outsized financial outcomes relative to competitors.

Additionally, there are benchmarks available that let organizations see where they measure up. You can find the latest 2024 benchmarks here. However, it is important to note that teams should always set their own benchmarks and talk about metrics transparently across the team, to ensure the data is used in an empowering way.
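
Comparing your numbers against a benchmark is mechanically simple: bucket each metric into a performance band. Here's a minimal sketch for Change Lead Time. The cut-offs below are illustrative placeholders, not the published 2024 DORA values – swap in the real thresholds from the report linked above.

```python
# Illustrative performance bands; these hour cut-offs are placeholders,
# NOT the published DORA benchmark values.
LEAD_TIME_BANDS = [
    (24.0, "elite"),        # under a day
    (24.0 * 7, "high"),     # under a week
    (24.0 * 30, "medium"),  # under a month
]

def lead_time_band(lead_time_hours: float) -> str:
    """Map a team's average Change Lead Time onto a performance band."""
    for cutoff, band in LEAD_TIME_BANDS:
        if lead_time_hours < cutoff:
            return band
    return "low"

print(lead_time_band(30.0))  # -> "high"
```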

3. Implementing DORA metrics in your development process

Ready to try DORA with your team?

Before jumping in, remember that a high-trust environment needs to be created first. Without trust, these metrics can be gamed and misused, leading to fear and uncertainty among team members.

Picture this: developers splitting their work into smaller, more frequent deployments just to improve their DORA scores. It’s Goodhart’s law in action: “When a measure becomes a target, it ceases to be a good measure.”

So how can we implement DORA in a thoughtful way? Here's our approach:

  1. Be Clear about the Business Goal: Why do we want to bring in metrics? What are the bigger outcomes we're hoping to support? Make sure your chosen metrics align with the value you are trying to create (e.g., shipping a new release, or improving the reliability and quality of core services). And don't keep it a secret – let everyone know why these metrics matter. We've written more about the core objectives of DORA here.
  2. Establish a Baseline: Take a snapshot of your current performance against the selected metrics. This gives you a reference point for seeing how you compare to benchmarks and tracking progress as you run experiments.
  3. Integrate with Existing Tools and Processes: Getting metrics should help you, not slow you down (see the sketch after this list for one lightweight approach). This can look like:
    • Incorporating tracking into your git tooling, CI/CD pipelines, issue tracking, and incident management system (IMS)
    • Weaving metrics into your team practices, like stand-ups, retros, and 1:1s, so you stay on top of bottlenecks and catch issues early. Most dashboards are built and then forgotten; you don't want that to happen here!
    • Supporting team members in understanding the metrics, and encouraging open communication and feedback during the integration process.
  4. Run an experiment: By now, hopefully your team has live access to key metrics, has woven them into your team practices, and is having rich discussions about what's going well and where you can improve. So it's time to run an experiment! Pick one metric you want to improve, come up with a hypothesis as a team on how to do it, and then give it a go!
  5. Track Progress and Celebrate Successes: Ideally, your tooling helps you check in on how things are going (see how our platform can help you with this). Are you seeing the improvements you hoped for? Or not quite there yet? Whether the experiment worked or it's time to try a new approach, the most important part is to celebrate every win and learning, no matter how small. And remember, spread the knowledge! Your wins could inspire other teams, and making failed experiments a normal, celebrated part of work contributes to a culture of continuous improvement.

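To make step 3 concrete, here's a minimal sketch of lightweight tracking: a script your CI/CD pipeline calls after each production deploy to log the raw events behind Deployment Frequency, Change Lead Time, and Change Failure Rate. The file name and environment variable are illustrative assumptions – a platform like Multitudes derives these events from your existing GitHub and pipeline data, so you don't have to maintain scripts like this yourself.

```python
import csv
import os
import subprocess
from datetime import datetime, timezone

# Hypothetical event log; a real setup might post to a metrics API instead.
LOG_FILE = "deploy_events.csv"

def record_deploy(caused_failure: bool = False) -> None:
    """Append one deployment event. Call from the pipeline after a production deploy."""
    # GITHUB_SHA is set by GitHub Actions; other CI systems expose an equivalent.
    sha = os.environ.get("GITHUB_SHA", "unknown")
    # Approximate "first commit" time with the deployed commit's author date.
    commit_time = subprocess.run(
        ["git", "show", "-s", "--format=%aI", sha],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    deploy_time = datetime.now(timezone.utc).isoformat()
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([sha, commit_time, deploy_time, caused_failure])

if __name__ == "__main__":
    record_deploy()
```
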
From there, it's rinse and repeat for steps 4-5. Choose one area to improve, then run an experiment, track progress and iterate.

This approach will help build a culture of continuous improvement and blameless experimentation.

Getting into a habit of experimentation lowers the barriers for teams to change their ways of working, and builds the muscle of tackling new challenges and shifting priorities.

If you master continuous improvement and get 1% better each day for one year, you'll end up 37 times better by the time you're done.
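
(The math behind that figure, for the curious: a 1% daily improvement compounds multiplicatively, so after a year you're at 1.01^365 ≈ 37.8 times your starting point.)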

4. Short case study: Octopus Deploy shipped 47% more PRs using DORA

At Octopus Deploy, rapid team growth created challenges in balancing workloads and maintaining productivity. Engineering Manager Kim Engel turned to DORA metrics through Multitudes to get a clear picture of bottlenecks affecting their development process.

The data highlighted two major issues: a low and declining Merge Frequency (fewer pull requests being merged – a proxy for lower Deployment Frequency) and a rising Change Failure Rate (signaling an increase in post-release issues). These metrics revealed that a single principal engineer was handling 42% of code review feedback, creating a bottleneck that not only impacted this engineer's workload but also limited team-wide knowledge sharing and slowed down development.

Using these insights, Kim engaged leadership in discussions about redistributing review responsibilities across the team. With the support of Multitudes’ PR alerts in Slack, PRs needing review were flagged for any team member to pick up, ensuring timely feedback and reducing reliance on a single reviewer. This strategic shift produced immediate benefits:

  • Feedback Distributed: The principal engineer’s review load dropped from 42% to just 5%, freeing them to focus on high-level planning.
  • Improved Collaboration: Feedback Given rose by 56%, with more team members actively contributing to reviews.
  • Enhanced Productivity and Quality: PRs shipped increased by 47%, and the overall Change Failure Rate trended downwards, leading to better-quality releases.

By leveraging DORA metrics effectively, Octopus Deploy improved their review process, creating a balanced workload and driving substantial gains in productivity and release quality.

You can read the full write-up about Octopus Deploy here, along with our other success stories.

5. Software to measure developer productivity

To effectively track and analyze DORA metrics, teams can use Multitudes, an engineering insights platform for sustainable delivery. Multitudes integrates with your existing development tools, such as GitHub and Jira, to provide insights into your team's productivity and collaboration patterns.

With Multitudes, you can:

  • Automatically track DORA metrics like Change Lead Time and Deployment Frequency
  • Get visibility into work patterns and types of work, such as feature development vs. bug fixing
  • Identify collaboration patterns and potential silos within your team

By leveraging Multitudes, you can improve your DORA metrics while giving your teams more time to act on insights, enhancing their productivity and satisfaction.

Ready to unlock happier, higher-performing teams?

Try our product today!

Contributor
Multitudes
Support your developers with ethical team analytics.

Start your free trial

Get a demo