50% increase in <code-text>Feedback Given</code-text> (7-week rolling average median)
48% decrease in <code-text>Change Lead Time</code-text> (7-week rolling average median)
The Challenge
As Octane AI’s new Head of Engineering, Gabriel Menezes was all too aware of the collaboration challenges that come with a globally distributed team of engineers. He noticed that reviews were piling up, slowing his team’s ability to deliver work, and he suspected it had something to do with their collaboration patterns.
“Having collaboration data has had a positive effect on review practices at Octane – I can open My 1:1s right before a meeting and see at a glance how things are going. Using Multitudes makes me feel more comfortable in experimenting – it takes the guesswork out of it.”
— Gabriel Menezes, Director of Engineering, Octane AI
When Gabriel looked at the Multitudes data, he could immediately see that <code-text>Change Lead Time</code-text> (a DORA metric, previously called <code-text>Time to Merge</code-text> in the app) was high. He also noticed that <code-text>Review Wait Time</code-text> was high – this metric is a subset of <code-text>Change Lead Time</code-text> and shows how long PRs sit idle before getting feedback. When PRs sit for long periods without a review, it delays the follow-on steps to revise and merge the PR.
Looking deeper, he could see that <code-text>PR Feedback Given</code-text> was low – and it had been trending down across his team over the preceding few months. With that, the full picture was clear: with fewer reviews being done across the team, each PR had to wait longer for feedback, and that was slowing down how long it took to complete the work.
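To make the relationship between these metrics concrete, here’s a rough sketch of how they could be computed from PR timestamps (the field names and sample data are illustrative – the exact definitions Multitudes uses may differ):

```python
from datetime import datetime
from statistics import median

# Illustrative PR records: when the PR was opened, when it received its
# first review, and when it was merged. These field names and timestamps
# are made up for the example.
prs = [
    {"opened": datetime(2024, 5, 1, 9), "first_review": datetime(2024, 5, 2, 15), "merged": datetime(2024, 5, 3, 11)},
    {"opened": datetime(2024, 5, 1, 13), "first_review": datetime(2024, 5, 4, 10), "merged": datetime(2024, 5, 5, 16)},
]

# Review Wait Time: how long a PR sits idle before its first review.
review_wait = [pr["first_review"] - pr["opened"] for pr in prs]

# Change Lead Time: the full window from opening a PR to merging it.
# Review Wait Time is one slice of that window, which is why shrinking it
# pulls Change Lead Time down too.
change_lead_time = [pr["merged"] - pr["opened"] for pr in prs]

print("Median Review Wait Time:", median(review_wait))
print("Median Change Lead Time:", median(change_lead_time))
```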
Gabriel used Multitudes’ dynamic 1:1s questions, which change based on the underlying behavioral data, to spark conversations with the team about why there were fewer reviews. Based on that, the team decided to treat code reviews as a top priority in their workday and set a goal to complete all code reviews within one business day. To support this, they set up reminders in Slack to nudge them to do code reviews. The team was also encouraged to write fewer big epics – they worked to slice new features into smaller tickets.
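For teams that want to build a similar nudge themselves, here’s a minimal sketch of a daily reminder that posts PRs still waiting on a first review into Slack (this uses the GitHub search API and a Slack incoming webhook as an illustration – it’s not the actual setup Octane AI used, and the repo and environment variable names are placeholders):

```python
import os

import requests

# Sketch of a daily "review nudge": list open PRs with no review yet and
# post them to Slack. Not the actual setup Octane AI or Multitudes used;
# the token, webhook URL, and repo name below are placeholders.
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]
REPO = os.environ.get("REPO", "your-org/your-repo")

# Search GitHub for open, non-draft PRs in the repo with no review yet.
resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": f"repo:{REPO} is:pr is:open draft:false review:none"},
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()
waiting = resp.json()["items"]

if waiting:
    lines = [f"• <{pr['html_url']}|{pr['title']}>" for pr in waiting]
    message = "PRs waiting on a first review:\n" + "\n".join(lines)
    # Post the nudge into the team's Slack channel via an incoming webhook.
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
```

Scheduling something like this to run each weekday morning (for example, via cron or a scheduled CI job) keeps a one-business-day review goal visible without anyone having to chase reviews manually.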
“People get happy when you show data. I had a feeling about some of these things, but it’s nice to know the numbers.”
— Gabriel Menezes, Director of Engineering, Octane AI
The team made huge improvements – over the following seven weeks, <code-text>PR Feedback Given</code-text> increased by 50%. Following that, <code-text>Review Wait Time</code-text> dropped 56%, holding at a median of 3 hours for the months to come (that means most PRs got a review within half a day). The progress in these areas culminated in a 48% decrease in <code-text>Change Lead Time</code-text>, which meant that Gabriel’s teams were able to deliver work faster – all by improving how they collaborated on reviews.
Gabriel’s team could also set up <code-text>Daily PR Alerts</code-text> in Slack to show exactly which people and PRs are most blocked. Read more here!