In this blog post, Bitrise’s CTO and co-founder Viktor Benei and VP of Engineering Gergely Hodicska share their insights about how we approach metrics, performance, and success in our engineering teams.
We’ve recently stumbled upon an article about building excellent engineering teams, stating: “High-performing engineering teams don’t just happen. They’re created.” The author draws on Dan Ariely’s statement: “Human beings adjust behavior based on the metrics they’re held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you’ll get.”
We’ve decided to dive into how we measure performance internally and how we make sure that we stay at the top of our game. This blog post will discuss what we look for during the hiring process, how we keep each other motivated, and how we maintain an environment of continuous learning.
Finding the right people
What do you look for during the hiring process?
Since Bitrise is in a hyper-growth phase and the organization is always in flux, we have to solve problems that are, let’s say, ‘unscripted’. The people we hire have to be able to approach these problems with a growth mindset, even if they come from larger companies with advanced processes. Above all, this is what we look for, along with the ability to think holistically and strategically for the long run. On top of that, we want to find people who are gritty, proactive, and resilient.
It’s also true that when it comes to software engineering it’s not the best individuals who win, but the best team. Hiring shouldn’t be just about getting the most experienced candidates, but rather, the best team players, who will be able to work well together. Being a good cultural fit is important, but this doesn’t mean that we’re looking for one specific type of person. We encourage diversity and inclusion to support a broad range of approaches and to be able to build unique solutions to unique problems.
Creating the right environment
How do you encourage people to experiment and constantly improve?
Since one of our company values is transparency (as we say, “we’re direct and open-minded”), every team member should be able to give and receive feedback openly. In our company culture, experimenting and taking risks are strongly encouraged. Everyone has to learn how to fail, get up, and try again, but they also have to be humble and listen to the people around them. As a company, we like to support our people in taking ownership of outcomes and acting with authority, regardless of their seniority level.
What are team structures like at Bitrise? Is there a strong hierarchy or are you moving toward a horizontal setup?
We’re growing rapidly, so we’re still in the phase of defining the optimal team structure. We do believe in a more or less flat setup, but it has its limits. We’ve recently started introducing tribes: groups of teams whose tech leads steer the work together through constant communication. We want to make sure that we maintain an environment of free collaboration, where everyone is approachable no matter what position they hold.
Defining the right metrics
Let’s go back a bit to Dan Ariely’s statement,
“Human beings adjust behavior based on the metrics they’re held against. Anything you measure will impel a person to optimize his score on that metric. What you measure is what you’ll get.”
What do you think about this? What exactly do we measure at Bitrise when it comes to engineering efficiency? Is it the velocity, work distribution, or the outcomes?
We believe in measuring outcomes instead of output, but we’ve found that OKRs (Objectives and Key Results) aren’t always the optimal technique. Balance is key: we have separate measurements for velocity and quality as well. The exact measurements vary by team and function, but we all try to implement the DORA (DevOps Research and Assessment) metrics, which are very useful from a DevOps performance perspective. This approach helps measure both software delivery throughput (velocity) and stability (quality). Among other data points, it specifically tracks the following:
- Deployment Frequency (DF)
- Mean Lead Time for Changes (MLT)
- Mean Time To Recover (MTTR)
- Change Failure Rate (CFR)
These metrics indicate how successful the company is from an engineering perspective: how well, and at what quality, our teams deliver software to our customers. They give us concrete data on the entire organization’s performance, making it easier to see the steps needed for further improvement.
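To make the four metrics concrete, here is a minimal sketch of how they could be computed from simple deployment records. The record fields, sample data, and time window are illustrative assumptions, not Bitrise’s actual tracking setup:

```python
from datetime import datetime

# Hypothetical deployment log: each record notes when the change was
# committed, when it was deployed, whether it caused a production failure,
# and (if so) how many hours recovery took.
deployments = [
    {"committed": datetime(2021, 6, 1, 9),  "deployed": datetime(2021, 6, 1, 15), "failed": False, "recovery_hours": 0},
    {"committed": datetime(2021, 6, 2, 10), "deployed": datetime(2021, 6, 3, 11), "failed": True,  "recovery_hours": 2},
    {"committed": datetime(2021, 6, 4, 8),  "deployed": datetime(2021, 6, 4, 20), "failed": False, "recovery_hours": 0},
    {"committed": datetime(2021, 6, 7, 9),  "deployed": datetime(2021, 6, 8, 9),  "failed": True,  "recovery_hours": 4},
]

def dora_metrics(deploys, period_days):
    """Compute the four DORA metrics for a list of deployment records."""
    n = len(deploys)
    # Deployment Frequency: deployments per day over the observed period.
    df = n / period_days
    # Mean Lead Time for Changes: average hours from commit to deploy.
    mlt = sum((d["deployed"] - d["committed"]).total_seconds() / 3600
              for d in deploys) / n
    failures = [d for d in deploys if d["failed"]]
    # Change Failure Rate: share of deployments that caused a failure.
    cfr = len(failures) / n
    # Mean Time To Recover: average recovery time across failed deploys.
    mttr = (sum(d["recovery_hours"] for d in failures) / len(failures)
            if failures else 0.0)
    return {"DF_per_day": df, "MLT_hours": mlt, "MTTR_hours": mttr, "CFR": cfr}

metrics = dora_metrics(deployments, period_days=7)
```

With the sample data above, this yields roughly 0.57 deployments per day, a mean lead time of 16.75 hours, an MTTR of 3 hours, and a change failure rate of 50%; real pipelines would feed these functions from CI/CD and incident-tracking data.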
Setting the right goals
How do you make sure that you set a high bar for success without being unrealistic?
We try to implement the so-called S.M.A.R.T. criteria into our goal-setting:
- Specific: These goals answer the five Ws: who, what, when, where, and why.
- Measurable: We gather data that we can measure to see the progress from the beginning of the project through completion.
- Attainable: We identify what it will take to achieve our goals and confirm they are within reach.
- Relevant: We create goals that matter to us and align with our other goals.
- Time-bound: We set a deadline and check whether the goal can be achieved in the time allowed.
Besides this, we also regularly conduct team health checks, in which we evaluate the wellbeing of each team. A ‘team joy score’ helps keep things in check when it comes to motivation and the risk of burnout. We run sprint health checks as well: a simple survey at the end of each sprint, measuring 9 different dimensions, that helps create a meaningful conversation about how we can improve our efficiency. We recently ran an in-depth qualitative survey with 80 participants to discover and reflect on pain points. It was time-consuming but definitely worth doing: the detail and diversity of the answers helped management see what we need to improve.
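A sprint health check like the one described can be summarized by averaging each dimension across respondents and flagging the weakest one. The sketch below is purely illustrative; the dimension names and scores are assumptions, not Bitrise’s actual questionnaire:

```python
# Hypothetical survey: each team member scores 9 dimensions from 1 (poor)
# to 5 (great). The dimension names here are made up for illustration.
DIMENSIONS = ["delivery", "quality", "learning", "fun", "support",
              "process", "mission", "speed", "teamwork"]

responses = [
    {"delivery": 4, "quality": 5, "learning": 3, "fun": 4, "support": 5,
     "process": 3, "mission": 4, "speed": 3, "teamwork": 5},
    {"delivery": 3, "quality": 4, "learning": 4, "fun": 5, "support": 4,
     "process": 2, "mission": 5, "speed": 3, "teamwork": 4},
]

def dimension_averages(responses):
    """Average each dimension's score across all respondents."""
    return {d: sum(r[d] for r in responses) / len(responses)
            for d in DIMENSIONS}

def weakest_dimension(averages):
    """Return the dimension with the lowest average score."""
    return min(averages, key=averages.get)

averages = dimension_averages(responses)
focus_area = weakest_dimension(averages)
```

The point of the exercise is not the number itself but the conversation it prompts: the lowest-scoring dimension becomes the retrospective’s focus area.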
Learning and improving
In what aspects do you see room for improvement?
Since we’re a scaleup, we’re still experimenting and trying to crack the code on the best tactics for setting measurement and performance goals. In such a rapidly growing industry the workload is often heavy, but we want to make sure team members don’t have to overwork; that would ultimately lead to lower work quality and burnout, so we’re finding ways to remain flexible. We plan to invest heavily in lean and agile metrics to accelerate and improve our workflows.
The COVID lockdowns have made things more complicated: it’s not as easy to check in on people and see how they are doing. We always want to emphasize that we’re in this together. We believe that transparency, constant communication, and making company-wide decisions about our approaches are very important.
Sources
- Column: You are what you measure, by Dan Ariely: https://hbr.org/2010/06/column-you-are-what-you-measure
- How to build high-performing engineering teams, by Akahiro Asahara: https://www.sleeek.io/blog/building-high-performing-engineering-teams
- Measuring DevOps Success With DORA Metrics, by Varun Shakya: https://klera.io/blog/measuring-devops-success-with-dora-metrics/
- S.M.A.R.T. goals — How to make your goals achievable: https://www.mindtools.com/pages/article/smart-goals.htm