We encouraged a positive attitude towards change at Haplo so we could adapt to the constantly changing situations we found ourselves in. We needed to continually change the way we worked together as our customers’ requirements changed, our team grew and changed, and the world around us changed.
We were able to make changes rapidly and safely, using a semi-formal process to ensure we only made small and effective changes. Not all changes work out, so we needed a way to determine whether a change worked before committing to it.
Anyone could initiate a change. As part of onboarding, we invited newcomers to offer their perspective, as they could see what we were doing with fresh eyes. And no permission was needed. All you had to do was get colleagues to agree and follow the process.
There was no resistance to change from anyone at Haplo, as change meant improvements to our working lives. Changes were small, well thought-out, and there was no danger of being stuck doing something silly.
Haplo’s change process
Ideas could be suggested at any time, but mostly they were raised, discussed and tracked in the weekly developer meetings.
Our process was adapted from Toyota’s “Improvement Kata”, as described in Toyota Kata by Mike Rother:
- Describe the end goal: Where do we want to be eventually? This can be quite vague and aspirational.
- Define the experiment: What’s the smallest possible thing we could do to get a tiny bit closer to the goal?
- Set the success criteria: How will we know if it worked? The more quantifiable, the better.
- Did it work? After two weeks, see if the success criteria were met. If so, keep the change.
- Iterate: If it didn’t work, try something else. If it did, and we still need to improve, review the end goal and try another small change.
We used a very simple management system to track the experiments and remind us to verify they worked. This was very lightweight, but effective — and provided evidence of continual improvements for our ISO27001 audits.
Let’s say that colleagues are waiting too long for code reviews. We run a few reports on our code review system, which confirms this by finding that only 25% are done on time. However, we notice that most reviews are going to a small number of reviewers. We ask why some people do more reviews than others, and find that they are seen as having more knowledge, and perhaps more significantly, they actually get reviews done promptly. Our top reviewer reports that she is spending too long on reviews and needs to write more code.
We think giving all the reviews to a small group of people is a bad idea, and is probably causing the late reviews. We want to fix this emergent behaviour. So, we follow the process.
In an ideal world, code reviews will be completed on time and within a couple of days. The code review workload will be spread evenly around the team, so no one is blocked from writing code, and knowledge about how our code works is spread across the team.
We’re never going to get there in two weeks, but the point of the end goal is to set out why we want to change and what we want to achieve with our change. If we could get there in two weeks, the goal would be far too specific to help.
The smallest change
Currently, when a developer requests a code review, they choose the reviewer manually. We'll change our code review system so that reviewers are assigned randomly. Because some things really do need to be reviewed by a subject expert, developers are allowed to override this, but it's something we discourage.
Here, we have a tiny configuration change to make. It’s largely automatic, so no one really has to think much when they do it. We still allow the behaviour which caused the uneven allocation, so it’s important we check there aren’t too many manual allocations.
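A minimal sketch of that assignment rule, with the discouraged override flagged so it shows up in the reports. The function, its signature, and the team names are hypothetical, not the actual code review system:

```python
import random

def assign_reviewer(team, author, override=None):
    """Pick a reviewer at random, excluding the author.

    `override` lets a developer name a subject expert directly;
    the second return value flags the assignment as manual so we
    can count overrides when we check the success criteria.
    """
    if override is not None:
        return override, True
    candidates = [member for member in team if member != author]
    return random.choice(candidates), False

team = ["alice", "bob", "carol", "dave"]
reviewer, manual = assign_reviewer(team, "alice")
# reviewer is one of bob/carol/dave; manual is False
```

Returning the manual flag alongside the reviewer keeps the override path honest: the behaviour is still allowed, but every use of it is counted.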
After two weeks, we’ll look at the reports again. Our experiment is a success if:
- No one has done 10% more or fewer reviews than anyone else. (we guessed that the uneven allocation is the main cause of the problem, so we set this criterion to test our theory)
- 50% of reviews requested in the two weeks are completed on time. (a useful and statistically significant increase, to check we are having a positive effect)
- Fewer than 5 reviews were manually assigned. (are colleagues following the new rules?)
- No significant adverse effects are reported. (a simple safety check)
Of these criteria, only the last is subjective. It’ll only take a couple of minutes to check the reports and see if it worked.
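Because the first three criteria are quantifiable, the check can be almost entirely mechanical. Here is a sketch, assuming we can pull per-reviewer counts, an on-time fraction, and a manual-assignment count from the reports; the "10% more or fewer than anyone else" rule is interpreted here as deviation from the team mean:

```python
def experiment_succeeded(per_reviewer_counts, on_time_fraction,
                         manual_assignments, adverse_effects_reported):
    """Apply the four success criteria from the experiment."""
    mean = sum(per_reviewer_counts) / len(per_reviewer_counts)
    # Criterion 1: no one's review load deviates more than 10% from the mean
    even_spread = all(abs(count - mean) <= 0.1 * mean
                      for count in per_reviewer_counts)
    return (even_spread
            and on_time_fraction >= 0.5      # criterion 2: on-time rate
            and manual_assignments < 5       # criterion 3: overrides are rare
            and not adverse_effects_reported)  # criterion 4: safety check

# Evenly spread load, 55% on time, 2 overrides, nothing bad reported
result = experiment_succeeded([10, 10, 11, 9], 0.55, 2, False)
```

Only the final argument needs human judgement; everything else comes straight from the reports, which is what makes the two-week check a couple of minutes' work.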
An effective process for change
When we started our company, it was just two of us. We had no spare time, and every single minute counted. We had to keep making improvements that worked without taking too long over it. So we focused on quick changes which improved a process or product a little bit at a time, rather than doing something that might make a far bigger improvement, but take months.
But eventually we needed a more formal process for change when we found that we were getting stuck in our ways, and colleagues with ideas weren’t suggesting them. We needed an effective process, and as with everything we did, we thought carefully about what that would look like for us.
As with most things, we approached it by thinking about what behaviours and culture we wanted to see at Haplo. We wanted to keep on making continual improvements, so that over time, we made big improvements while seeing the value of our work immediately.
Produces positive change
This is probably a self-evident requirement. Changes must be positive by giving better results and making our working lives easier.
We wanted change to happen, and we wanted it to happen all the time, so the process needed to be quick and easy to follow.
Proposing a small change and setting out the experiment shouldn’t take more than 10 minutes. Having to know it works in two weeks means it has to be a small change. And when you’re reminded that the two weeks are up, it’s easy to check whether the change should be kept.
If you’re going to change continuously, you need to be sure you won’t make things worse. When your process is safe, you can allow anyone to make changes, because everyone can be confident that nothing bad will slip through.
Setting out an experiment with success criteria before you make the change forces you to think about what success looks like. If you can’t see whether it worked, it’s either pointless, or not a safe change because you don’t know what it could do.
When the two weeks is up, it’s easy to check whether we should keep the change, as we set out the criteria in advance. This has some really big benefits:
- there’s no temptation to keep something by default,
- we’ll be entirely evidence-driven in our approach, ensuring it’s as likely as possible to result in positive change,
- everyone is happy to abandon changes, as the decision criteria were agreed two weeks ago.
We will probably get some things wrong. It’s entirely possible we’ll set the wrong success criteria by measuring the wrong thing, getting our data wrong, or failing to anticipate unintended consequences.
By making the process iterative and continuous, we will revisit decisions that didn’t work out. We’ll just make changes in response to the problems they cause.
Enables big changes
We are ambitious, and want to achieve big things. Our change process must not limit us to small tweaks.
We deliberately look to long-term end goals to make sure we’re doing something that will actually make a difference.
How this can go wrong
Haplo’s process for change was a huge contributor to our success and effective working environment. We avoided problems by being committed to the principle of many small changes, and making big changes as a long series of small individually worthwhile changes.
But in reflecting on how we worked together, I can see how we could have done even better. Potential problems are:
Not following it all the time: If you create a workplace where change is easy and seen as safe, colleagues will assume mistakes will be caught. If you don’t follow the process which provides this safety, mistakes will be missed. Enforcement of the process is essential. Our enforcement was imperfect, and we were largely lucky with the changes which slipped through. I am uncomfortable with relying on luck and limiting the downsides through the small size of changes.
Communicating changes: When change happens, people need to know about it so we actually make the changes, and our experiment is valid. We mitigated this mainly by using software to guide our work. But if you missed the discussion where it was decided, I’m not entirely convinced you’d always hear about it.
Documentation: When you make lots of small changes, how do you document it? Our systems recorded the changes, but we ended up with lots of records of small changes. While embedding the way we worked in software helped an awful lot, I’m sure that up-to-date documentation would have made on-boarding easier.
Not abandoning enough changes: Thinking back, I’m not sure we rejected enough changes. As it’s safe to try things, shouldn’t we be more ambitious in our choices and set harder criteria? Perhaps there should be a target acceptance rate that is monitored regularly? For a relatively mature organisation like Haplo, which aimed to move carefully and sustainably, that might be about 90%, but could be much lower for a younger team.
Failure to iterate: There’s a danger you could get stuck at a local maximum, doing something that works but is not the best possible thing. If I were doing this again, I might consider adding another rule that you absolutely must iterate a few times. However, I think this was largely mitigated by setting ambitious end goals.