As developers, we all know that code reviews are a good thing in theory. They should help us:
- Find bugs and security issues early
- Improve the readability of our code
- Provide a safety net to ensure all tasks are fully completed
The reality is that code reviews are frequently an uncomfortable experience for everyone involved, leading to reviews that are combative, ineffective, or, worse still, simply not happening.
Here is a quick guide to help you create an effective code review process.
WHY do we do code reviews?
The first question to answer when reviewing your code review process is: what's the purpose of our code reviews? When you ask this question, you soon realise that there are many reasons to perform code reviews. You might even find that everyone in the team has a different idea of why they're reviewing code. They might think they're reviewing:
- To find bugs
- To check for potential performance or security problems
- To ensure readable code
- To verify the functionality does what was required
- To make sure the design is sound
- To share knowledge of the features that have been implemented and the updated design
- To check the code meets standards
...or one of hundreds of other reasons
If everyone in the team has a different "why", they're looking for different things in the code. This can lead to a number of anti-patterns:
- Code reviews take a long time as every reviewer will find a different set of problems that need to be addressed
- Reviewers become demotivated as every review throws up different types of problems depending upon who reviews it
- Reviews can ping-pong between the reviewer and the code author as each new iteration exposes a different set of problems
Having a single purpose for your code reviews ensures that everyone taking part in the review, whether they're a code author or a reviewer, knows the reason for the review and can focus their efforts on making sure their suggestions fit that reason.
WHAT are we looking for?
Only when we understand why we're doing the review can we figure out the things we want to look for during the review. As we've already started to see, there are a huge number of different things we could be looking for, so we need to narrow down the specific things we really care about.
For example, if we've decided the main purpose of our reviews is to ensure the code is readable and understandable, we'll spend less time worrying about a design that has already been implemented, and more time focusing on whether we understand the methods and whether the functionality is in a place that makes sense. The nice side effect of this particular choice is that more readable code makes it easier to spot bugs or incorrect logic, and simpler code often performs better too.
We should always automate as much as possible, so human code reviewers should never need to worry about the following:
- Formatting & style checks
- Test coverage
- Whether performance meets specific requirements
- Common security problems
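To make this concrete, here is a minimal sketch of a pre-merge gate that runs those checks automatically. The tool choices (black for formatting, flake8 for style, pytest-cov for coverage, bandit for security scanning) and the 80% coverage threshold are purely illustrative assumptions, not a recommendation; substitute whatever fits your stack.

```python
import subprocess

# Hypothetical pre-merge checks: each entry maps a label to the command
# that verifies it. These tool names and flags are illustrative only.
CHECKS = {
    "formatting": ["black", "--check", "."],
    "style": ["flake8", "."],
    "test coverage": ["pytest", "--cov", "--cov-fail-under=80"],
    "security scan": ["bandit", "-r", "src"],
}

def run_check(command):
    """Run one check command; an exit code of 0 means it passed."""
    return subprocess.run(command).returncode

def failing(results):
    """Given {label: exit_code}, list the checks that did not pass."""
    return sorted(label for label, code in results.items() if code != 0)
```

Wiring something like this into CI means a pull request never even reaches a human until the mechanical checks are green, leaving the reviewer free to think about the code itself.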
In fact, what a human reviewer should be focusing on might be fairly simple after all: is the code "usable"? Is it readable and understandable? Is the functionality in a place that makes sense? These are checks that cannot be automated, and these are the features of code that matter most to developers in the long run.
Developers are not the only people who matter, though; ultimately the code has a job to do. Our business cares about two things: does the code do what it's supposed to do, and is there an automated test, or set of tests, to prove it?
Finally, does it meet so-called non-functional requirements? It is important to consider things like regulatory requirements (e.g. auditing) or user needs (like documentation) in these checks.
WHO is involved in code reviews?
With a clear purpose and a set of things to be looking for in a review, it's much simpler to decide who should be involved in the review. We need to decide:
Who reviews the code? It's tempting to assume that it should be one or more senior or experienced developers. But if the focus is something like making sure the code is easy to understand, juniors might be the correct people to review - if an inexperienced developer can understand what's happening in the code, it's probably easy for everyone to understand. If the focus of the review is sharing knowledge, then you probably want everyone to review the code. For reviews with other purposes, you may have a pool of reviewers and a couple of them are randomly picked for each review.
Who signs off the review? If we have more than one reviewer, it's important to understand who ultimately is responsible for saying the review is complete. This could be a single person, a specific set of people, a quorum of reviewers, specified experts for particular areas of the code, or the review could even be stopped by a single veto. In teams with high levels of trust, the code author might be the one to decide when enough feedback is enough and the code has been updated to adequately reflect concerns that were raised.
Who resolves differences of opinion? Reviews may have more than one reviewer. If different reviewers have conflicting advice, how does the author resolve this? Is it down to the author to decide? Or is there a lead or expert who can arbitrate and decide the best course? It's important to understand how conflicts are resolved during a code review.
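The sign-off policies described above (a quorum of approvals, a single veto, a nominated gatekeeper) can be sketched as a small decision function. This is a toy model under assumed rules; the vote labels and the quorum threshold are invented for illustration, and a real team would pick whichever policy matches its level of trust.

```python
def review_signed_off(votes, quorum=2):
    """Decide whether a review is complete under a quorum-plus-veto policy.

    votes maps each reviewer's name to "approve", "veto", or "comment".
    A single veto blocks the review outright; otherwise it is signed off
    once at least `quorum` reviewers have approved. Both rules are
    illustrative assumptions, not the only possible policy.
    """
    if any(v == "veto" for v in votes.values()):
        return False
    approvals = sum(1 for v in votes.values() if v == "approve")
    return approvals >= quorum
```

Setting `quorum=1` models a single nominated sign-off; adding a check that a specific name appears among the approvers would model the gatekeeper or area-expert variant.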
WHEN do we review?
"When" has two important components:
When do we review? Traditional code reviews happen when all the code is complete and ready to go to production. Often code will not be merged to trunk/master until a review is complete, for example the pull request model. This is not the only approach. If a code review is for knowledge sharing, the review could happen after the code has been merged (or the code could be committed straight to master). If the code review is an incremental review that is supposed to help evolve the design of the code, reviews will be happening during implementation. Once we know: why we do reviews; what we're looking for; and who takes part, we can more easily decide when is the best time to perform the review.
When is the review complete? Not understanding when a review is complete is a major factor in reviews dragging on indefinitely. There's nothing more demotivating than a review that never ends: the developer feels like they've been working on the same thing forever and it's still not in production. Guidelines for deciding when the review is complete will depend upon who is taking part in the review, and when the review is taking place:
- With knowledge sharing reviews, it could be signed off once everyone has had a chance to look at the code
- With gateway reviews, usually a single nominated senior (the gatekeeper) says it's complete when all their points are addressed
Other types of reviews may have a set of criteria that need to be passed before the review is complete. For example:
- All comments have been addressed by fixes in the code
- All comments have either led to code changes, or to tickets in the issue tracker (e.g. creating tickets for new features or design changes; adding additional information to upcoming feature tickets; or creating tech-debt tickets)
- Comments flagged as showstoppers have all been addressed in some way, comments that were observations or lessons to learn from in the future do not need to have been “fixed”
WHERE do we review?
Code reviews don't have to happen inside a code review tool. Pair programming is a form of code review. A review could be as simple as pulling aside a colleague and walking through your code with them. Reviews might be done by checking out a branch and making comments in a document, email, or chat channel. Or code reviews might happen via a GitHub pull request or a dedicated piece of code review software.
There are lots of things to consider when doing a code review, and if we worried about all of them for every code review, it would be nearly impossible for any code to pass the review process. The best way to implement a code review process that works for us is to consider:
- Why are we doing reviews? Reviewers have an easier job with a clearly defined purpose, and code authors will have fewer nasty surprises from the review process
- What are we looking for? When we have a purpose, we can create a more focused set of things to check when reviewing code
- Who is involved? Who does the reviews, who is responsible for resolving conflicts of opinion, and who ultimately decides if the code is good to go?
- When do we review, and when is the review complete? Reviews could happen iteratively while working on the code, or at the end of the process. A review could go on forever if we don't have clear guidance on when the code is finally good to go.
- Where do we review? Code reviews don't need a specific tool; a review could be as simple as walking a colleague through our code at our desk.
Once these questions are answered we should be able to create a code review process which works well for the team. Remember, the goal of a review should be to get the code into production, not to prove how clever we are.