Your code sucks, mine is better, and why code review doesn't have to be painful


We've been experimenting with different ways to make code review part of our culture. Although we'd like to, we don't pair program regularly for a variety of reasons. We're a remote team in 6 different time zones and we currently have 4 different projects for only 3 engineers. As a result, we try to keep context switching to a minimum, but code review naturally drives context switching for us.

Code review is a great way to improve code quality! 

Well, that's great, but how do you do it? We set out to answer that question.

First of all, it's important to realize that all organizations are different. The people are different, the projects are different, the tools are different, etc. Sure, many of us share similarities, but for the most part, everything is slightly different. Although you can read and read about how other organizations do it, you have to find your own way through your own experimentation.

Experimentation is fun. If we find a new tool we'd like to use or a new idea we'd like to add to our workflow, we try it. We don't have a formal experimentation period. We just try it. If we like it, we find a way to keep it. If we don't like it, we throw it out. 

For example, we recently wanted to try Blossom. Up until that point, we had used Basecamp exclusively for project planning. We knew Basecamp was missing something, but transitioning clients to it had been pretty easy. Blossom has started to fill the hole in our hearts and we've decided to keep it. A word of warning though: some of your clients won't love your tools as much as you do. Not all of our clients liked Blossom.

Like Blossom, code review is a specific piece of our workflow we're actively experimenting with. We've tried a few different things so far, and we haven't been completely satisfied yet. That doesn't mean we've given up. We're just adapting and changing our experiments until we're happy with them.

Experiment #1 - Code reviews live on GitHub

This is obvious. All of our code is on GitHub. Their collaboration tools and commenting system are easy to use, and if you've worked on open source, they'll feel familiar. We can review the code and comment inline on the lines in question. We can open issues when we see them and suggest ways to refactor.

When we first started doing this, we knew right away it was a success. For improving quality, this was definitely a win.

One of the advantages of using GitHub for your code reviews is the rich history. If you're constantly talking about your code and using pull requests to signal when code gets merged, you'll build up a record of discussions, bug fixes, and screenshots. If you practice continuous deployment, you'll have a history of deployments as well.

History is invaluable. You'll be able to answer questions like, "Why did we do it this way?" You'll be able to see screenshots of the app as it's being built. The history will serve as a seed to help you recall much more about the code you wrote.
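That history isn't locked into the web UI, either. As a hypothetical illustration (not something from our actual workflow), here's a minimal Python sketch that uses the GitHub REST API via the requests library to pull back recently closed pull requests and their review comments; OWNER, REPO, and TOKEN are placeholders you'd substitute with your own values.

```python
import requests

# Hypothetical sketch: fetch recently closed pull requests and their inline
# review comments from the GitHub REST API. OWNER, REPO, and TOKEN are
# placeholders, not real values.
API = "https://api.github.com"
HEADERS = {"Authorization": "token TOKEN"}

# List the five most recently updated closed pull requests for the repo.
prs = requests.get(
    f"{API}/repos/OWNER/REPO/pulls",
    params={"state": "closed", "per_page": 5},
    headers=HEADERS,
).json()

for pr in prs:
    print(f"#{pr['number']} {pr['title']} (merged: {pr['merged_at']})")

    # Each PR object links to its inline review comments.
    comments = requests.get(pr["review_comments_url"], headers=HEADERS).json()
    for comment in comments:
        print(f"  {comment['user']['login']}: {comment['body'][:80]}")
```

A few lines like that are enough to skim months of review discussion when you're trying to remember why a change was made.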

Although reviewing code on GitHub is great, it comes with a price. We're a distributed team and sometimes only one engineer is online. How do you practice code review if you value the speed at which you execute? If we wanted to integrate code review into our organization, what do we do in those cases? Do we sit and wait until another engineer comes online? This was a problem we didn't know how to address.

Experiment #2 - Assigning a code review todo on Basecamp 

Reviewing code on GitHub was definitely becoming part of our organization; however, we wanted to make sure it was actually happening. Since we were using Basecamp at the time, we decided to create a todo on Basecamp for each review and assign it to someone. That person would be notified and they would review it, right?

Much like Experiment #1, we still needed someone to be online. If one of our engineers completed a bug fix and no other engineers were online, what happened? The todo got created and the code sat there until that person came online.

Another problem for us was context switching. If someone was online, should they immediately switch and do a code review? It doesn't seem like a good idea, does it? 

Experiment #3 - Adding a code review step in Blossom

As we started to use Blossom, we started to ditch Basecamp. We set up our kanban flow and decided to add a code review stage. It seemed like a code review stage was just what we needed.

We placed our code review stage after our QA stage (we're big fans of having a QA person to test the code as a user).  Over time, we started to see quite a bit of friction in our workflow. Code reviews weren't getting done "on time". We were ending up with code that had passed QA, but wasn't reviewed by another engineer.

We had been accustomed to merging code upstream as soon as something passed QA, and now we were feeling the pain. The workflow wasn't working.

Experiment #4 - Continuous code review 

After all of our experiments so far, we finally realized that code review shouldn't be a formal part of the process. Instead, we should train ourselves to do it all the time. We call this continuous code review.

We review code earlier in the process. Sometimes we'll write some code, open a pull request, and pair up with another engineer to review it. If code reaches the QA stage before its first review, we've failed.

If I had to come up with a rule, I'd say you should do your first code review about halfway through implementation. We don't really like rules, though. Smart engineers can use their own discretion and figure out when a review would help them. This has helped us immensely so far, but it's still too early to tell if this is the experiment we'll call a success.

Our goal is to ingrain code review in our culture. We know it improves quality. We know it makes us smarter. We know we want to do it. So do it.