How I Do Code Reviews at Mozilla

Wednesday, October 22nd, 2014

Since I received some good feedback about my prior post, How I Hire at Mozilla, I thought I’d try to continue this as a mini-series about how I do other things at Mozilla. Next up is code review.

Even though I have found new module owners for some of the code I own, I still end up doing 8-12 review/feedback cycles per week. Reviews are only as good as the time you spend on them, so I approach reviews in a fairly systematic way.

When I load a patch for review, I don’t read it top-to-bottom. I also try to avoid reading the bug report: a code change should be able to explain itself, either directly in the code or in the commit message that is part of the patch. If Bugzilla comments are required to understand a patch, those comments should probably be part of the commit message itself. Instead, I try to understand the patch by unwrapping it from the big picture into the small details:

The Commit Message

  • Does the commit message accurately describe what the patch does?
  • Am I the right person to make a decision about this change?
  • Is this the right change to be making?
  • Are there any external specifications for this change? This could include a UX design, or a DOM/HTML specification, or something else.
  • Should there be an external specification for this change?
  • Are there other experts who should be involved in this change? Does this change require a UX design/review, or a security, privacy, legal, localization, accessibility, or addon-compatibility review or notification?
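
To make these questions concrete, here is the shape of commit message I hope to see. This is a made-up example: the bug number, reviewer, and behavior it describes are all placeholders.

    Bug NNNNNN - Persist page thumbnails across restarts, r=reviewer

    Thumbnails are currently regenerated on every startup, which is
    needlessly slow. This patch saves the thumbnail cache in the profile
    directory and invalidates entries when the corresponding page is
    removed from history. UX signed off on the staleness behavior in the
    design attached to this patch; no new data collection is involved.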

Read the Specification

If there is an external specification that this change should conform to, I will read it or the appropriate sections of it. In the following steps of the review, I try to relate the changes to the specification.

Documentation

If there is in-tree documentation for a feature, it should be kept up to date by patches. Some changes, such as Firefox data collection, must be documented. I encourage anyone writing Mozilla-specific features and APIs to document them primarily with in-tree docs, and not on developer.mozilla.org. In-tree docs are much more likely to remain correct and be updated over time.

API Review

APIs define the interaction between units of Mozilla code. A well-designed API that strikes the right balance between simplicity and power is a key component of software engineering.

In Mozilla code, APIs can come in many forms: IDL, IPDL, .webidl, C++ headers, XBL bindings, and JS can all contain APIs. Sometimes even C++ implementation files can contain an API; for example, Mozilla has a mostly-unfortunate pattern of using the global observer service as an API surface between disconnected pieces of code.
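
That observer pattern looks roughly like this; the topic string and the components involved here are invented for illustration:

    #include "nsCOMPtr.h"
    #include "mozilla/Services.h"
    #include "nsIObserverService.h"

    // Component A broadcasts a notification...
    void NotifyThumbnailCacheUpdated()
    {
      nsCOMPtr<nsIObserverService> obs = mozilla::services::GetObserverService();
      if (obs) {
        obs->NotifyObservers(nullptr, "thumbnail-cache-updated", nullptr);
      }
    }

    // ...and an unrelated component B, somewhere else entirely, registers
    // with obs->AddObserver(aListener, "thumbnail-cache-updated", false).
    // The contract between A and B exists only in that string, which is
    // exactly why this kind of API surface deserves careful review.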

In the first pass I try to avoid reviewing the implementation of an API. I focus on the API itself and its associated doccomments. The design of the system and the interaction between systems should be clear from the API docs. Error handling should be clear. Unless they are perfectly obvious, the threading behavior, asynchronous behavior, and other state-machine aspects of an API should be carefully documented.
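
As an illustration of the level of doccomment detail I want, here is a hypothetical C++ API (not real Mozilla code) whose threading, asynchrony, and error behavior are all spelled out:

    #include "nsError.h"
    #include "nsIURI.h"

    class ThumbnailCallback;

    // ThumbnailService produces thumbnails for recently visited pages.
    //
    // Threading: all methods must be called on the main thread. Callbacks
    // are also invoked on the main thread, and always asynchronously, even
    // when the result is already cached.
    class ThumbnailService
    {
    public:
      // Request a thumbnail for aURL.
      //
      // Exactly one of aCallback->OnThumbnail (success) or
      // aCallback->OnError (unreachable URL, decode failure) is invoked,
      // in a later event-loop turn.
      //
      // Returns NS_ERROR_INVALID_ARG synchronously, without invoking the
      // callback, if aURL is not an HTTP(S) URL.
      nsresult RequestThumbnail(nsIURI* aURL, ThumbnailCallback* aCallback);
    };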

During this phase, it is often necessary to read the surrounding code to understand the system. None of our existing tools are very good at this, so I often have several MXR tabs open while reading a patch. Hopefully future review-board integration will make this better!

Brainstorm Design Issues

In my experience, the design review is the hardest phase of a review: it is the part that requires the most experience and creativity, and the part that provides the most value.

  • How will this change interact with other code in the tree?
  • Are there edge cases or failure cases that need to be addressed in the design?
  • Is it likely that this change will affect performance or security in unexpected ways?

Testing Review

I try to review the tests before I review the implementation.

  • Are there automated unit tests?
  • Are there automated performance tests?
  • Is there appropriate telemetry/field measurement, especially for error conditions?
  • Do the tests cover the specification, if there is one?
  • Do the tests cover error conditions? (See the sketch after this list.)
  • Do the tests strike the right balance between “unit” testing of small pieces of code versus “integration” tests that ensure a feature works properly end-to-end?
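
For error conditions in particular, I want tests that pin down the documented contract. A minimal sketch, written in gtest style against the hypothetical ThumbnailService API above:

    #include "gtest/gtest.h"
    #include "nsCOMPtr.h"
    #include "nsNetUtil.h"

    TEST(ThumbnailService, RejectsNonHttpUrls)
    {
      nsCOMPtr<nsIURI> uri;
      ASSERT_TRUE(NS_SUCCEEDED(
          NS_NewURI(getter_AddRefs(uri), NS_LITERAL_CSTRING("file:///tmp/x"))));

      // The documented contract: non-HTTP(S) URLs are rejected
      // synchronously, and the callback is never invoked.
      ThumbnailService service;
      EXPECT_EQ(NS_ERROR_INVALID_ARG,
                service.RequestThumbnail(uri, nullptr));
    }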

Code Review

The code review is the least interesting part of the review. At this point I’m going through the patch line by line.

  • Make sure that the code uses standard Mozilla coding style. I really, desperately want somebody to automate lots of this as part of the patch-submission process. It’s tedious both for me and for the patch author if there are style issues that delay a patch and require another review cycle.
  • Make sure that the number of comments is appropriate for the code in question. New coders/contributors sometimes have a tendency to over-comment things that are obvious just by reading the code. Experienced contributors sometimes make assumptions that are correct based on experience but should be called out more explicitly in the code.
  • Look for appropriate assertions. Assertions are a great form of documentation and runtime checking.
  • Look for appropriate logging. In Mozilla we tend to under-log, and I’d like to push new code, especially, toward more aggressive logging; see the sketch after this list.
  • Mostly this is scanning the implementation. If there is complexity (such as threading!), I’ll have to slow down a lot and make sure that each access and state change is properly synchronized.
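
Concretely, the assertions and logging I hope to see look something like this. This is a hypothetical method using MOZ_ASSERT and NSPR logging; every name in it is invented:

    #include "mozilla/Assertions.h"
    #include "nsThreadUtils.h"
    #include "nsString.h"
    #include "prlog.h"

    // Enable output at runtime with NSPR_LOG_MODULES=ThumbnailService:5.
    static PRLogModuleInfo* gThumbnailLog = PR_NewLogModule("ThumbnailService");

    void
    ThumbnailService::EvictFromCache(const nsACString& aKey)
    {
      // Enforce the documented threading contract at runtime.
      MOZ_ASSERT(NS_IsMainThread(), "EvictFromCache requires the main thread");

      PR_LOG(gThumbnailLog, PR_LOG_DEBUG,
             ("EvictFromCache(%s)", PromiseFlatCString(aKey).get()));

      CacheEntry* entry = FindEntry(aKey);  // hypothetical helper
      if (!entry) {
        // Log the unexpected case instead of failing silently.
        PR_LOG(gThumbnailLog, PR_LOG_WARNING,
               ("EvictFromCache: no entry for key"));
        return;
      }
      RemoveEntry(entry);  // hypothetical helper
    }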

Re-read the Specification

If there is a specification, I’ll briefly re-read it to make sure that it was covered by the code I just finished reading.

Mechanics

Currently, I primarily do reviews in the bugzilla “edit” interface, with the “edit attachment as comment” option. Splinter is confusing and useless to me, and review-board doesn’t seem to be ready for prime-time.

For long or complex reviews, I will sometimes copy and quote the patch in emacs and paste or attach it to bugzilla when I’m finished.

In some cases I will cut off a review after one of the earlier phases: if I have questions about the general approach, the design, or the API surface, I will often try to clarify those questions before proceeding with the rest of the review.

There’s an interesting thread in mozilla.dev.planning about whether it is discouraging to new contributors to mark “review-” on a patch, and whether there are less-painful ways of indicating that a patch needs work. My current practice is to mark r- in all cases where a patch needs to be revised, but to thank contributors for their effort so that they still feel appreciated, and to be as specific as possible about the required changes while avoiding any words that could be perceived as an insult.

If I haven’t worked with a coder (paid or volunteer) in the past, I will almost always ask them to submit an updated patch with any changes for re-review. This allows me to make sure that the changes were completed properly and didn’t introduce any new problems. Once I have gained some experience with somebody, I will often trust them to make the necessary changes and simply mark “r+ with review comments fixed”.

Reviewing Merges in Mercurial

Thursday, April 17th, 2008

In the brave new world of distributed version control, history is no longer linear. It can branch freely, under no central control. But what is perhaps more interesting is that you can merge disparate branches back together. This makes it much easier to do long-term feature work, because it is possible to track a moving target. For the ActionMonkey project, we are following a procedure where all changes are reviewed before they are pushed to the actionmonkey branch. This means that we won’t have a huge blob of changes that need review before they are integrated into mozilla-central.

This creates a new class of problem: how do you perform merges? Is it necessary to review merge changesets with the same rigor as regular changes? How would one actually go about reviewing a merge changeset?

Today, I wanted to update ActionMonkey with the latest changes from mozilla-central. There is enough divergence between ActionMonkey and Mozilla 1.9 that this is not always a simple task: fortunately for me, the only conflicts that required any real merging were fairly simple. I went ahead and performed the merge and did some basic testing. But I really want Jason Orendorff to review my changes for sanity. There is no simple way to see what I actually did:

[Figure: the merge changeset in the actionmonkey revision graph, with its two parents]

It isn’t hard to diff this changeset against its mozilla-central parent, or against its actionmonkey parent. But neither of these diffs gives you a sense of the real work involved in the merge. What I really want to give Jason is a static view of a three-way merge as it already happened, highlighting the source locations where I made conflict resolutions or manual changes. Does anyone know if such a tool exists for Mercurial, or for any other distributed version control system?
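
For reference, the two ordinary diffs are easy to produce. Assuming the merge changeset is tip, something like:

    $ hg parents -r tip                # shows both parents of the merge changeset
    $ hg diff -r <m-c-parent> -r tip   # the merge against its mozilla-central parent
    $ hg diff -r <am-parent> -r tip    # the merge against its actionmonkey parent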

Because I don’t know of such a three-way merge viewer, I did the next-best thing: in my checkin comment, I carefully listed every file and function where I made a conflict resolution or manual change. This will at least make it clear in the future where I may have goofed.