Best Practices for Peer Code Review
Smart Bear Software | The Latest Code Review Tips & More: blog.smartbear.co | 978.236.7860 | www.SmartBear.com
Introduction
It’s common sense that peer code review – in which software developers
review each other’s code before releasing software to QA – identifies bugs,
encourages collaboration, and keeps code more maintainable.
But it’s also clear that some code review techniques are inefficient and
ineffective. The meetings often mandated by the review process take time
and kill excitement. Strict process can stifle productivity, but lax process
means no one knows whether reviews are effective or even happening. And
the social ramifications of personal critique can ruin morale.
This whitepaper describes 11 best practices for efficient, lightweight peer
code review that have been proven to be effective by scientific study and by
Smart Bear's extensive field experience. Use these techniques to ensure your
code reviews improve your code – without wasting your developers' time.
1. Review fewer than 200-400 lines of code at a time.
The Cisco code review study (see sidebar for details) showed that for optimal
effectiveness, developers should review fewer than 200-400 lines of code
(LOC) at a time. Beyond that, the ability to find defects diminishes. At this rate,
with the review spread over no more than 60-90 minutes, you should get a
70-90% yield; in other words, if 10 defects existed, you’d find 7-9 of them.
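The yield arithmetic above is easy to check directly. The sketch below is illustrative only; the function name and the defect counts are hypothetical examples, not data from the Cisco study:

```python
# Illustrative check of the 70-90% yield claim for reviews of
# fewer than 200-400 LOC spread over 60-90 minutes.
# All numbers here are hypothetical, not study data.

def expected_defects_found(latent_defects: int, yield_rate: float) -> float:
    """Defects a review is expected to surface, given a yield rate."""
    return latent_defects * yield_rate

# If 10 defects exist and yield is 70-90%:
low = expected_defects_found(10, 0.70)   # ~7 defects found
high = expected_defects_found(10, 0.90)  # ~9 defects found
print(low, high)
```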
The graph on the following page, which plots defect density against the
number of lines of code under review, supports this rule. Defect density is the
number of defects per 1000 lines of code. As the number of lines of code
under review grows beyond 300, defect density drops off considerably.
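The defect-density metric defined above reduces to a single division. A minimal Python sketch, with hypothetical example numbers (not data from the study):

```python
# Defect density = defects per 1000 lines of code (KLOC) under review.
# The example values below are hypothetical.

def defect_density(defects_found: int, loc_reviewed: int) -> float:
    """Defects per 1000 lines of code under review."""
    return defects_found * 1000 / loc_reviewed

# e.g. 9 defects found in a 300-LOC review:
print(defect_density(9, 300))  # 30.0 defects per KLOC
```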
In this case, defect density is a measure of “review effectiveness.” If two
reviewers review the same code and one finds more bugs, we would consider
her more effective. Figure 1 shows how, as we put more code in front of a
reviewer, her effectiveness at finding defects drops. This result makes sense –
the reviewer probably doesn’t have a lot of time to spend on the review, so
inevitably she won't do as good a job on each file.
The World’s Largest Code Review Study
at Cisco Systems®
Our team at Smart Bear Software has spent
years researching existing code review studies
and collecting “lessons learned” from more
than 6000 programmers at 100+ companies.
Clearly people find bugs when they review
code – but the reviews often take too long to
be practical! We used the information gleaned
through years of experience to create the
concept of lightweight code review. Using
lightweight code review techniques,
developers can review code in one-fifth the time
needed for full “formal” code reviews. We also
developed a theory for best practices to
employ for optimal review efficiency and value,
which are outlined in this white paper.
To test our conclusions about code review in
general and lightweight review in particular,
we conducted the world’s largest-ever
published study on code review,
encompassing 2500 code reviews, 50
programmers, and 3.2 million lines of code at
Cisco Systems®. For ten months, the study
tracked the MeetingPlace® product team,
distributed across Bangalore, Budapest, and
San José.
At the start of the study, we set up some rules
for the group:
- All code had to be reviewed before it was checked into the team’s Perforce version control software.
- Smart Bear’s Code Collaborator code review software tool would be used to expedite, organize, and facilitate all code review.
- In-person meetings for code review were not allowed.
- The review process would be enforced by tools.
- Metrics would be automatically collected by Code Collaborator, which provides review-level and summary-level reporting.