
Reviewers and companies in the WebKit project



As promised, here is the second part of our series on WebKit, which we started with an analysis of the companies authoring reviewed commits.

Commits by reviewing company (whole history of the project)

Now we come back with an analysis of who is reviewing, and of the companies those reviewers are affiliated with. To provide some context, it is worth citing the commit and review policy of the WebKit project, which explains how only specific developers, with long and recognized experience in the project, can become reviewers: “A potential Reviewer may be nominated once they have submitted a minimum of 80 good patches. They should also be in touch with other reviewers and aware of who are the experts in various areas.” Since changes to source code have to be accepted by a reviewer, reviewers act to some extent as the gatekeepers of the project: those in charge of ensuring that new code meets quality standards and is in line with the project guidelines.

Therefore, the analysis of which companies are employing reviewers shows the project from another perspective, this time focused on those involved in this gatekeeping activity. The landscape now is different from that shown by the authorship analysis.

Commits by reviewing company (2012)

The top figure summarizes all reviews, over the whole life of the project, by company. Apple is clearly the company with the most reviews, followed by Google. However, it is interesting to compare this with the situation in 2012 (left). In that period, Google and Apple swap positions, with Google almost doubling Apple's reviews. This trend was also visible in our previous report on authorship, but taking into account that reviewers need experience in (and therefore tenure with) the project, it shows how Google is not only contributing a large share of WebKit work, but also taking on the responsibility of reviewing contributions.

However, there are still differences between the authorship and reviewing data (see chart below). For example, while by number of authors Google is by far first (433), by number of reviewers (43) it is second to Apple (58), which employs 112 authors (11.01% of the total). In other words, more than half of the developers from Apple are also reviewers, while barely 10% of Google developers are. Comparing again with the activity charts (number of commits), Google reviewers seem more active than Apple reviewers (their share of reviews is larger, while their numbers are similar), whereas Apple authors are more active than Google authors (their share of commits for the whole project is similar, while the number of Apple authors is much smaller).

Authors per company (left) and reviewers per company (right) for the whole life of the project.

This is also observed in the total authoring and reviewing activity by company (compare this analysis of reviewing with our previous study on authorship). As expected, most of the commits (changes to source code) are reviewed by Apple or Google. In fact, Apple has reviewed 47.84% (38,044 commits) of all accepted submissions, while Google has reviewed 34.97% (27,804 commits). Besides these numbers, and given that the population of reviewers is smaller than that of authors, the number of companies with reviewers is also smaller: 39 companies have been detected participating in the WebKit project with at least one accepted and reviewed patch, but only 17 of them also have at least one reviewer. The aggregated share of reviews by Apple and Google (about 83%) is also larger than their aggregated share of contributions (about 75%), probably because many companies contribute but do little or no reviewing.
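These aggregate figures follow directly from the commit counts quoted above; a quick sketch to double-check how the shares relate (the counts are the ones from this post, the rest is plain arithmetic):

```python
# Review counts quoted above (whole history of the project)
apple_reviews = 38_044   # 47.84% of all reviewed commits
google_reviews = 27_804  # 34.97%

# Recover the approximate total of reviewed commits from Apple's share
total = apple_reviews / 0.4784

# Check Google's share and the aggregated Apple + Google share
google_share = google_reviews / total
combined_share = (apple_reviews + google_reviews) / total

print(round(total))                    # roughly 79,500 reviewed commits
print(round(google_share * 100, 2))    # close to the 34.97% quoted above
print(round(combined_share * 100, 2))  # "about 83%" as stated in the text
```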

Number of commits based on reviewed commits and submitted by authors (left) and checked by reviewers (right)

Coming back to the general numbers, it is interesting to notice the big gap between the number of authors and the number of reviewers. For example, in July 2008 there were 33 reviewers for 55 authors (a ratio of 1.67 authors per reviewer), in July 2010 there were 55 reviewers for 196 authors (3.56), and in July 2012, 89 reviewers for 309 authors (3.47). It seems that during 2012 the trend started to reverse, but given the quick increase in authors, many more reviewers would still need to be elected to come close to the ratios of 2008.
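The ratios above are simple divisions over monthly snapshots; a minimal sketch, using the figures quoted in the text:

```python
# (month, active authors, active reviewers) snapshots quoted above
snapshots = [
    ("2008-07", 55, 33),
    ("2010-07", 196, 55),
    ("2012-07", 309, 89),
]

# Authors per reviewer for each snapshot
ratios = {month: authors / reviewers for month, authors, reviewers in snapshots}

for month, ratio in ratios.items():
    print(f"{month}: {ratio:.2f} authors per reviewer")
```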

Evolution of number of unique authors and unique reviewers per month (only for reviewed commits)

This trend may indicate potential problems if the number of authors keeps growing while the number of new reviewers does not keep up. On the other hand, having a controlled number of reviewers makes for a more stable product. To be more conclusive in this area, further work should analyze time-to-attend and time-to-review, because reviewers may simply be getting more efficient, in which case the project does not really need more of them.

All in all, as the authorship study already showed, it is clear that WebKit is a lively community, growing in the number of people and companies interested in the project, whose employees are taking responsibility and getting involved in different roles.

To conclude these comments, it is worth mentioning Opera's move to join WebKit. It will be interesting to analyze its activity and how it will find its place in the community, first in authoring and (probably later on) in reviewing. Indeed, other companies have already followed this path. One example, already mentioned in previous posts, is RIM, currently with 54 authors (over the whole history of the project) and 7 reviewers. It took about 8-9 months for RIM to reach 7 active reviewers (June 2012), after starting to contribute strongly around November 2011 and reaching a peak in authors in August 2012 (28 authors, 142 commits).

Regarding the details, this report only contains information on reviewer activity. This means that authors are not represented in this analysis, only the reviewers who checked each commit. As for the rest of the methodology, this report used the same version of the database as the authorship analysis, with small improvements in the assignment of developers to affiliations. Thus, the whole process can be traced from the moment the raw data is retrieved.
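WebKit commit messages conventionally carry a "Reviewed by NAME." line (with "NOBODY (OOPS!)" for unreviewed landings), which is the kind of trace that lets reviewer activity be recovered from raw repository data. A minimal sketch of that extraction step, where the regex and sample messages are illustrative assumptions, not the actual tooling behind this report:

```python
import re
from collections import Counter

# Illustrative commit messages; real data would come from the repository log.
messages = [
    "Fix crash in rendering path.\n\nReviewed by Darin Adler.",
    "Update build flags.\n\nReviewed by NOBODY (OOPS!).",
    "Improve cache handling.\n\nReviewed by Darin Adler.",
]

# Match the conventional "Reviewed by NAME." trailer on its own line,
# skipping unreviewed commits marked "NOBODY (OOPS!)".
REVIEWED_BY = re.compile(r"^Reviewed by (?!NOBODY)([^.\n]+)\.?$", re.MULTILINE)

reviews = Counter()
for msg in messages:
    for reviewer in REVIEWED_BY.findall(msg):
        reviews[reviewer.strip()] += 1

print(reviews.most_common())  # reviewed-commit counts per reviewer
```

Mapping the extracted names to companies would then be the affiliation-assignment step mentioned above.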

As a conclusion, we expect this type of analysis to help in making decisions about how to approach and interact with the WebKit community. Even though it is an open community with clear, well-defined rules, from a business perspective it is also necessary to have data about how it is performing, to fully understand the associated risks and avoid false expectations. Becoming a reviewer is a process directly related to expertise in the project, and gaining that expertise takes time. But being a reviewer also means going one step up in responsibility in the project, something companies may be interested in. And it is important to notice that reviewer status is gained by persons, not companies. Therefore, hiring reviewers could be another, faster strategy for a company to strengthen its reviewing position.

Many other questions related to the review process are still open:

  • How much time does it take to become a reviewer?
  • Is experience related to a decrease in the time to accept a patch (or is it the reviewer role)?
  • Are companies playing fair in the review process?
  • What are the areas of influence of each of the companies and reviewers?

There are still plenty of open questions, and we will try to deal with them in upcoming posts about WebKit. Meanwhile, if you have any doubts or questions, please comment or contact us.


