
A New Patching Process for RBTools 5.1

RBTools, our command line tool suite for Review Board, is getting all-new infrastructure for applying patches. This will be available in rbt patch, rbt land, and any custom code that needs to deal with patches.

A customer reported that multi-commit review requests won’t apply on Mercurial. The problem, it turns out, is that Mercurial won’t let you apply a patch if the working tree isn’t clean, and that includes if you’ve applied a prior patch in a series. That’s pretty annoying, but there are good reasons for it.

It can, however, apply multiple patches in one go (a single hg import can take a whole series of patches). That’s nice, but it’s impossible to take advantage of with RBTools today.

So this weekend, I began working on a new patch application process.

How SCMs apply patches

There are a lot of SCMs out there, and they all work a bit differently.

When dealing with patches, most are thin wrappers around GNU diff/patch, with a bit of pre/post-processing to deal with SCM-specific metadata in the patches.

The process usually looks like this:

  1. Extract SCM-specific data from the diff.

    The SCM patcher will read through the patch and look for any SCM-specific data to extract. This may include commit IDs/revisions, file modes, symlinks to apply, binary file changes, and commit messages and other metadata. It’ll usually validate the local checkout and any file modifications, to an extent.
  2. Normalize the diff, if needed.

    This can involve taking parts of the diff that can’t be passed to GNU patch (such as binary file changes and file/directory metadata changes) and setting those aside to handle specially. It may also split the patch into per-file segments. The result is something that can be passed to GNU patch.
  3. Invoke the patcher (usually GNU patch or similar).

    This often involves calling an external patch program with the full patch or an extracted per-file patch, checking the results, and deciding how to handle conflicts, stage files for a commit, or apply metadata to files.
  4. Invoke any custom patching logic.

    This is where logic specific to the SCM may be applied. Patching a binary file, changing directory metadata, setting up symlinks, etc.

SCMs can go any route with their patching logic, but that’s a decent overview of what to expect.
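To make that concrete, here’s a minimal sketch of the flow in Python. Every helper here is hypothetical, standing in for the SCM-specific parts:

import subprocess

def apply_scm_patch(patch: bytes, strip_level: int = 1) -> bool:
    # 1. Extract SCM-specific data (commit IDs, file modes, messages).
    metadata = extract_scm_metadata(patch)  # hypothetical helper

    # 2. Normalize: set aside changes GNU patch can't handle.
    plain_diff, special_changes = normalize_diff(patch)  # hypothetical helper

    # 3. Invoke the patcher on the normalized diff.
    result = subprocess.run(['patch', f'-p{strip_level}'],
                            input=plain_diff,
                            capture_output=True)

    if result.returncode != 0:
        # Conflicts or an outright failure. The SCM decides what to do.
        return False

    # 4. Apply any custom logic: binary files, symlinks, metadata.
    apply_special_changes(special_changes, metadata)  # hypothetical helper

    return True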

Depending on what that logic looks like, and what constraints the SCM imposes, this process may bail early. For instance, some will just let you apply a patch on top of a potentially-dirty working directory, and some will not.

Mercurial won’t, which brings us to this project.

Out with the old

RBTools uses an SCM abstraction model, with classes implementing features for specific SCMs. We determine the right backend for a local source tree, get an instance of that backend, and then call methods on it.

Patches are run through an apply_patch() method. It looks like this:

def apply_patch(
    self,
    patch_file: str,
    *,
    base_path: str,
    base_dir: str,
    p: Optional[str] = None,  # patch level, as in GNU patch's -p option
    revert: bool = False,     # apply the patch in reverse
) -> PatchResult:
    ...

This takes in a path to a patch file, some criteria like whether to revert or commit a patch, and a few other things. The SCM can then use the default GNU patch implementation, or it can use something SCM-specific.

At the end of this, we get a PatchResult, which the caller can use to determine if the patch applied, if there were conflicts, or if it just outright failed.
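A call against this interface looks roughly like this (the path and the result attributes are just for illustration):

result = scmclient.apply_patch(
    '/tmp/0001-fix-crash.patch',
    base_path='/src/myproject',
    base_dir='',
    revert=False)

# Attribute names shown for illustration; the PatchResult reports
# whether the patch applied and whether there were conflicts.
if not result.applied:
    print('The patch failed to apply.')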

The problem is, each patch is handled independently. There’s no way to say “I have 5 patches. Do what you need to do to apply them.”

So we’re stuck applying patches one-by-one in Mercurial, and therefore we’re doomed to fail.

This architecture is old, and we’re ready to move on from it.

In with the new

We’re introducing new primitives in RBTools that give SCMs full control over the whole process. These are useful not only for RBTools commands, but also for third-party code built on RBTools.

Patch application is now made up of the following pieces:

  • Patch: A representation of a patch to apply, with all the information needed to apply it.
  • Patcher: A class responsible for applying and committing patches, built to allow SCM-specific subclasses to communicate patching capabilities and to define patching logic.
  • PatchResult: The result of a patch operation, covering one or more patches.

Patcher consolidates the roles of both the old apply_patch() and some of the internal logic within our rbt patch command. By default, it just feeds patches into GNU patch one-by-one, but SCMs can do whatever they need to here by pointing to a custom Patcher subclass and changing that logic.

Callers tell the Patcher, “Hey, I have these patches, and have these settings to consider (reverting, making commits, etc.)” and will then get back an object that says what the patcher is capable of doing.

They can then say “Okay, begin patching, and give me each PatchResult as you go.” The Patcher can apply them one-by-one or in batches (Mercurial will use batches), and send back useful PatchResults as appropriate.

That in turn gives the caller the ability to report progress on the process without assuming anything about what that process looks like.

And it Just Works (TM)

This design is very clean to use. Here’s an example:

# Assumed imports (actual module paths may differ):
#     from rbtools.diffs.patches import Patch
#     from rbtools.diffs.errors import ApplyPatchError

review_request = api_root.get_review_request(review_request_id=123)

patcher = scmclient.get_patcher(patches=[
    Patch(content=b'...'),
    Patch(content=b'...'),
    Patch(content=b'...'),
])
total_patches = len(patcher.patches)

try:
    if patcher.can_commit:
        print(f'Preparing to commit {total_patches} patches...')
        patcher.prepare_for_commit(review_request=review_request)
    else:
        print(f'Preparing to apply {total_patches} patches...')

    for patch_result in patcher.patch():
        print(f'Applied patch {patch_result.patch_num} / {total_patches}')
except ApplyPatchError as e:
    patch_result = e.failed_patch_result

    if patch_result:
        print(f'Error applying patch {patch_result.patch_num}: {e}')

        if patch_result.has_conflicts:
            print()
            print('Conflicts:')

            for conflict in patch_result.conflicts:
                print(f'* {conflict}')
    else:
        print(f'Error applying patches: {e}')

In this example, we:

  1. Defined three patches to apply.
  2. Requested commits if possible, using the review request information to aid in the process.
  3. Displayed progress as each patch was applied.
  4. Handled any patching errors, showing the failed patch number and any conflicts when that information was available.

This will work across all SCMs, entirely backed by the SCM’s own patching logic.

This is still very much in progress, but is slated for RBTools 5.1, coming soon.

Until then, check out our all-new Review Board 7 release and our all-new Review Board Discord channel, open to developers and Review Board users alike.


Review Board: Between Then and Now

I just realized, before I know it, we’ll be hitting 20 years of Review Board.

Man, do I feel old.

It’s hard to imagine it now, but code review wasn’t really a thing when we built Review Board back in 2006. There were a couple of expensive enterprise tools, but GitHub? Pull requests? They didn’t exist yet.

This meant we had to solve a lot of problems that didn’t have readily-made or readily-understood solutions, like:

🤔 What should a review even *be*? What’s involved in the review process, and what tools do you give the user?

We came up with tools like:

  • Resolvable Issue Tracking (a To Do list of what needs to be done in a change)
  • Comments spanning one or more lines of a diff
  • Image file attachment review
  • Previews of commented areas appearing above the comments

Amongst others.

🤔 How should you discuss in a review? Message board style, with one box per reply? Everything embedded in top-level reviews? Comments scattered in a diff?

We decided on a box per review, and replies embedded within it, keeping discussion about a topic all in one place.

Explicitly not buried in a diff, because in complex projects, you also may be reviewing images, documents, or other files. Those comments are important, so we decided they should all live, threaded, under a review.

A lot of tools went the “scatter in a diff” route, and while that was standard for a while, it never sat right with me. For anything complex, it was a mess. I think we got this one right.

🤔 How do you let users keep track of what needs to be reviewed?

We came up with our Dashboard, which shows a sortable, filterable, customizable view of all the review requests you may be interested in, giving a bird’s-eye view across any number of source code repositories, teams, and projects.

Many tools didn’t go this route. You were limited to seeing review requests/pull requests on that repository, and that’s it. For larger organizations, this just wasn’t good enough.

🤔 How do you give organizations control over their processes? A policy editor? APIs? Fork the code?

We settled on:

  • A Python extension framework. This was capable of letting developers craft new policy, collect custom information during the review process, and even build whole new review UIs for files.
  • A full-blown REST API, which is quite capable.
  • Eventually, features like WebHooks, once those became a thing.

Our goal was to avoid people ever having to fork. But also, we kept Review Board MIT-licensed, so people were sure to have the control they needed.

I could probably go on for a while. A lot of these eventually worked their way into other code review tools on the market, and are standard now, but many started off as a lot of long nights doodling on a whiteboard and in notebooks.

We’ve had the opportunity to work for years with household names that young me would have never imagined. If you’ve been on the Internet at all in the past decade, you’ve regularly interacted with at least one thing built in Review Board.

But the passage of time and the changes in the development world make it hard these days. We’re an older tool now, and people like shiny new things. That’s okay. We’re still building some innovative shiny things. More on some of those soon 😉

This is a longer post than I planned for, but this stuff’s on my mind a lot lately.

I’ve largely been quiet lately about development, but I’m trying to change that. Develop in the open, as they say. Expect a barrage of behind-the-scenes posts coming soon.


Looking Back on Review Board

Just over 3.5 years ago, David Trowbridge and I spent some time discussing the annoyances of the typical patch submission and code review processes in the open source projects we participated in and at companies, and decided to play with some ideas for improving this. At the time, we knew very little about what we intended to do. We had a name for it pretty early on, but that was about all we had. We didn’t even know whether we’d get past an early prototyping stage. But here it is, over 3 years later, and we have the leading open source code review tool with an active support and development community, hundreds of companies using it, and exciting new innovations for aiding in the code review process.

I was thinking a few days ago about how far we’ve come and some of the decisions we made along the way. I went digging through our commit history in order to relive some of the past of our little project. Since so few people were even aware of Review Board’s existence at the time, I thought I’d share some of our history with you. Particularly the interesting and funny bits.

“Add the reviewboard.”

Commit #1. The very first thing we put in our Subversion tree on September 27, 2006. I don’t even remember what was in this change now. We transitioned to Git last year, and this commit is now just plain empty. Maybe it was just the directory structure? Who can say.

Early on, we didn’t refer to “Review Board” as a proper name. It was generally “the reviewboard” or something similar. The codebase was young. We didn’t actually do code review on the project at this point (and it shows!). The first few months are littered with odd or nonsensical commit messages, small breakages, and bad decisions.

A few of my favorite commit messages are:

  • “I suck. Make submitting of reviews.”
  • “Don’t stuff the list of files in the bug list. It’s impolite.”
  • “Avoid failing out with Christian’s wacko form”
  • “Gum.”
  • “Holy apple pancakes. It worked!”
  • “I suck… The array was empty… The tests never had a chance to fail. :(”
  • “‘This is a summary’ sucks. Now we use fortune for the summary, description, and testing done. ‘You’re ugly and your mother dresses you funny.’”
  • “Unbreak things before ChipX86 notices”
  • “I’m just… garhgh”

Nowadays, our commit messages look nothing like that, but that’s the fun of a new project. You get to go commit-crazy while you try to figure out what you’re building.

Dashboard, quips and fortunes

The UI of old looked quite different than the UI of today.

We had a dashboard from the very beginning (before the review request pages, even), but it wasn’t anything like the dashboard we have today. It was a simple page with a table containing all outgoing review requests and a table containing incoming review requests. But it also had one more thing: quips.

We had just begun building the quips functionality. Quips are little random quotes inserted into the UI. I think the plan was to put quips on certain pages, making Review Board a little more fun. We were using them in the dashboard for empty lists, with variations all saying something about the dashboard being empty. Quips were a neat feature that just never survived the early days of development.

Fortunes are similar. On Linux/Unix systems, there’s a little program called “fortune” that just displays a random quote. Since we at first had to test review request functionality without actually having a repository backend of any sort, and we didn’t want to input all the information each time, we just used fortune to generate the summary, description and testing done text. This made for some really funny review requests early on, but this is of course something that had no reason to survive initial development.

Sometimes we would create a bunch of review requests just to see what kind of quotes we’d get. 🙂

Multiple repositories? Almost didn’t happen.

One of the really critical parts of Review Board today is the ability to talk to a variety of different types of repositories in one instance. But, it turns out, this almost didn’t happen.

The initial goals were not that ambitious. Review Board talked to one repository per instance. Everything was basically hard-coded with one repository in mind. That type of repository, as well as its information, was customizable. You just couldn’t have more than one. At the time, this wasn’t a problem, but it didn’t take long until we had a need to talk to two repositories.

We discussed this and at first decided that if we needed to talk to two repositories, we could just set up two instances. It would have been a lot of work to update it for multiple repositories, after all. And really, this was a small project. Who would really need more than a couple repositories? This started to nag at me, though, and so I spent a couple nights rewriting all of the code as an experiment. It ended up working pretty nicely, and we were able to ditch the multi-instance model.

The importance of rewards

It’s always nice to have a little reward for milestones. Developers sometimes compete over cool bug numbers, revisions, etc. Initially, we were going to use quips to add some fun to the site, but we ended up settling on our current trophy system.

One of our first Review Board instances started to approach review request #1000, which was a huge milestone for us. I decided to commemorate the event by staying up and quickly hacking in a hidden feature for showing a trophy for review request #1000. The way we implemented it, you’d see the first ever trophy at 1,000, and from there you’d see it at every milestone number (1,000, 2,000, 3,000, 10,000, etc.). I didn’t want to stop there, though, so I added support for a second type of trophy, one that has confused people with its appearance to this day. Mission complete.

Of course, when we updated the server and someone finally hit 1000, it triggered a bug in the new trophy code and broke his review request. Oh well, I tried.

Diff viewers are hard

If I could pick one point during the whole history of Review Board where I was ready to completely give up, it would be during the creation of our diff viewer. All three diff viewers.

See, the first diff viewer was a complete and total hack. We generated a side-by-side diff using the diff tools and just parsed the output, basically generating a table from it. It was ugly, though, and limiting. It also caused problems where text on a row would either be truncated or would break the parser. I spent a long time working on this before I totally gave up and went on to try a new approach.

My second approach was closer to what we have today, but also limiting and very, very buggy. We were using Python’s built-in diff generation module (difflib), which implements a basic diff algorithm. It gave us insert and delete information, but not replace information. We had to hack that in ourselves, and it was really a hack. Try taking a bunch of inserts and deletes and finding out which of those are really changed lines. No, really, try it. It’s harder than you think, and it’ll often be wrong.

Still, we stuck with this for a long time. It was slow, buggy, and didn’t generate the sort of output people expected from diff tools. Most people see diffs from GNU diff, which implements the Myers diff algorithm (with a few additions and tweaks). These Myers diffs are much nicer to view than what Python gave us. Another problem we hit was that we didn’t have real line number information, so we had to output fake line numbers. They weren’t really line numbers so much as row numbers in the table. Ugh. Even getting this far was really hard and frustrating, and the result still wasn’t good.

Attempt #3. I decided to build our own diff parser and generator from scratch. What a project. I knew nothing about diff generation and hardly knew where to start. I spent probably a good month or so just trying to work on this new diff code, and was so close to giving up so many times. It ended up being completely worth it, though, as we ended up with a very nice, extensible diff parser.

Without that third attempt, we’d be in the stone age. Review Board would not be as nice to use. We wouldn’t have inter-line diffs (where we highlight what changed in a replace line), syntax highlighting, move detection (coming in 1.5), or function/class headers (where we show which function/class the part of the diff is in — also coming in 1.5).

What else…

Well, there’s a lot more I could talk about. Our initial attempts at JavaScript code for the UI, our trials and challenges with database migration, or our early problems storing diffs with different encodings in databases. This is getting long, though, so I’ll cover these in another post on lessons learned.


Review Board 1.0 Released!


Tonight, we hit a milestone in the Review Board project that we’ve been working toward for over two years. We finally pushed out our 1.0 release. A lot of blood, sweat and tears went into this release (okay, so not literally, but it was A LOT OF WORK!). The last few months in particular have been challenging, as we’ve had to solve some tricky bugs and scalability problems, but the end result is pretty great.

Just a short while ago, we announced the release and put up an overview of the entire release and product. We’ve already had some nice congratulatory e-mails and tweets, which is really nice 🙂

Some stats for this release:

  • 2 years, 9 months, 25 days have passed since our first commit.
  • 120 contributors have contributed to Review Board so far (in terms of code contributions).
  • 2,019 commits were made.
  • 899 review requests have been posted to our project’s own Review Board server. 1,650 users are registered there.
  • Our demo server, in comparison, has 2,082 review requests filed and 10,154 users.
  • 938 bugs were filed. 812 were fixed.
  • 232 feature requests were filed. 101 were implemented. Most remaining ones are scheduled for future releases.
  • An estimated 200+ companies are now using Review Board. 26 have let us list them publicly.
  • The largest known Review Board install has over 83,000 filed review requests and over 2,000 users, doing upwards of 10GB of traffic per day.
  • 5 presentations on Review Board are known to have been given, 3 by us, 2 by others.
  • 552 users have joined our main mailing list, and 3,674 e-mails have been sent.

Now that Review Board 1.0 is out, we can get started on some awesome new features we’ve had planned. I have a little notebook full of ideas for our 1.1 and 1.5 releases (which may become 1.5 and 2.0, respectively, as this list grows). Some of the new features are actually ready to be committed within the next couple of days, so those of you using nightlies will start to see them soon.

We were accepted into this year’s Summer of Code, and have three students working on exciting projects for us, so hopefully we’ll start to see these trickle into the upcoming nightlies as well. These projects include diff viewer improvements (moved-region detection, better whitespace-only change detection), IDE integration with Eclipse, and improved notification hooks and e-mail support.

We’re also working on providing support for third-party extensions, which will allow developers to extend Review Board in new, exciting ways without having to modify Review Board itself. This is especially handy for companies who wish to integrate better with their sandboxes, bug trackers or unit testing services. This will likely land in 1.5 (2.0?) at the earliest, as it’s a large change, but the code for this mostly works today. It’s just a matter of getting the codebase ready and figuring out what APIs we want to stabilize and expose.

As I mentioned in the release announcement, we’re planning a release party, tentatively on July 11th, 2009, in the Bay Area (somewhere around Palo Alto, CA). If any Review Board users want to join us, please RSVP!


Review Board: Summer of Code, Roadmap and Future Plans

Summer of Code

This year, we (the Review Board project) were given the opportunity to participate in Google’s Summer of Code. We’ve received some great student proposals so far, and I think we’ll see exciting work done on Review Board this summer.

The deadline for Summer of Code is coming up fast (April 3rd, 19:00 UTC). If you’re interested in working on Review Board and haven’t yet applied, it’s not too late, but you’ll want to hurry. Skim through our ideas page and, if you find something interesting or have a great idea that isn’t listed there, apply and tell us your plans. I can say we’ve received several proposals so far for the installer and admin UI, so unless you feel strongly about either of those, you’ll increase your chances with other proposals.

We’re also offering free Review Board hosting for open source projects participating in Summer of Code. If you’re a mentoring organization and would like to give Review Board a try for reviewing and managing student code, go ahead and contact us and we’ll get you set up.

Roadmap

We’re finally nearing 1.0. We recently put out our 1.0 beta 2 release and are now in a feature freeze. We’re working to get some bug, performance and usability fixes in for beta 3, which I’m shooting for in a few weeks. Then we’ll branch for 1.0, put out a Release Candidate or two, and then finally release 1.0!

There’s a lot of really cool features planned after 1.0, namely extensions and policy customization.

Extensions

Our bug tracker is filled with feature requests for all kinds of things, ranging from bug tracker integration to instant messaging to a method for offering bribes for code reviews. We clearly can’t put all the requested features in the codebase, so we’ve decided instead to add support for third-party extensions. Soon, developers will be able to write extensions for Review Board in the form of Python modules that extend or alter the functionality of Review Board. The extension framework will allow them to do the following:

  • Access the database using the existing Review Board database models.
  • Add new database models for storing data.
  • Listen for signals (new review request published, review request submitted, etc.) and act on them.
  • Add custom URLs.
  • Replace existing URLs, for advanced capabilities such as replacing the diff viewer.
  • Add new API handlers.
  • Add “action” links to existing review requests and reviews.
  • Add columns and sidebar entries to the dashboard.
  • Add pages to the administration UI.
  • Communicate with other extensions.
  • Provide a settings page, which stores data in Review Board-provided models (we even auto-generate the settings page for the extension by default).
  • And more!

A lot of this already exists in a private development branch, and it will be one of our primary focuses as soon as 1.0 goes out.
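As a rough illustration, a minimal extension might look like this. The class and hook names are taken from the extension API as it eventually shipped, not from this early branch:

from reviewboard.extensions.base import Extension
from reviewboard.extensions.hooks import SignalHook
from reviewboard.reviews.signals import review_request_published

class NotifyExtension(Extension):
    """Listens for published review requests and acts on them."""

    def initialize(self):
        # Connect to the "review request published" signal.
        SignalHook(self, review_request_published, self._on_published)

    def _on_published(self, review_request=None, **kwargs):
        # Custom logic goes here: ping a chat room, update a bug, etc.
        pass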

In time, we’ll add a new section to the Review Board website where developers can list their extensions for download and for sale. Administrators will be able to browse and search for extensions directly from the administration UI and install them without having to even open a terminal (in most cases).

We’re hoping this will solve a lot of in-house integration issues. For example, many companies have custom sandbox architectures, bug trackers, and statistics software which they’ll now be able to tie in with Review Board.

Policy Customization

We’ve found that a lot of companies have very specific ways they want to handle policy and access restrictions. For example, many companies want to limit who can see certain parts of a repository (and therefore certain diffs), or want to allow anybody to create review groups, or want to disallow people from joining review groups. Some also want to dictate what constitutes approval for submitting a change.

We’re looking into the various requests and attempting to come up with a policy model that is flexible enough to handle these needs. One of the ideas is to provide some basic level of access control on a per-repository, per-path, and per-group basis. We’d then piggy-back on the extension framework to allow for more specific policy control. The advantage is that developers could write their own policy rules that interface with some part of their company’s infrastructure.

If people have any input on this, we’d love to hear it.


Improving browser performance in Review Board

This past Sunday, I landed a set of changes into Review Board that provide improved performance, such as aggressive browser-side caching of media and pages. It’s just a start, but it has already significantly reduced page load times in all of my tests, in some cases by several seconds. We implemented these methods for Review Board, but they can be applied to any Django project out there.

There are several key things that Review Board now does to improve performance:

  • Tells browsers to cache all media for one year.
  • Only sends page data if new data is available.
  • Compresses all media files to reduce transfer time.
  • Parallelizes media downloads.
  • Loads CSS files as early as possible.
  • Loads JavaScript files as late as possible.
  • Progressively loads expensive data.

A lot of the performance improvements come straight from Yahoo!’s Best Practices for Speeding Up Your Site. We’re not doing everything there yet, but we’re working toward it. That’s a great resource, by the way, and I recommend that everyone who has ever made a website go and read it.

So what do the above techniques buy us, and how are we doing them? Let me go into more details…

Caching all media for a year

The average site has one or more CSS files, JavaScript files, and several images. This translates to a lot of requests to the server, which may leave the site feeling slow. On top of this, a browser only makes a few requests to a server at a time, in order to avoid swamping the server, which will further hinder load times. This happens every time a user visits a page on your site.

Aggressive caching makes a huge difference and can greatly reduce load times for users. Review Board now tells the browser to cache media files for a year. Once users download a JavaScript or CSS file, they won’t have to download it again, meaning that in general the only requests the browser needs to make are page requests and AJAX requests.

The big pitfall with long-term caching is that the cached resources can go stale. For example, if a new version of an image was uploaded, the browser wouldn’t even know about it, since it was told it should keep its old version for a year before checking again.

We solve this by introducing “media serials,” timestamps that are appended to all media paths. Instead of caching /js/myscript.js, the browser would cache /js/myscript.js?1273618736.

These media serials are computed on the first page request by our djblets.util.context_processors.ajaxSerial context processor. This quickly scans all media files known to the program, finding out the latest modification timestamp. It then provides a {{MEDIA_SERIAL}} variable for templates to append to media URLs as part of the query string.
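Conceptually, computing the serial is just a scan for the newest modification time across the media tree. Here’s a simplified sketch (not the actual Djblets code):

import os

def compute_media_serial(media_root: str) -> int:
    """Return the newest modification time across all media files."""
    latest = 0

    for dirpath, _dirnames, filenames in os.walk(media_root):
        for filename in filenames:
            mtime = os.path.getmtime(os.path.join(dirpath, filename))
            latest = max(latest, int(mtime))

    return latest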

The benefit to this method is that we can cache media files for a year and not worry about users having stale cached resources the next time we upgrade a copy of Review Board. The filenames requested will be different, browsers will see that the new file is not in the cache, and make a request, caching the new file for a year.
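In a template, appending the serial to each media URL looks roughly like this (MEDIA_URL is Django’s standard media prefix; the file paths are illustrative):

<link rel="stylesheet" href="{{MEDIA_URL}}css/common.css?{{MEDIA_SERIAL}}">
<script type="text/javascript" src="{{MEDIA_URL}}js/myscript.js?{{MEDIA_SERIAL}}"></script>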

Only send page data if new data is available

Aggressive caching of media files is great and saves a lot of time, but it doesn’t help for dynamically generated content. For this, we need a new strategy.

When a browser makes a request, it can send an If-Modified-Since header to the server containing the Last-Modified value it received the last time it downloaded that page. This is a very valuable header, and there are some things we can do with it to save both the server and the browser a lot of trouble.

If the browser sends If-Modified-Since, and we know that no new data has been generated since the timestamp provided, we can send an HttpResponseNotModified (HTTP response code 304). This will tell the browser it already has the newest version of the page. The sooner we do this, the better, as it means we don’t have to waste time building templates or doing any expensive database queries.

Djblets, once again, provides some functions to help out here: djblets.util.http.set_last_modified and djblets.util.http.get_modified_since.

The general usage pattern is that we first build a timestamp representing the latest version of the page. This could be the timestamp for a particular object that the page represents. We then check if we can bail early by calling:

if get_modified_since(request, timestamp):
    return HttpResponseNotModified()

Further down, after building the page, we must set the Last-Modified timestamp, using the same timestamp as above, like so:

set_last_modified(response, timestamp)

We’re using this in only a few places right now, such as the review request details page, but it drastically improves load times. If the review request hasn’t changed and nobody’s commented on it since the browser last loaded the page, a reload of the page will be almost instant.
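Putting it together, a view using this pattern looks something like this (a sketch using modern Django helpers; the model field and template names are assumed):

from django.http import HttpResponseNotModified
from django.shortcuts import get_object_or_404, render
from djblets.util.http import get_modified_since, set_last_modified

def review_request_detail(request, review_request_id):
    # ReviewRequest and its last_updated field are assumed for illustration.
    review_request = get_object_or_404(ReviewRequest, pk=review_request_id)
    timestamp = review_request.last_updated

    # Bail early, before any expensive queries or template rendering.
    if get_modified_since(request, timestamp):
        return HttpResponseNotModified()

    response = render(request, 'reviews/review_detail.html', {
        'review_request': review_request,
    })
    set_last_modified(response, timestamp)

    return response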

Compress all media files

Our Apache and lighttpd config files now enable compression by default. By compressing these files, we can turn a relatively large JavaScript file (such as the jquery and jquery-ui files) into a very small file before sending it over to the browser. This reduces transfer times at the expense of compression/decompression time (which is small enough not to worry about for deployments of this size, and can be offset by server-side caching of compressed files).

Parallelize media downloads

It’s important to not mix loads of media files of different types. The browser parallelizes media downloads of the same type, in page load order, but if you load one CSS file, one JavaScript file, another CSS file, and then another JavaScript file, the browser will only attempt one load at a time. If you load all the CSS files before all JavaScript files, it will parallelize the CSS file download and then the JavaScript downloads. By enforcing the separation of loads, we can achieve faster page download/render times.

Load CSS files as soon as possible

Loading CSS files before the browser starts to display the page can make the page appear to load smoother. The browser will already know how things should look and will lay the page out accordingly, instead of laying the page out once and then updating that once the CSS files have loaded.

Load JavaScript files as late as possible

JavaScript loads block the browser, as the browser must parse and interpret the JavaScript before it can continue. Sometimes it’s necessary to load a JavaScript file early, but in many cases the files can be loaded late. When possible, we load JavaScript files at the very end of the document body so that they won’t even begin downloading until the page has rendered. This provides a noticeable performance improvement for script-heavy pages.

Progressively load expensive data

There are types of data that are just too expensive to load along with the rest of the page. For a long time, Review Board would parse and render fragments of a diff for display in the review request page, but that meant that before the page could load, Review Board would need to do the following:

  1. Query the list of all comments.
  2. Fetch every file commented on.
  3. Apply the stored patch to each file.
  4. Diff between the original and patched files.
  5. Render the portion of the diff commented on into the page.

This became very time-consuming, and if a server was down, the page wasn’t available until everything timed out. The solution to this was to lazily load each of these diff fragments in order.

We now display a placeholder table for each diff fragment at roughly the same size as the rendered fragment (to avoid excessive page scrolling on loads). The table contains a spinner showing that something is happening, and, one by one (to avoid dogpiling), we load each diff fragment.

The code to render the diff fragment, by the way, takes advantage of the If-Modified-Since header and is also cached for a year. We use an AJAX_SERIAL (same principle as the MEDIA_SERIAL above) to allow for changes in new deployments.

With these caching mechanisms in place, the review request page now loads in roughly a second in many cases (or less once cached), with diff fragments coming in lazily (and then almost immediately on future loads).

More to come…

This was a great first step, but there’s more we can do. Before we hit our 1.0 release, we’re going to batch all our CSS files and JavaScript files together into a couple of combined files and then “minify” them (basically compressing them in a way that allows for smaller files and faster load times of the interpreted data).

Again, these are techniques we’re now making use of in Review Board, but they’re not in any way specific to Review Board. Anyone out there developing websites or web applications should seriously look into ways to improve performance. I hope this was a good starting point, but seriously, do read Yahoo!’s article as well.


Review Board 1.0 alpha 1 released

Roughly two years ago, David Trowbridge and I began development of Review Board for use in our open source projects and our team at VMware. During that time, we’ve turned Review Board into a powerful code review tool that works with a variety of version control systems. Most of VMware has moved over to it, as have an estimated 50-100 companies world-wide. We’ve had over 100 contributors to the project, volunteers providing support on the mailing list, and third-party tools developed for integrating with Review Board.

After all this time in development, with this many people contributing, we decided it was probably time to get a release out there. Sure, we could have done this a long time ago, but there were a number of large things we were hoping to get in (a recently-committed UI rewrite, for instance). Now that we have most of the major features we want for our 1.0 release, we decided it was time for an alpha.

Over the coming months, we’ll be working on stabilizing the codebase, fixing a few large remaining usability quirks, enhancing performance, and writing some proper documentation (which is coming along nicely).

We’re eager to get a quality product out there and to begin development on the next release. There are a lot of neat things planned:

  • Support for writing extensions to Review Board.
  • A fully-featured API covering every operation you’ll need to perform.
  • Some degree of policy support (specifying which users/groups can see which parts of a repository, for instance).
  • Reviews with statuses other than “Ship It”. This will probably be customizable to some degree.
  • Possibly some theme customization to allow Review Board to blend in better with corporate sites, Trac installs, etc.

Along with this, I plan to roll out a new website for the project that will have a browseable list of third party extensions, apps, Greasemonkey scripts, and more.

We have more information on our release on our release announcement.


Twittering as Review Board Approaches 1.0

On the road to 1.0

We’re getting very close to feature freeze for Review Board 1.0. The last couple of major features are up for review. These consist largely of a UI rewrite that simplifies a lot of Review Board’s operations and moves us over to using jQuery. This will go in once it’s been reviewed and tested in Firefox 3, IE 6/7, Opera and Safari.

There are some preliminary screenshots up of the UI rewrite. Some things will be changing before this goes in, but it should give a good idea as to the major changes (if you’re already a Review Board user).

In the meantime, we’re working to get some other fixes and small features in, and I’m beginning work on a user manual. I’m not sure how much will get done for 1.0, but with any luck I’ll have a decent chunk done.

Twittering the night away

I’ve just set up a @reviewboard user on Twitter that I’m going to try to keep up-to-date as progress is made. This should give people a decent way of passively keeping track of updates if they’re Twitter users.

Barter system?

Britt Selvitelle of Twitter fame just sent me a great screenshot of a barter system for Review Board. Can’t get someone to review your code? Offer them something in exchange!

As some people know, we’re planning to have extensions in the next major release (1.5 or 2.0). This would be a fun little extension to have 🙂 Maybe I’ll write it as part of a tutorial.


Djblets and Review Board moving to jQuery

When we first began development of Review Board and its Django utility package Djblets almost two years ago, we needed to pick a JavaScript library to use. There were several good ones available at the time, and after evaluating our options we chose the Yahoo! UI Library (YUI) and YUI-Ext, an extension library to YUI. Both were excellent and have served us well over the past two years.

However, YUI-Ext itself is really no more, which means we needed to choose something new. We could have continued to bundle it and YUI with Review Board, but users of Djblets would have to hunt down a copy and load in all of YUI and YUI-Ext just to use such features as datagrids. This was, to say the least, inconvenient.

YUI-Ext and the Licensing Situation

Now I said YUI-Ext is no more, but really it’s still around, just in a new form. It had been renamed to ExtJS and became its own independent library. While compatible with YUI, most of the functionality we needed was really provided by ExtJS, meaning we didn’t really need both. Over time, YUI evolved as well, so we began looking into the differences and figuring out which to go with.

ExtJS, it turns out, was a non-starter for us. Unlike YUI-Ext before it, the ExtJS licensing terms were a bit restrictive and, frankly, confusing. Open source projects could use it under the GPL3, but commercial products required a special license at $289 per developer. The GPL3 itself is unclear with regard to our situation. We’re an open source project, but we’re MIT-licensed. What if a company wants to modify something for their own use and not release it to the public? What if down the road someone wants to offer commercial extensions to Review Board? We’d have to choose the commercial license to be safe, but then contributors would also need to pay $289 for a license, wouldn’t they?

Looking Again at YUI

We didn’t want to step down a dangerous road with ExtJS, so we started to look again at YUI. It’s definitely come a long way and I must recommend it for developers looking for a strong toolkit with good UI support. They provide good documentation, many examples, and base functionality for nearly everything.

However, there’s a lot we’d have to rewrite to move to it, and if we had to rewrite the code anyway, I wanted to look around a bit at the current generation of JavaScript toolkits. Especially since our needs have changed over the years and we’re looking for something with a lighter footprint. We’re also moving away from the dialog-centered UI we have on some pages, which is where YUI shines.

jQuery

We ended up deciding to go with jQuery, partially because of the footprint and partially the clean way in which scripts can be written. As an experiment, I converted our Djblets datagrid code to jQuery. The result is a smaller, much more readable, reusable, cross-browser script. I was impressed by what jQuery let me do, and by the size.

I did some comparisons on the number of files downloaded and the size of the files, using the Review Board dashboard as a test.

With YUI and YUI-Ext, we were loading 7 JavaScript files, excluding our own scripts, at a total size of 376KB (when minified).

With jQuery, we loaded only 2 JavaScript files, at a total size of 128KB.

Our datagrid.js file also went down from 13KB to 9KB largely due to some of the niceties of jQuery’s API.

This is a pretty significant savings, a whole 248KB, which could be a couple of seconds on an average DSL line. And it’s not just the file size but the number of requests, which can make a big difference on a heavily accessed web server.

What This Means for Djblets

Developers using Djblets can now use datagrids without needing to do anything special. Djblets bundles jQuery, which it will use by default unless you choose to override which jQuery script is being used. This means all you need is an install of Djblets set up and you’re ready to use it!

Going Forward

We’re working on migrating the rest of Review Board to jQuery. This will take place over the next few weeks, and is one of the last major things we hope to do for our 1.0 release.


Review Board Roadmap and Donations

Roadmap

With the upcoming release of Django 1.0 in the next few weeks, we decided it was time to formalize a roadmap for Review Board 1.0. The roadmap provides a good overview of what users can expect for our release, and what it will take to get there.

At this point we’re asking for people to contribute wherever possible. The big thing is fixing bugs targeted for the 1.0 release. We’d also like some help in finalizing unit tests.

Quality Control

We’re doing what we can to improve quality control in Review Board. For a lot of people, Review Board works great, though setups often differ and some users hit issues that others never see. For this, we’re trying to improve our unit tests to catch these various cases. When people submit patches, we’d greatly appreciate unit tests to cover the new code, and in some cases will require them for the code submission.

Selenium

We will soon start using Selenium in our unit test process to simulate user action in various web browsers. Selenium allows for remote-controlling a web browser, simulating clicks, text input, and other user actions and checking the results. Over time, when our Selenium test suite is more complete, we should be able to catch browser-specific problems a lot more easily.
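A test along these lines might look like this (a sketch using the Selenium Python bindings, not our actual suite; the URL and page details are assumed):

from selenium import webdriver

def test_dashboard_loads():
    driver = webdriver.Firefox()

    try:
        driver.get('http://localhost:8080/dashboard/')

        # Check that the dashboard page rendered.
        assert 'Dashboard' in driver.title
    finally:
        driver.quit()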

Buildbot Server

Another issue users have hit lately is breakages due to changes in Django for the 1.0 release. As things calm down there, this will become less of an issue, but we’ve put things in place to catch these problems before users do.

We have just set up a buildbot server that will perform a full build and run the test suite whenever there’s a code check-in to Review Board, Django, Djblets or Django-Evolution. It will then notify us when there’s a new breakage. Users can check the build page before updating just to make sure they won’t hit a major problem. Later on, our buildbot server will generate nightly builds and handle Selenium tests.

We have a limited number of servers to test with. If you have server space and resources to donate and would like to run a BuildBot Slave server, let me know. We’re looking to set up slaves to test various combinations of the following:

  • Python 2.5
  • Python 2.4
  • Django SVN trunk
  • Django 1.0
  • Windows 2000, XP and Vista
  • Internet Explorer 6 and 7 (for Selenium tests)
  • Opera (for Selenium tests)
  • Firefox 2 and 3 (for Selenium tests)

Sandbox

Our BuildBot server is also set up to allow us to test code changes before we submit the code. Running a sandbox build of our pending code will cause all build slaves to run the entire test suite. This ensures that we don’t break things accidentally.

If you’re a contributor working on large patches for Review Board and would like to have access to the sandbox, please post to the mailing list and we can work with you on getting an account set up.

Installation Improvements

We’re working to make the installation experience much easier. I’m in the process of creating Python easy_install packages for Review Board and Djblets. Soon, users will be able to simply easy_install ReviewBoard to get going instead of checking out the development tree. I’m hoping to create both nightly builds and release builds.

Code will soon go in to move the entire project configuration into the administration interface. Modifying settings_local.py and restarting the server will be a thing of the past. All that will be left there will be a few site-specific settings and the database settings. Expect this to go in real soon.

A tool is in development for helping to generate the initial Review Board server tree based on an installed reviewboard Python module (using easy_install) and generating the web server configuration files. This will hopefully take care of a lot of problems people hit when trying to get their server configuration right the first time.

And last but not least, before 1.0 we will have a first-time installation page that handles the creation of the initial settings_local.py and the adding and checking of repositories.

So in the end, the installation process will be something along the lines of:

  1. sudo easy_install ReviewBoard
  2. sudo rb-install-site /var/www/reviews.mycompany.com
  3. Fill out the fields presented.
  4. Hand-tweak the configuration files if needed.
  5. Go to the page for the new Review Board server, fill out the fields and finish the install.

This is the goal, anyway. We’re going to try to get as close to this as possible for 1.0.

Donations

Review Board has become a full-time project for us. Though it got its start at VMware, it’s really a personal project developed in our spare time, not a project run by VMware. As the project grows, we’ve been putting more time, energy and money into it.

Hosting fees have started to become large, given that we’re now hosting the main project website, the main Review Board server for our code reviews, the demo server, the Google Summer of Code review server and the BuildBot server and slaves. Down the road, we have many plans that will also require funding.

To help cover our costs, we’ve now made it easy to donate to the project. If Review Board has helped you, your company, team, or project and saved you money or time over alternative solutions, maybe you’d like to help give back to keep our project going. Every bit helps.

