Author name: chipx86

A New Patching Process for RBTools 5.1

RBTools, our command line tool suite for Review Board, is getting all-new infrastructure for applying patches. This will be available in rbt patch, rbt land, and any custom code that needs to deal with patches.

A customer reported that multi-commit review requests won’t apply on Mercurial. The problem, it turns out, is that Mercurial won’t let you apply a patch if the working tree isn’t clean, and that includes if you’ve applied a prior patch in a series. That’s pretty annoying, but there are good reasons for it.

Mercurial can, however, apply multiple patches in one go. That’s nice, but impossible to take advantage of with RBTools today.

So this weekend, I began working on a new patch application process.

How SCMs apply patches

There are a lot of SCMs out there, and they all work a bit differently.

When dealing with patches, most are thin wrappers around GNU diff/patch, with a bit of pre/post-processing to deal with SCM-specific metadata in the patches.

The process usually looks like this:

  1. Extract SCM-specific data from the diff.

    The SCM patcher will read through the patch and look for any SCM-specific data to extract. This may include commit IDs/revisions, file modes, symlinks to apply, binary file changes, and commit messages and other metadata. It’ll usually validate the local checkout and any file modifications, to an extent.
  2. Normalize the diff, if needed.

    This can involve taking parts of the diff that can’t be passed to GNU patch (such as binary file changes or file/directory metadata changes) and setting those aside to handle specially. It may also split the patch into per-file segments. The result will be something that can be passed to GNU patch.
  3. Invoke the patcher (usually GNU patch or similar).

    This often involves calling an external patch program with the full patch or an extracted per-file patch, checking the results, and deciding how to handle things like conflicts, staging a file for a commit, or applying metadata to the file.
  4. Invoke any custom patching logic.

    This is where logic specific to the SCM may be applied. Patching a binary file, changing directory metadata, setting up symlinks, etc.

SCMs can go any route with their patching logic, but that’s a decent overview of what to expect.
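
To make step 3 a bit more concrete, here’s a minimal sketch of what handing a patch file to GNU patch can look like from Python. This is purely illustrative (the function is made up, not RBTools code):

import subprocess


def run_gnu_patch(patch_path: str, tree_root: str, strip_level: int = 1) -> None:
    """Hand a single patch file to GNU patch (illustrative sketch only)."""
    result = subprocess.run(
        ['patch', '-d', tree_root, f'-p{strip_level}', '-i', patch_path],
        capture_output=True,
        text=True)

    # GNU patch exits with 0 on success, 1 if some hunks did not apply
    # (conflicts, with .rej files written), and 2 on more serious trouble.
    if result.returncode == 1:
        print('Applied with conflicts:\n' + result.stdout)
    elif result.returncode != 0:
        raise RuntimeError(f'patch failed: {result.stderr}')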

Depending on what that logic looks like, and what constraints the SCM imposes, this process may bail early. For instance, some will just let you apply a patch on top of a potentially-dirty working directory, and some will not.

Mercurial won’t, which brings us to this project.

Out with the old

RBTools uses an SCM abstraction model, with classes implementing features for specific SCMs. We determine the right backend for a local source tree, get an instance of that backend, and then call methods on it.

Patches are run through an apply_patch() method. It looks like this:

def apply_patch(
    self,
    patch_file: str,
    *,
    base_path: str,
    base_dir: str,
    p: Optional[str] = None,
    revert: bool = False,
) -> PatchResult:
    ...

This takes in a path to a patch file, some criteria like whether to revert or commit a patch, and a few other things. The SCM can then use the default GNU patch implementation, or it can use something SCM-specific.

At the end of this, we get a PatchResult, which the caller can use to determine if the patch applied, if there were conflicts, or if it just outright failed.

The problem is, each patch is handled independently. There’s no way to say “I have 5 patches. Do what you need to do to apply them.”
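
In practice, that meant callers looped over the patches and called apply_patch() one at a time, roughly like this (a hypothetical sketch based on the signature above; the variable names and the result attribute are illustrative assumptions):

# Hypothetical calling code based on the apply_patch() signature above.
# `scmclient`, `patch_files`, and `result.applied` are illustrative
# assumptions, not exact RBTools API.
for patch_file in patch_files:
    result = scmclient.apply_patch(
        patch_file,
        base_path=base_path,
        base_dir=base_dir)

    if not result.applied:
        # On Mercurial, a second patch tends to fail right here, since
        # the working tree is no longer clean after the first one.
        break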

So we’re stuck applying one-by-one in Mercurial, and therefore we’re doomed to fail.

This architecture is old, and we’re ready to move on from it.

In with the new

We’re introducing new primitives in RBTools, which give SCMs full control over the whole process. Not only are these useful for RBTools commands, but for third-party code built upon RBTools.

Patch application is now made up of the following pieces:

  • Patch: A representation of a patch to apply, with all the information needed to apply it.
  • Patcher: A class responsible for applying and committing patches, built to allow SCM-specific subclasses to communicate patching capabilities and to define patching logic.
  • PatchResult: The result of a patch operation, covering one or more patches.

Patcher consolidates the roles of both the old apply_patch() and some of the internal logic within our rbt patch command. By default, it just feeds patches into GNU patch one-by-one, but SCMs can do whatever they need to here by pointing to a custom Patcher subclass and changing that logic.

Callers tell the Patcher, “Hey, I have these patches, and here are the settings to consider (reverting, making commits, etc.),” and then get back an object that says what the patcher is capable of doing.

They can then say “Okay, begin patching, and give me each PatchResult as you go.” The Patcher can apply them one-by-one or in batches (Mercurial will use batches), and send back useful PatchResults as appropriate.

That in turn gives the caller the ability to report progress on the process without assuming anything about what that process looks like.
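
To give a flavor of what that can look like, here’s a rough, hypothetical sketch of a batch-capable Mercurial patcher. The method name, PatchResult arguments, and temp-file handling are all assumptions for illustration, not the actual RBTools API:

import subprocess
import tempfile


class MercurialPatcher(Patcher):
    """Hypothetical sketch of a batch-capable Patcher for Mercurial.

    The method name, PatchResult arguments, and Patch attributes used
    here are illustrative assumptions, not the actual RBTools API.
    """

    def apply_patches(self):
        # Write each in-memory patch to a temporary file, then hand the
        # whole series to a single `hg import` invocation.
        patch_files = []

        for patch in self.patches:
            tmp = tempfile.NamedTemporaryFile(suffix='.patch', delete=False)
            tmp.write(patch.content)
            tmp.close()
            patch_files.append(tmp.name)

        subprocess.run(['hg', 'import', '--no-commit'] + patch_files,
                       check=True)

        # Report one result per patch back to the caller.
        for patch_num in range(1, len(self.patches) + 1):
            yield PatchResult(applied=True, patch_num=patch_num)

The point is just that the batching decision lives in the SCM’s Patcher, while callers keep iterating over PatchResults exactly as they would for any other SCM.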

And it Just Works (TM)

This design is very clean to use. Here’s an example:

review_request = api_root.get_review_request(review_request_id=123)

patcher = scmclient.get_patcher(patches=[
    Patch(content=b'...'),
    Patch(content=b'...'),
    Patch(content=b'...'),
])
total_patches = len(patcher.patches)

try:
    if patcher.can_commit:
        print(f'Preparing to commit {total_patches} patches...')
        patcher.prepare_for_commit(review_request=review_request)
    else:
        print(f'Preparing to apply {total_patches} patches...')

    for patch_result in patcher.patch():
        print(f'Applied patch {patch_result.patch_num} / {total_patches}')
except ApplyPatchError as e:
    patch_result = e.failed_patch_result

    if patch_result:
        print(f'Error applying patch {patch_result.patch_num}: {e}')

        if patch_result.has_conflicts:
            print()
            print('Conflicts:')

            for conflict in patch_result.conflicts:
                print(f'* {conflict}')
    else:
        print(f'Error applying patches: {e}')

In this example, we:

  1. Defined three patches to apply.
  2. Requested to perform commits if possible, using the review request information to aid in the process.
  3. Displayed progress for each applied patch.
  4. Handled any patching errors, showing the failed patch number and any conflicts when that information was available.

This will work across all SCMs, entirely backed by the SCM’s own patching logic.

This is still very much in progress, but is slated for RBTools 5.1, coming soon.

Until then, check out our all-new Review Board 7 release and our all-new Review Board Discord channel, open for all developers and for Review Board users alike.

Remembering ICQ: A Page Out of History

Early ICQ logo, with a green flower and a red petal.

I got onto ICQ very early in its life, around 1996 or 1997, with a 6-digit UIN (298387 — sadly stolen years later). I loved it, it’s how I stayed connected with family and friends, how I met new people, how I connected on the Internet at a time when the Internet was trying to figure out what communication and community looked like.

Years later I became a developer in the IM space working on Gaim/Pidgin, which supported ICQ and a myriad of other services. Looking back, seeing the evolution from ICQ to services like AIM and MSN to modern chats like Discord and Slack, there really hasn’t been a system truly like it since.

The pager model was quite different from what IMs evolved into. There was less of a focus on “chat with me right now!” and more of an “I’ve sent you a message, get back to me with a reply when you can.”

The reliance on modems and limited internet time was clearly a factor in that design.

And really, it wasn’t that the IM systems that came after were an entirely different paradigm. It mostly came down to UI choices. With nearly all IM clients, when a person messaged you, a window popped up. With ICQ, you got a little “uh oh!” sound and a blinking icon in your ICQ window, and you could choose to deal with it at your leisure.

That difference may seem small, but it significantly changed the expectations around conversations. When an IM is in your face, you have to make a choice right then: Respond, or dismiss. Either way, it steals your attention away from what you’re doing, and like a salesman handing you a product they want you to purchase, you feel a sense of obligation to engage.

The ICQ model was different: There’s a message waiting for you, and when you’re ready and available, you can choose to deal with it. You didn’t even see the contents of the message until you were ready, so there was no guilt-driven urge to respond the moment a message came in. No “Well, I guess I can answer this question real quick…” Don’t want to see anything at all? Just close or minimize the window and come back later.

If you wanted an actual live chat with rapid responses, you could do that, but it wasn’t the default. It wasn’t the expectation.

Today, you may be used to setting “Away”, “Not Available”, and “Do Not Disturb” statuses, and even going “Invisible”, but how about “Free for Chat”?

These days we deal with demands on our time all throughout the day. Notifications on our phones designed to draw our attention. News alerts that reach for emotional reactions. Incoming text and chat messages on a dozen services, all with snippets of text that make you want to read the rest and then maybe respond before you forget.

There really isn’t much incentive for services to adopt the ICQ model these days, as everyone’s competing for your attention, but the passive nature of messages and notifications in the original ICQ could be a lesson in how to build software that helps keep people connected while also letting us claw back control of our own time.

Early ICQ was unique. It’s changed over time, adapting to modern trends, newer protocols, and even new owners. The ICQ of today isn’t the same ICQ of 1996, that’s for sure (but surprisingly, your 1996 UIN would still work today). Still, it’s sad to see it go after almost 28 years, even though it’s not the same service it once was.

It’s been heartening seeing how many people remember it fondly. I know for me it helped set my life on a course of events that led to my involvement in a major open source project, then to my first job in the tech industry, and then to my first real company.

The service may be gone, but it won’t be forgotten. Not only are there lessons to learn from the way ICQ tackled communication and agency online, but there are, it turns out, enthusiasts working to bring it back:

  • The NINA and Escargot project (Twitter) is building their own ICQ server, fully compatible with ICQ clients, and the Discord has been flooded with people looking to discover ICQ for the first time or rediscover it all over again.
  • Pidgin still supports ICQ through third-party plugins.

And though I haven’t found one yet, I do hope that someone will recreate the original ICQ experience in some form. Or better, take what ICQ did well and think how we might learn from it when designing the interactions of today. I for one wouldn’t mind a simple, casual, and non-invasive approach to communication again.

Excluding nested node_modules in Rollup.js

We’re often developing multiple Node packages at the same time, symlinking their trees around in order to test them in other projects prior to release.

And sometimes we hit some pretty confusing behavior. Crazy caching issues, confounding crashes, and all manner of chaos. All resulting from one cause: Duplicate modules appearing in our Rollup.js-bundled JavaScript.

For example, we may be developing Ink (our in-progress UI component library) over here with one copy of Spina (our modern Backbone.js successor), and bundling it in Review Board (our open source, code review/document review product) over there with a different copy of Spina. The versions of Spina should be compatible, but technically they’re two separate copies.

And it’s all because of nested node_modules.

The nonsense of nested node_modules

Normally, when Rollup.js bundles code, it looks for any and all node_modules directories in the tree, considering them for dependency resolution.

If a dependency provides its own node_modules, and needs to bundle something from it, Rollup will happily include that copy in the final bundle, even if it’s already including a different copy for another project (such as the top-level project).

This is wasteful at best, and a source of awful nightmare bugs at worst.

In our case, because we’re symlinking source trees around, we end up with Ink’s node_modules sitting inside Review Board’s node_modules (found at node_modules/@beanbag/ink/node_modules), and we get a copy of Spina from both.

Easily eradicating extra node_modules

Fortunately, it’s easy to resolve in Rollup.js with a simple bit of configuration.

Assuming you’re using @rollup/plugin-node-resolve, tweak the plugin configuration to look like:

{
    plugins: [
        resolve({
            moduleDirectories: [],
            modulePaths: ['node_modules'],
        }),
    ],
}

What we’re doing here is telling Resolve and Rollup two things:

  1. Don’t look for node_modules recursively. moduleDirectories is responsible for looking for the named paths anywhere in the tree, and it defaults to ['node_modules']. This is why it’s even considering the nested copies to begin with.
  2. Explicitly look for a top-level node_modules. modulePaths is responsible for specifying absolute paths or paths relative to the root of the tree where modules should be found. Since we’re no longer looking recursively above, we need to tell it which one we do want.

These two configurations together avoid the dreaded duplicate modules in our situation.

And hopefully it will help you avoid yours, too.

Review Board: Between Then and Now

I just realized, before I know it, we’ll be hitting 20 years of Review Board.

Man, do I feel old.

It’s hard to imagine it now, but code review wasn’t really a thing when we built Review Board back in 2006. There were a couple expensive enterprise tools, but GitHub? Pull requests? They didn’t exist yet.

This meant we had to solve a lot of problems that didn’t have ready-made or readily-understood solutions, like:

🤔 What should a review even *be*? What’s involved in the review process, and what tools do you give the user?

We came up with tools like:

  • Resolvable Issue Tracking (a To Do list of what needs to be done in a change)
  • Comments spanning 1 or more lines of diffs
  • Image file attachment review
  • Previews of commented areas appearing above the comments.

Amongst others.

🤔 How should you discuss in a review? Message board style, with one box per reply? Everything embedded in top-level reviews? Comments scattered in a diff?

We decided on a box per review, and replies embedded within it, keeping discussion about a topic all in one place.

Explicitly not buried in a diff, because in complex projects, you also may be reviewing images, documents, or other files. Those comments are important, so we decided they should all live, threaded, under a review.

A lot of tools went the “scatter in a diff” route, and while that was standard for a while, it never sat right with me. For anything complex, it was a mess. I think we got this one right.

🤔 How do you let users keep track of what needs to be reviewed?

We came up with our Dashboard, which shows a sortable, filterable, customizable view of all review requests you may be interested in. This gave a bird’s-eye view across any number of source code repositories, teams, and projects.

Many tools didn’t go this route. You were limited to seeing review requests/pull requests on that repository, and that’s it. For larger organizations, this just wasn’t good enough.

🤔 How do you give organizations control over their processes? A policy editor? APIs? Fork the code?

We settled on:

  • A Python extension framework. This was capable of letting developers craft new policy, collect custom information during the review process, and even build whole new review UIs for files.
  • A full-blown REST API, which is quite capable.
  • Eventually, features like WebHooks, once those became a thing.

Our goal was to avoid people ever having to fork. But also, we kept Review Board MIT-licensed, so people were sure to have the control they needed.

I could probably go on for a while. A lot of these eventually worked their way into other code review tools on the market, and are standard now, but many started off as a lot of long nights doodling on a whiteboard and in notebooks.

We’ve had the opportunity to work for years with household names that young me would have never imagined. If you’ve been on the Internet at all in the past decade, you’ve regularly interacted with at least one thing built in Review Board.

But the passage of time and the changes in the development world make it hard these days. We’re an older tool now, and people like shiny new things. That’s okay. We’re still building some innovative shiny things. More on some of those soon 😉

This is a longer post than I planned for, but this stuff’s on my mind a lot lately.

I’ve largely been quiet lately about development, but I’m trying to change that. Develop in the open, as they say. Expect a barrage of behind-the-scenes posts coming soon.

Building Multi-Platform Docker Images Using Multiple Hosts

Here’s a very quick, not exactly comprehensive tutorial on building Docker images using multiple hosts (useful for building multiple architectures).

If you’re an expert on docker buildx, you may know all of this already, but if you’re not, hopefully you find this useful.

We’ll make some assumptions in this tutorial:

  1. We want to build a single Docker image with both linux/amd64 and linux/arm64 architectures.
  2. We’ll be building the linux/arm64 image on the local machine, and linux/amd64 on a remote machine (accessible via SSH).
  3. We’ll call this builder instance “my-builder”.

We’re going to accomplish this by creating a buildx builder instance for the local machine and architecture, appending a configuration for the remote machine, and then activating that instance.

This is easy.

Step 1: Create your builder instance for localhost and arm64

$ docker buildx create \
    --name my-builder \
    --platform linux/arm64

This will create our my-builder instance, defaulting it to using our local Docker setup for linux/arm64.

If we wanted, we could provide a comma-separated list of platforms that the local Docker should be handling (e.g., --platform linux/arm64,linux/arm/v7).

(This doesn’t have to be arm64. I’m just using this as an example.)

Step 2: Add your amd64 builder

$ docker buildx create \
    --name my-builder \
    --append \
    --platform linux/amd64 \
    ssh://<user>@<remotehost>

This will update our my-builder, informing it that linux/amd64 builds are supported and must go through the Docker service over SSH.

Note that we could easily add additional builders if we wanted (whether for the same architectures or others) by repeating this command and choosing new --platform values and remote hosts.

Step 3: Verify your builder instance

Let’s take a look and make sure we have the builder setup we expect:

$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT           STATUS    BUILDKIT  PLATFORMS
my-builder *    docker-container
  my-builder0   desktop-linux             inactive            linux/arm64*
  my-builder1   ssh://myuser@example.com  inactive            linux/amd64*

Yours may look different, but it should look something like that. You’ll also see default and any other builders you’ve set up.

Step 4: Activate your builder instance

Now we’re ready to use it:

$ docker buildx use my-builder

Just that easy.

Step 5: Build your image

If all went well, we can now safely build our image:

$ docker buildx build --platform linux/arm64,linux/amd64 .

You should see build output for each architecture stream by.

If you want to make sure the right builder is doing the right thing, you can re-run docker buildx ls in another terminal. You should see running as the status for each node, along with an inferred list of other architectures each host can now build (pretty much anything it natively supports that you didn’t explicitly configure above).

Step 6: Load your image into Docker

You probably want to test your newly-built image locally, don’t you? When you run the build, you might notice this message:

WARNING: No output specified with docker-container driver. Build
result will only remain in the build cache. To push result image
into registry use --push or to load image into docker use --load

And if you try to start it up, you might notice it’s missing (or that you’re running a pre-buildx version of your image).

What you need to do is re-run docker buildx build with --load and a single platform, like so:

$ docker buildx build --load --platform linux/arm64 .

That’ll rebuild it (it’ll likely just reuse what it built before) and then make it available in your local Docker image store.

Hope that helps!

Re-typing Parent Class Attributes in TypeScript

I was recently working on converting some code away from Backbone.js and toward Spina, our TypeScript Backbone “successor” used in Review Board, and needed to override a type from a parent class.

(I’ll talk about why we still choose to use Backbone-based code another time.)

We basically had this situation:

class BaseClass {
    summary: string | (() => string) = 'BaseClass thing doer';
    description: string | (() => string);
}

class MySubclass extends BaseClass {
    get summary(): string {
        return 'MySubclass thing doer';
    }

    // We'll just make this a standard function, for demo purposes.
    description(): string {
        return 'MySubclass does a thing!';
    }
}

TypeScript doesn’t like that so much:

Class 'BaseClass' defines instance member property 'summary', but extended class 'MySubclass' defines it as an accessor.

Class 'BaseClass' defines instance member property 'description', but extended class 'MySubclass' defines it as instance member function.

Clearly it doesn’t want me to override these members, even though one of the allowed values is a callable returning a string! Which is what we wrote, darnit!!

So what’s going on here?

How ES6 class members work

If you’re coming from another language, you might expect members defined on the class to be class members. For example, you might think you could access BaseClass.summary directly, but you’d be wrong, because these are instance members.

Peer-Programming a Buggy World with ChatGPT AI

AI has been all the rage lately, with solutions like Stable Diffusion for image generation, GPT-3 for text generation, and CoPilot for code development becoming publicly available to the masses.

That excitement ramped up this week with the release of ChatGPT, an extremely impressive chat-based AI system leveraging the best GPT has to offer.

I decided last night to take ChatGPT for a spin, to test its code-generation capabilities. And I was astonished by the experience.

Together, we built a simulation of bugs foraging for food in a 100×100 grid world, tracking essentials like hunger and life, reproducing, and dealing with hardships involving seasonal changes, natural disasters, and predators. All graphically represented.

We’re going to explore this in detail, but I want to start off by showing you what we built:

Also, you can find out more on my GitHub repository.

A Recap of my Experience

Before we dive into the collaborative sessions that resulted in a working simulation, let me share a few thoughts and tidbits about my experience:

The End of COVID… Data.

This year’s seen a rapid reduction of available COVID data. Certainly in California, where we’ve been spoiled with extensive information on the spread of this virus.

In 2020, as the pandemic began to ramp up, the state and counties began to launch dashboards and datasets, quickly making knowledge available for anyone who wanted to work with it. State dashboards tracked state-wide and some county-wide metrics, while local dashboards focused on hyper-local information and trends.

Not just county dashboards, but schools, hospitals, and newspapers began to share information. Individuals, like myself, got involved and began to consolidate data, compute new data, and make that available to anyone who wanted it.

California was open with most of their data, providing CSV files, spreadsheets, and Tableau dashboards on the California Open Data portal. We lacked open access to the state’s CalREDIE system, but we still had a lot to work with.

It was a treasure trove that let us see how the pandemic was evolving and helped inform decisions.

But things have changed.

The Beginning of the End

Over the last 6 months or so, this data has begun to dry up. Counties have shut down or limited their dashboards. The state’s moved to once-a-week case information. Vaccine stats have stopped being updated for new boosters.

This was inevitable. Much of this requires coordination between humans, real solid effort. Funding is drying up for COVID-related data work. People are burnt out and moving on from their jobs. New diseases and flu seasons have taken precedence.

But this leaves us in a bad position.

Scratching Out AI Chicken Art with Stable Diffusion

I’ve been enjoying playing with Stable Diffusion, an AI image generator that came out this past week. It runs phenomenally on my M1 Max Macbook Pro with 64GB of RAM, taking only about 30 seconds to produce an image at standard settings.

AI image generation has been a controversial, but exciting, topic in the news as of late. I’ve been following it with interest, but thought I was still years off from being able to actually play with it on my own hardware. That all changed this week.

I’m on day two now with Stable Diffusion, having successfully installed the M1 support via a fork. And my topic to get my feet wet has been…

Chickens.

Why not.

So let’s begin our tour. I’ll provide prompts and pictures, but please note that I do not have the seeds (due to a bug with seed stability in the M1 fork).

Integration and Simulation Tests in Python

One of my (many) tasks lately has been to rework unit and integration tests for Review Bot, our automated code review add-on for Review Board.

The challenge was providing a test suite that could test against real-world tools, but not require them. An ever-increasing list of compatible tools has threatened to become an ever-increasing burden on contributors. We wanted to solve that.

So here’s how we’re doing it.

First off, unit test tooling

First off, this is all Python code, which you can find on the Review Bot repository on GitHub.

We make heavy use of kgb, a package we’ve written to add function spies to Python unit tests. This goes far beyond Mock, allowing nearly any function to be spied on without having to be replaced. This module is a key component to our solution, given our codebase and our needs, but it’s an implementation detail — it isn’t a requirement for the overall approach.

Still, if you’re writing complex Python test suites, check out kgb.
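
To show what that looks like, here’s a tiny example in the same style the tests below use: spying on a function and forcing its return value with SpyOpReturn. The module and function being spied on are made up for the example:

from unittest import TestCase

import kgb

# Made-up module and function, purely for the example.
from myproject.data import fetch_remote_data


class FetchTests(kgb.SpyAgency, TestCase):
    def test_fetch_uses_canned_data(self):
        # Force a canned return value without replacing the function.
        self.spy_on(fetch_remote_data,
                    op=kgb.SpyOpReturn({'status': 'cached'}))

        self.assertEqual(fetch_remote_data(), {'status': 'cached'})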

Deciding on the test strategy

Review Bot can talk to many command line tools, which are used to perform checks and audits on code. Some are harder than others to install, or at least annoying to install.

We decided there are two types of tests we need:

  1. Integration tests — run against real command line tools
  2. Simulation tests — run against simulated output/results that would normally come from a command line tool

Since our goal is to ease contribution, we have to keep in mind that we can’t lean too far toward simulation at the expense of a reliable test suite.

We decided to make these the same tests.

The strategy, therefore, would be this:

  1. Each test would contain common logic for integration and simulation tests. A test would set up state, perform the tool run, and then check results.
  2. Integration tests would build upon this by checking dependencies and applying configuration before the test run.
  3. Simulation tests would be passed fake output or setup data needed to simulate that tool.

This would be done without any code duplication between integration or simulation tests. There would be only one test function per expectation (e.g., a successful result or the handling of an error). We don’t want to worry about tests getting out of sync.

Regression in our code? Both types of tests should catch it.

Regression or change in behavior in an integrated tool? Any fixes we apply would update or build upon the simulation.

Regression in the simulation? Something went wrong, and we caught it early without having to run the integration test.

Making this all happen

We introduced three core testing components:

  1. @integration_test() — a decorator that defines and provides dependencies and input for an integration test
  2. @simulation_test() — a decorator that defines and provides output and results for a simulation test
  3. ToolTestCaseMetaClass — a metaclass that ties it all together

Any test class that needs to run integration and simulation tests will use ToolTestCaseMetaClass and then apply either or both @integration_test/@simulation_test decorators to the necessary test functions.

When a decorator is applied, the test function is opted into that type of test. Data can be passed into the decorator, which is then passed into the parent test class’s setup_integration_test() or setup_simulation_test().

These can do whatever they need to set up that particular type of test. For example:

  • Integration test setup defaults to checking dependencies, skipping a test if not met.
  • Simulation test setup may write some files or spy on a subprocess.Popen() call to fake output.


For example:

class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    def setup_simulation_test(self, output):
        self.spy_on(execute, op=kgb.SpyOpReturn(output))

    def setup_integration_test(self, exe_deps):
        if not are_deps_found(exe_deps):
            raise SkipTest('Missing one or more dependencies')

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=(
        b'MyTool 1.2.3\n'
        b'Scanning code...\n'
        b'0 errors, 0 warnings, 1 file(s) checked\n'
    ))
    def test_execute(self):
        """Testing MyTool.execute"""
        ...

When applied, ToolTestCaseMetaClass will loop through each of the test_*() functions with these decorators applied and split them up:

  • Test functions with @integration_test will be split out into a test_integration_<name>() function, with an [integration test] suffix appended to the docstring.
  • Test functions with @simulation_test will be split out into test_simulation_<name>(), with a [simulation test] suffix appended.

The above code ends up being equivalent to:

class MyTests(kgb.SpyAgency, TestCase):
    def setup_simulation_test(self, output):
        self.spy_on(execute, op=kgb.SpyOpReturn(output))

    def setup_integration_test(self, exe_deps):
        if not are_deps_found(exe_deps):
            raise SkipTest('Missing one or more dependencies')

    def test_integration_execute(self):
        """Testing MyTool.execute [integration test]"""
        self.setup_integration_test(exe_deps=['mytool'])
        self._test_common_execute()

    def test_simulation_execute(self):
        """Testing MyTool.execute [simulation test]"""
        self.setup_simulation_test(output=(
            b'MyTool 1.2.3\n'
            b'Scanning code...\n'
            b'0 errors, 0 warnings, 1 file(s) checked\n'
        ))
        self._test_common_execute()

    def _test_common_execute(self):
        ...

Pretty similar, but less to maintain in the end, especially as tests pile up.

And when we run it, we get something like:

Testing MyTool.execute [integration test] ... ok
Testing MyTool.execute [simulation test] ... ok

...

Or, you know, with a horrible, messy error.

Iterating on tests

It’s become really easy to maintain and run these tests.

We can now start by writing the integration test, modify the code to log any data that might be produced by the command line tool, and then fake-fail the test to see that output.

class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    ...

    @integration_test(exe_deps=['mytool'])
    def test_process_results(self):
        """Testing MyTool.process_results"""
        self.setup_files({
            'filename': 'test.c',
            'content': b'int main() {return "test";}\n',
        })

        tool = MyTool()
        payload = tool.run(files=['test.c'])

        # XXX
        print(repr(payload))

        results = MyTool().process_results(payload)

        self.assertEqual(results, {
            ...
        })

        # XXX Fake-fail the test
        assert False

I can run that and get the results I’ve printed:

======================================================================
ERROR: Testing MyTool.process_results [integration test]
----------------------------------------------------------------------
Traceback (most recent call last):
    ...
-------------------- >> begin captured stdout << ---------------------
{"errors": [{"code": 123, "column": 13, "filename": "test.c", "line': 1, "message": "Expected return type: int"}]}

Now that I have that, and I know it’s all working right, I can feed that output into the simulation test and clean things up:

class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    ...

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=json.dumps({
        'errors': [
            {
                'filename': 'test.c',
                'code': 123,
                'line': 1,
                'column': 13,
                'message': 'Expected return type: int',
            },
        ],
    }).encode('utf-8'))
    def test_process_results(self):
        """Testing MyTool.process_results"""
        self.setup_files({
            'filename': 'test.c',
            'content': b'int main() {return "test";}\n',
        })

        tool = MyTool()
        payload = tool.run(files=['test.c'])
        results = MyTool().process_results(payload)

        self.assertEqual(results, {
            ...
        })

Once it’s running correctly in both tests, our job is done.

From then on, anyone working on this code can simply run the test suite and make sure their change hasn’t broken any simulation tests. If it has, and it wasn’t intentional, they’ll have a great starting point for diagnosing the issue, without having to install anything.

Anything that passes simulation tests can be considered a valid contribution. We can then test against the real tools ourselves before landing a change.

Development is made simpler, and there’s no worry about regressions.

Going forward

We’re planning to apply this same approach to both Review Board and RBTools. Both currently require contributors to install a handful of command line tools or optional Python modules to make sure they haven’t broken anything, and that’s a bottleneck.

In the future, we’re looking at making use of python-nose’s attrib plugin, tagging integration and simulation tests and making it trivially easy to run just the suites you want.

We’re also considering pulling out the metaclass and decorators into a small, reusable Python package, making it easy for others to make use of this pattern.
