Today I Learned

Excluding nested node_modules in Rollup.js

We’re often developing multiple Node packages at the same time, symlinking their trees around in order to test them in other projects prior to release.

And sometimes we hit some pretty confusing behavior. Crazy caching issues, confounding crashes, and all manner of chaos. All resulting from one cause: Duplicate modules appearing in our Rollup.js-bundled JavaScript.

For example, we may be developing Ink (our in-progress UI component library) over here with one copy of Spina (our modern Backbone.js successor), and bundling it in Review Board (our open source code review/document review product) over there with a different copy of Spina. The versions of Spina should be compatible, but technically they’re two separate copies.

And it’s all because of nested node_modules.

The nonsense of nested node_modules

Normally, when Rollup.js bundles code, it looks for any and all node_modules directories in the tree, considering them for dependency resolution.

If a dependency provides its own node_modules, and needs to bundle something from it, Rollup will happily include that copy in the final bundle, even if it’s already including a different copy for another project (such as the top-level project).

This is wasteful at best, and a source of awful nightmare bugs at worst.

In our case, because we’re symlinking source trees around, we’re ending up with Ink’s node_modules sitting inside Review Board’s node_modules (found at node_modules/@beanbag/ink/node_modules), and we’re getting a copy of Spina from both.
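Sketched out, that tree looks something like this (simplified):

node_modules/
├── @beanbag/
│   ├── ink/
│   │   └── node_modules/
│   │       └── @beanbag/
│   │           └── spina/    <-- Ink’s copy
│   └── spina/                <-- Review Board’s copy
└── ...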

Easily eradicating extra node_modules

Fortunately, it’s easy to resolve in Rollup.js with a simple bit of configuration.

Assuming you’re using @rollup/plugin-node-resolve, tweak the plugin configuration to look like:

{
    plugins: [
        resolve({
            moduleDirectories: [],
            modulePaths: ['node_modules'],
        }),
    ],
}

What we’re doing here is telling Resolve and Rollup two things:

  1. Don’t look for node_modules recursively. moduleDirectories lists the directory names to search for at every level of the tree, and it defaults to ['node_modules']. That default is why the nested copies are being considered at all.
  2. Explicitly look for a top-level node_modules. modulePaths lists absolute paths, or paths relative to the root of the tree, where modules should be found. Since we’re no longer searching recursively, we need to tell it which node_modules we do want.

These two configurations together avoid the dreaded duplicate modules in our situation.
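For reference, a complete, minimal rollup.config.js with this fix might look like the following (just a sketch; the input and output paths are placeholders for your own):

import { nodeResolve as resolve } from '@rollup/plugin-node-resolve';

export default {
    input: 'src/index.js',
    output: {
        file: 'dist/bundle.js',
        format: 'esm',
    },
    plugins: [
        resolve({
            /* Don't search for node_modules directories recursively. */
            moduleDirectories: [],

            /* Resolve modules only from the top-level node_modules. */
            modulePaths: ['node_modules'],
        }),
    ],
};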

And hopefully it will help you avoid yours, too.


Building Multi-Platform Docker Images Using Multiple Hosts

Here’s a very quick, not exactly comprehensive tutorial on building Docker images using multiple hosts (useful for building multiple architectures).

If you’re an expert on docker buildx, you may know all of this already, but if you’re not, hopefully you find this useful.

We’ll make some assumptions in this tutorial:

  1. We want to build a single Docker image with both linux/amd64 and linux/arm64 architectures.
  2. We’ll be building the linux/arm64 image on the local machine, and linux/amd64 on a remote machine (accessible via SSH).
  3. We’ll call this builder instance “my-builder”.

We’re going to accomplish this by creating a buildx builder instance for the local machine and architecture, then appending a configuration for the remote machine. And then we’ll activate that instance.

This is easy.

Step 1: Create your builder instance for localhost and arm64

$ docker buildx create \
    --name my-builder \
    --platform linux/arm64

This will create our my-builder instance, defaulting it to using our local Docker setup for linux/arm64.

If we wanted, we could provide a comma-separated list of platforms that the local Docker should be handling (e.g., --platform linux/arm64,linux/arm/v7).

(This doesn’t have to be arm64. I’m just using this as an example.)

Step 2: Add your amd64 builder

$ docker buildx create \
    --name my-builder \
    --append \
    --platform linux/amd64 \
    ssh://<user>@<remotehost>

This will update our my-builder, informing it that linux/amd64 builds are supported and must go through the Docker service over SSH.

Note that we could easily add additional builders if we wanted (whether for the same architectures or others) by repeating this command and choosing new --platform values and remote hosts.

Step 3: Verify your builder instance

Let’s take a look and make sure we have the builder setup we expect:

$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT           STATUS    BUILDKIT  PLATFORMS
my-builder *    docker-container
  my-builder0   desktop-linux             inactive            linux/arm64*
  my-builder1   ssh://myuser@example.com  inactive            linux/amd64*

Yours may look different, but it should look something like that. You’ll also see default and any other builders you’ve set up.
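If you want more detail on a builder, or want to make sure its nodes can actually boot, you can also inspect it (the --bootstrap flag starts the builder if it isn’t already running):

$ docker buildx inspect --bootstrap my-builder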

Step 4: Activate your builder instance

Now we’re ready to use it:

$ docker buildx use my-builder

Just that easy.

Step 5: Build your image

If all went well, we can now safely build our image:

$ docker buildx build --platform linux/arm64,linux/amd64 .

You should see build output for each architecture stream by.

If you want to make sure the right builder is doing the right thing, you can re-run docker buildx ls in another terminal. You should see running as the status for each, along with an inferred list of other architectures that host can now build (pretty much anything it natively supports that you didn’t explicitly configure above).

Step 6: Load your image into Docker

You probably want to test your newly-built image locally, don’t you? When you run the build, you might notice this message:

WARNING: No output specified with docker-container driver. Build
result will only remain in the build cache. To push result image
into registry use --push or to load image into docker use --load

And if you try to start it up, you might notice it’s missing (or that you’re running a pre-buildx version of your image).

What you need to do is re-run docker buildx build with --load and a single platform, like so:

$ docker buildx build --load --platform linux/arm64 .

That’ll rebuild it (it’ll likely just reuse what it built before) and then load it into your local Docker image store.
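When you’re ready to publish instead, the --push flag from the warning above will build for every platform and push the multi-arch image to a registry in one go. A sketch (the tag is a placeholder for your own image name):

$ docker buildx build \
    --push \
    --platform linux/arm64,linux/amd64 \
    -t myuser/myimage:latest \
    .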

Hope that helps!


Re-typing Parent Class Attributes in TypeScript

I was recently working on converting some code away from Backbone.js and toward Spina, our TypeScript Backbone “successor” used in Review Board, and needed to override a type from a parent class.

(I’ll talk about why we still choose to use Backbone-based code another time.)

We basically had this situation:

class BaseClass {
    summary: string | (() => string) = 'BaseClass thing doer';
    description: string | (() => string);
}

class MySubclass extends BaseClass {
    get summary(): string {
        return 'MySubclass thing doer';
    }

    // We'll just make this a standard function, for demo purposes.
    description(): string {
        return 'MySubclass does a thing!';
    }
}

TypeScript doesn’t like that so much:

Class 'BaseClass' defines instance member property 'summary', but extended class 'MySubclass' defines it as an accessor.

Class 'BaseClass' defines instance member property 'description', but extended class 'MySubclass' defines it as instance member function.

Clearly it doesn’t want me to override these members, even though one of the allowed values is a callable returning a string! Which is what we wrote, darnit!!

So what’s going on here?

How ES6 class members work

If you’re coming from another language, you might expect members defined on the class to be class members. For example, you might think you could access BaseClass.summary directly, but you’d be wrong, because these are instance members.
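A quick way to see this for yourself (a little sketch using the BaseClass above):

const instance = new BaseClass();

console.log(instance.summary);
// => 'BaseClass thing doer'

// The field initializer runs in the constructor, making summary an
// own property of each instance...
console.log(instance.hasOwnProperty('summary'));
// => true

// ...while nothing named summary exists on the class itself or on
// its prototype.
console.log('summary' in BaseClass);
// => false
console.log(BaseClass.prototype.hasOwnProperty('summary'));
// => false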


Breaking back into your network with the Synology Web UI

Have you ever left town, or even just taken a trip to the coffee shop, only to find that you’re locked out of your home network? Maybe you needed a file that you forgot to put in Dropbox, or felt paranoid and wanted to check on your security cameras, or you just wanted to stream music. I have…

The end of a long drive

Last night, I arrived at my hotel after a 4-hour drive, only to find my VPN wasn’t working. I always VPN in to home, so that I can access my file server, my VMs, security cameras, what have you. I didn’t understand. I was sure I had things set up right. You see, I recently had my Xfinity router replaced, and had to set it up to talk to my Asus N66U, but I was absolutely sure it was working. Almost sure. Well, I thought it was working…

So I tried SSHing in. No dice. Hmm… Any web server ports I’d exposed? Guess not. Maybe port forwarding was messed up somewhere?

Ah HA! I could reach my wonderful Synology NAS’s web UI. If you haven’t used this thing, it’s like a full-on desktop environment with apps. It’s amazing. Only thing it’s really missing is a web browser for accessing the home network (get on this, guys!). After spending some time thinking about it, I devised a solution to get me back into my home network, with full VPN access (though, see the end of the story for what happened there).

Christian’s step-by-step guide to breaking in with Synology

No more stories for now.

To get started, I’m assuming you have three things:

  1. Remote access (with admin rights) to your Synology NAS’s web console.
  2. A Linux server somewhere both sides can log into remotely (other than your local machine, as I’m assuming yours isn’t publicly connected to the network).
  3. A local Linux or Mac with a web browser and ssh. You can make this work on Windows with PuTTY as well, but I’m not going into details on that. Just figure out SSH tunneling and replace step 6 below.

All set? Here’s what you do.

  1. Log into your NAS and go to Package Center. Click Settings -> Package Sources and add a new source:

     Name: MissileHugger
     Location: http://packages.missilehugger.com/
  2. Install the “Web Console” package and run it from the start menu.
  3. Web Console doesn’t support interactive sessions with commands, so you’ll need an SSH key pair whose public key is in your Linux server’s authorized_keys, with the private key available to you. There’s also no multi-line paste, so you’ll need to copy the private key through Web Console line-by-line:

    Locally:

    $ cat ~/.ssh/id_dsa

    On Web Console:

    $ echo "-----BEGIN DSA PRIVATE KEY-----" > id_dsa
    $ echo "<first line of private key>" >> id_dsa
    $ echo "<second line of private key>" >> id_dsa
    $ ...
    $ echo "-----END DSA PRIVATE KEY-----" >> id_dsa
    $ chmod 600 id_dsa
  4. Establish a reverse tunnel to your Linux box, pointing to the web server you’re trying to reach (we’ll say 192.168.1.1 for your router).

    Remember that Web Console doesn’t support interactive sessions, or pseudo-terminal allocation, so we’ll need to tweak some stuff when calling ssh:

    $ ssh -o 'StrictHostKeyChecking no' -t -t -i id_dsa \
          -R 19980:192.168.1.1:80 youruser@yourlinuxserver

    The ‘StrictHostKeyChecking no’ gets around not having any way to verify a host key from Web Console, and the two -t parameters (yes, two) force TTY allocation regardless of the shell.

  5. If all went well, your Linux server should locally have a port 19980 that reaches your web server. Verify this by logging in and typing:

    $ lynx http://localhost:19980
  6. On your local machine, set up a tunnel connecting port 19980 on your machine to port 19980 on your Linux server. (The reverse tunnel from step 4 binds to the server’s loopback interface, so we connect to localhost on the far side.)

    $ ssh -L 19980:localhost:19980 youruser@yourlinuxserver
  7. You should now be able to reach your router. Try it! Open your favorite browser and go to http://localhost:19980.
  8. Clean up. Delete the id_dsa you painfully hand-copied over, if you no longer need it, and kill your SSH sessions.

Epilogue

While this worked great, and I was able to get back in and see my router configuration, I wasn’t able to spot any problems.

That’s when I realized my Mac’s VPN configuration was hard-coding my old IP address instead of the hostname for my home network. Oops 🙁

Hope this helps someone!


Re: Subverting Subversion

I used to use the same trick Rodney Dawes describes in Subverting Subversion. Yes, it was very annoying to have to set everything from a file every time, or from stdin.

Ah, but there’s a better way, and people new to SVN seem to somehow miss this valuable command.

$ svn propedit svn:ignore .

Up comes your editor, just as if you opened .cvsignore. You can now safely nuke your .cvsignore files. This is a useful command, so write it down until it’s burned into your brain.
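The property itself is just a newline-separated list of glob patterns matching files in that directory. For example, the editor buffer might contain something like this (patterns purely illustrative):

*.o
*.pyc
build
dist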

