Intercepting and Modifying Responses with Chrome via the Devtools Protocol

At Shape we come across many sketchy pieces of JavaScript. Scripts might be maliciously injected into pages, sent to us by a customer for advice, or flagged by our security teams as a resource on the web that seems to specifically reference some aspect of our service. As part of our everyday routine, we dive into these scripts head first to understand what they’re doing and how they work. They are usually minified, often obfuscated, and always require multiple levels of modification before they are really ready for deep analysis.

Until recently, the easiest way to do this analysis was either with locally cached setups that enable manual editing or by using proxies to rewrite content on the fly. The local solution is the most convenient, but websites do not always translate perfectly to other environments, and the approach often leads people down a rabbit hole of troubleshooting just to get productive. Proxies are extremely flexible but usually cumbersome and not very portable – everyone has their own custom setup for their environment, and some people are more familiar with one proxy than another. I’ve started to use Chrome and its devtools protocol to hook into requests and responses as they happen and modify them on the fly. This is portable to any platform that has Chrome, bypasses a whole slew of issues, and integrates well with common JavaScript tooling. In this post, I’ll go over how to intercept and modify JavaScript on the fly using Chrome’s devtools protocol.

We’ll use Node, but a lot of the content is portable to your language of choice, provided you have the devtools hooks easily accessible.

First off, if you’ve never explored scripting Chrome, Eric Bidelman wrote up an excellent Getting Started Guide for Headless Chrome. The tips there apply to both Headless and GUI Chrome (with one quirk I’ll address in the next section).

Launching Chrome

We’ll use the chrome-launcher library from npm to make this easy.


npm install chrome-launcher

chrome-launcher does precisely what you think it would do, and you can pass the same command-line switches you’re used to from the terminal, unchanged (a great list is maintained here). We’ll pass the following options:

  • --window-size=1200,800
    • Automatically set the window size to something reasonable.
  • --auto-open-devtools-for-tabs
    • Automatically open up the devtools because we use them frequently.
  • --user-data-dir=/tmp/chrome-testing
    • Set a constant user data directory. (Ideally we wouldn’t need this but non-headless mode on Mac OSX doesn’t seem to allow you to intercept requests without this flag. If you know of a better way, please let me know via Twitter!)

const chromeLauncher = require('chrome-launcher');

async function main() {
  const chrome = await chromeLauncher.launch({
    chromeFlags: [
      '--window-size=1200,800',
      '--user-data-dir=/tmp/chrome-testing',
      '--auto-open-devtools-for-tabs'
    ]
  });
}

main();

Try running your script to make sure you’re able to open Chrome. You should see something like this:

[Screenshot: a fresh Chrome window opens with the devtools pane already expanded]

Using the Chrome Devtools Protocol

This is also referred to as the “Chrome debugger protocol,” and both terms seem to be used interchangeably in Google’s docs 🤷‍♂️. First, install the package chrome-remote-interface via npm, which gives us convenient methods to interact with the devtools protocol. Make sure to have the protocol docs handy if you want to dive in more deeply.


npm install chrome-remote-interface

To use the CDP, you need to connect to the debugger port and, because we’re using the chrome-launcher library, this is conveniently accessible via chrome.port.


const CDP = require('chrome-remote-interface');

const protocol = await CDP({ port: chrome.port });

Many of the domains in the protocol need to be enabled first, and we’re going to start with the Runtime domain so that we can hook into the console API and deliver any console calls in the browser to the command line.


const { Runtime } = protocol;
await Promise.all([Runtime.enable()]);

Runtime.consoleAPICalled(({ args, type }) =>
  console[type].apply(console, args.map(a => a.value))
);

Now when you run your script, you get a fully functional Chrome window that also outputs all of its console messages to your terminal. That’s awesome on its own, especially for testing purposes!
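
If you want to sanity-check the bridge, you can evaluate an expression in the page from Node and watch it echo back in your terminal. This quick test is our own addition, not part of the original flow:

// Trigger a console call inside the page; it should come back through the
// consoleAPICalled hook above. (Runs inside main(), after the hook is set up.)
await Runtime.evaluate({ expression: "console.log('hello from the page')" });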

Intercepting requests

First, we’ll need to register what we want to intercept by submitting a list of RequestPatterns to setRequestInterception. You can intercept at either the “Request” stage or the “HeadersReceived” stage and, to actually modify a response, we’ll need to wait for “HeadersReceived”. The resource type maps to the types that you’d commonly see on the network pane of the devtools.

Don’t forget to enable the Network domain as you did with Runtime, above, by adding Network.enable() to the same array.
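
Concretely, the destructure and enable call from earlier grow to:

const { Runtime, Network } = protocol;
await Promise.all([Runtime.enable(), Network.enable()]);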


await Network.setRequestInterception({
  patterns: [
    {
      urlPattern: '*.js*',
      resourceType: 'Script',
      interceptionStage: 'HeadersReceived'
    }
  ]
});

Registering the event handler is relatively straightforward and each intercepted request comes with an interceptionId that can be used to query information about the request or eventually issue a continuation. Here we’re just stepping in and logging every request we intercept to the terminal.


Network.requestIntercepted(async ({ interceptionId, request }) => {
  console.log(
    `Intercepted ${request.url} {interception id: ${interceptionId}}`
  );
  Network.continueInterceptedRequest({
    interceptionId,
  });
});

Modifying responses

To modify responses we’ll need to install some helper libraries that encode and decode base64 strings. There are loads of libraries available; feel free to pick your own. We’ll use atob and btoa.


npm install btoa atob
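
If you’d rather avoid the extra dependencies, Node’s built-in Buffer can stand in for both. This is a small sketch of our own, not part of the original setup:

// Drop-in replacements for the atob/btoa packages using Node's Buffer.
// 'binary' (an alias for latin1) matches the byte-per-code-unit semantics
// these helpers expect.
const atob = s => Buffer.from(s, 'base64').toString('binary');
const btoa = s => Buffer.from(s, 'binary').toString('base64');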

The API to deal with responses is a little awkward. To handle responses, you need to include all your response logic on the request interception (as opposed to simply intercepting a response, for example), and then you have to query for the body by the interception ID. This is because the body might not be available at the time your handler is called, and this approach lets you explicitly wait for just what you’re looking for. The body can also come base64 encoded, so you’ll want to check and decode it before blindly passing it along.


const response = await Network.getResponseBodyForInterception({ interceptionId });
const bodyData = response.base64Encoded ? atob(response.body) : response.body;

At this point you’re free to go wild on the JavaScript. Your code puts you in the middle of a response allowing you to both access the complete JavaScript that was requested and send back your modified response. Awesome! We’ll just tweak the JS by appending a console.log at the end of it so that our terminal will get a message when our modified code is executed in the browser.


const newBody = bodyData + `\nconsole.log('Executed modified resource for ${request.url}');`;
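
Appending a log line is just one option. As an illustrative variation of our own, prepending a debugger statement pauses the devtools the moment the modified script starts executing:

// Variation: break into the debugger as soon as this script executes.
const newBody = 'debugger;\n' + bodyData;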

We can’t pass along the modified body alone because the content might conflict with the headers that were sent with the original resource. Since you’re actively testing and tweaking, you’ll probably want to start with the basics before worrying too much about any other header information you need to convey. You can access the response headers via responseHeaders passed to the event handler if necessary, but for now we’ll just craft our own minimal set in an array for easy manipulation and editing later.


const newHeaders = [
  'Date: ' + (new Date()).toUTCString(),
  'Connection: close',
  'Content-Length: ' + newBody.length,
  'Content-Type: text/javascript'
];

Sending the new response down requires crafting a full, base64 encoded HTTP response (including the HTTP status line) and sending it through a rawResponse property in the object passed to continueInterceptedRequest.


Network.continueInterceptedRequest({
  interceptionId,
  rawResponse: btoa(
    'HTTP/1.1 200 OK\r\n' +
    newHeaders.join('\r\n') +
    '\r\n\r\n' +
    newBody
  )
});

Now, when you execute your script and navigate around the internet, you’ll see something like the following in your terminal as your script intercepts JavaScript and your modified JavaScript executes in the browser, with the console.log()s bubbling up through the hook we made at the start of the tutorial.

[Screenshot: terminal output listing intercepted script URLs followed by “Executed modified resource” console messages]

The complete working code for the basic example is here:


const chromeLauncher = require('chrome-launcher');
const CDP = require('chrome-remote-interface');
const atob = require('atob');
const btoa = require('btoa');

async function main() {
  const chrome = await chromeLauncher.launch({
    chromeFlags: [
      '--window-size=1200,800',
      '--user-data-dir=/tmp/chrome-testing',
      '--auto-open-devtools-for-tabs'
    ]
  });

  const protocol = await CDP({ port: chrome.port });
  const { Runtime, Network } = protocol;
  await Promise.all([Runtime.enable(), Network.enable()]);

  Runtime.consoleAPICalled(({ args, type }) => console[type].apply(console, args.map(a => a.value)));

  await Network.setRequestInterception({ patterns: [{ urlPattern: '*.js*', resourceType: 'Script', interceptionStage: 'HeadersReceived' }] });

  Network.requestIntercepted(async ({ interceptionId, request }) => {
    console.log(`Intercepted ${request.url} {interception id: ${interceptionId}}`);

    const response = await Network.getResponseBodyForInterception({ interceptionId });
    const bodyData = response.base64Encoded ? atob(response.body) : response.body;

    const newBody = bodyData + `\nconsole.log('Executed modified resource for ${request.url}');`;

    const newHeaders = [
      'Date: ' + (new Date()).toUTCString(),
      'Connection: close',
      'Content-Length: ' + newBody.length,
      'Content-Type: text/javascript'
    ];

    Network.continueInterceptedRequest({
      interceptionId,
      rawResponse: btoa('HTTP/1.1 200 OK' + '\r\n' + newHeaders.join('\r\n') + '\r\n\r\n' + newBody)
    });
  });
}

main();

Where to go from here

You can start by pretty-printing the source code, which is always a useful way to start reverse engineering something. Yes, of course, you can do this in most modern browsers, but you’ll want to control each step of modification yourself in order to keep things consistent across browsers and browser versions, and to be able to connect the dots as you analyze the source. When I’m digging into foreign, obfuscated code I like to rename variables and functions as I start to understand their purpose. Modifying JavaScript safely is no trivial exercise and that’s a blog post on its own, but for now you could use something like unminify to undo common minification and obfuscation techniques.

You can install unminify via npm and wrap your new JavaScript body with a call to unminify in order to see it in action:


const unminify = require('unminify');

[...]

const newBody = unminify(bodyData + `\nconsole.log('Intercepted and modified ${request.url}');`);
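
Similarly, if all you need is the pretty-printing step, a general-purpose formatter works too. Here is a sketch of our own assuming the prettier package (v2, where format is synchronous), not part of the original flow:

const prettier = require('prettier');

[...]

// Pretty-print the intercepted script before sending it back so it's readable
// in the devtools. parser: 'babel' handles plain JavaScript. (In prettier v3,
// format returns a promise, so you would await it inside the handler.)
const newBody = prettier.format(bodyData, { parser: 'babel' });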

We’ll dive more into the transformations in the next post. If you have any questions, comments, or other neat tricks, please reach out to me via Twitter!

Key Findings from the 2018 Credential Spill Report

In 2016 we saw the world come to grips with the fact that data breaches are almost a matter of when, not if, as some of the world’s largest companies announced spills of incredible magnitude. In 2017 and 2018, we started to see regulatory agencies make it clear that companies need to proactively protect users from attacks fueled by these breaches, which show little sign of slowing.

In the time between Shape’s inaugural 2017 Credential Spill Report and now, we’ve seen a vast number of new industries roll up under the Shape umbrella and, with that, troves of new data on how different verticals are exploited by attackers—from Retail and Airlines to Consumer Banking and Hotels. Shape’s 2018 Credential Spill Report is nearly 50% larger and includes deep dives on how these spills are used by criminals and how their attacks play out. We hope that the report helps companies and individuals understand the downstream impact these breaches have. Credential stuffing is the vehicle that enables endless iterations of fraud, and it is critical to get eyes on the problem as soon as possible. This is a problem that is only getting worse, and attackers are advancing at a rate that is rapidly devaluing modern mitigation techniques.

Last year, over 2.3 billion credentials from 51 different organizations were reported compromised. We saw roughly the same number of spills reported in each of the past two years, though the average size of a spill decreased slightly despite a new record-breaking announcement from Yahoo. Even after excluding Yahoo’s update from the 2017 measurements, we saw an average of 1 million credentials spilled every single day.

These credential spills will affect us for years and, with an average time of 15 months between a breach and the report, attackers are already well ahead of the game before companies can even react to being compromised. This window of opportunity creates strong motives for criminals, as evidenced by the e-commerce sector where 90% of login traffic comes from credential stuffing attacks. The result is that attacks are successful as often as 3% of the time and the costs can quickly add up for businesses. Online retail loses about $6 billion per year while the consumer banking industry faces over $50 million per day in potential losses from attacks.

2017 also gave us many credential spills from smaller communities – 25% of the spills recorded were from online web forums. These spills did not contribute the largest number of credentials, but they underline an important aspect of how data breaches occur in the first place. Web forums frequently run on similar software stacks and often do not have IT teams dedicated to keeping that software up to date as a top priority. This makes it possible for one vulnerability to affect many different properties with minimal to no retooling effort. Simply keeping your software up to date is the easiest way to protect your company and services from being exploited.

As a consumer, the advice is always the same: never reuse your passwords. This may seem like an oversimplification, but it is the one foolproof way to ensure that a credential spill doesn’t leave you open to a future credential stuffing attack. Data breaches can still affect you in different ways depending on the details of the data that was exfiltrated, but credential stuffing is the trillion dollar threat, and you can sidestep it completely by ensuring every password is unique.

As a company, protecting your users against the repercussions of these breaches is becoming a greater priority. You can get a pretty good idea of whether or not you may already have a problem by monitoring your login success rate against daily traffic patterns. Most companies and websites have a fairly constant ratio of login successes to failures; if you see deviations that coincide with unusual traffic spikes, you are likely already under attack. Of course, Shape can help you identify this traffic with greater detail, but it’s important to get a handle on this problem regardless of the vendor – we all win if we disrupt criminal behavior that puts us all at risk. As part of our commitment to do this ourselves, Shape also released its first version of Blackfish, a collective defense system aimed at sharing alerts of credential stuffing attacks within Shape’s defense network for its customers. This enables companies to preemptively devalue a credential spill well before it has even been reported.
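
To make that heuristic concrete, here is a toy sketch of our own that flags windows where a traffic spike coincides with a drop in login success rate. The data shape and thresholds are entirely hypothetical; tune any real check against your own traffic:

// Toy heuristic: flag hourly windows where login attempts spike while the
// success rate falls well below baseline. Assumes attempts > 0 per window.
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.floor(s.length / 2)];
}

function flagSuspiciousWindows(hourly /* [{ attempts, successes }] */) {
  const ratios = hourly.map(h => h.successes / h.attempts);
  const baseline = ratios.reduce((a, b) => a + b, 0) / ratios.length;
  const typicalTraffic = median(hourly.map(h => h.attempts));
  return hourly.filter((h, i) =>
    // a traffic spike combined with a success-rate drop suggests stuffing
    h.attempts > 2 * typicalTraffic && ratios[i] < 0.5 * baseline
  );
}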

You can download Shape’s 2018 Credential Spill report here.

Please feel free to reach out to us on Twitter at @shapesecurity if you have any feedback or questions about the report.

Key Takeaways: Using a Blacklist of Stolen Passwords [Webinar]

More than 90 billion passwords are in use across the web today, and that number is expected to near 300 billion by 2020. With that in mind, the topics of password best practices and the threats around stolen credentials remain top challenges for many global organizations.

Security Boulevard recently hosted a webinar with Shape and cybersecurity expert Justin Richer, co-author of the new NIST (National Institute of Standards and Technology) Digital Identity Guidelines. The webinar looks at how password protection and password attack prevention have evolved.

Watch the full webinar here

Key Takeaways


Traditional P@$$wOrd Guidelines Don’t Solve the Problem

Justin Richer discusses how passwords were originally invented as a way to gain entry, but today they have evolved into a way to authenticate who you are. Companies rely on a username-password combination to give them confidence you are who you say you are. So once passwords are stolen, companies have less and less confidence that you are the person you claim to be.

To make it difficult for criminals to steal your identity, companies have implemented complex password requirements. Unfortunately, this conventional wisdom around password management, such as enforced rotation every six months and requiring at least six characters with upper- and lowercase letters, numbers, and symbols, has made passwords hard to remember.

Additionally, not all of these rules apply to non-English languages, where upper- and lowercase distinctions may not exist. The rules also don’t adapt well to mobile devices, where typing on touch screens is hard, or to the emerging technology of voice-driven personal assistants.

In the end, users reuse passwords that are easy to remember and pick bad passwords due to password fatigue. As a result, traditional password guidelines don’t help companies gain confidence—they actually compound the problem.

The Real Culprit – Password Reuse

In reality, the problem companies are fighting is password reuse. Once one account has been compromised, attackers have access to every account that uses the same username and password. Fraudsters may use these accounts themselves, but often they bundle up the stolen credentials and sell them on the dark web.

New NIST guidelines serve to help companies reduce password fatigue and reuse, while also providing suggestions for testing new passwords against a database of stolen credentials—a breach corpus. When the two are implemented together, fraudsters will have a much harder time taking advantage of stolen credentials through account takeover and automated fraud.

New Passwords and Using Blacklists

Revision 3 of the NIST password guidelines overview – Digital identity guidelines – has dramatically updated recommendations on how to use passwords properly:

https://pages.nist.gov/800-63-3/sp800-63b/appA_memorized.html

The main tenets are:

    • Don’t rely on passwords alone. Use multi-factor authentication to verify the user is who they claim to be.
    • Drop the complexity requirements; they make passwords hard to remember and aren’t as effective as once thought.
    • Allow all different types of characters.
    • Remove the upper limit on length. Length can be an important defense against theft.
    • Rotate when something seems suspect, not because of an arbitrary timeout like every six months.
    • Disallow common passwords.
    • Check new passwords against a blacklist of stolen passwords.

The most important step is to check new passwords against a blacklist. These lists cover a range of passwords, including those known to have already been compromised and those that have appeared in any major breach. Checking against a blacklist is new territory—a lot of organizations don’t even know where to start.
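
As a starting point, a blacklist check at password-creation time can be as simple as a hashed-set lookup. This is a minimal sketch with hypothetical names, not a prescribed design; a real deployment also needs normalization, rate limiting, and a vetted corpus:

const crypto = require('crypto');

// Reject a candidate password if its hash appears in a set of hashes of
// known-compromised passwords. blacklistHashes is a hypothetical Set<string>
// of hex-encoded SHA-256 digests built from your breach corpus.
function isBlacklisted(password, blacklistHashes) {
  const digest = crypto.createHash('sha256').update(password).digest('hex');
  return blacklistHashes.has(digest);
}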

Creating a Blacklist

An ideal blacklist would contain all stolen passwords—not just the ones discovered on the dark web. Unfortunately, creating such a list is difficult. Recently, companies have been relying on lists of stolen credentials from the dark web, but these are often too little, too late, since it’s not possible to know how long those stolen passwords have been in circulation. For example, Yahoo was breached in 2013 but didn’t realize it until 2016. Due to the economics of attackers, there is almost always a big lag between when data is breached and when it’s exploited.

Blackfish and the Breach Corpus

At Shape we created Blackfish to proactively invalidate user and employee credentials as soon as they are compromised in a data breach. It notifies organizations in near real time, even before the breach is reported or discovered. How does it do this?

Blackfish technology is built upon the Shape Security global customer network, which includes many of the largest companies in the industries most targeted by cybercriminals, including banking, retail, airlines, hotels and government agencies. By protecting the highest-profile target companies, the Blackfish network sees attacks using stolen credentials first and is able to invalidate the credentials early in the fraud kill chain. This provides a breakthrough in closing the zero-day vulnerability gap between the time a breach occurs and its discovery.

Using machine learning, as soon as a credential is identified as compromised on one site, Blackfish instantly and autonomously protects all other customers in its collective defense network. As a result, Blackfish is the most comprehensive blacklist in the industry today.

Don’t Rely on Dark Web Research

Dark web research provides too little information, too late. Today major online organizations can take a much more proactive approach to credential stuffing. By using Blackfish businesses can immediately defend themselves from attack while reducing the operational risk to the organization. Over time these stolen credentials become less valuable to attackers because they just don’t work, and in turn credential stuffing attacks and fraud are reduced.

Watch the full webinar here