Healthcare CAPTCHA: The Cure that’s Worse than the Disease

A healthcare insurer was forced to use a CAPTCHA, and 70 percent of its elderly patients could no longer refill their prescriptions online. It was a complete disaster.

“This is not who we are,” muttered the CIO of one of the largest health insurance companies in the world as he looked over the report. 

The digital team had been forced to put up a CAPTCHA on the site’s login page, and this had driven a full 70 percent of the company’s older patients off the website. Pharmacy orders were also down a shocking 70 percent, and the call center was swamped at 130 percent of its normal volume with site users unable to pass the difficult visual puzzles. It was a complete disaster.


The seeds of this catastrophe were planted quite innocently.

The global healthcare insurer had introduced an innovative Health Rewards program that was hailed as a bold gamification of wellness. The program rewarded patients with points for achieving preventive medical milestones, such as scheduling wellness checkups, screening for bone density, and getting flu shots. Patients even got points for volunteering or participating in nutrition classes, activities that were good for their social and mental health and community bonding. It was beautiful; this is how markets are supposed to work—personal rewards for conscientious behavior.

The reward points themselves had no cash value, but could be redeemed in the insurer’s online mall for gift cards from retailers like Amazon and Walmart—and those gift cards definitely did have cash value. These rewards proved a juicy target for gift-card crackers.

Credential Stuffing and Gift Card Cracking

Almost immediately, automation attackers began credential-stuffing the login page of the insurance company’s rewards program. Credential stuffing is the act of testing millions of previously breached username and password combinations against a website with the knowledge that some of the credentials will work there. Success rates for an individual credential-stuffing login are low; they vary between 0.1 percent and 2 percent depending on the client population. 


Attackers counter the low probability of any individual login succeeding by scaling their attempts into the millions via automation—scripted programs called “bots.” Modern bots look very much like human users to a target computer—telling them apart is one of the most difficult problems in modern computer science. A 1 percent success rate in a credential-stuffing attack is a reasonable statistical estimate; one million leaked credentials will yield 10,000 successful logins against a third party, leading to account takeovers by the attacker. Today there are over 5 billion leaked credentials on the market.
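To make those numbers concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the illustrative success rates quoted above, not figures from any particular attack:

    # Back-of-the-envelope math for a credential-stuffing campaign.
    # Rates and credential counts are the illustrative figures from the text.

    def expected_takeovers(leaked_credentials, success_rate):
        """Expected number of accounts an attacker confirms at a given success rate."""
        return int(leaked_credentials * success_rate)

    leaked = 1_000_000
    for rate in (0.001, 0.01, 0.02):   # 0.1%, 1%, and 2% success rates
        print(f"{rate:.1%} of {leaked:,} credentials -> "
              f"~{expected_takeovers(leaked, rate):,} account takeovers")

Even at the low end of the range, a single campaign against one site confirms on the order of a thousand working accounts.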

The attackers breached thousands of accounts at the healthcare insurer’s rewards program. They consolidated reward points and converted them into gift cards, from which they exfiltrated the real cash value. The insurer’s CIO and IT security team were actually not that worried about the losses incurred through gift-card fraud.

“We were much more anxious about the PII exposure than the fraud.”

Global Health Insurer CIO

The attackers appeared to be ignoring the Personally Identifiable Information (PII) associated with the cracked accounts in favor of getting the rewards points, but the exposure was alarming.

The security team turned to their Content Delivery Network (CDN) vendor for help. The CDN’s “bot management” solution put a CAPTCHA into the user login process in an attempt to stop the automation. 

And that’s when the wheels came off.

Human success rates for CAPTCHAs are already distressingly low—as low as 15 percent completion rates for some populations. Because computers have gotten so good at solving CAPTCHAs, the tests have gotten more and more difficult.

For elderly users, who are far more likely to have visual impairments, CAPTCHA success rates are even lower. In fact, one would be hard-pressed to devise a worse user experience than CAPTCHA for an aging population.

Immediately after the CDN put their CAPTCHA in place, login success rates plummeted. Seven out of ten elderly users could no longer log in to their accounts, access the rewards program, or renew their prescriptions online. 

Online pharmacy orders plunged by 70 percent.

Frustrated patients had to phone the health insurer’s call center to renew prescriptions.

Meanwhile the attackers easily bypassed the “bot management” solution through one of the many underground services that offer 1,000 solved CAPTCHAs for $1. Now they were the only ones earning rewards.

“This is not what we do.”

Global Insurer CIO

The CAPTCHA was far more damaging than the fraud it was supposed to stop. The cure was worse than the disease.

Can you make an introduction?

The CIO reached out to a C-level colleague of his at a top-3 North American bank. He explained the situation and said, “Hey, you guys are a bank, and you don’t use CAPTCHAs. How do you get away with that?”

His peer said, “We use Shape Security,” and he made an introduction.

Shape worked with the healthcare insurer’s CIO and his team to get our technology deployed. We went into monitoring mode first, to study attack traffic patterns. Because Shape came in behind their CDN solution, the monitoring period became an informal bake-off between the CDN’s bot management service and Shape’s.

Understanding Users and Risk

Web and Mobile Visitor Behaviors                                                       | Risk
Legitimate users with good behavior: strong passwords, no password re-use              | Low
Legitimate users with bad behavior: weak passwords, password re-use, prey to phishing  | High
Illegitimate users with ill intent: account takeover, phishing, IP theft               | High

Even behind the CDN’s CAPTCHA, Shape was detecting large amounts of credential stuffing and gift-card cracking—sometimes up to two million attempts per day. While the attackers had been smart enough to “hide” their traffic spikes within the diurnal patterns associated with human logins, they were not otherwise trying to disguise their traffic.  Sometimes they connected through proxies, sometimes through a partner healthcare insurer, and even once through a financial aggregator.

Shape fought the attackers as they retooled, attempting to get around the Shape defenses. Within weeks, most of the attackers gave up, resulting in a 90% decrease in overall traffic.

The CIO was sufficiently impressed by Shape that he displaced the CDN’s bot management entirely at the healthcare insurer’s web property, and the CAPTCHAs were removed from two dozen entry points.

Shape then began working with the team to monitor the mobile property, because that is where attackers retarget after we block them on the web. After another month of monitoring the mobile traffic, Shape was able to show that the healthcare insurer’s mobile property could be further improved to remember legitimate users, and we cut their legitimate “forgot password” transactions in half. Shape also provided the insurer with a customized list of recommendations for information-access and password-protection policies.

Steady State Unlocked

Today the healthcare insurer’s website has zero CAPTCHAs in front of its pharmacy, account profiles, and rewards program. The Shape mobile SDK is integrated with nearly all of the mobile platforms that the insurer supports.

Attackers and aggregators continue to probe the insurer’s web and mobile properties. Shape sees them, and foils the attackers. The health insurer is notified of the aggregators, who are encouraged to use authorized API gateways.

Attackers continue to probe (unsuccessfully) today

The online pharmacy is accessible to all customers again. Call volumes have dropped to levels not seen since before the CAPTCHA crisis.

And, perhaps most importantly, the healthcare insurer is again free to focus on innovating new programs and rewarding customers for taking preventive steps for their medical and social wellness.

10 Questions to Ask a Bot-Mitigation Vendor

You figured out that you have a bot problem. Maybe you have a high account takeover (ATO) rate, or someone’s cracking all your gift cards, or scraping your site. You tried to handle it yourself with IP blacklists, geo-fencing, and dreaded CAPTCHAs, but it became an endless battle as the attacker retooled and retooled and you’re sick of it.

So now you’ve decided that you’re going to call in professionals to stop the problem, and get some of your time back. You’ve narrowed it down to two or three vendors, and you’re going to get them in and ask them some questions. But what questions? Here are some good ones that can give you an idea of whether a vendor’s solution is a fit for your environment.

1. How does the vendor handle attacker retooling?

This is your most important question. When a countermeasure is put in place, persistent attackers will retool to get around it. Victims of credential stuffing say that fighting bot automation by themselves is like playing whack-a-mole. You are paying a service to play this game for you, so ask how they handle it, because attackers always retool.

Attackers always retool. How does the vendor respond?

2. Does the vendor dramatically increase user friction?

CAPTCHAs and 2FA dramatically increase user friction. Human failure rates for the former range from 15% to 50% (depending on the CAPTCHA), and lead to high cart-abandonment and decreased user satisfaction. Honestly, think carefully about vendors who rely on these countermeasures. Your goal should be to keep CAPTCHA off your site, not pay someone to annoy your users.

3. How does the service deal with false positives and false negatives?

A false positive for an anti-automation vendor is when they mark a real human as a bot. A false negative is when they mark a bot as human and let it through (this is by far the most common case, but sometimes the less important one). Bot mitigation will have some of both; be suspicious of any vendor who claims otherwise. But a vendor should be very responsive to the problem of false positives; that is, you should be able to contact them, complain, and have the false positive determination addressed.

Ideally, games of whack-a-mole should be 100% outsourced

4. When an attacker bypasses detection, how does the service adapt?

There will be advanced attackers who manage to bypass detection, becoming a false negative. When it happens, you may not know about it until you see the side effect (fraud, account takeovers, etc.). Then you’ll need to contact your vendor and work with them on how to remediate. How do they handle this process?

5. How does the vendor handle manual fraud (actual human farms)?

If your vendor is particularly adept at keeping out automation (bots), a very, very determined attacker will hire a manual fraud team to input credentials by hand in real browsers. Many services do not detect this (since, technically, human farms are not bots). Can the service detect malicious intent from even real humans? Shape can.

How does the service handle manual fraud farms?

6. If one customer gets bypassed, how does the vendor protect that bypass from affecting all other customers?

Ideally, the vendor should have custom detection and mitigation policies for every customer. That way, if an attacker retools enough to get around the countermeasures at one site, they can’t automatically use that config to get into your site. Each customer should be insulated from a retool against a different customer.

7. If an attacker bypasses countermeasures, does the service still have visibility on attacks?

It is very common for a service to go blind after an attacker bypasses its defenses. If the vendor mitigates on the same data they use to detect, then when an attacker bypasses mitigation, you lose the ability to detect. For example, if they block on the IP address, then when the attacker bypasses the block by distributing globally, the vendor loses visibility and can no longer tell how badly you are getting hammered.

An example of a system that is working correctly is when 10,000 logins come through and they all look okay initially because they have behavioral analytics within the proper range for humans. But later it is determined that all 10,000 had identical behaviors, which means the logins were automated. A good vendor will be able to detect this for you, even after the fact.
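As a toy illustration of that kind of after-the-fact detection, the sketch below flags logins that share byte-identical behavioral profiles. The feature names and the threshold are hypothetical, chosen for illustration only; they are not any vendor’s real model:

    # Logins that look plausible in isolation but share identical behavioral
    # profiles are almost certainly scripted. Feature names are hypothetical.

    from collections import Counter

    logins = [
        # (username, (keystroke_interval_ms, mouse_path_len_px, time_on_page_s))
        ("alice",    (112, 3400, 18.2)),
        ("bob",      (98,  2100, 22.7)),
        ("carol",    (105, 2980, 16.4)),
        ("mallory1", (120, 1500, 9.0)),
        ("mallory2", (120, 1500, 9.0)),
        ("mallory3", (120, 1500, 9.0)),   # identical profile repeated = automation
    ]

    profile_counts = Counter(profile for _, profile in logins)
    suspicious = {p for p, n in profile_counts.items() if n >= 3}

    for user, profile in logins:
        if profile in suspicious:
            print(f"flag for review: {user} shares behavior profile {profile}")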

8. Is there a client-side or browser agent?

If yes, how large is the integration and how expensive is the execution? Does the user or administrator have to install custom endpoint software, or is it automatic? If there is no endpoint presence, how does the vendor detect rooted mobile devices, and how does it detect attacks that use the latest web browsers on residential IPs?

For example, one of our competitors takes pride in having no endpoint presence – not even a browser agent. A common customer of ours used both their solution and ours simultaneously and found that the competitor missed 95% more automation than Shape did (ask for details and we can provide them).

9. Does the vendor rely on IP-Blacklisting or IP-Reputation?

Our own research shows that automation attackers re-use an IP address an average of only 2.2 times. Often they are only used once per day or per week! This makes IP-Blacklisting useless. There are over a hundred client signals besides the IP address; a good service will make better use of dozens of those rather than relying on crude IP blacklisting.
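As a rough sketch of what “using other client signals” can look like, the Python below fingerprints a request from everything except the IP address, so the same bot is still recognizable after it rotates addresses. The signal names are invented for illustration and are not Shape’s actual signal set:

    # Fingerprint a request from many client-side signals instead of the IP
    # alone. Signal names are illustrative; a real service uses far more and
    # weights them with a trained model rather than a straight hash.

    import hashlib

    def request_fingerprint(signals):
        """Hash a stable subset of client signals (everything except the IP)."""
        stable = {k: v for k, v in signals.items() if k != "ip"}
        canonical = "|".join(f"{k}={stable[k]}" for k in sorted(stable))
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    req_a = {"ip": "203.0.113.7", "ua": "Mozilla/5.0 ...", "tls_ja3": "771,4865-4866",
             "accept_lang": "en-US", "canvas_hash": "9f2c", "timezone": "UTC-5"}
    req_b = {**req_a, "ip": "198.51.100.42"}   # same bot, rotated IP

    print(request_fingerprint(req_a) == request_fingerprint(req_b))  # True: same client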

10. How quickly can the vendor make a change?

When the attacker retools to get around current countermeasures, how quickly will the vendor retool? Is it hours, or is it days? Does the vendor charge extra if there is a sophisticated persistent attacker?


There are other questions that are table stakes for any SaaS vendor: things like deployment model (is there a cloud option?) and cost model (do they charge for clean traffic, or by the hour?). And, of course, you should compare the service level agreement (SLA) of each vendor. But you were probably going to ask those questions anyway (right?).

Yes, this article is slightly biased, as Shape Security is the premier automation-mitigation service. But consider the hundreds of customers we’ve talked to who chose us; these are the questions they asked, and we hope that they help you, even if you end up choosing a different bot-mitigation vendor.

On The Launch of Shape Connect

The war against “fake” begins today, with the launch of Shape Connect.

Shape spent the last eight years building a machine-learning engine that has a single focus: to distinguish humans from robots on the Internet. The engine is constantly learning as it processes over a billion transactions every day from 25 percent of the consumer brands in the Fortune 500. It’s actually a billion-and-a-half on payday and National Donut Day (June 7, thank you, Dunkin’ Donuts).

We’ve made this incredible engine available to everyone and we call it Shape Connect. Connect is self-serve, takes minutes to set up, and is free for two fortnights (yes, GenZ, that’s the correct spelling).

Why is Connect so revolutionary? Distinguishing automation (bots) from humans is the most difficult, and most pressing, challenge on the Internet. Stopping fake traffic should be job #1 for any website that has value—yet Facebook, Twitter, and Google all struggle with fake traffic. Shape Security can stop it, and we’re practically giving the service away.

Solving Modern Problems

Okay, okay, so we built a computer that can identify other computers. How does this help you? Many businesses are being defrauded by bots and don’t even know it. They might know they have a problem of some kind but not understand that automation is the real threat vector.

Credential Stuffing Causes HUGE Business Losses  

Credential Stuffing: Shape didn’t invent it, but we DID name it. It’s where malicious actors  acquire login credentials belonging to blithely unaware Internet users, employ bots to pour billions of username/password combinations into millions of websites, then drain users’ accounts of money, credit-card numbers, email addresses, and other valuable stuff.

Website breaches resulting in gargantuan credential spills are common occurrences these days despite mighty efforts to boost privacy and security measures. A sophisticated criminal industry has sprung up that uses automation to access online accounts across the board, including social media, retail, banking, travel, and healthcare.

What credential stuffing looks like before Shape Connect stops it

Believe it or not, credential stuffing-related activity can make up more than half of a website’s traffic. It’s estimated that this kind of nefarious pursuit results in business losses of over $5 billion annually in North America alone.

Gift Card Cracking

Another super-annoying problem is the cracking of online gift-card programs. Most gift-card programs allow recipients to check the card balance online. Attackers create bot armies to check the balance of every possible gift-card number! When they find a gift-card number that has a positive balance, they use it to purchase re-sellable goods before the recipient can use the card. Isn’t that horrible? It costs retailers millions of dollars per year.
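One hedged sketch of how that kind of enumeration stands out: a real recipient checks a card or two and usually finds a balance, while a cracking bot checks huge numbers of candidate cards and almost all of them come back empty. The thresholds below are illustrative, not a production rule:

    # Toy detector for gift-card balance enumeration: flag clients that make
    # many balance checks with a very low hit rate. Thresholds are illustrative.

    from collections import defaultdict

    balance_checks = [
        # (client_id, card_number, found_balance)
        ("visitor-1", "6039-0001", True),
        ("visitor-1", "6039-0002", True),
        ("bot-77",    "6039-1000", False),
        ("bot-77",    "6039-1001", False),
        ("bot-77",    "6039-1002", False),
        ("bot-77",    "6039-1003", True),    # a "hit" the attacker will drain
    ]

    stats = defaultdict(lambda: {"attempts": 0, "hits": 0})
    for client, _card, hit in balance_checks:
        stats[client]["attempts"] += 1
        stats[client]["hits"] += hit

    for client, s in stats.items():
        hit_rate = s["hits"] / s["attempts"]
        if s["attempts"] >= 4 and hit_rate < 0.5:
            print(f"suspicious enumeration from {client}: "
                  f"{s['attempts']} checks, {hit_rate:.0%} valid")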

Business Logic Mischief

But it gets worse. Almost any site that has significant intellectual property in its business logic is either being attacked or is at risk. Consider the stalwart health-insurance company. Insurance websites allow you to get premium estimates based on your profile. Their rates are based on diligent research and proprietary actuarial tables accumulated over decades of experience. One of our customers found that a competitor was creating millions of fake profiles, each with a slight tweak to its age, income, and pre-existing condition to map out the insurer’s quote-rate tables. What took decades to create was being stolen by a competitor using bots. That’s not fair, is it?

Are You Dating a Robot?

One of the curious facts that emerged from the aftermath of the Ashley Madison breach in 2015 was that a significant number of the female profiles on the affair dating site were fake. They’d been created by bots as vehicles through which swindlers around the world could establish online relationships with men and then defraud them through money transfers. While Ashley Madison is no longer with us, there are other, less controversial dating sites that still have the same problem. Shape helped one of them deal with fake-account creation, leading to a much lower probability of robot dating. (Sorry, robots, true love is for humans.)

Hotels and Airlines: Point Theft

Hotels and airlines have their own currencies in the form of loyalty program “points” or “miles.” These have long been a target for fraudsters who can take over thousands of accounts, merge all their points, and convert them into re-sellable goods. In many cases, attackers prefer going after points. Your average consumer will notice immediately if their bank account is drained, but may not quickly (or ever) notice that their points are gone. They might just assume the points had expired. Room rates and flight fares are another form of intellectual property, and aggregators scrape the sites constantly, pulling rate information for competitors, leading to overly low “look-to-book” rates.

Fight The War Against Fake

Those are just a few examples of automation as a threat vector for business. We could tell you about a million cases of sophisticated bots threatening every different type of business, but we hope you get the picture already.

So let’s get back to Shape Connect, what it is, and how it works.

How Shape Connect Works

Our fully cloud-based service stands staunchly between your site and the Internet, deflecting bots and protecting you from credential stuffing, DDoS, account takeover, gift-card cracking, and all other malicious activity done at scale.

We’ve put together a couple of videos showing how Shape Connect works to protect your site. For those of you blessed with short attention spans, we have a 90-second, visually stimulating cartoony video (above).

If that piques your interest and you want the whole story, here’s a six-minute video that goes deeper into the workings of Shape Connect.

And if you’re a reader, we’ll break it down for you right here.

Without Shape Connect, there’s nothing between your website and the user’s browser. But what if it’s not a browser or a real user? Both real users and bots follow the same steps to get to your site.

  1. The client (user or bot) queries DNS.
  2. DNS returns the IP address of your website (or load balancer or cluster, or whatever).
  3. The browser or bot sends a request directly to your website.
  4. Your website returns the response.

With Shape Connect, there’s a layer of protection between your site and the user or bot.

  1. DNS returns a dedicated Shape Connect IP to the user or bot.
  2. All client requests are routed through Shape’s Secure CDN for fastest response.
  3. Shape Connect absorbs any DDoS attacks that the client might have sent.
  4. Shape Connect’s artificial intelligence determines if the request came from a real human using a real browser or from an automated bot. It passes only human requests through to your website.
  5. Your website responds only to legitimate requests, sending the data back through Shape Connect and to the human at the other side.
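Here is a rough sketch in Python of the idea in steps 4 and 5: classify first, forward only what looks human. The Request shape and the looks_human heuristic are stand-ins for illustration; the real decision comes from a machine-learning service, not a one-line rule:

    # Conceptual sketch of the flow above: classify each request, forward only
    # the ones judged human. The heuristic below is a placeholder.

    from dataclasses import dataclass

    @dataclass
    class Request:
        path: str
        client_signals: dict

    def looks_human(req):
        # Placeholder heuristic for illustration only.
        return bool(req.client_signals.get("executed_js")) and not req.client_signals.get("headless")

    def handle(req):
        if not looks_human(req):
            return "403 blocked (automated)"
        return f"200 forwarded to origin: {req.path}"

    print(handle(Request("/login", {"executed_js": True,  "headless": False})))
    print(handle(Request("/login", {"executed_js": False, "headless": True})))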

Of course, if you have “trusted bots” that you want to allow, you can manage your own whitelists.  

With the Shape Connect Dashboard, you can see all the requests that have come through, and marvel at all the automated malicious requests that Shape blocked!

Your Honor, I Object!

The rest of the industry is catching on to the bot problem, and some are pushing approaches that differ from Shape Connect.

What about WAF?

One of those alternative solutions is so-called “bot management” integrated into a Web Application Firewall (WAF). We’re seeing many WAF vendors trying this, but failing. Here’s a long treatise that explains why we think WAF is a suboptimal approach.

What about PCI?

With Shape Connect, you can drive away all unwanted automation and still be PCI compliant. We’ve got more details for you in this informative and colorful brochure.

Connect with Shape Connect

To celebrate the official launch of Shape Connect, we were going to throw ourselves a gigantic poolside party, with mumble rappers from LA and rivers of Henny.  But we decided, instead, that it would be more fun to watch all the new customers come in and bask in the delight they experience as they get connected.

Shape Connect is live right now, and if you’re comfortable and confident, you can sign up for a free trial. But we’re also here if you want to chat first about how Shape Connect can secure your business, reduce your latency, keep your servers afloat, and improve your customer experience journey. Talk with you soon!

Do You Need a WAF, or Something Better than a WAF?


“The king is dead! Long live the king!” The jarring conflict embodied in this timeless hoorah is about to apply to the application security space. Subjects are giving up on the old king—the web application firewall (WAF) technology—as their primary appsec tool, for several reasons. First, because WAFs are too complicated. Second, because attackers have changed their attack vector to target credentials at scale (credential stuffing) before hacking. Third, and most important, because the market has evolved to offer an approach superior to WAF in efficacy, value, and worker hours invested.

While we at Shape Security have been predicting the shift away from WAF for years, others have been taking note. The PCI DSS specification had previously mandated a WAF, and that drove WAF sales for a decade. However, the language of PCI DSS requirement 6.6 has changed, and other solutions can now be used to fulfill it.

The new approach is a distributed, cloud-based, machine-learning Turing service backed by anti-automation specialist operators. Let’s call it “anti-automation” for short until a clever analyst comes up with a better name.

2018 had 1500+ critical CVEs

WAFs are Too Complicated

Consider the statement that WAFs are too complicated. In our experience working with customers over the last decade, we’ve rarely, if ever, seen complete WAF protection cover even a tenth of critical applications. Frequently, the WAF has just a single dedicated (and expensive) administrator, and the ruleset for the WAF must be updated under the following conditions:

  1. When attackers evolve an attack to get around existing signatures.
  2. When content has been added (which is constantly in today’s agile web paradigm).
  3. When a web vulnerability is detected in the application or any supporting infrastructure (2018 had over 1500 critical CVEs — six for every working day).

These factors, which are all external to the WAF, quickly overwhelm the administrator, who ends up protecting only a handful of applications (or a single application), and usually not well.

Credential Stuffing and Retooling are the New Threat Vectors

Even if WAFs had done their job properly, it wouldn’t really matter because attackers have radically changed their approach. Gone are the days of attackers manually hacking websites. Today, they focus first on taking over the accounts of legitimate users. From there, they perpetrate their blight or escalate their privilege.

Today it’s all about credential stuffing. Attackers test millions of breached credentials using automated tools like Sentry MBA, PhantomJS, or automated headless browsers to gain their initial beachhead. Between 0.2% and 3% of credential-stuffing attempts are successful—a piteously low rate, which is why attackers try millions of credentials at a time. Even a 0.5% success rate using one million breached credentials will yield 5,000 accounts.

WAF technology was designed to stop SQL injections, not credential stuffing. An on-premises WAF managed by a single or part-time resource has no hope of defeating sophisticated credential-stuffing campaigns.

When a defender concocts a rule to stop a credential-stuffing campaign, the attacker pauses, retools to get around it, and then resumes the campaign. We at Shape see this all day, every day, with up to ten different levels of retooling. No single resource can keep up with that degree of sophistication, and the world is coming around to admit the problem.

The New Paradigm for Application Protection

If we all admit that the WAF is too long in the tooth, and that attackers have changed their approach anyway, the obvious question is: What is the right approach?

There are only a handful of highly skilled specialists with the right combination of technologies to consistently defeat and deter attacker automation. The key technologies of the best approach are:

  1. Artificial Intelligence (AI). Each attacker is launching millions of login tests, from millions of different IP addresses around the world. Only an AI-assisted SOC can see through the tidal wave and pick out real users.
  2. Expert-Assisted Mitigation. As useful as AI is, no vendor has machine-learning models that can detect and block all automation without also blocking real users (false positives). AI must be used to detect and flag campaigns to real human operators who make the final determination and remediation.
  3. Collective Defense. Most attackers launch credential-stuffing campaigns against multiple defenders in a serial fashion. The right approach must include defending a plurality of targets in each vertical market, so attacks seen against one company can be used to inoculate all the other companies before the attack can get to them.

Shape Security pioneered all these technologies for the Fortune 500 and Global 2000, and we’re now bringing them to everyone else to take the burden off WAF admins.

Looking Beyond the WAF

The OWASP Top Ten is the Open Web Application Security Project’s top-ten application security risk list. The legacy WAF technology was the only tool specifically designed to speak to the OWASP Top Ten, but at the end of the day, it was poorly suited to solve the list’s issues. Table 1 shows a breakdown of how well a WAF and an anti-automation service like Shape each perform against every entry of the Top Ten.

Rank | OWASP Risk                 | WAF Ability | Anti-Automation
1    | Injection                  | ***         | ***
2    | Broken Authentication      | *           | ***
3    | Sensitive Data Exposure    | *           | **
4    | XML External Entities      | N/A         | N/A
5    | Broken Access Control      | *           | **
6    | Security Misconfiguration  | **          | **
7    | Cross-site Scripting       | **          | *
8    | Insecure Deserialization   | *           | N/A
9    | Known Vulnerabilities      | ***         | **
10   | Logging and Monitoring     | *           | ***

Let’s dive a little deeper into some of the Top Ten.

#1: Injection, #3: Sensitive Data Exposure

One could argue that the number-one job of a WAF is to prevent SQL injection. Modern organizations have learned to use identity as perimeter to keep unauthenticated users from causing any kind of SQL query, and that in itself is a commendable first line of defense. To get around the perimeter, attackers must gain control of an account. To do that, they use credential stuffing or brute force, both techniques that are much better blocked by an anti-automation service than a WAF.

#2: Broken Authentication, #5: Broken Access Control

Authentication systems are difficult to perfect. When they fail, they increase risk disproportionately to other systems, which is why OWASP keeps them high on their list. With sufficient tweaking, a properly configured WAF can assist a broken authentication or access-control system. But wouldn’t the knowledge to create the necessary defensive WAF configs be better utilized fixing the original misconfigurations? The anti-automation service simply detects that the systems probing for these vulnerabilities are not human, and blocks them—which is a much simpler and broader approach than trying to make sure every knob is at the right level.

#10: Logging and Monitoring

Insufficient logging and monitoring of the application weakens incident response. WAFs can help by flagging attacks before other systems do, but an anti-automation service comes with its own highly trained, specialized SOC. There is no contest here.

Conclusion

The final defense for WAF apologists used to lie in the PCI DSS WAF requirement, but even that has been relaxed to allow for more flexible solutions, and that’s a good thing. Shape Security has additional documentation on how cloud-based services can meet the requirement here.

Given all these factors—the relaxation of the PCI DSS WAF requirement, the decreasing emphasis on WAF (and its Magic Quadrant), the evolution of credential stuffing, and the strategy of identity as perimeter—the market has been casting about for a new solution. Shape’s distributed anti-automation service, fronted by machine learning and backed by specialist operators, is rising to meet the challenge.

5 Rando Stats from Watching eCrime All Day Every Day

David Holmes here, cub reporter for Shape Security. While I’m luxuriating in United Airlines’ steerage class, our crack SOC team is back at HQ slaving away over their dashboards as tidal waves of automated traffic crash against the Shape breakers. At least they have Nespresso and those convenient eggs-in-a-bag from the kitch. The day shift of SOC team #1 actually sits pretty close to the corporate marketing brigade, so we kind of know each other and exchange awkward greetings in the hallway.

Breakfast of SOC Champions

ANYWAY, I thought it would be cool to share some statistics from SOC’s recent cases that highlight the shape of the anti-automation industry today.

1. 750 Million in a Week for One Site

Since the release of the Collection #1 credential corpus, some of our customers are experiencing insane levels of login events. One customer saw over 1.5 billion automation attempts in a two-week period. That’s pretty high even for them, one of the largest banks in the solar system. If, for some tragic reason, the Collection #1 campaign persists at its current level, you could extrapolate 39 billion automation attempts in a year (assuming no cracker vacation). Against a single site. That’s sick, brah. Sick.

2. IP Address Re-use: 2.2

This stat is actually sadder than last week’s Grammys. During a credential-stuffing campaign, the attacker throws millions of credentials (gathered from breaches or the “dark web”) at the target’s login page. If he tried them all from a single IP address, then, of course, you’d just block that IP address, right? So he uses multiple IP addresses. In extreme cases, the most sophisticated cracker will try only a single login from each IP address (no re-use). Lately, the average number of times an IP address gets reused during a campaign is a paltry 2.2.

Basically, blocking by IP address is useless. By the time you add an IP address to your blacklist, it’s too late—it’s not going to be reused again during the campaign. If you see a vendor touting address-blocking, or CAPTCHAs, as a solution, please put your hands on your hips, throw back your head, and issue forth the biggest belly laugh you can. Bwahahaha!
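If you want to sanity-check that number against your own environment, a quick pass over your authentication logs will do it. This sketch assumes you have already parsed each login attempt into a timestamp and a source IP:

    # How many login attempts does each source IP account for during a
    # suspected campaign? If the average is near 1-2, an IP blacklist will
    # always be a step behind. The entries below are placeholder data.

    from collections import Counter

    attempts = [
        ("2019-01-15T02:01", "198.51.100.10"),
        ("2019-01-15T02:01", "203.0.113.55"),
        ("2019-01-15T02:02", "198.51.100.10"),
        ("2019-01-15T02:03", "192.0.2.77"),
        ("2019-01-15T02:04", "203.0.113.90"),
    ]

    per_ip = Counter(ip for _, ip in attempts)
    avg_reuse = sum(per_ip.values()) / len(per_ip)
    print(f"{len(attempts)} attempts from {len(per_ip)} IPs; "
          f"average re-use {avg_reuse:.1f} per IP")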

Sadly, some of the technical people we talk to just don’t get it. We tell them: “Blacklists are useless,” and they say “Sure, but you block by IP address, right?” Then we explain it again, and they still don’t get it. Someone should write a paper! Oh, wait, that’s us.

3. Credential Stuffing Succeeds 2% of the Time

2% is funny. It’s our favorite milk. It’s the conversion from US dollars to Philippine pesos. It’s our reader-retention rate when we let Holmes write. Two percent may not sound like much, but consider an attacker testing a million stolen credentials against your web property. That’s 20,000 valid usernames and passwords he’s going to confirm. Actually, the success rate varies between 0.1 and three percent, but two percent is good enough for government work. And speaking of government…

You might be thinking: Actually, guys, 0.1 to 3.0 is a huge range. That’s a multiple of 30. An order of magnitude and then some.  True enough, but when dealing with a million—or even a billion—credentials, the difference is really just “bad outcome” versus “really bad outcome.”

Yesterday Shape looked at a small campaign where a single, lonely attacker in Vietnam had 1,500,000 credentials. Even a 0.1-percent success rate, for him, would have translated to the confirmation, and possible account takeover (ATO), of 1,500 accounts. We say “would have” because we foiled all of his posts. He didn’t even seem to notice, which makes us think maybe he’s TOO automated, or that he suffers from some kind of “educational gap” (that’s the new euphemism for stupidity).

4. 15 Months to an Ugly Baby

The number of months between when some dood stole all your credentials and when you read about it in The Register while eating your precious Honey Smacks is: 15. A lot can happen in 15 months; French words, mostly. Organization penetration, exfiltration, hacker celebration, hacker inebriation, and stock depreciation. Of course 15 months is just an average, and individual cases vary widely, but the point is that it’s an eternity in Internet time.

“Well, dang!” you sputter around your Honey Smacks. “What’s being done about this???”

We’ve got a solution we call Blackfish. We’re already seeing all the waves of credential stuffing against the busiest commercial sites in the world. So we can tell when someone stuffs, say, the creds from your entire customer login database against HoneySmacks.com. Now you don’t have to wait 15 months; if you had Blackfish, you’d know the minute someone tried your logins. How cool is that? If you’re interested, a single chat with our trusty sales chatbot can get the ball rolling for you.

And if you want to read a much more coherent explanation of the 15-month effect, print out our award-winning Credential Spill report, and read it over your Honey Smacks tomorrow.

Disclaimer: Shape Security in no way endorses Honey Smacks; in fact, they have been voted the #2 worst breakfast you can possibly eat. But dang, they are yums.

5. 99.5% of POSTs are against “forgot-password.js”

Our SOC team dealt with an ATO campaign last month. We remember it well because against that website, we detected that 99.5 percent of requests headed for their “forgot-password” page were automated. Yes, that’s 199/200 for the fractionally-minded (aren’t numbers fun)!

Sure, that’s a single campaign, but in our experience, it’s not an uncommon one. Check your own weblogs and see how the access requests to your forgot-password page compare to, well, anything else (and then call us).
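If you want a rough version of that comparison, something like the following sketch will do. It assumes a combined-format access log named access.log and a forgot-password path of /forgot-password, so adjust both for your own site:

    # What share of POSTs in an access log hit the forgot-password endpoint?
    # Log file name and path are assumptions; change them for your site.

    forgot, total_posts = 0, 0
    with open("access.log") as log:
        for line in log:
            if '"POST ' not in line:
                continue
            total_posts += 1
            if "/forgot-password" in line:
                forgot += 1

    if total_posts:
        print(f"{forgot}/{total_posts} POSTs "
              f"({forgot / total_posts:.1%}) hit forgot-password")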

We have many customers for whom forgot-password is their most-frequented page by far. By far! And if our customers weren’t the paragons of morality that they are, they’d put ads on that page and fund themselves a couple of truckloads of egg-in-a-bags. Or is it eggs-in-a-bags? The Oxford dictionary is strangely silent on this topic.


Well, there you have it: five random statistics about fighting automation that we slapped together from the last month. Stay tuned, friends, and we at Shape Security’s marketing brigade will bring you more pseudo-cogent security-related statistics, probably from RSA 2019, in a couple of weeks.

Lessons Learned from 2018 Holiday Attacks: No Rest for the Wicked

Scrooge would approve—attackers work on Christmas Eve, and now on New Year’s Eve, too

We at Shape Security defend the world’s top banking, retail, and travel websites. And while you might be just getting back to work this first full week of January, our attack forensics teams are finally getting a break, because this holiday season was a busy one. Now that the dust has settled, we’ve analyzed our data to determine how 2018’s online holiday-season shenanigans differ from 2017’s.

During this festive holiday season, attackers worked through Christmas Eve and Christmas Day. But in a striking change from the previous year, the most sophisticated attackers no longer took New Year’s Eve (NYE) off. In fact, this year we saw several intense campaigns that started or peaked on NYE.

The Best Time to Rob a Bank is Christmas Day

No matter what institution they use, most online banking customers have one thing in common: they stop checking their online balances during the December holidays. Turning a blind eye to one’s finances is optimistic human nature; our customers report that legitimate online banking activity often drops as much as 30 to 40 percent during this period.

Financial institutions may not observe the full extent of this change, however, because the drop in legitimate banking activity is overshadowed by an increase in malicious activity. According to our data, in both 2017 and 2018, malicious actors took advantage of the holiday, launching new attacks on or right around Christmas.


Figure 1: A malicious actor waited to launch their attack until Christmas Day itself.

Shape’s Christmas present to a Top 5 US bank, the target in the graph above, was the fact that we didn’t take Christmas Day off, either.

New Year’s Eve is Cancelled (for Professional Criminals)

With some notable exceptions, nearly all attackers took New Year’s Eve off. On that night, attacks aimed at Shape’s customers dropped over 65% overall – and in one case, over 99%. We observed this trend across all industries, including retail, travel, financial services, and tech. Perhaps tired from their exertions over Christmas, nearly all attackers put their keyboards away and joined the poor furloughed federal workers on a break for the New Year’s holiday.

“The holiday season now separates the hobbyists from the dedicated professional cybercriminals.”


Figure 2: Reductions in both legitimate consumer traffic and automated attack traffic.

But the sophisticated attackers, the ones who do this for a living, actually used the global holiday for surgical strikes, particularly against banks.

The attack graph below illustrates the trend. The tiny, tiny red bars on the left (they look like a dotted line) show the normal level of traffic on a financial institution’s website.

Figure 3: Attacker launches failed campaign, retools on NYE, gives up on Jan 1

On December 29, malicious actors launched a large attack against the site. Even though they spoofed dozens of signals at all levels – network, client, and behavioral – they still couldn’t penetrate Shape’s defenses. On New Year’s Eve they retooled, doubling the number of signals they were spoofing, but that, too, failed, and they gave up toward the end of the day.

Why Launch Attacks During the Holidays?

Sophisticated attackers, the ones for whom crime is their day job, know they are playing a chess game that requires human intervention. So they plan their moves according to when organizations are most vulnerable, i.e., when a security team is most likely to be distracted or short-staffed. What are the days that a security operations team is most likely to be away from their desks? Christmas and New Year’s.

Furthermore, because professional criminals rely on their ill-gotten gains for a living, they are loath to waste resources. Everyone knows that the top banks are the most lucrative targets, yet the hardest to crack. We suspect that’s why financial services institutions (FSIs) in particular are targeted during the holidays.

The clearest example of this theory comes from the most sophisticated attack group Shape saw in 2018—a bot that mimicked iOS clients (see our 2018 Credential Spill Report, in which we talk about this attack group). They’d previously targeted a top Canadian retailer, a top global food and beverage company, and a Top 10 North American bank, and we had successfully held them off across our entire customer network.

Figure 4: Sophisticated attacker activity on NYE

This group had been lying low for a couple of months, but on NYE they came back with a sneaky, retooled attack when they thought we weren’t watching. But Shape detected the new attack and quickly blocked it. The attacker gave up on New Year’s Day.

It is not clear why only sophisticated attackers worked on New Year’s Eve this year. We suspect they are getting desperate as more and more organizations harden their application defenses against automated fraud and are looking for any type of vulnerability to exploit. In that case, it’s possible we will see this behavioral trend extend to other major holidays in which companies effectively shut down, such as Chinese New Year and Labor Day.

About Shape Security

Shape Security is defining a new future in which excellent cybersecurity not only stops attackers, but also reduces friction for good customers. Shape disrupts the economics of cybercrime by making it too expensive for attackers to commit online fraud, while also enabling enterprises to more easily transact with genuine customers. The Shape platform, covered by 55 patents, was designed to stop the most dangerous application attacks enabled by bots and cybercriminal tools, including credential stuffing (account takeover), fake account creation, and unauthorized aggregation. The world’s leading organizations rely on Shape as their primary line of defense against attacks on their web and mobile applications, including three of the Top 5 US banks, five of the Top 10 global airlines, two of the Top 5 global hotels, and two of the Top 5 US government agencies. Today, the Shape Network defends 1.7 billion user accounts from account takeover and protects 40% of the consumer banking industry. Shape was recognized by the Deloitte Technology Fast 500 as the fastest-growing company in Silicon Valley and was recently inducted into J.P. Morgan Chase’s Hall of Innovation.


Extreme Cybersecurity Predictions for 2019

Prediction blogs are fun but also kind of dangerous because we’re putting in writing educated guesses that may never come true and then we look, um, wrong. Also dangerous because if we’re going to get any airtime at all, we have to really push the boundary of incredulity. So here at Shape, we’ve decided to double down and make some extreme cybersecurity predictions, and then we’ll post this under the corporate account so none of our names are on it. Whoa, did we just say that out loud?

“Baby, when you log in to my heart, are you being fake?” Photo Credit: HBO

Forget the Singularity, Worry About the Inversion

New York Magazine’s “Life in Pixels” column recently featured a cute piece on the Fake Internet. They’re just coming to the realization that a huge number of Internet users are, in fact, fake. The users are really robots (ahem, bots) that are trying to appear like humans—no, not like Westworld, but like normal humans driving a browser or using a mobile app. The article cites engineers at YouTube worrying about when fake users will surpass real users, a moment they call “The Inversion.”  We at Shape are here to tell you that if it hasn’t happened already, it will happen in 2019. We protect the highest-profile web assets in the world, and we regularly see automated traffic north of 90%. For pages like “password-reset.html” it can be 99.95% automated traffic!

Zombie Device Fraud

There are an estimated five million mobile apps on the market, with new ones arriving every day, and an estimated 60 to 90 installed on the average smartphone. We’ve seen how easy it can be for criminals to exploit developer infrastructure to infect mobile apps and steal bitcoins, for instance. But there’s another way criminals can profit from app users without having to sneak malware into their apps—the bad guys can just buy the apps and make them do whatever they want, without users having any idea that they are using malicious software. The economics of the app business—expensive to create and maintain, hard to monetize—mean less than one in 10,000 apps will end up making money, according to Gartner. This glut of apps creates a huge business opportunity for criminals, who are getting creative in the ways they sneak onto our devices. In 2019, we’ll see a rise in a new type of online fraud where criminals purchase mobile apps just to get access to the users. They then can convert app-user activity into illegitimate fraudulent actions by hiding malware underneath the app interface. For example, a user may think he is playing a game, but in reality his clicks and keystrokes are actually doing something else. The user sees that he is hitting balls and scoring points, but behind the scenes he is actually clicking on fake ads or liking social media posts. In effect, criminals are using these purchased mobile apps to create armies of device bots that they then use for massive fraud campaigns.

Robots will Kill Again

Have you seen those YouTubes from Boston Dynamics? The ones where robots that look like headless Doberman pinschers open doors for each other? You extrapolate and imagine them tearing into John Connor and the human resistance inside. They are terrifying. But they’re not the robots we’re thinking of (yet). A gaggle of autonomous vehicle divisions are already driving robot fleets around Silicon Valley. Google’s Waymo and Uber use these robots to deliver people to their next holiday party, and we’ve heard of at least two robot-car companies delivering groceries. Uber already had the misfortune of a traffic fatality when one of its autonomous test vehicles struck a pedestrian walking her bicycle across a road in Arizona last year. But Uber robots will be back on the road in 2019, competing for miles with Waymo. Combine these fleets with the others, and more victims can join Robert Williams and Kenji Urada in the “killed-by-robot” hall of fame. Hopefully it won’t be you, dear reader, and hopefully none of these deaths will be caused by remote attackers. Fingers crossed!

Reimagining Behavioral Biometrics

Behavioral biometrics are overhyped today because enterprises lack the frequency of user interactions and types of data needed to create identity profiles of digital users. But in 2019, behavioral analytics will merge with macro biometrics to become truly effective. The market will move to a combination of macro biometrics, like Face ID, and traditional behavioral biometrics, like keyboard behavior and swiping. Apple is ahead of the game with Face ID and has applied for a voice biometrics patent to be used with Siri.

Kim Jong Un as Online Crime Kingpin?

North Korea will become a dominant player in the criminal underground with more frequent and sophisticated financially motivated hacks, rivaling Russian gangs. International sanctions have pushed the country to be more economically resourceful, so it has beefed up its cyber operations. The northern half of the Korean peninsula has been blamed for cyberattacks on banks via SWIFT transfers, and for bitcoin mining, in addition to traditional espionage involving governments, aviation, and other industries. In 2019, cyberattacks originating from groups (allegedly) associated with North Korea will continue to succeed, and enforcement will remain challenging. And with the recent Marriott breach affecting 500 million Starwood Hotels guests, the theft of passport numbers means nation-states and other attackers have an even more valuable and rare tool at their disposal for financial, tax, and identity fraud.

All Breaches Aren’t Created Equal

As industries mature, we refine the metrics we use. In 2019 we’ll see enterprises change how they approach data breaches, moving beyond identifying size and scope, focusing instead on potency and longevity. Breach impact will be measured by the overall quality and long-term value of the compromised credentials. For instance, do these assets unlock one account or one hundred accounts? Most recently we’ve seen the Starwood data heist become one of the biggest breaches of its kind, largely due to the bevy of personal data exposed. In this case, since the unauthorized access dates back four years, we can assume this data has already fueled and will continue to fuel serious acts of financial fraud, tax fraud, and identity theft. As hacker tools become more sophisticated and spills more frequent, businesses can’t afford to ignore downstream breaches that result from people reusing the same passwords on multiple accounts. In reality, today’s breaches are fueling a complex and interconnected cybercriminal economy. In 2019, expect businesses to join forces and adopt collective defense strategies to keep one breach from turning into a thousand.

The Future Looks, Um, Futuristic!

These are our extreme predictions for 2019. Will they come true? Some of them, probably. We hope the robots don’t actually kill people, but we’re pretty sure that the Inversion (where automated traffic surpasses human traffic) is a sure bet, if it hasn’t happened already.

Where do you want to be when the Inversion happens?
Working with us, at Shape!