Highlighting breaking news, events, and analyst commentary on cyber security from around the world
Author: Shape Security
Shape Security defends Global 2000 corporations from increasingly sophisticated automated cyber-attacks, including large-scale account takeover, credential stuffing, content scraping and content aggregation attacks on web and mobile applications. Shape has deflected over $1B in fraud losses for major retailers, financial institutions, airlines, and government agencies. Shape Security is headquartered in Silicon Valley and backed by Kleiner Perkins Caufield & Byers, Norwest Venture Partners, Venrock, Baseline Ventures, Google Ventures, and other prominent investors. Read our blog to get insights.
By Chris Burke, Sr. Director of Telco Sales at Shape Security
We live in testing times. For telecoms operators, the COVID-19 pandemic is throwing up challenge after challenge. These range from a major mismatch between network configuration and the new traffic patterns, to physical attacks on 5G infrastructure and elaborate forms of fraud devised by a new breed of bad actors. At the same time, under-employed customers are querying bills and questioning their tariff plans.
Perhaps the most significant of these challenges is the almost overnight shift in traffic from city centres and corporate campuses to suburbia, where white-collar staff now do almost all their work. Designed to cater for armies of commuters, mobile networks suddenly need to accommodate large numbers of daytime calls in residential districts. Similarly, in the broadband market, all the traffic that once travelled through fibre-optic networks connecting office blocks in urban areas is now having to squeeze through DSL lines connecting suburban homes.
Those telcos that engineered their networks too close to the bone are being caught out. But most operators have, so far, managed to cope, helped by the throttling of video streaming and video conferencing services. Yet compromising on quality isn’t a long-term fix. Operators need to become more agile, so they can quickly adapt to further changes in traffic patterns. They need to be ready, for example, for a permanent shift away from densely populated call centres. Some telcos and Internet companies have already indicated their own customer service staff will be able to work from home for the foreseeable future…
One final reminder! Shape’s App Security & Fraud Summit — Virtual Event — is tomorrow, February 12, 2020 / 9:00 am PT. Join us as top cybersecurity and fraud leaders dive deep into the latest attacks, tools, and trends you need to know to protect your web and mobile applications in 2020. Tune in, you won’t be sorry!
The App Security & Fraud Summit’s mission is to share ideas, insights, and connections that have the power to transform the industry. The lineup of speakers represents key voices that will help attendees better understand and prepare for new threats and trends. The last place you want to learn about a new cyberattack is in the news. Because by then, it’s too late.
Join hundreds of your peers for this important event and hear from some of the industry’s most dynamic minds, including:
Tara Seals, Senior Editor, Threatpost
Critical Infrastructure Attacks: Between Apocalypse and Reality
Dan Woods, VP Shape Intelligence Center
Manual Fraud to Genesis: Beyond Automation Attacks on Applications
Mike Plante, CMO, Shape Security
Shape Security Predictions 2020 Report: Emerging Threats to Application Security
Shape’s Vice President of Intelligence Center, Dan Woods, will present at the upcoming Retail Cyber Intelligence Summit on September 24-25, 2019, at the Four Seasons Hotel in Denver, Colorado.
2018 saw a significant increase in user credential spills from retailers. And as the retail industry continues to increase its digitization, it creates more incentives for attackers, as well as increases retailers’ potential attack surfaces. In fact, more than 50 percent of all e-commerce fraud losses were from cyber-attacks such as ATO, gift card cracking, and scalping. In addition, up to 99% of traffic on retail and e-commerce login forms was due to account takeover attempts!
Dan’s session, titled “The Anatomy of Web and Mobile Application’s Costliest Attacks,” will discuss actual attacks launched against retail and hospitality organizations and explain attackers’ motivations and monetization schemes. Dan will also share the latest threat intelligence on effective attack tools and techniques that cybercriminals are using to circumvent traditional countermeasures with devastating effectiveness.
“We’re looking forward to continuing our partnership with Shape Security and are pleased to have them as a presenting sponsor at our upcoming Retail Cyber Intelligence Summit in Denver,” said Suzie Squier, president of RH-ISAC.
The Retail Cyber Intelligence Summit is tailored for strategic leaders and cybersecurity practitioners from both physical and online retailers, gaming properties, grocers, hotels, restaurants, consumer product manufacturers and cybersecurity industry partners. The full conference agenda and information on how to register is available here.
There is a war brewing in cyberspace. The general public is blissfully unaware, and very likely will remain so. The media, when it talks about cybersecurity, tends to focus on the breach of the week, even though there cannot possibly be any lessons left to learn in that parade of spectacle and shame.
The war we speak of is against malicious automation (bots), and it’s being fought largely outside the gaze of journalism. On one side are the organizations putting their stores, intellectual property, processes, and businesses online in their journey toward digital transformation: the “good guys.” On the other side are malicious actors armed with nearly undetectable automation, intent on theft, political influence, fake news, and fake transactions: the “bad guys.”
The comedy of this “automation war” is how lopsided it is, technologically. The bad guys have accumulated an impressive arsenal of tools from Sentry MBA, PhantomJS, and simple proxies, to browser extensions (Antidetect), human click farms, behavior collection farms, global proxy networks and, finally, to headless chrome steered with a real orchestration framework like Puppeteer.
Meanwhile, the good guys have only ancient traps like a CAPTCHA or a web application firewall (WAF), both of which are trivially easy for bad guys to bypass. Organizations aren’t thrilled about annoying their customers with friction (like making them click on blurry pictures of buses for 20 minutes) and endlessly rewriting WAF rules when attackers retool every week. It’s an unfair fight, and who has time for that, honestly.
The Silent War of Automation
The primary tactic of an automation attacker is to imitate a legitimate transaction. It doesn’t matter if the transaction has a very low probability of gain for the attackers, because they can multiply their gains by scaling the transactions into the millions at nearly no cost. Because they are blending in so perfectly, many victim organizations have no idea that it’s happening until they see an effect like fully booked inventory, credit card chargebacks, or a competitor who seems to know the price of every single munition with all possible discounts.
The media won’t write a story about how a competitor reverse-engineered an insurer’s policy premiums through the creation of a million slightly different fake profiles, or how an actor deluged a work-for-hire site with a million fake low-wage contractor profiles that represented their tiny firm in the Philippines, because it’s too complicated and there’s no one to shame. There’s no spectacle there.
So, the silent war goes on, with the bad guys getting better and better at imitation, and organizations in nearly every vertical experiencing bizarre side effects (“All our free passport interview slots have been booked and are being sold!”).
What Won’t Save The Day
Everyone’s been hoping that the silver bullet for the good guys was going to be AI. Surely the incredible volume of modern transactions can be used to train machine learning engines to differentiate real traffic from fake, right? The answer is no, it can’t. At best, today’s ML engines can spot not individual anomalies but patterns of suspicious activity.
When a campaign is identified as being underway, human operators must step in and determine the intent of the campaign, because understanding is crucial in determining next steps. The mitigation can’t just be simple blocking, because that’s a signal which helps the attacker retool.
Sometimes, the info-war tactics of misinformation and redirection are the solution for the day. Or evidence collection. You need tacticians. You need real people using automation to fight real people using automation.
The war in cyberspace will be a main topic of discussion next week in Atlanta at the CyberHub Summit. Classy people there will be talking about meta issues like defending the region’s online financial services and de-risking the supply chain. A few of us from Shape Security will be there, and over some pints of the venue’s product, we can show you how we’re fighting the war against malicious automation.
If you can’t make it to the CyberHub Summit, please don’t hesitate to contact us at any of the channels listed under our logo, but otherwise we hope to see you in Atlanta next week!
A healthcare insurer was forced to use a CAPTCHA, and 70% of its elderly patients could no longer refill their prescriptions online. It was a complete disaster.
“This is not who we are,” muttered the CIO of one of the largest health insurance companies in the world as he looked over the report.
The digital team had been forced to put up a CAPTCHA on the site’s login page, and this had driven a full 70 percent of the company’s older patients off of the website. Pharmacy orders were also down a shocking 70 percent, and the call center was swamped at 130 percent of call volume with site users unable to pass the difficult visual puzzles. It was a complete disaster.
The seeds of this catastrophe were planted quite innocently.
The global healthcare insurer had introduced an innovative Health Rewards program that was hailed as a bold gamification of wellness. The program rewarded patients with points for achieving preventive medical milestones, such as scheduling wellness checkups, screening for bone density, and getting flu shots. Patients even got points for volunteering or participating in nutrition classes, activities that were good for their social and mental health and community bonding. It was beautiful; this is how markets are supposed to work —personal rewards for conscientious behavior.
The reward points themselves had no cash value, but could be redeemed in the insurer’s online mall for gift cards from retailers like Amazon and Walmart—and those gift cards definitely did have cash value. These rewards proved a juicy target for gift-card crackers.
Credential Stuffing and Gift Card Cracking
Almost immediately, automation attackers began credential-stuffing the login page of the insurance company’s rewards program. Credential stuffing is the act of testing millions of previously breached username and password combinations against a website with the knowledge that some of the credentials will work there. Success rates for an individual credential-stuffing login are low; they vary between 0.1 percent and 2 percent depending on the client population.
Attackers counter the low probability of any individual login succeeding by scaling their attempts into the millions via automation—scripted programs called “bots.” Modern bots look very much like human users to a target computer—telling them apart is one of the most difficult problems in modern computer science. A 1 percent success rate in a credential-stuffing attack is a reasonable statistical estimate; one million leaked credentials will yield 10,000 successful logins against a third party, leading to account takeovers by the attacker. Today there are over 5 billion leaked credentials on the market.
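The arithmetic above is simple enough to sketch. This toy calculation (not Shape code, just an illustration of the quoted rates) shows how scale turns a tiny per-login success rate into thousands of compromised accounts:

```python
# Back-of-the-envelope yield of a credential-stuffing campaign,
# using the figures quoted above. The rates are illustrative.
def expected_takeovers(leaked_credentials: int, success_rate: float) -> int:
    """Expected number of accounts compromised on a third-party site."""
    return round(leaked_credentials * success_rate)

# One million leaked credentials at a 1 percent success rate:
print(expected_takeovers(1_000_000, 0.01))   # 10000

# The quoted range of 0.1 to 2 percent brackets the outcome:
print(expected_takeovers(1_000_000, 0.001))  # 1000
print(expected_takeovers(1_000_000, 0.02))   # 20000
```

At 5 billion leaked credentials in circulation, even the bottom of that range represents millions of vulnerable accounts.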
The attackers breached thousands of accounts at the healthcare insurer’s rewards program. They consolidated reward points and converted them into gift cards, from which they exfiltrated the real cash value. The insurer’s CIO and IT security team were actually not that worried about the losses incurred through gift-card fraud.
“We were much more anxious about the PII exposure than the fraud.”
Global Health Insurer CIO
The attackers appeared to be ignoring the Personally Identifiable Information (PII) associated with the cracked accounts in favor of getting the rewards points, but the exposure was alarming.
The security team turned to their Content Delivery Network (CDN) vendor for help. The CDN’s “bot management” solution put a CAPTCHA into the user login process in an attempt to stop the automation.
And that’s when the wheels came off.
Human success rates for CAPTCHAs are already distressingly low—as low as 15 percent completion rates for some populations. Because computers have gotten so good at solving CAPTCHAs, the tests have gotten more and more difficult.
For elderly users, who are visually impaired more often than not, CAPTCHA success rates are even lower. In fact, one would be hard-pressed to devise a worse user experience than CAPTCHA for an aging population.
Immediately after the CDN put their CAPTCHA in place, login success rates plummeted. Seven out of ten elderly users could no longer log in to their accounts, access the rewards program, or renew their prescriptions online.
Online pharmacy orders plunged by 70 percent.
Frustrated patients had to phone the health insurer’s call center to renew prescriptions.
Meanwhile the attackers easily bypassed the “bot management” solution through one of the many underground services that offer 1,000 solved CAPTCHAs for $1. Now they were the only ones earning rewards.
“This is not what we do.”
Global Insurer CIO
The CAPTCHA was far more damaging than the fraud it was supposed to stop. The cure was worse than the disease.
Can you make an introduction?
The CIO reached out to a C-level colleague of his at a top-3 North American bank. He explained the situation and said, “Hey, you guys are a bank, and you don’t use CAPTCHAs. How do you get away with that?”
His peer said, “We use Shape Security,” and he made an introduction.
Shape worked with the healthcare insurer’s CIO and his team to get our technology deployed. We went into monitoring mode first, to study attack traffic patterns. Because Shape came in behind their CDN solution, the monitoring period became an informal bake-off between the CDN’s bot management service and Shape’s.
Understanding Users and Risk
Web and mobile visitors fall into three broad categories:
Legitimate users with good behavior: strong passwords, no password re-use.
Legitimate users with bad behavior: weak passwords, password re-use, prey to phishing.
Illegitimate users with ill intent: account takeover, phishing, IP theft.
Even behind the CDN’s CAPTCHA, Shape was detecting large amounts of credential stuffing and gift-card cracking—sometimes up to two million attempts per day. While the attackers had been smart enough to “hide” their traffic spikes within the diurnal patterns associated with human logins, they were not otherwise trying to disguise their traffic. Sometimes they connected through proxies, sometimes through a partner healthcare insurer, and even once through a financial aggregator.
Shape fought the attackers as they retooled, attempting to get around the Shape defenses. Within weeks, most of the attackers gave up, resulting in a 90% decrease in overall traffic.
The CIO was sufficiently impressed by Shape to completely displace the CDN for bot management at the healthcare insurer’s web property, and the CAPTCHAs were removed from two dozen entry points.
Shape then began working with the team to monitor the mobile property, because that is where attackers always retarget after we block them on the web. After another month of monitoring the mobile traffic, Shape was able to show that the healthcare insurer’s mobile property could be further improved to remember legitimate users, and we cut their legitimate “forgot password” transactions in half. Shape also provided the insurer with a customized list of recommendations for information-access and password-protection policies.
Steady State Unlocked
Today the healthcare insurer’s website has zero CAPTCHAs in front of their pharmacy, the account profile, and their rewards program. The Shape mobile SDK is integrated with nearly all the mobile platforms that the insurer supports.
Attackers and aggregators continue to probe the insurer’s web and mobile properties. Shape sees them, and foils the attackers. The health insurer is notified of the aggregators, who are encouraged to use authorized API gateways.
The online pharmacy is accessible to all customers again. Call volumes have dropped to levels not seen since before the CAPTCHA crisis.
And, perhaps most importantly, the healthcare insurer is again free to focus on innovating new programs and rewarding customers for taking preventive steps for their medical and social wellness.
You figured out that you have a bot problem. Maybe you have a high account takeover (ATO) rate, or someone’s cracking all your gift cards, or scraping your site. You tried to handle it yourself with IP blacklists, geo-fencing, and dreaded CAPTCHAs, but it became an endless battle as the attacker retooled and retooled and you’re sick of it.
So now you’ve decided that you’re going to call in professionals to stop the problem and get some of your time back. You’ve narrowed it down to two or three vendors, and you’re going to bring them in and ask them some questions. But what questions? Here are some good ones that can give you an idea of whether a vendor’s solution is a fit for your environment.
1. How does the vendor handle attacker retooling?
This is your most important question. When a countermeasure is put in place, persistent attackers will retool to get around it. Victims of credential stuffing say that fighting bot automation by themselves is like playing whack-a-mole. You are paying a service to play this game for you, so ask how they handle it, because attackers always retool.
2. Does the vendor dramatically increase user friction?
CAPTCHAs and 2FA dramatically increase user friction. Human failure rates for the former range from 15% to 50% (depending on the CAPTCHA), and lead to high cart-abandonment and decreased user satisfaction. Honestly, think carefully about vendors who rely on these countermeasures. Your goal should be to keep CAPTCHA off your site, not pay someone to annoy your users.
3. How does the service deal with false positives and false negatives?
A false positive for an anti-automation vendor is when they mark a real human as a bot. A false negative is when they mark a bot as human and let it through (this is by far the most common case, but sometimes the less important one). Bot mitigation will have some of both; be suspicious of any vendor who claims otherwise. But a vendor should be very responsive to the problem of false positives; that is, you should be able to contact them, complain, and have the false positive determination addressed.
4. When an attacker bypasses detection, how does the service adapt?
There will be advanced attackers who manage to bypass detection, becoming a false negative. When it happens, you may not know about it until you see the side effect (fraud, account takeovers, etc.). Then you’ll need to contact your vendor and work with them on how to remediate. How do they handle this process?
5. How does the vendor handle manual fraud (actual human farms)?
If your vendor is particularly adept at keeping out automation (bots), a very, very determined attacker will hire a manual fraud team to input credentials by hand in real browsers. Many services do not detect this (since, technically, human farms are not bots). Can their service detect malicious intent from even real humans? Shape can.
6. If one customer gets bypassed, how does the vendor protect that bypass from affecting all other customers?
Ideally, the vendor should have custom detection and mitigation policies for every customer. That way, if an attacker retools enough to get around the countermeasures at one site, they can’t automatically use that config to get into your site. Each customer should be insulated from a retool against a different customer.
7. If an attacker bypasses countermeasures, does the service still have visibility on attacks?
It is very common for a service to go blind after an attacker bypasses its defenses. If the vendor mitigates on the same data it uses to detect, then when an attacker bypasses the mitigation, the vendor loses the ability to detect. For example, if they block on the IP address, then once the attacker bypasses the block (say, by distributing globally), the vendor loses visibility and can’t tell you how badly you’re getting hammered.
An example of a system that is working correctly is when 10,000 logins come through and they all look okay initially because they have behavioral analytics within the proper range for humans. But later it is determined that all 10,000 had identical behaviors, which means the logins were automated. A good vendor will be able to detect this for you, even after the fact.
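That after-the-fact check can be sketched minimally, assuming each login event carries some behavioral fingerprint (a hypothetical field here, standing in for keystroke timing, mouse cadence, and other client signals):

```python
from collections import Counter

def find_automated_batches(logins, threshold=100):
    """Group login events by behavioral fingerprint; a fingerprint shared by
    implausibly many 'users' indicates a scripted campaign. Each login is a
    dict with a hypothetical 'fingerprint' field."""
    counts = Counter(login["fingerprint"] for login in logins)
    return {fp for fp, n in counts.items() if n >= threshold}

# 10,000 logins sharing one fingerprint, mixed with varied human traffic:
events = [{"fingerprint": "bot-cadence-7f3a"} for _ in range(10_000)]
events += [{"fingerprint": f"human-{i}"} for i in range(500)]
print(find_automated_batches(events))  # {'bot-cadence-7f3a'}
```

Each individual login passes the behavioral checks; only the aggregate view reveals the campaign, which is why retrospective detection matters.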
8. Is there a client-side or browser agent?
If yes, how large is the integration and how expensive is the execution? Does the user or administrator have to install custom endpoint software, or is it automatic? If there is no endpoint presence, how does the vendor detect rooted devices on mobile, and how does it detect attacks using the latest web browsers on residential IPs?
For example, one of our competitors takes pride in having no endpoint presence – not even a browser-agent. A common customer of ours used both their solution and ours simultaneously and found that the competitor missed 95% more automation (ask for details and we can provide them).
9. Does the vendor rely on IP-Blacklisting or IP-Reputation?
Our own research shows that automation attackers re-use an IP address an average of only 2.2 times. Often an address is used only once per day, or even once per week! This makes IP blacklisting useless. There are over a hundred client signals besides the IP address; a good service will make better use of dozens of those rather than relying on crude IP blacklisting.
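To see why that 2.2 figure kills blacklisting, consider this toy calculation over a hypothetical campaign log (illustrative data, not Shape’s detection code):

```python
from collections import Counter

def average_ip_reuse(attempt_ips):
    """Average number of login attempts seen per source IP in a campaign."""
    counts = Counter(attempt_ips)
    return sum(counts.values()) / len(counts)

# Hypothetical campaign: most IPs appear once or twice, a few three times.
campaign = ["10.0.0.1"] * 3 + ["10.0.0.2"] * 2 + ["10.0.0.3"] * 2 + ["10.0.0.4"]
print(round(average_ip_reuse(campaign), 1))  # 2.0
```

With reuse this low, by the time an address lands on a blacklist it has usually already made its last appearance in the campaign.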
10. How quickly can the vendor make a change?
When the attacker retools to get around current countermeasures, how quickly will the vendor retool? Is it hours, or is it days? Does the vendor charge extra if there is a sophisticated persistent attacker?
There are other questions that are table stakes for any SaaS vendor: deployment models (is there a cloud option?), the cost model (do they charge per clean transaction or by the hour?), and, of course, each vendor’s service level agreement (SLA). But you were probably going to ask those questions anyway (right?).
Yes, this article is slightly biased, as Shape Security is the premier automation-mitigation service. But consider the hundreds of customers we’ve talked to who chose us; these are the questions they asked, and we hope that they help you, even if you end up choosing a different bot-mitigation vendor.
The war against “fake” begins today, with the launch of Shape Connect.
Shape spent the last eight years building a machine-learning engine that has a single focus: to distinguish humans from robots on the Internet. The engine is constantly learning as it processes over a billion transactions every day from 25 percent of the consumer brands in the Fortune 500. It’s actually a billion-and-a-half on payday and National Donut Day (June 7, thank you, Dunkin’ Donuts).
We’ve made this incredible engine available to everyone and we call it Shape Connect. Connect is self-serve, takes minutes to set up, and is free for two fortnights (yes, GenZ, that’s the correct spelling).
Why is Connect so revolutionary? Distinguishing automation (bots) from humans is the most difficult, and most pressing, challenge on the Internet. Stopping fake traffic should be job #1 for any website that has value—yet Facebook, Twitter, and Google all struggle with it. Shape Security can stop fake traffic, and we’re practically giving the service away.
Solving Modern Problems
Okay, okay, so we built a computer that can identify other computers. How does this help you? Many businesses are being defrauded by bots and don’t even know it. They might know they have a problem of some kind but not understand that automation is the real threat vector.
Credential Stuffing Causes HUGE Business Losses
Credential Stuffing: Shape didn’t invent it, but we DID name it. It’s where malicious actors acquire login credentials belonging to blithely unaware Internet users, employ bots to pour billions of username/password combinations into millions of websites, then drain users’ accounts of money, credit-card numbers, email addresses, and other valuable stuff.
Website breaches resulting in gargantuan credential spills are common occurrences these days despite mighty efforts to boost privacy and security measures. A sophisticated criminal industry has sprung up that uses automation to access online accounts across the board, including social media, retail, banking, travel, and healthcare.
Believe it or not, credential stuffing-related activity can make up more than half of a website’s traffic. It’s estimated that this kind of nefarious pursuit results in business losses of over $5 billion annually in North America alone.
Gift Card Cracking
Another super-annoying problem is the cracking of online gift-card programs. Most gift-card programs allow recipients to check the card balance online. Attackers create bot armies to check the balance of every possible gift-card number! When they find a gift-card number that has a positive balance, they use it to purchase re-sellable goods before the recipient can use the card. Isn’t that horrible? It costs retailers millions of dollars per year.
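To see why brute-force balance checking is feasible, note that many gift-card numbers carry a standard Luhn check digit (an assumption for illustration; numbering schemes vary by retailer). That checksum lets a bot discard ninety percent of candidate numbers offline before ever touching the balance endpoint:

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn check: double every second digit from the right,
    subtract 9 from any double over 9, and require the sum to be
    divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))      # True  (the classic valid example)
print(luhn_valid("79927398710"))      # False (wrong check digit)
```

The defense, then, is not hiding the checksum but detecting the automated balance checks themselves.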
Business Logic Mischief
But it gets worse. Almost any site that has significant intellectual property in its business logic is either being attacked or is at risk. Consider the stalwart health-insurance company. Insurance websites allow you to get premium estimates based on your profile. Their rates are based on diligent research and proprietary actuarial tables accumulated over decades of experience. One of our customers found that a competitor was creating millions of fake profiles, each with a slight tweak to its age, income, and pre-existing condition to map out the insurer’s quote-rate tables. What took decades to create was being stolen by a competitor using bots. That’s not fair, is it?
Are You Dating a Robot?
One of the curious facts that emerged from the aftermath of the Ashley Madison breach in 2015 was that a significant number of the female profiles on the affair dating site were fake. They had been created by bots as vehicles for swindlers around the world to establish online relationships with men, whom they would then defraud through money transfers. While Ashley Madison is no longer with us, other, less controversial dating sites still have the same problem. Shape helped one of them deal with fake-account creation, leading to a much lower probability of robot dating. (Sorry, robots, true love is for humans.)
Hotels and Airlines: Point Theft
Hotels and airlines have their own currencies in the form of loyalty program “points” or “miles.” These have long been a target for fraudsters who can take over thousands of accounts, merge all their points, and convert them into re-sellable goods. In many cases, attackers prefer going after points. Your average consumer will notice immediately if their bank account is drained, but may not quickly (or ever) notice that their points are gone. They might just assume the points had expired. Room rates and flight fares are another form of intellectual property, and aggregators scrape the sites constantly, pulling rate information for competitors, leading to overly low “look-to-book” rates.
Fight The War Against Fake
Those are just a few examples of automation as a threat vector for business. We could tell you about a million cases of sophisticated bots threatening every different type of business, but we hope you get the picture already.
So let’s get back to Shape Connect, what it is, and how it works.
How Shape Connect Works
Our fully cloud-based service stands staunchly between your site and the Internet, deflecting bots and protecting you from credential stuffing, DDoS, account takeovers, gift card cracking, and all other malicious activity done at scale.
We’ve put together a couple of videos showing how Shape Connect works to protect your site. For those of you blessed with short attention spans, we have a 90-second, visually stimulating cartoony video.
If that piques your interest and you want the whole story, here’s a six-minute video that goes deeper into the workings of Shape Connect.
And if you’re a reader, we’ll break it down for you right here.
Without Shape Connect, there’s nothing between your website and the user’s browser. But what if it’s not a browser or a real user? Both real users and bots follow the same steps to get to your site.
The client (user or bot) queries DNS.
DNS returns the IP address of your website (or load balancer or cluster, or whatever).
The browser or bot sends a request directly to your website.
Your website returns the response.
With Shape Connect, there’s a layer of protection between your site and the user or bot.
DNS returns a dedicated Shape Connect IP to the user or bot.
All client requests are routed through Shape’s Secure CDN for fastest response.
Shape Connect absorbs any DDoS attacks that the client might have sent.
Shape Connect’s artificial intelligence determines if the request came from a real human using a real browser or from an automated bot. It passes only human requests through to your website.
Your website responds only to legitimate requests, sending the data back through Shape Connect and to the human at the other side.
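The two flows above can be sketched as a toy simulation (the IPs, the bot verdict, and the responses are all hypothetical stand-ins, not Shape’s actual implementation):

```python
# Toy simulation of the request paths with and without Shape Connect.
ORIGIN_IP = "203.0.113.10"    # your website (documentation-range IP)
CONNECT_IP = "198.51.100.7"   # dedicated Shape Connect IP

def resolve(hostname, connect_enabled):
    """Steps 1-2: DNS returns either the origin IP or the Connect IP."""
    return CONNECT_IP if connect_enabled else ORIGIN_IP

def origin_respond(request):
    """Your website answers whatever request reaches it."""
    return f"200 OK for {request['path']}"

def connect_proxy(request):
    """With Connect in front: only human-judged requests reach the origin.
    The 'client' field stands in for Shape's real human-vs-bot verdict."""
    if request["client"] == "bot":
        return "blocked"
    return origin_respond(request)

# Without Connect, the origin answers everyone, bot or human:
print(origin_respond({"client": "bot", "path": "/login"}))   # 200 OK for /login

# With Connect, bot requests never reach the origin:
print(connect_proxy({"client": "bot", "path": "/login"}))    # blocked
print(connect_proxy({"client": "human", "path": "/login"}))  # 200 OK for /login
```

The key design point is that the origin never even sees the blocked traffic, so it also never leaks a useful retooling signal back to the attacker.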
Of course, if you have “trusted bots” that you want to allow, you can manage your own whitelists.
With the Shape Connect Dashboard, you can see all the requests that have come through, and marvel at all the automated malicious requests that Shape blocked!
Your Honor, I Object!
The rest of the industry is catching on to the bot problem, and some are pushing approaches that differ from Shape Connect.
To celebrate the official launch of Shape Connect, we were going to throw ourselves a gigantic poolside party, with mumble rappers from LA and rivers of Henny. But we decided, instead, that it would be more fun to watch all the new customers come in and bask in the delight they experience as they get connected.
Shape Connect is live right now, and if you’re comfortable and confident, you can sign up for a free trial. But we’re also here if you want to chat first about how Shape Connect can secure your business, reduce your latency, keep your servers afloat, and improve your customer experience journey. Talk with you soon!
David Holmes here, cub reporter for Shape Security. While I’m luxuriating in United Airlines’ steerage class, our crack SOC team is back at HQ slaving away over their dashboards as tidal waves of automated traffic crash against the Shape breakers. At least they have Nespresso and those convenient eggs-in-a-bag from the kitchen. The day shift of SOC team #1 actually sits pretty close to the corporate marketing brigade, so we kind of know each other and exchange awkward greetings in the hallway.
ANYWAY, I thought it would be cool to share some statistics from SOC’s recent cases that highlight the shape of the anti-automation industry today.
1. 750 Million in a Week for One Site
Since the release of the Collection #1 credential corpus, some of our customers are experiencing insane levels of login events. One customer saw over 1.5 billion automation attempts in a two-week period. That’s pretty high even for them, one of the largest banks in the solar system. If, for some tragic reason, the Collection #1 campaign persists at its current level, you could extrapolate 39 billion automation attempts in a year (assuming no cracker vacation). Against a single site. That’s sick, brah. Sick.
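The extrapolation above is simple back-of-the-envelope arithmetic: 1.5 billion attempts in two weeks, projected over 52 weeks.

```python
# Projecting the observed two-week volume over a full year.
attempts_two_weeks = 1_500_000_000
per_week = attempts_two_weeks / 2   # 750 million attempts per week
per_year = per_week * 52            # 39 billion attempts per year
print(f"{per_year / 1e9:.0f} billion attempts per year")  # → 39 billion
```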
2. IP Address Re-use: 2.2
This stat is actually sadder than last week’s Grammys. During a credential-stuffing campaign, the attacker throws millions of credentials (gathered from breaches or the “dark web”) at the target’s login page. If he tried them all from a single IP address, then, of course, you’d just block that IP address, right? So he uses multiple IP addresses. In extreme cases, the most sophisticated cracker will only try a single login from each IP address (no re-use). Lately, the average number of times an IP address will get reused during a campaign is a paltry 2.2.
Basically, blocking by IP address is useless. By the time you add an IP address to your blacklist, it’s too late—it’s not going to be reused again during the campaign. If you see a vendor touting address-blocking, or CAPTCHAs, as a solution, please put your hands on your hips, throw back your head, and issue forth the biggest belly laugh you can. Bwahahaha!
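You can compute this re-use ratio from your own access logs: total login attempts divided by the number of distinct source IPs. A minimal sketch (the sample data is invented for illustration; adapt the log parsing to your own format):

```python
from collections import Counter

def ip_reuse_ratio(source_ips):
    """Average number of login attempts per distinct source IP address."""
    counts = Counter(source_ips)
    return sum(counts.values()) / len(counts)

# Example: 11 attempts spread across 5 addresses -> re-use ratio of 2.2,
# i.e., each address is burned after roughly two attempts.
ips = (["1.1.1.1"] * 3 + ["2.2.2.2"] * 2 + ["3.3.3.3"] * 2
       + ["4.4.4.4"] * 2 + ["5.5.5.5"] * 2)
```

A ratio near 1.0 means every attempt arrives from a fresh address, which is exactly the case where an IP blacklist can never catch up.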
Sadly, some of the technical people we talk to just don’t get it. We tell them: “Blacklists are useless,” and they say “Sure, but you block by IP address, right?” Then we explain it again, and they still don’t get it. Someone should write a paper! Oh, wait, that’s us.
3. Credential Stuffing Succeeds 2% of the Time
2% is funny. It’s our favorite milk. It’s the conversion from US dollars to Philippine pesos. It’s our reader-retention rate when we let Holmes write. Two percent may not sound like much, but consider an attacker testing a million stolen credentials against your web property. That’s 20,000 valid usernames and passwords he’s going to confirm. Actually, the success rate varies between 0.1 and 3 percent, but 2 percent is good enough for government work. And speaking of government…
You might be thinking: Actually, guys, 0.1 to 3.0 is a huge range. That’s a multiple of 30. An order of magnitude and then some. True enough, but when dealing with a million—or even a billion—credentials, the difference is really just “bad outcome” versus “really bad outcome.”
Yesterday Shape looked at a small campaign where a single, lonely attacker in Vietnam had 1,500,000 credentials. Even a 0.1-percent success rate, for him, would have translated to the confirmation, and possible account takeover (ATO), of 1,500 accounts. We say “would have” because we foiled all of his posts. He didn’t even seem to notice, which makes us think maybe he’s TOO automated, or that he suffers from some kind of “educational gap” (that’s the new euphemism for stupidity).
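Working out the numbers from the Vietnam campaign above, even the low end of the success-rate range is ugly:

```python
# Expected account takeovers from a 1.5-million-credential list at the
# low (0.1%) and typical (2%) success rates discussed above.
credentials = 1_500_000
low_rate, typical_rate = 0.001, 0.02
low_atos = round(credentials * low_rate)        # 1,500 confirmed accounts
typical_atos = round(credentials * typical_rate)  # 30,000 confirmed accounts
print(low_atos, typical_atos)
```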
4. 15 Months to an Ugly Baby
The number of months between when some dood stole all your credentials and when you read about it in The Register while eating your precious Honey Smacks is: 15. A lot can happen in 15 months; French words, mostly. Organization penetration, exfiltration, hacker celebration, hacker inebriation, and stock depreciation. Of course 15 months is just an average, and individual cases vary widely, but the point is that it’s an eternity in Internet time.
“Well, dang!” you sputter around your Honey Smacks. “What’s being done about this???”
We’ve got a solution we call Blackfish. We’re already seeing all the waves of credential stuffing against the busiest commercial sites in the world. So we can tell when someone stuffs, say, the creds from your entire customer login database against HoneySmacks.com. Now you don’t have to wait 15 months; if you had Blackfish, you’d know the minute someone tried your logins. How cool is that? If you’re interested, a single chat with our trusty sales chatbot can get the ball rolling for you.
And if you want to read a much more coherent explanation of the 15-month effect, print out our award-winning Credential Spill report, and read it over your Honey Smacks tomorrow.
Disclaimer: Shape Security in no way endorses Honey Smacks; in fact, they have been voted the #2 worst breakfast you can possibly eat. But dang, they are yums.
5. 99.5% of POSTs are against “forgot-password.js”
Our SOC team dealt with an ATO campaign last month. We remember it well because against that website, we detected that 99.5 percent of requests headed for their “forgot-password” page were automated. Yes, that’s 199/200 for the fractionally-minded (aren’t numbers fun)!
Sure, that’s a single campaign, but in our experience, it’s not an uncommon one. Check your own weblogs and see how the access requests to your forgot-password page compare to, well, anything else (and then call us).
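If you want to run that check yourself, a rough sketch is below: what share of requests in your access log hit the forgot-password endpoint? The path and the log lines are assumptions for illustration; adapt the parsing to your server’s actual log format.

```python
def forgot_password_share(log_lines, path="/forgot-password"):
    """Fraction of log lines whose request targets the given path."""
    if not log_lines:
        return 0.0
    hits = sum(1 for line in log_lines if path in line)
    return hits / len(log_lines)

# Invented sample log lines, simplified to method + path.
sample = [
    "POST /forgot-password HTTP/1.1",
    "GET /index.html HTTP/1.1",
    "POST /forgot-password HTTP/1.1",
    "GET /login HTTP/1.1",
]
```

In the 99.5-percent-automated campaign described above, a log slice like this would be almost nothing but forgot-password POSTs.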
We have many customers for whom forgot-password is their most-frequented page by far. By far! And if our customers weren’t the paragons of morality that they are, they’d put ads on that page and fund themselves a couple of truckloads of egg-in-a-bags. Or is it eggs-in-a-bags? The Oxford dictionary is strangely silent on this topic.
Well, there you have it: five random statistics about fighting automation, slapped together from the last month of SOC cases. Stay tuned, friends, and we at Shape Security’s marketing brigade will bring you more pseudo-cogent security-related statistics, probably from RSA 2019, in a couple of weeks.
Scrooge would approve—attackers work on Christmas Eve, and now on New Year’s Eve, too
We at Shape Security defend the world’s top banking, retail, and travel websites. And while you might be just getting back to work this first full week of January, our attack forensics teams are finally getting a break, because this holiday season was a busy one. Now that the dust has settled, we’ve analyzed our data to determine how 2018’s online holiday-season shenanigans differ from 2017’s.
During this festive holiday season, attackers worked through Christmas Eve and Christmas Day. But in a striking change from the previous year, the most sophisticated attackers no longer took New Year’s Eve (NYE) off. In fact, this year, we saw several intense campaigns that started or peaked on NYE.
The Best Time to Rob a Bank is Christmas Day
No matter what institution they use, most online banking customers have one thing in common: they stop checking their online balances during the December holidays. Turning a blind eye to one’s finances is optimistic human nature; our customers report that legitimate online banking activity often drops as much as 30 to 40 percent during this period.
Financial institutions may not observe the full extent of this change, however, because the drop in legitimate banking activity is overshadowed by an increase in malicious activity. According to our data, in both 2017 and 2018, malicious actors took advantage of the holiday, launching new attacks on or right around Christmas.
Shape’s Christmas present to a Top 5 US bank, the target in the above graph, was the fact that we didn’t take Christmas Day off, either.
New Year’s Eve is Cancelled (for Professional Criminals)
With some notable exceptions, nearly all attackers took New Year’s Eve off. On that night, attacks aimed at Shape’s customers dropped over 65% overall, and in one case over 99%. We observed this trend across all industries, including retail, travel, financial services, and tech. Perhaps tired from their exertions over Christmas, nearly all attackers put their keyboards away and joined the poor furloughed federal workers on a break for the New Year’s holiday.
“The holiday season now separates the hobbyists from the dedicated professional cybercriminals.”
But the sophisticated attackers, the ones who do this for a living, actually used the global holiday for surgical strikes, particularly against banks.
The attack graph below illustrates the trend. The tiny, tiny red bars on the left (they look like a dotted line) show the normal level of traffic on a financial institution’s website.
On December 29, malicious actors launched a large attack against the site. Even though they spoofed dozens of signals at all levels (network, client, and behavioral), they still couldn’t penetrate Shape’s defenses. On New Year’s Eve they retooled, doubling the number of signals they were spoofing, but that, too, failed, and they gave up toward the end of the day.
Why Launch Attacks During the Holidays?
Sophisticated attackers, the ones for whom crime is their day job, know they are playing a chess game that requires human intervention. So they plan their moves according to when organizations are most vulnerable, i.e., when a security team is most likely to be distracted or short-staffed. What are the days that a security operations team is most likely to be away from their desks? Christmas and New Year’s.
Furthermore, because professional criminals rely on their ill-gotten gains for income, they are loath to waste resources. Everyone knows that the top banks are the most lucrative targets, yet the hardest to crack. So we suspect that’s why FSIs in particular are targeted during the holidays.
The clearest example of this theory comes from the most sophisticated attack group Shape saw in 2018—a bot that mimicked iOS clients (see our 2018 Credential Spill Report, in which we talk about this attack group). They’d previously targeted a top Canadian retailer, a top global food and beverage company, and a Top 10 North American bank, and we had successfully held them off across our entire customer network.
This group had been lying low for a couple of months, but on NYE they came back with a sneaky, retooled attack when they thought we weren’t watching. But Shape detected the new attack and quickly blocked it. The attacker gave up on New Year’s Day.
It is not clear why only sophisticated attackers worked on New Year’s Eve this year. We suspect they are getting desperate as more and more organizations harden their application defenses against automated fraud, and are looking for any type of vulnerability to exploit. In that case, it’s possible we will see this behavioral trend extend to other major holidays during which companies effectively shut down, such as Chinese New Year and Labor Day.
About Shape Security
Shape Security is defining a new future in which excellent cybersecurity not only stops attackers, but also reduces friction for good customers. Shape disrupts the economics of cybercrime by making it too expensive for attackers to commit online fraud, while also enabling enterprises to more easily transact with genuine customers. The Shape platform, covered by 55 patents, was designed to stop the most dangerous application attacks enabled by bots and cybercriminal tools, including credential stuffing (account takeover), fake account creation, and unauthorized aggregation. The world’s leading organizations rely on Shape as their primary line of defense against attacks on their web and mobile applications, including three of the Top 5 US banks, five of the Top 10 global airlines, two of the Top 5 global hotels, and two of the Top 5 US government agencies. Today, the Shape Network defends 1.7 billion user accounts from account takeover and protects 40% of the consumer banking industry. Shape was recognized by the Deloitte Technology Fast 500 as the fastest-growing company in Silicon Valley and was recently inducted into J.P. Morgan Chase’s Hall of Innovation.
Prediction blogs are fun but also kind of dangerous because we’re putting in writing educated guesses that may never come true and then we look, um, wrong. Also dangerous because if we’re going to get any airtime at all, we have to really push the boundary of incredulity. So here at Shape, we’ve decided to double down and make some extreme cybersecurity predictions, and then we’ll post this under the corporate account so none of our names are on it. Whoa, did we just say that out loud?
Forget the Singularity, Worry About the Inversion
New York Magazine’s “Life in Pixels” column recently featured a cute piece on the Fake Internet. They’re just coming to the realization that a huge number of Internet users are, in fact, fake. The users are really robots (ahem, bots) that are trying to appear like humans—no, not like Westworld, but like normal humans driving a browser or using a mobile app. The article cites engineers at YouTube worrying about when fake users will surpass real users, a moment they call “The Inversion.” We at Shape are here to tell you that if it hasn’t happened already, it will happen in 2019. We protect the highest-profile web assets in the world, and we regularly see automated traffic north of 90%. For pages like “password-reset.html” it can be 99.95% automated traffic!
Zombie Device Fraud
There are an estimated five million mobile apps on the market, with new ones arriving every day, and an estimated 60 to 90 installed on the average smartphone. We’ve seen how easy it can be for criminals to exploit developer infrastructure to infect mobile apps and steal bitcoins, for instance. But there’s another way criminals can profit from app users without having to sneak malware into their apps—the bad guys can just buy the apps and make them do whatever they want, without users having any idea that they are using malicious software.
The economics of the app business—expensive to create and maintain, hard to monetize—mean less than one in 10,000 apps will end up making money, according to Gartner. This glut of apps creates a huge business opportunity for criminals, who are getting creative in the ways they sneak onto our devices.
In 2019, we’ll see a rise in a new type of online fraud where criminals purchase mobile apps just to get access to the users. They then can convert app-user activity into illegitimate fraudulent actions by hiding malware underneath the app interface. For example, a user may think he is playing a game, but in reality his clicks and keystrokes are actually doing something else. The user sees that he is hitting balls and scoring points, but behind the scenes he is actually clicking on fake ads or liking social media posts. In effect, criminals are using these purchased mobile apps to create armies of device bots that they then use for massive fraud campaigns.
Robots will Kill Again
Have you seen those YouTubes from Boston Dynamics? The ones where robots that look like headless Doberman pinschers open doors for each other? You extrapolate and imagine them tearing into John Connor and the human resistance. They are terrifying. But they’re not the robots we’re thinking of (yet). A gaggle of autonomous vehicle divisions are already driving robot fleets around Silicon Valley. Google’s Waymo and Uber use these robots to deliver people to their next holiday party, and we’ve heard of at least two robot-car companies delivering groceries. Uber already had the misfortune of a traffic fatality when one of its autonomous test vehicles struck a pedestrian in Arizona last year. But Uber robots will be back on the road in 2019, competing for miles with Waymo. Combine these fleets with the others, and more victims can join Robert Williams and Kenji Urada in the “killed-by-robot” hall of fame. Hopefully it won’t be you, dear reader, and hopefully none of these deaths will be caused by remote attackers. Fingers crossed!
Reimagining Behavioral Biometrics
Behavioral biometrics are overhyped today because enterprises lack the frequency of user interactions and types of data needed to create identity profiles of digital users. But in 2019, behavioral analytics will merge with macro biometrics to become truly effective. The market will move to a combination of macro biometrics, like Face ID, and traditional behavioral biometrics, like keyboard behavior and swiping. Apple is ahead of the game with Face ID and has applied for a voice biometrics patent to be used with Siri.
Kim Jong Un as Online Crime Kingpin?
North Korea will become a dominant player in the criminal underground with more frequent and sophisticated financially motivated hacks, rivaling Russian gangs. International sanctions have pushed the country to be more economically resourceful, so it has beefed up its cyber operations. The northern half of the Korean peninsula has been blamed for cyberattacks on banks, via SWIFT transfers, and bitcoin mining, in addition to traditional espionage involving governments, aviation, and other industries. In 2019, cyber attacks originating from groups (allegedly) associated with North Korea will continue to succeed, and enforcement will remain challenging. And with the recent Marriott breach affecting 500 million Starwood Hotels guests, the theft of passport numbers means nation-states and other attackers have an even more valuable and rare tool at their disposal for financial, tax, and identity fraud.
All Breaches Aren’t Created Equal
As industries mature, we refine the metrics we use. In 2019 we’ll see enterprises change how they approach data breaches, moving beyond identifying size and scope, focusing instead on potency and longevity. Breach impact will be measured by the overall quality and long-term value of the compromised credentials. For instance, do these assets unlock one account or one hundred accounts? Most recently we’ve seen the Starwood data heist become one of the biggest breaches of its kind, largely due to the bevy of personal data exposed. In this case, since the unauthorized access dates back four years, we can assume this data has already fueled and will continue to fuel serious acts of financial fraud, tax fraud, and identity theft. As hacker tools become more sophisticated and spills more frequent, businesses can’t afford to ignore downstream breaches that result from people reusing the same passwords on multiple accounts. In reality, today’s breaches are fueling a complex and interconnected cybercriminal economy. In 2019, expect businesses to join forces and adopt collective defense strategies to keep one breach from turning into a thousand.
The Future Looks, Um, Futuristic!
These are our extreme predictions for 2019. Will they come true? Some of them, probably. We hope the robots don’t actually kill people, but we’re pretty sure that the Inversion (where automated traffic surpasses human traffic) is a sure bet, if it hasn’t happened already.