Vancouver’s NuData Security Looks To “Unconscious” Behaviour Analysis to Foil Online Fraudsters

Vancouver’s NuData Security uses the phrase “Account takeover is the new credit card fraud” to describe the ever-changing world of online security threats.

In response, NuData Security has developed a form of behavioural analysis designed to flag hacking in real time or even preventively, which, according to NuData director of customer success Ryan Wilk, means “helping the merchant to not just stop fraud, but become predictive of what will become fraud.”

Much as a virus mutates in response to a vaccine, hackers develop new ways to penetrate security systems as the old methods become ineffective. Shifting tactics is just one way that hackers have become more sophisticated in their efforts to stay ahead of detection.

What’s striking about the world of online hacking today is the businesslike approach taken by bad actors: working out of office space, using cloud services, automating processes and measuring ROI in an effort to maximize effectiveness. Today’s hackers are not smash-and-grab artists, but sophisticated actors employing the same kind of intelligence and efficiency that the average entrepreneur brings to building a legitimate business.

It’s a game played on the run, with hackers finding new ways to conceal their location, moving quickly from one IP address to another to steal valid credit card accounts, as opposed to cycling through hijacked credit card information based on lists stolen from databases.

Fraudsters have also adopted more subtle means of attack, using hijacked card information for fewer than five purchases or matching IP locations with the terrestrial addresses of stolen cards, so as to avoid triggering flags for unusual activity.

NuData’s claim that it has developed a behavioural analysis product designed to detect fraudulent behaviour, “both conscious and unconscious”, piqued Cantech Letter’s curiosity.

We spoke to NuData director of customer success Ryan Wilk by phone.

Tell us about NuData.

NuData Security’s Ryan Wilk

NuData Security was founded in 2008 with the goal of understanding who users are within a web experience. We saw that many tools out there were able to handle perimeter defence within the risk world: after you submitted a transaction, or in the banking world transferred money, they tried to substantiate whether those data points all linked together in a safe way to allow that transaction to go through.

As you’ve seen in the news over the past number of years, data breaches are becoming more and more normal. We’re seeing very large breaches. I think in 2014, there were over 575 million records exposed. That’s a huge swathe of people. So we really want to understand who those underlying users are and what their behaviour is, to identify risk and stop fraud, but also, just as powerful, to better understand who the good users are and substantiate who your good user is. It’s not just about stopping fraudsters from interacting in your environment, but also about helping different merchants and banks figure out who their good customers are and give them a better experience, by substantiating that subconscious good customer behaviour rather than just taking the data points being entered by the user as the true data.

According to your research, there has been a 112% increase in sophisticated scripted attacks on log-ins just over the past year. That’s huge. What’ll be the outcome in the future if we don’t take these problems seriously now?

There are a number of factors that play into it. Breaches will continue. As we saw with these recent breaches with Anthem, massive amounts of data being released, and with Sony, which was a different type of breach, once a hacker group decides they’re going to decimate your company, it’s going to happen whether you want it to or not. The best laid plans don’t always work out, and some of that data will get out there. So what really needs to happen is that merchants and banks need to work harder to protect their customers and their environments, knowing that the valid data for their customers is getting out there.

The phrase that we’ve coined is that “account takeover is the new credit card fraud”. Traditionally, you’ve seen where credit cards are breached, and someone goes out and uses that credit card to try to make a purchase online and get it to go through. If you look at credit cards in general, at ground zero or day one of a breach, after you get those credit card numbers, a credit card can sell for upwards of $45, and that’s knowing that the credit card is valid. Then, within a very short period of time, two weeks to a month, that credit card has gone down in value from about $45 to 50 cents. And the chance of that credit card being valid has gone from 100% down to less than 25%. With that knowledge, you’re taking this data point, this credit card, and it has a very finite use. You might get a purchase or two through, if it’s even valid anymore.

What we’re seeing is that these fraud groups, just like any big business, don’t want to waste resources to get their business done. They’re looking for better means to perpetrate their business, which is crime, so they’re getting these user names and passwords. And over that same period of time in which credit cards go from being worth $45 down to about 50 cents, user names and passwords hold their value. The reason they hold their value is that, when you start to think about it, you or people you know, like your mom, use the same user names and passwords across many different websites and web personas. Then maybe you take that a step further and use that same user name and password for your email account. Now, not only can they get into your different accounts, they can use the different payment methods and things stored in your account and a whole swathe of data and information they get from that same data point. And there’s a good chance it will remain valid for a longer period of time. Even if you figure out that, say, your BestBuy account has been compromised, it might not click in your head. You might not reset everything else. And then, really, the Holy Grail is the email account. If I can have access to your Gmail account, I can probably go and cleanly reset all of your passwords. So all of your merchants out there, all of your banks, they think the functionality is being used completely legitimately.

“Companies need to find ways to fix the problem, but they need to do it in such a way that they’re not drastically changing the ecosystem.”

I previously came from the merchant world. I ran the trust & safety groups for Universal Parks and Resorts prior to joining NuData. There would be situations where we would see accounts taken over within our environment. We would contact the customer, we’d reset passwords, and then 10 minutes later the account would be taken over again. It was because they had taken over the email account: they simply went in, reset the password again and regained access to that account through what I call “the illegitimate use of the legitimate”, using the legitimate functionality of the website to do illegitimate things, such as resetting that password.

So it really shows that the old style of credit card fraud was something you could try to stop, and it would be one-off situations. This new ecosystem of account user name and password theft lets people get into these accounts and use all of the data in that account so it looks completely clean, versus what merchants have seen in the past, and because that account has been used for authentication, it implies you have the password. So it’s that added level of trust, a false trust, being created between the merchant and the end user when they try to evaluate whether it’s at risk or not. It really just shows how much more of the ecosystem you have access to with those account credentials or that account information, versus simply a credit card number.

So at the consumer level, the security breach points consist of people using easy-to-remember passwords, and then there’s some slack at the IT department level. Who are NuData’s primary clients? Is it insurance companies or the banking sector or retail?

Our primary targets are the e-commerce world, e-commerce merchants, the FI world, so different banking organizations and financial institutions, as well as health care. To give you the high-level swathe, really our clients are anyone who has an account that holds sensitive data they want to protect, and who wants to create a level of trust and safety within their environment, to say, “We will protect our customers from illegitimate people logging in to your accounts and taking over your accounts.” Not only that, but we will also help protect you from the illegitimate use of your future customers’ data.

So with, for example, the Anthem breach, there’s so much data out there that I could probably build up a pretty good understanding, or a profile of who you are, and then I could go to different websites and start to register accounts, or create real bank accounts, to perpetrate this fraud. And while it’s not even you that’s directly being affected by it, your data is being involved there. So if you think about it, now you have to try to clean up after your identity that was taken over from a bank or a merchant. Obviously, the bank or merchant didn’t do anything wrong to you, but now you have to work with them to try to clean up your situation. And in your head, no matter how you’re saying it, you’re always thinking, “I know my information was taken over at Target.” You might not think that Target did that to you, but the name Target will always be associated with that. And then you’ll start to lose that level of trust and safety at that company. So you essentially lose customers before they even become customers, because you weren’t able to help protect that customer’s data from illegitimate use.

It’s hard to grasp how big the problem is, given that most data breaches aren’t publicized. For the companies that do get out in front of the problem, it’s a dilemma between alerting the customer and tarnishing their brand by association with hacking. It’s better optics for companies to just keep it all under wraps.

Exactly. And I think it really goes to the point that the data points, in and of themselves, are becoming less valuable in trying to substantiate risk. It’s the idea that, if you have a fraud actor come on, do you want to truly understand who that person is and what they’re doing from a behavioural standpoint, or do you simply want to take what they’re giving you at face value? It’s kind of like running a brick-and-mortar store: somebody who clearly does not look like they should be in the store, a guy wearing rags, comes into Tiffany’s, hands you an Amex black card and says, “Just take it.” You’re probably going to say, “Wait a minute. I should probably do a little bit of verification on this, because this situation does not make sense to me.” It’s now about moving that into the online world, the e-commerce and virtual world. Now you understand that it’s not just the computer, it’s not just the phone that’s working with you. There’s a human on the other side of it, or a human-created script on the other side of it, that’s controlling that device. We’re really trying to understand and bring out that full view of the combined entity, the human and device that are now interacting with me, in an attempt to bring some of that reality into the virtual world.

“They reset the password again and were able to regain access to that account through what I call ‘the illegitimate use of the legitimate’, so using the legitimate functionality of the website to do illegitimate things, such as resetting that password.”

How do you avoid the optics problem, the perception that what you’re doing amounts to profiling? How do you decide who’s a good actor and who’s a bad actor without unintentionally sweeping up good people along with the bad?

We’re looking at this behaviour in two ways. First, we look at behaviour aggregated off a user, or what we call an anchor data point. If we look at all the behavioural profiles we’ve aggregated under this user name, as new behavioural profiles come in, do they make sense in correlation to that user? Understanding the biometric of that user means understanding how they interact with the device. Does it make sense? Do they type in their user name and password every time, and now all of a sudden everything’s getting pasted in? So we create an understanding of what the norm is for that user and when that user deviates from the norm at a high level.

Then we take that a step further and ask what the total behavioural population looks like. We might see a failed log-in occur under your user name. In and of itself, that failed log-in might not have an extreme risk value, but now we’re able to aggregate and say, “Hey, look, there’s similar behaviour, even though they’re cycling IP addresses, that is now interacting with your site in such a way that it appears to be creating risk.” We’re seeing this behaviour cycling through all these data points and attempting to log in, and it just doesn’t make sense how this single behaviour is touching all of these different data points. We also look at it in aggregate to see things like massive brute-force attacks against your site, where they’re coming on with lists of user names and passwords, and we work to understand what that behaviour is doing, because these fraudsters are getting extremely sophisticated, not just in that they’re using this account data more, but in how they’re attempting to use it.

What we’re seeing is that these fraudsters are cycling through devices and cycling through IPs to avoid traditional security mechanisms that say, “Hey, look, this IP is now blasting us.” Along that same line, we try to understand what that behaviour is doing, rather than simply looking at the data points and saying, “This user name and log-in has failed.” Instead, we can say that these user names and log-ins are all coming from similar behaviour that is interacting in such a way, and on top of it, it looks like there could be scripting or different types of automation going on underneath that is creating this event.

And then, to make it even more complex, many sites have traditional business rules, where they might say that if a user attempts to log on five times within 12 hours and has five failed passwords, we’re going to lock that account out until it recycles. But fraudsters are learning these methods. They’re creating their scripts and different types of automation to fit under that. So they try four times, and if all four of those are failed passwords, they put the account into a holding pattern until that 12-hour recycle period is over, so they’re not tripping that business logic either, and they slip under the radar of the traditional means of identifying these risks.
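To make the distinction concrete, here is a minimal, hypothetical Python sketch, not NuData’s product or algorithm, of the difference Wilk describes: a per-account lockout rule that an attacker can deliberately stay under, versus an aggregated view that groups failed log-ins by a shared behavioural signature across rotating IP addresses. All names, thresholds and the signature field are invented for illustration, and the 12-hour time window is not modelled.

# Hypothetical sketch (not NuData's actual product): why a per-account lockout
# rule misses a distributed credential-stuffing attack, and how aggregating
# failed log-ins by a shared behavioural signature can still surface it.
# The time window itself is omitted for brevity.
from collections import defaultdict
from dataclasses import dataclass

LOCKOUT_ATTEMPTS = 5   # classic business rule: 5 failures per account trips a lockout

@dataclass
class LoginAttempt:
    account: str      # anchor data point (user name)
    ip: str           # rotates freely in a scripted attack
    signature: str    # illustrative stand-in for a behavioural fingerprint
    success: bool

def per_account_lockouts(attempts):
    """Traditional rule: count failures per account and flag lockouts."""
    failures = defaultdict(int)
    for a in attempts:
        if not a.success:
            failures[a.account] += 1
    return {acct for acct, n in failures.items() if n >= LOCKOUT_ATTEMPTS}

def suspicious_signatures(attempts, min_accounts=10):
    """Aggregate view: one behavioural signature failing across many accounts
    looks like scripted credential stuffing even if every single account
    stays under the lockout threshold."""
    accounts_per_sig = defaultdict(set)
    for a in attempts:
        if not a.success:
            accounts_per_sig[a.signature].add(a.account)
    return {sig for sig, accts in accounts_per_sig.items() if len(accts) >= min_accounts}

if __name__ == "__main__":
    # A script tries 4 bad passwords per account (staying under the 5-attempt
    # rule) across 50 accounts, rotating IPs on every request.
    attack = [
        LoginAttempt(account=f"user{u}", ip=f"10.0.{u}.{t}",
                     signature="headless-script-A", success=False)
        for u in range(50) for t in range(4)
    ]
    print(per_account_lockouts(attack))   # set() -> no single account trips the rule
    print(suspicious_signatures(attack))  # {'headless-script-A'} -> the attack is visible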

While the cloud has gotten a lot of good press over the years, hackers are also using the cloud, aren’t they?

The cloud is extremely powerful in a good way. But what we also see is that different groups, coming on from places like Vietnam or from different risk areas within Eastern Europe, are getting into these cloud environments. They’re easily able to pop in from a place, say, in California or Boston, going through, say, Amazon Web Services, or maybe a Microsoft-hosted web service. They’re trying to hide themselves behind these cloud environments. And because the cloud is so easy to jump around in and to modify where your data is coming from, it’s extremely easy for them to set up these cloud servers, have their automation running from them, and then, as soon as they’re done, just let them die. So just as the cloud is creating an easier place for a company like NuData to operate, it’s becoming an easier place for fraudsters to host their services and to modify and jump around very quickly without having to have physical hardware or machines somewhere.

Is there any mechanism for telling Amazon Web Services, “These are fraudsters”, or are they just cloud customers like anyone else?

You can go to Amazon and Microsoft and those services and let them know, “We believe there’s a fraudulent IP address linked back to your cloud service.” But the problem exists for them as well, because they’re sitting there with these hosting services, but they also have their web environment where they’re trying to sell those hosting services. So people are coming on and taking over accounts to get into other web hosting services within a known safe environment, or they’re coming on using stolen data. They have your credit card, they have your billing information, and they sign up that way. So now, just like a merchant trying to protect themselves from the brute-force attack coming from the cloud environment, the provider also has to protect themselves from these bad actors who are looking to sign up for their services and use that cloud environment to perpetrate their fraud. So it’s really a terrible cycle, a Catch-22, where everyone is trying to detect this while the service is hurting itself.

Your company talks about the analysis of both “conscious” and “unconscious” behaviour. What would be an example of “unconscious” behaviour analysis?

I always like to use the example of the user name and password. With most of the sites you use, you might have some hard user names and passwords, but you’re probably also using some that are just memorized in your head. Think about it: you put your hands on the keyboard, and you don’t really think about typing your user name or password. Somehow, it magically comes out of your fingers. That’s a subconscious element of how you’re interacting with the device. Say I were to steal your user credentials. I could type them into the system, but the deviations around that typing pattern would be different, because it’s not truly you.

There’s also how you’re using the site. Most good users are using it in such a way as to do what they need to do: buy an item, log in to their account, do the things they need to do. They’re not thinking about what they’re doing, they’re just doing it. Whereas that fraud actor is coming on with a very drawn-out plan as to how they’re going to perpetrate this fraud. They’re going to come on, they’re going to log in to the account, they’re going to look and see what credit cards are there. So their behaviour, their subconscious behaviour of how they’re working through that system, starts to point itself out as a glaring signal: “Look at me. I don’t fit with what is going on.” Versus that behavioural pattern that you’re not even thinking of doing. It’s kind of like when you get in your car: you sit down, you put on your seat belt in the exact same way, you turn the ignition on. You might open your garage door and sit there. You probably do a lot of things very similarly every day that you don’t even realize you’re doing. So the idea is to record that information that just comes out of the user, versus trying to force anything out of the user to get that data.
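As a rough illustration of the “unconscious” typing-cadence idea Wilk describes, the following hypothetical Python sketch, not NuData’s algorithm, compares the keystroke timing of a log-in attempt against a user’s historical typing profile; the features, numbers and thresholds are invented for the example.

# Illustrative sketch only (not NuData's algorithm): flag a log-in whose
# keystroke cadence deviates sharply from the user's historical typing
# profile, e.g. credentials that are pasted or typed by someone else.
import statistics

def key_intervals(timestamps_ms):
    """Milliseconds between successive keystrokes while typing a password."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def deviation_score(profile_intervals, observed_intervals):
    """Rough z-score-style distance between the observed typing cadence and
    the mean cadence stored in the user's behavioural profile."""
    mean = statistics.mean(profile_intervals)
    stdev = statistics.stdev(profile_intervals) or 1.0
    observed_mean = statistics.mean(observed_intervals)
    return abs(observed_mean - mean) / stdev

# Historical profile: this user types their password briskly and consistently.
profile = [110, 95, 120, 105, 98, 112, 101, 108]

genuine_attempt = key_intervals([0, 100, 210, 300, 410, 505])  # similar cadence
scripted_attempt = key_intervals([0, 5, 10, 15, 20, 25])       # pasted / automated

print(deviation_score(profile, genuine_attempt))   # small -> consistent with profile
print(deviation_score(profile, scripted_attempt))  # large -> flag for review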

“These fraudsters are getting extremely sophisticated, not just that they’re using this account data more, but in how they’re attempting to use the account data.”

Of the various privacy solutions on the market today, like Nymi in Toronto using a person’s heartbeat as their password, or the people working towards ending this era of the password that we’re currently stuck in, where everyone has to remember some sequence of numbers and letters, how do you think this problem will eventually get solved?

It’s interesting. I’ve seen some of these companies that are looking at things like heartbeat monitoring, or the Apple Watch with its sensors. It’ll be interesting to see how that works. The backlash about privacy is stronger in the United States than in some other countries. But I think people are going to continue to be worried about privacy, especially around things like a heartbeat monitor, something that’s pulling out not just passive biometrics but something unique. You start to think, with the Apple Watch, they’re going to know when people die, like, “This person’s heart just stopped. This person just had a heart attack.” So you start getting into these creepier areas of it, where you start to ask, “What’s really valid and not valid?” It’s tough to say.

Look at credit cards and how we’ve gone through that evolution of having a plastic card. In Canada, you’ve switched over to chip-and-pin, versus the United States, which is still in that transition where the magnetic stripe is just going to get replaced with a fancy little chip. Companies need to find ways to fix the problem, but they need to do it in such a way that they’re not drastically changing the ecosystem. We’re seeing just how hard it is for these mobile wallets to catch on. People are saying, “Why do I need to put all my cards in this mobile wallet when I have this piece of plastic that’s been working since 1960?” How complex and intrusive are you going to make this for your end user? Really, it’s about finding that balance where you’re not being intrusive to your end user, but you’re also getting that level of authentication and not driving away business because you’re making it too difficult for that user. Otherwise you create a situation where there’s a certain population who no longer wants to do business with you because they think you’re being too intrusive.

Yeah, people will resist stuff like chip-and-pin or the driverless car, which are far safer than magnetic stripes and regular cars, because they just don’t want to change.

You have to figure out a way to do this all behind the scenes without affecting your customers. For this to catch on in the United States, there’ll have to be a whole shift in the way that not only the fraud world is looking at it, but the way the sales & marketing and business operations world are looking at it. Do we want to now put this additional friction in front of our customers? Or does the potential risk and potential loss from fraud outweigh giving our customers a better experience? And then we can try to figure out how to clean it up in the back end. That’s been the traditional modus operandi in the United States.
