Transcription:
Penny Crosman (00:03):
Welcome to the American Banker Podcast. I'm Penny Crosman. According to a recent set of surveys conducted by Accenture, banks are struggling to protect themselves against the cybersecurity threats presented by AI, especially generative AI. We're here today with Valerie Abend, who is Accenture's Financial Services cybersecurity lead, who's going to take us through some of the results and offer some advice about what banks could be doing better. Welcome, Valerie.
Valerie Abend (00:31):
Thanks so much, Penny. Happy to be here with you.
Penny Crosman (00:33):
Can you tell us a little bit about these surveys that you did?
Valerie Abend (00:37):
Yeah, absolutely. So Accenture obviously goes out and surveys all different things all over the world all of the time. And one of the things we really wanted to focus on with this survey was around trust in banking. We really feel that the future of banking is very much tied to the idea of customer trust. And we went out and surveyed over 1,500 customers and over 600 banking executives, two different surveys, two different groups, and it was a global survey, so all over the world.
Penny Crosman (01:10):
Sure. So one thing that struck me was that 80% of these cybersecurity executives at large global banks believe that generative AI is empowering attackers faster than banks can respond. What are some of the things that they are thinking about? Are they basically thinking about deepfakes or are there other more under the radar ways that generative AI is empowering attackers?
Valerie Abend (01:39):
So we'll start with what generative AI is today and the threats that we see and that our banking clients see. Yes, you're correct. Some of it is specifically around deepfakes, and that is a space that has evolved pretty dramatically, which has been enabled by generative AI. Beyond the deepfake, and maybe even more basic than that, is just better social engineering and being able to use large data sets, not only of recently stolen data, but what was very interesting was to see swaths of old data stolen in previous hacks that maybe was thought to be no longer valuable. That was suddenly back on the market for resale, because that large set of data could be used against another set of more recent data to produce a very specific profile of individual customers or people working inside of banks and to do much better social engineering, targeting them in spear phishing-type emails and campaigns. So it's both the use of video and voice against customers, or even against internal employees talking to each other, as well as targeting customers or people who are employed by a bank using this highly expansive set of both old and newer data to further enable spear phishing and social engineering.
Penny Crosman (03:15):
So basically their attacks and the way they're crafting their attacks are just more and more realistic all the time, and therefore harder to identify and detect and deflect.
Valerie Abend (03:31):
Yeah, absolutely. Staying ahead of this for customers, staying ahead of this for the workforce, these attacks are maturing, they're evolving, and the usual tells, like noticing the language was off or the spelling was off, are getting harder and harder to spot in a social engineering or spear phishing-type email. But then you have these realistic phone calls where you think that this is a member of your family potentially calling on the other line, asking for money, and always using that element of potential fear, or a really great deal, or all the normal social engineering tactics. But now they're just so real. And I think the other part that these executives are concerned about is that, of course, they are hamstrung a little bit by the classic things of regulatory oversight and three lines of defense in their use of this technology, whereas the bad guys, of course, are not hamstrung in how they're using it, and so they're able to use generative AI much more aggressively in how they target the banks.
Penny Crosman (04:50):
And then I think your survey also found that more than half, 54% of security executives admit their bank's business reinvention efforts have introduced more security vulnerabilities and only 32% embed security controls in all these initiatives by design. Why do you think these security executives are being so caught off guard? Is it just that they have so much to do that they just can't build in enough controls as new technologies are being deployed?
Valerie Abend (05:27):
I think there's a range of reasons why banks are challenged, and I don't know if executives are necessarily caught off guard per se, so much as the skills have to be pivoted, and pivoted quickly, to keep up with the pace of evolving technology. There also has to be buy-in, not just at the top of the house, but all the way through to middle management, around security's role in protecting against the use of new technology or even just big priority initiatives. And so even though banks are highly regulated, and they are more mature than pretty much most organizations and industries in terms of how they address cybersecurity, still, embedding cybersecurity upfront as a strategic priority is a challenge at many institutions. And a lot of that ends up coming down to having alignment of metrics and all being bought into the same thing.
(06:28):
We see the leaders, these guardians of trust, about 10% of those we surveyed really fit that description. They really are embedding security upfront in strategic priorities, and they're aligning metrics really closely. So if you think about, hey, we're going to release this new technology or embed this new technology in our product or a service, how do we make sure that both the security team and the technology executives are held to the same set of accountable standards? So they're all held accountable for how quickly you can detect and fix vulnerabilities, in addition to the rate of adoption of a product or the ease of use of that customer experience. And that way, by aligning their metrics, they all have the same outcomes and goals for the bank.
Penny Crosman (07:22):
And just going back to the earlier point about banks being concerned that they're not keeping up with deepfakes and other generative AI cybersecurity threats, are there any sort of best practices or tactics they could be adopting to better deal with deepfakes and other very realistic phishing attacks and so forth?
Valerie Abend (07:47):
So there's a range of things that banks could be doing to increasingly protect themselves against these types of attacks. When it comes to the leaders in our study, they do three things really well. The first is they proactively and transparently communicate about cybersecurity practices to their customers, and, by the way, to their workforce and their supply chain partners too, because let's face it, the bad guys are targeting the customers. They're targeting employees of the bank, they're targeting the vendors of the bank. And so the ability to clearly say, these are the kinds of things we're doing, and that you also need to be doing, to protect yourselves is really important. And we have found that that builds a lot of customer trust. Most banks are looking to hyper-personalize their customer experiences, but customers want to understand, in that personalization, how is that new technology also going to have the right kind of controls in it?
(08:50):
And these banks do that really well, so they build trust as they go out and use new technology. The second thing they do is they embed cybersecurity, as we were just talking about, upfront in their strategic priorities. They don't wait until a product is sort of down the road of development, or wait for an M&A or a divestiture activity to become public, to engage the security teams. And then the third thing they do, and I think this gets to your point, Penny, in terms of protecting against deepfakes, is they really empower their customers, their workforce and their supply-chain partners with an understanding of the latest ways the threat is evolving and what they can do to protect themselves. For example, there are organizations that are not just using their regular threat intelligence data feeds, they're actually going into dark web-specific places where deepfakes are known to be discussed and tactics are being talked about. They're going into those dark websites on a daily basis, sometimes twice a day, and they take any material change in how that deepfake might appear and immediately update their training and awareness efforts to incorporate what that new tactic might look like. So they're not waiting and doing this in an annual process. They're doing it in the quarter, in the month and sometimes in the week, depending on how that could impact their customers.
Penny Crosman (10:23):
So your first point was about communicating cybersecurity practices and threats to customers. How do you communicate some of the enhanced risks that AI presents and the efforts that banks are making without kind of scaring people?
Valerie Abend (10:44):
Yeah, a little bit. You have to balance that, and I agree that that's important. By the same token, I think a lot of people think, well, that won't be me. And so a wake-up call with real-world examples goes a long way. And what I like about the banks that are the leaders here is they're not just depending on a static website where the bank talks at a high level about cybersecurity practices and what it's doing. They are meeting the customer where they are in their daily lives. So they might be doing it with in-app messages. They might be providing customers a security score and giving them ways inside that app to learn what more they could do to improve their overall security score. But they're also showing up in advertisements and sponsorships, for example, on podcasts. I was listening to a podcast a few weeks ago, and it was really interesting to see that a bank was sponsoring this podcast, which had nothing to do with cybersecurity. It's just a widely listened-to podcast. But that bank was talking specifically about customer trust, a little bit about how bad guys can come after the customer, and really focusing on, how do we, as your trusted bank, help you protect yourself, and what could you be doing to protect yourself? So educating customers where they're actually spending their daily lives is building more trust with that bank.
Penny Crosman (12:13):
Are you seeing banks put the right defenses in place for these enhanced threats? For instance, I'm seeing a lot of banks be hit with a lot of social engineering fraud, and even if they're not necessarily on the hook, because the consumer fell for a ruse and the bank is not necessarily held accountable, there's still a huge reputational risk, and there's still the risk of losing customers and so on. So do you think banks are doing enough to combat this steady onslaught of social engineering tactics?
Valerie Abend (13:10):
So Penny, I think you're raising a really good point, and the banks that we work with are all trying to do the right thing. And a lot of banks have found ways to engage across the bank, not only with their cybersecurity teams and their threat intelligence teams, but also their fraud teams and their customer experience teams, to say, how can we, recognizing we all have jobs to do, come together to address this holistically with our customers? And it's really important, because some of the tactics are not unique, whether they're retail customers or commercial clients, but some of the use cases and the way these threats show up can be, and so they develop a program based on the customer set and continuously evolve it. But some of this will also require technology. And one of the things that we talk about is how can you, one, implement new technology, in addition to new processes, to detect when these things are actually happening in real time or near real time. And some of that means actually using AI to combat some of the fraud that is empowered by AI. That's something that we know is going to explode over the next several years. It's going to be essential that banks actually use secure approaches involving AI to detect, report on, respond to and mitigate some of the threats that are empowered by AI.
Penny Crosman (14:49):
All right. Sounds like good advice. Well, Valerie Abend, thank you so much for joining us today, and to all of you, thank you for listening to the American Banker Podcast. I produced this episode with audio production by Wen-Wyst Jeanmary and Adnan Khan. Special thanks this week to Valerie Abend at Accenture. Rate us, review us and subscribe to our content at www.americanbanker.com/subscribe. For American Banker, I'm Penny Crosman and thanks for listening.