It’s no secret that banks have been aggressive adopters of generative AI thus far. Some have used it to rewrite legacy code into more modern programming languages, while others have rolled out gen-AI-powered tools to their employees for various use cases.
But for all the excitement around how gen AI could take waste out and put value back into banking, the technology presents security risks that must be carefully managed.
Deepfakes targeting customers and employees now rank as the threat banks observe most frequently, according to Accenture’s Cyber Threat Intelligence Research. There have even been instances of hackers tricking large language models into creating malware designed to steal customers’ passwords.
The combination of AI-powered deepfakes and real-time payments fraud is fueling an explosion of global consumer fraud, eroding consumer trust and posing a real threat to banks. JPMorganChase CEO Jamie Dimon said that fraud cost its consumer bank $500 million last year – and this figure doesn’t include thousands of scam claims in which customers themselves authorized payments to bad actors.
Banks’ security teams are struggling to keep pace
Most banking security executives feel the pressure, and for good reason. Four in five (80%) believe that gen AI empowers attackers faster than banks can respond, according to research that Accenture conducted in October.
Nearly three-quarters (74%) of these executives struggle to maintain digital trust in the face of rising fraud risks, while 88% said it is challenging to meet customer demands for seamless experiences across platforms without compromising security.
Unfortunately, most banks tend to have a compliance mindset around cybersecurity, which compounds the issue. They see it as an “essential burden” that drives up costs and slows progress. This view persists despite evidence that robust cybersecurity actually enhances efficiency and helps build customer trust.
Actions to close the gap
It’s time for banks to be proactive with customers and start playing offense instead of being stuck on defense. Here are four things they can consider to help build customer trust:
- Protect consumers from themselves via education and communication: Banks are increasingly falling short in communicating effectively with customers about cybersecurity. Of 1,400 customers surveyed by Accenture in October, 85% said clear communication about cybersecurity practices is essential, yet only 28% rate their bank highly on it. In response, banks should regularly communicate security measures, potential risks and any incidents that may affect customer data. They can also share strategies – via websites, customer portals or mobile apps – to mitigate the latest threat tactics and scams, including short training videos on deepfakes.
- Embed cyber and operational security at the core of customer experiences: Banks should increasingly build speedbumps and checkpoints into the payments process to help prevent fraud. Case in point: I recently went to pay a new contact using my bank’s Zelle account. Because it was my first time engaging with this person on Zelle, my bank sent a text to confirm that I had meant to initiate the payment and that the contact’s number was correct. In a world of speed, sometimes it’s better to slow down a bit and think (a minimal sketch of this kind of first-time-payee check appears after this list). Banks will have to carefully explain to customers that these guardrails are there for their own protection.
- Educate bank staff and every part of the bank ecosystem to detect and counter advanced threats: Banks can enhance workforce competencies through security training and education, and this should extend to banks’ third parties as well, since they are prime targets for AI-driven threats. Over 70% of data breaches at banks are caused by third parties.
- Incorporate cybersecurity more into broader reinvention efforts: Many bank security executives feel overwhelmed by what banks are doing on the technology side, with 83% admitting that they struggle to align their bank’s cybersecurity measures with the pace of new technology adoption. As more banks embrace gen AI internally to generate code, or transform how they operate via agentic AI, they need robust processes in place to verify that the code is secure or that the agents are operating as intended. This might mean running additional checks or using specialized tools to detect and prevent malware (a sketch of such a pipeline gate also follows this list). It will be especially critical as banks look to use generative AI to interact directly with customers.
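To make the second point concrete, here is a minimal sketch of a first-time-payee “speedbump” in Python. It is an illustration under assumptions, not any bank’s or Zelle’s actual implementation: the Payment class, the known_payees lookup and the send_confirmation_text helper are all hypothetical names.

```python
# Hypothetical sketch of a first-time-payee "speedbump" check.
# Payment, known_payees and send_confirmation_text are illustrative names,
# not any bank's or Zelle's actual API.
from dataclasses import dataclass


@dataclass
class Payment:
    customer_id: str
    payee_number: str  # phone number or email the customer entered
    amount: float


def requires_confirmation(payment: Payment, known_payees: dict[str, set[str]]) -> bool:
    """Return True if the customer has never paid this payee before."""
    return payment.payee_number not in known_payees.get(payment.customer_id, set())


def send_confirmation_text(customer_id: str, message: str) -> None:
    # Placeholder for the bank's SMS or push-notification channel.
    print(f"[SMS to customer {customer_id}] {message}")


def process_payment(payment: Payment, known_payees: dict[str, set[str]]) -> str:
    if requires_confirmation(payment, known_payees):
        # Pause the transfer and ask the customer, out of band, to confirm
        # both the intent and the payee's number before releasing funds.
        send_confirmation_text(
            payment.customer_id,
            f"You are paying {payment.payee_number} for the first time. "
            f"Reply YES to confirm ${payment.amount:.2f}.",
        )
        return "held_pending_confirmation"
    return "released"
```

The point is not this particular rule but the pattern: a small, explainable pause at the moment of highest risk, applied before the money moves.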
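Similarly, for the fourth point, the sketch below shows one way a bank might gate AI-generated code behind an automated security scan before it is merged or deployed. The security-scanner command and the generated/ directory are assumptions for illustration; in practice a bank would substitute its own approved static-analysis or malware-detection tooling.

```python
# Hypothetical CI gate for AI-generated code. The "security-scanner" command
# and the "generated/" directory are illustrative assumptions, not a product.
import subprocess
import sys


def scan_generated_code(path: str = "generated/") -> bool:
    """Run a security scan over AI-generated code; return True only if it passes."""
    try:
        # Substitute the organization's approved scanning tool here; most
        # return a nonzero exit code when findings are present.
        result = subprocess.run(
            ["security-scanner", "--recursive", path],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        # No scanner installed in this environment; treat as a failed gate.
        print("security-scanner not found; refusing to pass generated code.", file=sys.stderr)
        return False
    if result.returncode != 0:
        print("Security findings in generated code:", file=sys.stderr)
        print(result.stdout, file=sys.stderr)
        return False
    return True


if __name__ == "__main__":
    # Fail the pipeline so the generated code cannot ship until it is reviewed.
    sys.exit(0 if scan_generated_code() else 1)
```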
The stakes are clear
Consumers’ trust in their banks is strong but fragile. A single data breach can erode years of trust that banks have carefully built. Our research shows that 62% of customers lose confidence in their bank after a breach, and 43% stop engaging altogether.
While there’s no magic elixir to solve this problem, by prioritizing these areas banks can navigate the increasingly complex AI threat landscape and help maintain customer trust. As I’ve mentioned previously, gen AI has the potential to transform how banking gets done. As banks leverage the technology, they have to keep customer trust, cybersecurity and data protection – across their entire supply chain – top of mind.
And for all the threats that AI could enable, it’s important to remember that it can also be a powerful tool for enhancing cybersecurity – through secure code generation, threat intelligence, and real-time threat monitoring and detection. Banks that take a thoughtful approach to adopting AI will be better positioned to manage the risks and gain customer trust.
By making cybersecurity a cornerstone of their strategy, banks can drive both consumer trust and business growth. Building trust through cybersecurity isn’t optional – it’s essential.