AI Takes Cyberattacks, and Security, to the Next Level

By Keith Delahunty Senior Product Manager
April 18th 2024 | 6 minute read

Artificial intelligence is the Janus of cybersecurity: a dual-faced bringer of war and peace, of beginnings and endings, the opening and shutting of doors, with the power to both help and harm.

PwC’s latest CEO survey captured the dilemma. While it found chief executives are increasingly looking to the transformative benefits of generative AI, almost two-thirds expressed concern that it will increase cybersecurity risk over the next 12 months.

The threats are multifaceted and developing fast, noted cybersecurity expert Arnaud Wiehe, author of The Book on Cybersecurity and Emerging Tech, Emerging Threats. He pointed to the recent Joe Biden robocall, which used artificial intelligence to mimic the President’s voice and discourage New Hampshire voters from turning out for the state’s January primary election, in what is expected to be a year of unprecedented election misinformation and interference. In North Korea, hackers are adopting generative AI to hone phishing and social engineering attacks aimed at stealing cutting-edge technologies and funds for the country’s nuclear weapons program, and AI services such as ChatGPT could help them develop more sophisticated forms of malware.

AI’s next-level cyber threats

In financial services, “AI will drive an increase in the volume and sophistication of fraud and scams,” according to research by PwC and Stop Scams UK. It highlighted six key ways AI tools could be used:

  • Generating text and image content – GenAI can create tailored emails, instant messages and image content as bait for phishing scams.
  • Chatbots – enabling fraud at scale by conversing with potential victims and manipulating them into making payments.
  • Deepfake video – used as clickbait to drive traffic to websites that capture card payment details, and to bypass institutions’ ID controls.
  • Voice cloning – for example, an employee receives a call from someone claiming to be the CEO ordering them to make a payment to another account. Voice cloning is expected to be one of the main ways criminals will employ AI to drive sophisticated scam attempts.
  • Sophisticated targeting of victims – through its ability to review large datasets to identify potential victims and tailor scam content to their vulnerabilities, AI tools can automate these types of ‘spearphishing’ attacks at scale.
  • AI-enabled pressure testing – AI could be used to flex mass attack patterns against bank systems and develop more subtle strategies to identify vulnerabilities.

UK not-for-profit fraud prevention service Cifas’ director of intelligence Stephen Dalton has warned of “seeing an increased use of deepfake images, videos and audio being used during application processes.” AI-powered translation tools also allow deepfakes to replicate voices and accents in more languages, letting criminals target victims internationally and raising the threat of a flood of scams across Europe and beyond. Good-quality deepfakes, which are increasingly difficult to identify, reportedly cost about $150 on the dark web.

Yet despite the fast-growing threats, many investment organizations may be underestimating the cybersecurity risks posed by AI.

The 2024 Cybersecurity Benchmarking Survey, a joint project between the National Society of Compliance Professionals and ACA Group, found nearly 40% of compliance professionals from asset management, investment adviser and private markets firms have yet to evaluate AI as a cybersecurity risk. A further 27% don’t consider AI relevant to cybersecurity. Respondents are least concerned about the cyber threat posed by deepfakes, with just 5% citing them as a concern.

Fighting AI fire with fire

AI may be transforming the cyberattack landscape, but it is a powerful weapon in the cybersecurity fightback too.

Mastercard has developed an AI-powered screening tool that helps lenders detect fraud before money leaves customers’ accounts. Lloyds Banking Group is leveraging AI’s pattern-recognition capabilities to build detailed profiles of how customers act so it can freeze payments when unusual activity occurs. Google and Alphabet CEO Sundar Pichai has noted that some of Google’s AI-powered security tools are up to 70% better at detecting malicious scripts and 300% more effective at identifying files that exploit vulnerabilities. Software vendors are increasingly using AI capabilities to enhance endpoint protection and antivirus tooling, said Wiehe.

AI’s ability to scan massive amounts of data at high speed and uncover patterns is central, lending itself to comprehensive threat modelling. AI can also monitor systems and networks 24/7 to provide real-time threat detection, and analyze logs and events to identify vulnerability sources and respond to them – checks no organization would have the manpower to do manually. Automating security risk detection then frees up teams to focus on other strategic initiatives.
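As a minimal sketch of the pattern-recognition idea behind such log monitoring, the snippet below flags hosts whose event volume deviates sharply from the rest of the fleet using a median-absolute-deviation test (a robust, classical baseline rather than any vendor’s actual method); the host names and counts are invented for illustration.

```python
from statistics import median

def flag_anomalies(event_counts, threshold=3.5):
    """Flag hosts whose event volume deviates sharply from the fleet baseline.

    Uses the modified z-score (median absolute deviation), which is robust
    to the very outliers it is trying to detect.
    """
    counts = list(event_counts.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)
    if mad == 0:
        mad = 1  # avoid division by zero when most counts are identical
    return [host for host, n in event_counts.items()
            if 0.6745 * abs(n - med) / mad > threshold]

# Hypothetical hourly failed-login counts per host
logins = {"web-01": 12, "web-02": 9, "web-03": 11, "db-01": 10, "vpn-01": 480}
print(flag_anomalies(logins))  # → ['vpn-01']
```

Real-world systems apply far richer models, but the principle is the same: establish a baseline from the mass of normal activity, then surface the deviations for investigation.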

Amid the technology’s potential, though, firms must ensure they don’t ignore the cybersecurity fundamentals, cautioned Wiehe. “Focusing on the basics will give you 80% of the benefits,” he said. “For example, who has admin access to your code? Do you have two-factor authentication on that? When did you last back up your code? When did you last test that you can recover from it? Do you have vulnerability and patch management protocols?”
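Wiehe’s basics lend themselves to a simple periodic audit. The sketch below, with invented check names and dates, fails any boolean control that is switched off and any dated control older than 90 days.

```python
from datetime import date, timedelta

# Hypothetical hygiene checklist modelled on Wiehe's questions:
# booleans record whether a control is in place, dates record when
# a recurring task was last performed.
CHECKS = {
    "admin_access_reviewed": date(2024, 4, 1),
    "two_factor_enabled": True,
    "last_code_backup": date(2024, 4, 15),
    "last_restore_test": date(2023, 10, 2),
    "patching_current": True,
}

MAX_AGE = timedelta(days=90)
TODAY = date(2024, 4, 18)

def audit(checks):
    """Return the checks that fail: booleans must be True, dates must be recent."""
    failures = []
    for name, value in checks.items():
        if isinstance(value, bool):
            if not value:
                failures.append(name)
        elif TODAY - value > MAX_AGE:
            failures.append(name)
    return failures

print(audit(CHECKS))  # → ['last_restore_test']
```

Untested backups are a classic gap: here the code is backed up regularly, but the last restore test is six months old and is the only item flagged.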

Sophisticated deepfake spoofing can be countered with simple money transfer approval processes, Wiehe added. These could include introducing keywords to corroborate executives’ identities, calling the person back on a verified number to confirm the instruction or putting delays in place before any transfer can occur.
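The controls Wiehe describes can be combined into a single release gate. The sketch below is a hypothetical illustration, not any institution’s actual workflow: a payment is released only after a shared-keyword check, a callback on a verified number, and a mandatory hold period have all been satisfied.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

HOLD_PERIOD = timedelta(hours=24)  # mandatory delay before any transfer

@dataclass
class PaymentRequest:
    amount: float
    requester: str
    received_at: datetime
    keyword_confirmed: bool = False   # shared keyword corroborated identity
    callback_confirmed: bool = False  # instruction re-confirmed on a verified number

def may_release(req: PaymentRequest, now: datetime) -> bool:
    """Release funds only if both checks pass and the hold period has elapsed."""
    return (req.keyword_confirmed
            and req.callback_confirmed
            and now - req.received_at >= HOLD_PERIOD)
```

The point of layering the checks is that a voice clone defeats none of them on its own: the caller must also know the keyword, answer on the verified number, and wait out the delay, which gives the real executive time to notice.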

Cybersecurity arms race

AI is the latest front in a perennial arms race with cybercriminals. Quantum computing – with the danger it poses to today’s cryptography models – may well be next. Ultimately, firms will need to keep pace with these developing technologies if they don’t want to get overrun. That’s hard. But the alternative is worse.

About Arnaud Wiehe

Arnaud Wiehe is an author, speaker, consultant, and thought leader in cybersecurity. He has worked in leadership and cybersecurity roles for major global companies, including as a CISO for multiple years. He holds several prestigious cybersecurity certifications, including:

  • Certified Information Systems Security Professional (CISSP)
  • Certified Cloud Security Professional (CCSP)
  • Certified Information Security Manager (CISM)
  • Certified Information Systems Auditor (CISA)
  • Certified Fraud Examiner (CFE)

Throughout his career, Arnaud has demonstrated a strong focus on cybersecurity best practices and keeping current with emerging trends, technologies, and innovation. He is a graduate of the Singularity University and a member of the Association of Professional Futurists. He is widely respected by his peers as an expert in cybersecurity and IT governance.

Arnaud is also an enthusiastic amateur musician and luthier. He plays the violin, viola, cello, and mandolin. He has made two violins, a viola, and numerous bows.

Keith Delahunty
Keith is responsible for all aspects related to Transfer Agency, driving product development, vision, strategy, and execution across Deep Pool applications. Keith holds a master’s degree in finance and has extensive experience working in Private Equity, Alternative and Retail asset classes.