Reinsurance News

GenAI can open doors for malicious cyber actors to exploit capabilities

27th November 2024 - Author: Jack Willard


Generative AI (GenAI) offers organisations many ways to revolutionise their respective industries by enhancing business functions; however, the technology may also open doors for malicious actors to exploit its capabilities for cyber attacks, according to a new report from Guy Carpenter, the reinsurance broking arm of Marsh McLennan, and cyber analytics firm CyberCube.

The joint report explores how GenAI exacerbates the risk of cyber attacks while also providing mechanisms to detect and fend off such attacks.

One major concern highlighted in the report is AI's potential to create more sophisticated polymorphic malware that continuously evolves to evade detection, potentially leading to longer-lasting and more damaging attacks.

“AI enhancements to attack vectors will increase the efficacy and efficiency of attacks in the pre-intrusion phases of the cyber kill chain. Threat actors will be able to attack a larger number of targets in a more cost-efficient manner, with an expected increase in success rate (through greater targeting of weaker organizations), resulting in a larger footprint for a given cyber threat campaign,” the report states.

Adding: “AI can be expected to enhance threat actor capability in target enumeration, lateral movement, privilege escalation and efforts to evade intrusion detection. These enhancements to post-intrusion phases will likely allow attackers to compromise more assets at a greater infection rate, resulting in more significant damage potential.”

Moreover, the two organisations also examined how historical cyber incidents, such as the 2018-2019 Ryuk ransomware attacks and the 2017 Equifax data breach, could have been even more devastating if enhanced by AI capabilities. For example, had AI been used in the Equifax breach, it could have helped attackers locate more valuable data and disguise their extraction activities, likely resulting in even greater data loss.

However, the report also noted that defenders typically hold notable advantages over attackers, including better access to data for training AI models and greater resources for developing defensive tools.

The report reads: “However, not all defenders will have the resources or inclination to avail themselves of this advanced technology and, as such, logic dictates that there should be some net increase in the frequency of successful attacks on these less-resourced or less-prepared organizations.”

Adding: “The most likely outcome is that larger, more resourced or more prepared firms have a better chance at reducing their (often outsized) exposure to cyber risk by deploying AI in defensive mechanisms, while smaller, less-resourced or less-prepared firms will likely have increased exposure to these novel attack trends and methods.”

Looking ahead, the insurance sector faces a number of new challenges in quantifying and managing AI-related risks. The report identified four key areas of concern: AI as a software supply-chain threat, AI presenting new attack surfaces, AI presenting a data privacy threat, and risks associated with AI in security roles.

“Recognizing the potential exposure accumulation risk arising from AI, it is important for the (re)insurance industry to look ahead and forge an analytical pathway to measure the risk, while embracing the positive side of AI,” the report reads.

Adding: “As AI technology becomes increasingly integrated into our lives, the (re)insurance industry has a unique opportunity to assist policyholders preparing for potential threats arising from AI.”