GenAI can enhance security awareness training
One of the biggest concerns over generative AI is its ability to manipulate us, which makes it ideal for orchestrating social engineering attacks. From mining someone’s digital footprint, to crafting highly convincing spear-phishing emails, to cloning voices for vishing calls and producing deepfake videos, GenAI significantly raises the bar for creating convincing attacks.
These types of attacks are no longer purely theoretical. In February, reports emerged of a finance worker in Hong Kong who was tricked into transferring $25m by a deepfake of the CFO on a video conference call. Initially skeptical after receiving a phishing email requesting he make a secret transaction, he was persuaded it was legitimate when he attended a group call and recognized the colleagues he saw on screen. In fact, they weren’t his colleagues at all: everyone on the call was a deepfake of a real person.
Such cases reveal that we can no longer trust what we see and hear. These attacks subvert the standard advice for suspected business email compromise (BEC) and CFO fraud, which is to speak to the person concerned to check that the request is genuine. The tell-tale giveaways of a badly worded phishing email will also soon be a thing of the past, particularly as such emails are now often sent from legitimate domains.
In effect, these attacks render much of our existing security awareness training obsolete. So how can we educate users to defend the enterprise in the age of GenAI?
Considerations for GenAI awareness
Firstly, we need to get back to basics. Social engineering is fundamentally about psychology: it puts the victim in a situation where they feel under pressure to make a decision. Therefore, any form of communication (email, call, chat or video) that imparts a sense of urgency and makes an unusual request should be flagged rather than acted on immediately, and subjected to a rigorous verification process.
Much like the concept of zero trust, the approach should be “never trust, always verify”, and the education process should outline the steps to take following an unusual request. For instance, in relation to CFO fraud, the accounts department should have a set limit for payments, and exceeding it should trigger a verification process. This might see staff use a token-based system or authenticator app to verify that the request is legitimate.
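To make that concrete, here is a minimal sketch (in Python, using the pyotp library) of what a threshold-triggered verification step could look like: payments above a set limit are held until the approver supplies a time-based one-time code from an authenticator app. The limit, helper names, and workflow are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of threshold-triggered payment verification (hypothetical).
# Requires: pip install pyotp
import pyotp

PAYMENT_LIMIT = 10_000  # illustrative per-transaction limit

# In a real deployment, each approver would enroll a secret in their
# authenticator app during onboarding; generated here for demonstration only.
APPROVER_SECRET = pyotp.random_base32()

def request_payment(amount: float, entered_code: str | None = None) -> bool:
    """Approve a payment, demanding an out-of-band TOTP code above the limit."""
    if amount <= PAYMENT_LIMIT:
        return True  # routine payment, normal processing applies
    if entered_code is None:
        print("Payment exceeds limit - authenticator code required.")
        return False
    # Verify the code against the approver's enrolled secret.
    return pyotp.TOTP(APPROVER_SECRET).verify(entered_code)

# A large transfer is held until a valid authenticator code is supplied.
if not request_payment(25_000_000):
    code = pyotp.TOTP(APPROVER_SECRET).now()  # in reality, read from the app
    assert request_payment(25_000_000, code)
```

The important design point is that the second factor travels over a different channel than the request itself, so a convincing email or deepfake call alone is not enough to move the money.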
Secondly, users need to be aware of oversharing. Is there a company policy to prevent information being divulged over the phone? Restrictions on the posting of company photos that an attacker could exploit? Could social media posts be used to guess passwords or an individual’s security questions? Such steps can reduce the likelihood of digital details being mined.
But bear in mind that those with a higher profile (e.g., business leaders) are at greater risk of having their image stolen and voice cloned because they have to maintain a presence in the public domain. Running simulations that show senior leader deepfakes could help drive the message home.
And this brings us on to our third point, which is that we should now be looking to harness GenAI for security awareness training ourselves. Simulated phishing exercises, for example, should emulate AI-crafted attacks by being personalized with data mined from individual and company sources. In fact, GenAI promises to transform security training for the better.
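As a sketch of how such personalization might work, the snippet below uses the OpenAI Python SDK to draft a simulated phishing email for an authorized training exercise from a few mined data points. The target fields, prompt wording, and model choice are all illustrative assumptions rather than a prescribed design.

```python
# Illustrative sketch: generating a personalized phishing simulation for an
# authorized awareness exercise. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical data points a simulation platform might mine, mirroring
# what a real attacker could gather from public sources.
target = {
    "name": "Alex Reed",
    "role": "accounts payable clerk",
    "recent_post": "Excited to attend the FinOps Summit next week!",
    "vendor": "Acme Office Supplies",
}

prompt = (
    "For an internal, authorized security awareness simulation, draft a short "
    f"phishing-style email to {target['name']}, an {target['role']}. "
    f"Reference their public post ('{target['recent_post']}') and a pending "
    f"invoice from {target['vendor']}, and include one urgency cue that "
    "trainees should learn to spot."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```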
How security awareness training will change
Today, generic computer-based training is rolled out periodically to meet regulatory and compliance demands, but it does little to reduce risk or promote best practice.
Providing training that reflects the business and is relevant to its employees is far more effective, but this has been challenging to produce in the past, requiring the development of training modules tailored to the risk profile of the business. With GenAI, an even deeper level of customization can be achieved based on the user, their role within the organization and their unique behavior patterns.
In fact, Gartner predicts that by 2026, organizations that combine GenAI with their security behavior and culture programs will experience 40% fewer employee-driven cybersecurity incidents. The analyst house states that traditional approaches to security training will be replaced by systems that measure behavioral change, facilitated by GenAI. As a result, cybersecurity control frameworks will shift from the compliance-based training we see today to tangible behavior-based measurement in a bid to reduce human risk.
In practical terms, this means we can expect every user to have their own tailor-made training that targets their particular susceptibilities and reduces the likelihood of erroneous decision making.
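One way to picture that behavior-based measurement is a per-user risk score, updated from simulation outcomes, that decides what training each person receives next. In the toy sketch below, all the event weights, thresholds, and module names are invented for illustration.

```python
# Toy sketch of behavior-based human-risk scoring (all weights hypothetical).
from dataclasses import dataclass, field

@dataclass
class UserRisk:
    name: str
    events: list[tuple[str, float]] = field(default_factory=list)

    # Illustrative outcome weights: reporting a simulated phish lowers risk;
    # clicking a link or submitting credentials raises it.
    WEIGHTS = {"reported": -2.0, "ignored": 0.0, "clicked": 3.0, "credentials": 5.0}

    def record(self, outcome: str) -> None:
        self.events.append((outcome, self.WEIGHTS[outcome]))

    @property
    def score(self) -> float:
        return max(0.0, sum(weight for _, weight in self.events))

    def next_module(self) -> str:
        # Hypothetical mapping from risk score to tailored training content.
        if self.score >= 5:
            return "one-to-one deepfake and vishing workshop"
        if self.score >= 3:
            return "personalized spear-phishing refresher"
        return "routine awareness nudges"

user = UserRisk("alex.reed")
user.record("clicked")   # fell for one simulation...
user.record("reported")  # ...but reported the next
print(user.score, "->", user.next_module())  # 1.0 -> routine awareness nudges
```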
And looking to the future, GenAI will become more of a guiding force in day-to-day decision making, nudging users to make the right choices and questioning potentially risky actions. If we look at Microsoft Copilot, for instance, it’s already being used to make recommendations and suggest next steps in a variety of contexts.
Education on internal use
Of course, introducing GenAI into the mix as a workplace tool also means we need to educate the workforce on its safe use. A governance framework should be put in place, based on standards such as ISO/IEC 22989 and ISO/IEC 42001, NIST’s AI Risk Management Framework, or similar.
The framework outlines controls that should then be translated into a GenAI policy covering responsible use and reporting procedures, one that dovetails with existing security and data protection policies. It’s key to ensure that users understand where AI is used and how they can use it safely.
For example, users may not be aware that GenAI underpins the Recall feature of Windows 11, so any copying and pasting of sensitive data could see that data resurface elsewhere in the future.
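A policy point like that can be reinforced with a simple technical guardrail. The sketch below shows a naive pattern-based check that could run before text is pasted into a GenAI tool; the patterns and wiring are purely illustrative, and real data loss prevention products use far richer detection.

```python
# Naive pre-paste check for obviously sensitive strings (illustrative only).
import re

SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API key (hypothetical format)": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def check_before_paste(text: str) -> list[str]:
    """Return the kinds of sensitive data detected; empty list means none."""
    return [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

hits = check_before_paste("Card 4111 1111 1111 1111, contact alex@corp.example")
if hits:
    print("Blocked paste into GenAI tool; detected:", ", ".join(hits))
```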
Conclusion
Much has been made of the danger of GenAI being used to create malicious software, but the real threat lies in its potential for data leakage and for conning humans. It’s true that it is being used to create code: researchers recently discovered that a PowerShell script used to drop an infostealer in a phishing campaign had been written with ChatGPT. However, the code was no more sophisticated than a human could have produced, and such malware can be easily spotted using automated detection.
Over time, GenAI is expected to let threat actors scale their capabilities, with attacks becoming more numerous and evolving faster. But right now it’s GenAI’s ability to warp reality that poses the major threat, and the best defense against that is effective security training.
We need to keep our wits about us more than ever, question more, and take nothing at face value.