The power of AI in cybersecurity
The widespread adoption of artificial intelligence (AI), particularly generative AI (GenAI), has transformed how organizations operate, and with it both the cyber threat landscape and cybersecurity itself.
AI as a powerful cybersecurity tool
As organizations handle increasing amounts of data daily, AI offers advanced capabilities that would be harder to achieve with traditional methods.
According to the “best practices” report recently published by Spain’s National Cryptology Centre (NCC), when applied to cybersecurity, AI can:
- Improve threat detection and response
- Use historical data to anticipate threats and vulnerabilities
- Reduce the risk of unauthorized access by authenticating individuals more accurately through advanced biometrics, user behavior analysis, and similar signals
- Identify phishing attempts (illustrated in the sketch below)
- Evaluate security configurations and policies to identify possible weaknesses
Besides helping security teams perform these tasks more accurately, AI also helps them work faster.
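To make one of these capabilities, phishing identification, more concrete, here is a minimal sketch of the kind of text-classification approach such systems build on. The tiny dataset, features, and scikit-learn pipeline below are illustrative assumptions, not taken from the NCC report; real deployments rely on far larger datasets and many more signals (headers, URLs, sender reputation, and so on).

```python
# Illustrative sketch (assumed data and model, not from the NCC report):
# a toy phishing-email classifier based on TF-IDF text features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made sample; labels: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked, verify your password at the link now",
    "Urgent: confirm your bank details to avoid suspension",
    "Meeting moved to 3pm, see the updated agenda attached",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Usage: score a new message (expected to be flagged as phishing
# with this toy training data).
print(model.predict(["Please verify your password immediately"]))
```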
AI cybersecurity risks
But cybercriminals are also gaining speed by harnessing the power of AI: it allows them to quickly adapt their attacks to new security measures.
According to the NCC, the use of AI in cybersecurity comes with the following challenges and limitations:
- Adversarial attacks against AI models – Attacks designed to deceive or confuse machine learning models, forcing AI-based systems into erroneous or malicious decisions (see the sketch after this list)
- Overdependence on automated solutions – Because of limited interpretability, automation failures, a false sense of security, and similar issues, AI systems should be used in tandem with traditional methods and techniques, not instead of them
- False positives and false negatives – Misclassifications can lead to unnecessary disruptions (false positives) or undetected security breaches (false negatives)
- Privacy and ethics – There are concerns about how personal data is collected, stored and used
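The adversarial-attack risk listed above can be illustrated with a small example. The sketch below is an assumption for demonstration only (the weights, sample values, and step size are made up, and the NCC report contains no code): an FGSM-style perturbation nudges a sample that a toy linear detector scores as malicious across the decision boundary so it evades detection.

```python
# Illustrative sketch (hypothetical values, not from the NCC report):
# an FGSM-style evasion attack on a toy linear "malicious activity" detector.
import numpy as np

# Assumed, already-trained linear model: score > 0 means "malicious".
w = np.array([1.5, -2.0, 0.7])  # hypothetical model weights
b = 0.1                          # hypothetical bias term

def score(x):
    return float(x @ w + b)

# A sample the detector correctly flags as malicious.
x = np.array([0.9, 0.1, 0.8])
print("original score:", score(x))          # ~1.81 -> detected

# FGSM-style step: for a linear model the gradient of the score with
# respect to the input is simply w, so stepping against sign(w) lowers it.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
print("adversarial score:", score(x_adv))   # ~-0.29 -> evades detection
```

In practice, attackers probe or approximate the target model's gradients (or use surrogate models), but the principle is the same: small, deliberate input changes that flip the model's decision.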
Finally, GenAI, which security practitioners can use to enhance their system testing processes, can also be leveraged by cybercriminals to generate malware variants, deepfakes, fake websites, and convincing phishing emails.
Governments are stepping up
With AI technology continuously improving, cybercriminals will surely find new ways to compromise systems.
Last October, President Biden issued an Executive Order aimed at managing AI's risks and ensuring safe, secure, and trustworthy AI.
Soon after, the UK National Cyber Security Centre (NCSC) published guidelines to help developers and providers of AI-powered systems build and deploy them securely.