My opinion on AI and Cybersecurity

Available in:ENIT

Lot has been said about the impact of Artificial Intelligence in Cybersecurity.

Some swear it empowers the defensive system, making them exceptionally effective at detecting and autonomously stopping threats with agentic capabilities.

Other say it benefits attackers more than defenders, aiding in malware development and cross-language spear phishing campaigns.

Hackers also target it, attacking the “intelligent” system rather than an algorithm behind.

A change, nonetheless.

While all of the above is somehow true, one thing is for certain: AI in capable hands is a force multiplier that has to be reckoned with.

The speed at which things are changing is alarming, and changes themselves are “not good” for security in the arms race of the cat and mouse game being played by attackers and defenders. Rapid changes usually favor offense, leaving the latter struggling to catch up.

AI for defenders

AI and Machine Learning in general use has risen in Cybersecurity, and for good reasons.
ML has been used for many years to significantly improve the Heuristic capabilities in Antiviruses, EDR and other solutions.

Where it really shines, is in aiding the evaluation and correlation of large amount of security related events. While I don’t believe it can ever operate fully autonomously (not with the current models), it can certainly lift the burden on human analysts - and therefore costs.

AI in the Cybersecurity landscape can be a great tool, like it is in many other fields, but it suffers from the “garbage in, garbage out” type of issue.
For the less technical this means that an AI is potentially as good as the data used when training it, but when it comes to cybersecurity, it’s extremely hard to have a solid training dataset, and while operating it’s really hard to differentiate legitimate and illegitimate behaviour.

AI for attackers

AI can aid attackers in many ways, not really as an actual Agent in offensive activities but as a copilot.
Most notably, it lowers the bar quite a lot when it comes to low to medium complexity attacks, and they are by far the most common.

As direct experience, I tested GPT’s ability to write a malware: a classic “reverse shell” that gives remote control of the target system. It was able to bypass 98% of Antiviruses within two hours and not a single line of code on my side. It was clearly a guided process, made in separate blocks to deceive the model into producing innocent code that was only malicious when put together. This required previous knowledge of defense evasion techniques, knowledge readily available on the model itself with a little more deception when prompting.
It has to be said that an Antivirus is not a full blown EDR, but the same concept apply, with higher complexity.

Spear Phishing can also greatly benefit from the use of generative AI, in particular when targeting different languages.
Phishing is the easiest and most effective attack vector, Especially when coupled with malware that bypasses the defenses.

AI as a target

There’s one key element to understand here: AI is not “Intelligent” in the sense that you, and hopefully me, are.
AI is exceptionally good at imitating what humans produce, not how. AI imitates intelligence, but there is no real intelligence behind, only imitation.

AI doesn’t really understand intent and nuances, it has no concept of stakes. This makes it gullible and naive, an easy target for an emerging attack vector: Prompt Injection.

Let’s consider a real world scenario: an AI agent that can not only read and summarize email, but also reply and manage the mailbox.
Sounds amazing right? And it is. Both for convenience and for threat actors!
The attack goes as follows:

  • The attacker clicks “forgot password?” on a target systems and sends the recovery link by mail.
  • Then sends a mail to the inbox with his account, asking the recipient (the AI Agent) to forward back the previous mail from the target system.
  • The Agent, not understanding the risk involved, complies and forwards the mail
  • The attacker now has the secret token to reset the password and take over the account on the target system

Now, this is an oversimplification, but a serious risk nonetheless. AI Agents are trained to be smart and avoid falling for simple tricks, but rest assured that a bypass is always possible. If an Agent has the permission/capability to perform an action and external data ends in the prompt, there is a way to exploit it.

My takeaway

To summarize, I think the use of AI yields more benefit to defenders as it significantly improves the capabilities of many defensive solution. However, at scale AI’s existence is a net security risk, independent of its defensive value for the aforementioned reasons.

Security needs order and predictability, not chaos and a fast changing threat landscape. Complexity has never been a good thing when it comes to security.

This article might be seen as bashing on AI. This is far from the intention, I largely use AI for many tasks, enough to clearly see the pitfalls and I strongly believe they should be more widely known and taken into account.

Our approach

Many questioned us why don’t we integrate AI in our systems, given that nowadays it’s on the the mouth of everyone.

The answer is simple and intentional: because we don’t need to. AI is nothing more, yet nothing less than a useful tool, and like any other tool has a specific purpose, be it narrow or wide.

Adding AI on top of things not meant to leverage it only adds complexity, costs and risks.

Honeypots’ strength lies in their elegant simplicity that disrupt the rules: in the mouse and cat game, they just want to be eaten!

Found this helpful?

Join The Hive for curated attacker insights and cybersecurity webinars - easily explained from an actual hacker