
AI: Two Small Letters – Many Big Advantages

By Yann Fareau | March 10, 2022

“Artificial intelligence is no match for human stupidity,” goes a wry quip often attributed to Albert Einstein. Today, however, AI has evolved to the point where it can deliver critical, even indispensable, advantages in cybersecurity. Nevertheless, even brilliant security managers do not always see how or why this is the case.

State of Affairs: AI and Cybersecurity

This post commences my modest campaign to explain the background of this situation.


The term “AI” (some say “machine learning”) is popping up everywhere today in elaborate marketing presentations from cybersecurity vendors; I would say it appears in almost all promotional materials. Yet even though AI is in vogue right now, the cybersecurity community seems to have acquired only scant knowledge and few skills on the subject. The average decision maker may understand some of the broad principles, such as supervised versus unsupervised learning, and can probably even distinguish deep learning from shallow learning. But how to apply AI tactically in cybersecurity remains largely unknown – or misunderstood.
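For readers new to the terminology, the difference between the two modes can be sketched in a few lines of Python. This is a deliberately toy illustration with made-up numbers (connections per minute as the single feature), not how any vendor's product actually works:

```python
# Toy contrast of the two learning modes mentioned above.
# All numbers are invented; real systems use far richer features.
from statistics import median

# Supervised mode: we have labeled examples to learn from.
labeled = [(5, "benign"), (7, "benign"), (90, "malicious"), (95, "malicious")]

def learn_threshold(data):
    """Place a decision boundary midway between the two labeled classes."""
    benign = max(x for x, y in data if y == "benign")
    malicious = min(x for x, y in data if y == "malicious")
    return (benign + malicious) / 2

# Unsupervised mode: no labels; flag points far from the bulk of the data.
def find_outliers(data, k=2.0):
    m = median(data)
    spread = median(abs(x - m) for x in data) or 1
    return [x for x in data if abs(x - m) > k * spread]

print(learn_threshold(labeled))         # 48.5
print(find_outliers([5, 7, 6, 8, 95]))  # [95]
```

The supervised model needs someone to have labeled traffic as benign or malicious beforehand; the unsupervised one needs no labels but can only say “this looks unusual,” not what it is.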


Fortunately, these deficits are unlikely to be a permanent state of affairs. Recall that only a few years ago similar comprehension gaps existed for the cloud, which has since entered the mainstream. And we can do much to remedy these deficits and hasten the adoption of AI.


Vectra AI has placed AI at the heart of its strategy.[1] The company creates AI-augmented solutions that deduce how attackers act to achieve their ends: Will they deploy ransomware? Exfiltrate data? In this context, AI becomes a means of detecting anomalies indiscernible by human analysis. It sorts through anomalies and classifies them using decision trees or clustering algorithms.[2] These are then mapped in sequence to reveal likely attack paths, prioritizing alerts to draw attention to the most critical threats.
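To make the idea concrete, here is a heavily simplified Python sketch, not Vectra AI's actual pipeline, of classifying anomaly events with decision-tree-style rules and ranking alerts by the severity of the inferred behavior. Every feature name and threshold below is invented for illustration:

```python
# Hypothetical sketch: classify anomalies with decision-tree-style rules,
# then prioritize alerts by the severity of the inferred attack behavior.
from dataclasses import dataclass

@dataclass
class Anomaly:
    host: str
    bytes_out: int         # invented feature: outbound data volume
    new_admin_logins: int  # invented feature: unusual privilege activity
    encrypted_files: int   # invented feature: rapid file encryption

SEVERITY = {"ransomware": 3, "exfiltration": 2, "recon": 1}

def classify(a: Anomaly) -> str:
    # Each branch mirrors one path through a small decision tree.
    if a.encrypted_files > 100:
        return "ransomware"
    if a.bytes_out > 10_000_000:
        return "exfiltration"
    return "recon"

def prioritize(anomalies):
    # Most severe inferred behavior first, so analysts see it first.
    return sorted(anomalies, key=lambda a: SEVERITY[classify(a)], reverse=True)

alerts = prioritize([
    Anomaly("web-01", bytes_out=50_000_000, new_admin_logins=0, encrypted_files=0),
    Anomaly("db-02", bytes_out=1_000, new_admin_logins=2, encrypted_files=500),
    Anomaly("dev-03", bytes_out=5_000, new_admin_logins=0, encrypted_files=0),
])
print([a.host for a in alerts])  # ['db-02', 'web-01', 'dev-03']
```

Real products replace these hand-written rules with learned models, but the shape of the output is the same: a ranked queue in which the host showing ransomware-like behavior outranks the noisy-but-routine ones.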


Vectra AI defines the learning models most relevant to achieving this goal. This sector will continue to grow in importance as the state of the art of cyberattacks evolves.


Four Things to Know About AI

So, what do we need to know about AI? Let me share four brief suggestions:


First, it is not impossible to understand AI’s power. There are resources to draw on that are far more valuable than superficial marketing presentations. I myself have benefited from recent popular yet substantial discussions of the cybersecurity issues machine learning can address,[3] of principles governing the use of AI,[4] and from white papers that explain adaptive machine-learning models.[5] Of course, much of the content on this topic is challenging, delving deep into the complex mathematics that drive machine-learning models toward specific objectives.[6] But there is a growing body of more accessible material for nonspecialists to engage with – the best of it from Vectra AI.[7]


Second, organizations that truly lead in this field have no need to distract their customers with decorative gimmicks. During my consulting days I learned that when the content of a presentation was thin or of limited persuasive power, it was tempting to compensate with a more exciting format. In the AI sector, wherever you come across an extraordinary UI with stunning graphical effects, ask yourself: Is the aim perhaps to mask a paucity of real insight or value? Customers want efficiency and profitability, not a beauty contest.


Third, AI is not intelligent out of the box; it must be trained well. A fundamental principle of any AI model, particularly a supervised one, is to create value – which requires training on a dataset free of bias and sufficiently representative of real-world threats. Too many solutions available today gloss over this fact. Unsupervised models seek to learn “normal” behavior across their domains and then spot deviations, which is not without value; but doing so without curated training has clear drawbacks, starting with the difficulty of introducing such a solution into an already compromised environment.[8]
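That last drawback is easy to demonstrate. The following sketch uses toy numbers and a naive mean-and-deviation detector rather than any production model, but the failure mode it shows is general: an unsupervised baseline learned in an already compromised environment absorbs the attacker’s traffic into its notion of “normal”:

```python
# A naive unsupervised detector: learn "normal" from observed traffic,
# then flag anything more than k standard deviations from the mean.
from statistics import mean, stdev

def fit_baseline(samples):
    return mean(samples), stdev(samples)

def is_anomalous(x, baseline, k=3.0):
    mu, sigma = baseline
    return abs(x - mu) > k * sigma

clean = [100, 110, 95, 105, 102, 98, 104, 99]  # made-up normal byte counts
attack = 5000                                   # an exfiltration-sized transfer

# Trained on clean traffic, the attack stands out clearly.
clean_baseline = fit_baseline(clean)
print(is_anomalous(attack, clean_baseline))  # True

# Trained while the attacker is already active, the attack blends in:
# the inflated mean and variance swallow it.
compromised = clean + [5000, 4800, 5200]
print(is_anomalous(attack, fit_baseline(compromised)))  # False
```

Garbage in, garbage out: the model faithfully learns whatever it is shown, including the attacker.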


When I was young and earning my risk-management certification, we learned an adage for calculating risk from actuarial data: “Garbage in, garbage out.” The same axiom applies to AI today.


Fourth, in cybersecurity, AI cannot replace experienced human analysts; it can only complement them. In any AI-augmented security environment, human investigative power and judgment remain essential, though analysts observe a different terrain, focused on searching for indicators of compromise (IOCs). Ideally, adept security researchers analyze attack patterns and collaborate with data scientists to build AI-based security solutions. The technology they produce becomes ever better at detecting and reacting to threats, but human input remains indispensable.


This article has deliberately avoided going into too much detail. My intent was only to awaken interest and provide a starting point for further inquiries. My Vectra AI colleagues and I regularly publish articles exploring cybersecurity applications for machine-learning models, furnishing detailed explanations and descriptions of their objectives in terms of detecting, prioritizing, and correlating threats.


Let’s close the AI comprehension gap together!


References


[1] https://content.vectra.ai/rs/748-MCE-447/images/Ebook_NewThreatDetectionModel.pdf

[2] https://content.vectra.ai/rs/748-MCE-447/images/WhitePaper_DataScienceBehindCognito.pdf

[3] https://www.wavestone.com/fr/insight/intelligence-artificielle-cybersecurite/

[4] https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6

[5] https://content.vectra.ai/rs/748-MCE-447/images/WhitePaper_AugmentSOCwithAI.pdf

[6] https://link.springer.com/article/10.1007/s42979-021-00557-0

[7] https://content.vectra.ai/rs/748-MCE-447/images/WhitePaper_DetectMaliciousCovertCommunications.pdf

[8] https://hbr.org/2021/01/when-machine-learning-goes-off-the-rails