AI has a PR problem – two, actually. Vectra and I are here to address both of them.
First problem: The term AI – artificial intelligence – is overused in today's marketplace, to the point of real carelessness, particularly in cybersecurity. AI is invoked so often and so casually that the term is losing its meaning. AI is in vogue, so you'll see references to "AI-enhanced" or "AI-powered" security solutions that are, strictly speaking, not AI implementations at all. Often, they're merely rules-based systems: "if this, do that" workflows.
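To make that contrast concrete, here is a minimal sketch of the kind of "if this, do that" logic such systems run on. The thresholds, field names, and the `flag_login_event` function are all hypothetical, invented for illustration:

```python
# A hypothetical rules-based check: a static "if this, do that" workflow.
# It fires on fixed thresholds, with no awareness of context, precedent,
# or related events elsewhere in the environment.

def flag_login_event(event: dict) -> bool:
    """Return True if the event trips any hard-coded rule."""
    # Rule 1: too many failed logins from one source
    if event.get("failed_attempts", 0) > 5:
        return True
    # Rule 2: login outside business hours (8:00-18:00)
    if not 8 <= event.get("hour", 12) <= 18:
        return True
    return False

# A 3 a.m. login by a legitimate night-shift admin gets flagged, while a
# slow, low-volume credential-stuffing attack during business hours sails
# through -- the rule has no notion of who is behaving normally.
print(flag_login_event({"failed_attempts": 2, "hour": 3}))   # True
print(flag_login_event({"failed_attempts": 4, "hour": 10}))  # False
```

However elaborate the rule set grows, it remains a lookup of predefined conditions; nothing in it learns or generalizes.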
Second problem: Much of the public does not understand AI, making it fodder for sensational stories in the mainstream press. A Google engineer, Blake Lemoine, for example, recently got worldwide attention when he claimed an in-house chatbot-generating system had become sentient. It had asserted "personhood," he claimed, and had volunteered fears about its own mortality.
Google quickly debunked Lemoine's assertion. But for nonexperts, this tale raised visions from sci-fi of rogue computers rebelling, starting wars, or locking astronauts out of their spaceships. That kind of story leaves an impression.
In cybersecurity, real-world AI is certainly not provisioned to go rogue. But it does provide far more value than a simple rules-based digital sentinel, which reflexively flags anomalies without context – without awareness of precedent or of parallel events in progress.
Today, we need generalizable threat-detection capabilities that go beyond a narrow focus on known attack tools to deduce an attacker's objectives. We need solutions that scale without performance loss and can anticipate yet-unknown forms of cyberaggression. AI is an indispensable defensive tool in today's threat toolbox and a pillar of the Vectra solution portfolio.
Vectra uses machine learning and advanced analytics to detect threats, not just anomalies. Our products recognize the attacker's behaviors and patterns within the historical context of the local environment – and in hundreds more clouds and network domains we protect. We classify threats by their severity, elevating real attacks and supplying tools and data to support speedy remediation.
So, if all AI is not created equal – and some offerings are less than meets the eye – how do you tell the difference? Ask these four questions. The answers can reveal how substantive an AI proposition actually is.
1. Does their AI proposition focus on what is weird, or on the security problem at hand? A product that performs general anomaly spotting doesn't pass that test: Not every anomaly is a threat, and not every real threat advertises itself as anomalous. The Vectra Platform focuses on highlighting the actions of attackers and minimizing background noise, enabling more actionable alerts.
2. How and where is a provider putting AI to work? Some use AI to address peripheral problems but remain reliant on legacy technology when it comes to core execution challenges. The more central the role AI plays, the more value it can provide. Vectra deploys AI at the core of its Platform – to find the attacker's behaviors, to prioritize multistage attacks, and to manage the system.
3. What about a complementary human factor? AI is only as good as its creators. It may reflect human blind spots, biases, or other limitations. High-performing teams with superior vision and skills are essential to AI design. The data science and security research specialists at Vectra contribute diverse skill sets to AI, from neuroscience to physics, from physical-world incident response to reverse engineering. Their deep knowledge of their respective domains and combined experience differentiate Vectra AI from the rest of the pack.
The human factor also matters when AI is deployed for customers. Vectra analyst teams support and guide SOCs around the clock. As my Vectra colleague Christian Borst, Field CTO for EMEA, says: "We should always consider AI only an aid and a guide – the educated decision for many areas still lies with human beings."
4. Is AI presented as a cure-all? As a general-purpose antibiotic for all cyberspace infections? Real AI doesn't function like that. The shifting, heaving topography of network domains – exemplified by new hybrid cloud strategies and the rise of SaaS and PaaS vendors – regularly presents unique challenges. Be wary of providers who may be overpromising their wares.
Furthermore, the shape of future threats is unknown. The best defense against the unknown lies in the right amount of experience, agility, and readiness to iterate. In that spirit, Vectra applies diverse AI techniques to create cybersecurity solutions and has an ongoing commitment to research and innovation. No provider can claim to have all the answers.
Identifying real AI and optimizing it for cyberdefense purposes is a more nuanced, complex task than some wish to believe. It does not replace, let alone reject, human ingenuity and judgment. But today, it is the best tool for identifying new, persistent threat patterns and distinguishing them from benign anomalies. Meanwhile, the benchmark for what constitutes real AI continues to advance, and adversaries continue to blaze new trails in the malevolent deployment of new technologies – with the goal still to evade detection. That is why we are duty-bound to invest in staying ahead.
One thing I can promise you, though: AI is not going to lock you out of your spaceship. Let's do all we can to help people shake off misconceptions and see the real value effective AI can bring. At Vectra, we deploy it in smart, discerning ways to complement human effort and to help us achieve the safer, fairer world we deserve.
Blake Lemoine, "What is LaMDA and What Does it Want?" Medium.com, 11 June 2022. https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489