
AI and the future of cybersecurity work

Sohrob Kazerounian
November 7, 2018

In February 2014, journalist Martin Wolf wrote a piece for the Financial Times[1] titled Enslave the robots and free the poor. He began the piece with the following quote:

“In 1955, Walter Reuther, head of the US car workers’ union, told of a visit to a new automatically operated Ford plant. Pointing to all the robots, his host asked: How are you going to collect union dues from those guys? Mr. Reuther replied: And how are you going to get them to buy Fords?”

The fundamental tension Wolf points out between labor and automation has always existed. It not only makes up a large portion of the academic literature on political economy (think Karl Marx and Adam Smith), but has also ignited many of the world’s labor struggles.

The Luddites, for example, were a movement of textile workers and weavers who opposed the mechanization of factories. They did so not, as they are famously depicted, because they opposed machines in principle, but because mechanization enriched factory owners while the workers were not allowed to share in the profits made from the increased productivity that automation created.

Our current era of technological expansion (called The Second Machine Age by Brynjolfsson and McAfee[2], The Fourth Industrial Revolution by Schwab[3], and The Singularity is Near by Kurzweil[4]) has given rise to a variety of new tensions resulting from AI and machine learning.

These issues range from questions that strike at the core of our notions of law and ethics, like the inevitable trolley problem raised by self-driving cars (i.e., what should a self-driving car do if the only way to avoid an accident that would certainly kill its passengers is to swerve in a manner that would certainly kill an even larger number of pedestrians?), to far-reaching existential questions like: What is the role or purpose of humans in a world with super-intelligent AI?

Nevertheless, the most pressing question remains: How should we organize the economy, and, more broadly, society, in a world where large swathes of human labor are beginning to be automated away? Put more simply: How will people live if they can’t get jobs because they have been replaced by cost-effective, better-performing machines?

In just the last few years, numerous studies have been published and institutes inaugurated that are dedicated to studying which jobs of the future will remain in the hands of humans, and which will be doled out to the machines.

For example, the 2013 Oxford report on The Future of Employment[5] attempted to describe which categories of jobs would be safe from automation and which were at greatest risk of it.

The study went further still, attempting to attach probabilities to how “computerisable” various jobs are. The Oxford study, like many subsequent ones, generally argues that creative jobs, such as those of artists and musicians, are less likely to be automated.

Yet we live in a world where, just last week, the first AI-generated painting was sold at Christie’s for $432,500[6], and The Verge just published an article about how AI-generated music is changing the way hits are made[7].

While there are no clear-cut rules for which types of cognitive and manual-labor jobs will be replaced, what I can say is that the recent application of advanced AI and machine learning techniques to cybersecurity is highly unlikely to put security analysts out of work. Understanding why requires an appreciation of both the complexity of cybersecurity and the current state of AI.

Advanced attackers constantly develop novel methods to attack networks and computer systems. Moreover, those networks and the devices connected to them are themselves constantly evolving: new and updated software runs all the time, and new types of hardware are added as technology progresses.

The current state of AI, on the other hand, while advanced, functions much like the human perceptual system. AI methods can process and recognize patterns in streams of incoming data, much as the human eye processes incoming visual input and the ear processes incoming acoustic input.

However, these methods are not yet capable of representing the full breadth of knowledge an experienced system administrator has, either about the networks they administer or about the complex web of laws, corporate guidelines, and best practices that governs how best to respond to an attack.

The development of the calculator did not reduce the need for people to understand mathematics but instead greatly expanded the scope and possibilities of what could be computed – and consequently the need for people with mathematical understanding to explore those possibilities.

Similarly, AI is simply a tool that expands the scope and possibilities of detecting attacks that would otherwise go undetected. Don’t believe me? Try looking at a high-frequency, multi-dimensional time series of encrypted traffic and determining whether that traffic is malicious or benign.
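To make that concrete, here is a minimal sketch of the kind of statistical baselining an AI-based detector might apply to such a time series. Everything here is hypothetical: the feature names, the distributions, and the simple per-dimension z-score detector, which stands in for the far more sophisticated models used in practice:

```python
import random
import statistics

random.seed(42)

# Hypothetical features summarizing one time window of encrypted traffic:
# (bytes per second, mean packet size, new outbound connections per second).
# The distributions below are illustrative, not real traffic statistics.
def baseline_window():
    return (random.gauss(5000, 500),
            random.gauss(800, 50),
            random.gauss(3.0, 0.5))

history = [baseline_window() for _ in range(500)]

# Learn a per-dimension mean and standard deviation from the baseline.
dims = list(zip(*history))
means = [statistics.mean(d) for d in dims]
stdevs = [statistics.stdev(d) for d in dims]

def anomaly_score(window):
    # Largest per-dimension z-score: how many standard deviations this
    # window sits from the learned baseline.
    return max(abs(x - m) / s for x, m, s in zip(window, means, stdevs))

normal = baseline_window()
# A hypothetical exfiltration burst: traffic volume looks normal, but the
# rate of new outbound connections is wildly out of line with the baseline.
exfil = (5200.0, 790.0, 40.0)

print(anomaly_score(normal) < 6)   # True: within the learned baseline
print(anomaly_score(exfil) > 10)   # True: far outside the baseline
```

A real detector would model joint structure and temporal dynamics rather than independent per-dimension thresholds, but even this toy version illustrates the point: the pattern is invisible to a human scanning raw traffic, yet trivial for a model that has learned a baseline.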

For the foreseeable future, AI will simply remain a tool in the defender’s pocket, making it possible to detect, and therefore respond to, ever-evolving advanced attacks.

About the author

Sohrob Kazerounian

Sohrob Kazerounian is the senior data scientist at Vectra AI, with experience in artificial intelligence, deep learning, recurrent neural networks, and machine learning.
