Whether the task is driving a nail, fastening a screw, or detecting a hidden HTTP tunnel, it pays to have the right tool for the job. The wrong tool can increase the time to accomplish a task, waste valuable resources, or worse. Leveraging the power of machine learning is no different.
Vectra has adopted the philosophy of implementing the optimal machine learning tool for each attacker-behavior detection algorithm, because each method has its own strengths.
The model best suited to finding SQL injection against an internal database will not be the same model that is best suited to discerning a hidden HTTP tunnel or an attacker moving laterally via remote procedure calls.
Because the underlying behaviors can be so different, our data science and threat research teams study each one before choosing the best machine learning tool to detect the behavior.
Enter deep learning, which has its roots in mathematics first explored in the 1940s. For decades, however, the computational resources and power needed to implement and train deep learning models kept them from practical use.
During the 1980s and 1990s, deep learning moved in and out of vogue in the academic and industrial research communities. In fact, my thesis advisor at Caltech had students working on it back then. But the compute resources available at the time were inadequate.
Interest in deep learning has come back in a major way, thanks to the efforts of Silicon Valley tech startups and stalwarts. The New York Times Magazine published a fantastic article on how Google leveraged deep learning to great effect in its translation product. The article also offers a nice history of artificial intelligence at Google and the story of its research in the area. I recommend checking it out.
In the modern era, computing power has grown to the point, and in the right ways, where it can finally manipulate massive data sets and efficiently perform the calculations needed to train models that can translate Hemingway from English to Japanese and back to English again with high fidelity. Investments in these forms of artificial intelligence are growing not only at the software layer, but at the silicon layer as well.
Why is deep learning such a good translator? Deep learning models are extremely effective at learning and predicting what will come next in a sequence of things. These sequences might be words in a sentence or sentences in a paragraph.
Similarly, it is extremely effective for detecting the next letters in a domain name as found in a DNS query or packets in a network data stream that might represent a cybersecurity threat. Deep learning models have a memory-like characteristic that enables them to predict what is likely to happen next based on what they have seen.
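To make the sequence-prediction idea concrete, here is a deliberately simplified sketch in Python. It scores how "natural" the character sequence of a domain label looks using a character bigram model trained on a tiny, hypothetical sample of legitimate domain names. A production detector like the one described here would use a deep model (such as an LSTM) that learns far longer-range character dependencies; this toy model only captures one-character-back statistics, but it illustrates the core mechanism of judging what character is likely to come next.

```python
import math
from collections import defaultdict

# Tiny illustrative training sample of "normal-looking" domain labels.
# This list and all function names are assumptions for this sketch, not
# part of any real product's implementation.
TRAINING_DOMAINS = ["google", "facebook", "wikipedia", "amazon", "twitter",
                    "microsoft", "youtube", "linkedin", "instagram", "netflix"]

def train_bigram_model(domains):
    # Count character-to-next-character transitions, with start/end markers.
    counts = defaultdict(lambda: defaultdict(int))
    for d in domains:
        padded = "^" + d + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def avg_log_likelihood(model, domain, alpha=0.1, vocab=38):
    # Mean per-transition log-probability with add-alpha smoothing
    # (vocab ~ 26 letters + 10 digits + markers, a rough constant).
    # Lower scores mean the character sequence looks less "natural".
    padded = "^" + domain + "$"
    total = 0.0
    for a, b in zip(padded, padded[1:]):
        row = model[a]
        denom = sum(row.values()) + alpha * vocab
        total += math.log((row[b] + alpha) / denom)
    return total / (len(padded) - 1)

model = train_bigram_model(TRAINING_DOMAINS)
normal = avg_log_likelihood(model, "instagrams")
random_looking = avg_log_likelihood(model, "xj4qzpvk2w")
# A DGA-style random string scores noticeably lower than a natural one.
print(normal > random_looking)  # → True
```

A real algorithmically-generated-domain detector would train a recurrent network on millions of observed domain names, but the decision principle is the same: sequences of characters that the model finds improbable are flagged as suspect.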
In a recent product release, Vectra data science stepped up its machine learning game by employing deep learning algorithms to detect attacker behaviors in enterprise networks.
The first Vectra deep-learning model focuses on detecting algorithmically generated domains that cyber attackers set up as the front end of their command-and-control infrastructure. Within Vectra, this behavior is referred to as Suspect Domain.
Our customers have been extremely pleased with the results.
Considering deep learning’s role in translation, the model’s performance against non-English-language domains comes as no surprise. For example, controlling for the different mix of consonants, vowels and characters in German or Chinese domains requires many extra layers of correlation and checking using more standard machine-learning techniques. Deep learning handles them as a matter of course.
Our implementation of deep learning for the Vectra Suspect Domain detection has dramatically helped our multinational customers who have deployments at their facilities in Germany and China.
More detection algorithms based on deep learning are on the way to help expose advanced cyber-attacker behaviors in our customers’ networks. We are pleased to announce that Security that thinks® is now thinking deeply.
For more information about this topic, download the white paper, The Data Science Behind Vectra Threat Detections.
A. Agranat and A. Yariv, "Semiparallel microelectronic implementation of neural network models using CCD technology," Electronics Letters, vol. 23, no. 11, pp. 580-581, May 21, 1987. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4257748&isnumber=4257722
Jacob Sendowski, Ph.D., is the director of product management at Vectra. Before joining Vectra, he was CEO and co-founder at Souper Products LLC and, prior to that, a product manager at Intel Security. He received an undergraduate degree in electrical engineering from the University of California, San Diego, as well as a graduate degree and doctorate in electrical engineering from the California Institute of Technology.