AI and the future of cybersecurity work

Posted by Sohrob Kazerounian on Nov 7, 2018 8:08:00 AM

In February 2014, journalist Martin Wolf wrote a piece for the Financial Times[1] titled Enslave the robots and free the poor. He began the piece with the following quote:

“In 1955, Walter Reuther, head of the US car workers’ union, told of a visit to a new automatically operated Ford plant. Pointing to all the robots, his host asked: How are you going to collect union dues from those guys? Mr. Reuther replied: And how are you going to get them to buy Fords?”

Read More »

Topics: AI, machine learning, deep learning

Integrating with Microsoft to detect cyberattacks in Azure hybrid clouds

Posted by Gareth Bradshaw on Sep 25, 2018 5:58:37 AM

Microsoft unveiled the Azure Virtual Network TAP, and Vectra announced its first-mover advantage as a development partner, demonstrating its Cognito platform operating in Azure hybrid cloud environments.

Read More »

Topics: AI, machine learning, deep learning, cloud, Microsoft

Near and long-term directions for adversarial AI in cybersecurity

Posted by Sohrob Kazerounian on Sep 12, 2018 6:00:00 AM

The frenetic pace at which artificial intelligence (AI) has advanced in the past few years has begun to have transformative effects across a wide variety of fields. Coupled with an increasingly interconnected world in which cyberattacks occur with alarming frequency and scale, it is no wonder that the field of cybersecurity has now turned its eye to AI and machine learning (ML) in order to detect and defend against adversaries.

The use of AI in cybersecurity not only expands the scope of what a single security expert is able to monitor, but importantly, it also enables the discovery of attacks that would have otherwise been undetectable by a human. Just as it was nearly inevitable that AI would be used for defensive purposes, it is undeniable that AI systems will soon be put to use for offensive purposes.

Read More »

Topics: AI, machine learning, deep learning

Choosing an optimal algorithm for AI in cybersecurity

Posted by Sohrob Kazerounian on Aug 15, 2018 6:00:00 AM

In the last blog post, we alluded to the No-Free-Lunch (NFL) theorems for search and optimization. While NFL theorems are criminally misunderstood and misrepresented in the service of crude generalizations intended to make a point, I intend to deploy a crude NFL generalization to make just such a point.

You see, NFL theorems (roughly) state that given a universe of problem sets where an algorithm’s goal is to learn a function that maps a set of input data X to a set of target labels Y, for any subset of problems where algorithm A outperforms algorithm B, there will be a subset of problems where B outperforms A. In fact, averaging their results over the space of all possible problems, the performance of algorithms A and B will be the same.

With some hand-waving, we can construct an NFL theorem for the cybersecurity domain: over the set of all possible attack vectors that could be employed by a hacker, no single detection algorithm can outperform all others across the full spectrum of attacks.
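The NFL intuition can be made concrete with a toy experiment (not from the post itself; the learners and setup below are illustrative inventions). Enumerate every possible labeling of a tiny input space, train on part of it, and test on the held-out point: any two learners, no matter how different, achieve the same average accuracy over all possible problems.

```python
from itertools import product

# Toy No-Free-Lunch illustration: four inputs, train on the first
# three labels, predict the label of the held-out fourth input.
# We average each learner's accuracy over ALL 2^4 = 16 labelings.

def majority_learner(train_labels):
    # Predict the majority label seen during training (ties -> 1).
    return 1 if sum(train_labels) * 2 >= len(train_labels) else 0

def constant_learner(train_labels):
    # Ignore the training data entirely and always predict 0.
    return 0

def average_accuracy(learner):
    labelings = list(product([0, 1], repeat=4))  # every possible "problem"
    correct = sum(
        1 for labels in labelings
        if learner(labels[:3]) == labels[3]
    )
    return correct / len(labelings)

print(average_accuracy(majority_learner))  # 0.5
print(average_accuracy(constant_learner))  # 0.5
```

Because the held-out label is independent of the training labels, each learner's prediction is right for exactly half of the possible problems, regardless of how clever the learner is.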

Read More »

Topics: AI, machine learning, deep learning

Types of learning that cybersecurity AI should leverage

Posted by Sohrob Kazerounian on Jul 18, 2018 6:00:00 AM

Despite the recent explosion in machine learning and artificial intelligence (AI) research, there is no singular method or algorithm that works best in all cases.

In fact, this notion has been formalized and shown mathematically in a result known as the No Free Lunch theorem (Wolpert and Macready 1997).

Read More »

Topics: AI, machine learning, deep learning

Neural networks and deep learning

Posted by Sohrob Kazerounian on Jun 13, 2018 6:00:00 AM

Deep learning refers to a family of machine learning algorithms that can be used for supervised, unsupervised and reinforcement learning. 

These algorithms have become popular after many years in the wilderness. The name comes from the realization that adding more layers to a neural network typically enables a model to learn increasingly complex representations of the data.
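The "layers compose into deeper representations" idea can be sketched in a few lines. This is a minimal, hypothetical forward pass (the layer sizes, weight initialization, and activation choice are assumptions for illustration, not the post's model): each layer re-represents the previous layer's output.

```python
import math
import random

random.seed(0)

def dense_layer(x, weights, biases):
    # One fully connected layer: affine transform followed by tanh.
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def init_layer(n_in, n_out):
    # Random weights in [-1, 1]; real models use careful initialization.
    weights = [[random.uniform(-1, 1) for _ in range(n_in)]
               for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

# A "deep" model is just stacked layers: 3 -> 4 -> 4 -> 2.
layers = [init_layer(3, 4), init_layer(4, 4), init_layer(4, 2)]

x = [0.5, -0.2, 0.8]
for weights, biases in layers:
    x = dense_layer(x, weights, biases)

print(x)  # the final 2-dimensional representation of the input
```

Each pass through `dense_layer` transforms the representation once; stacking more layers lets the composition express increasingly complex functions of the original input.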

Read More »

Topics: AI, machine learning, deep learning

How algorithms learn and adapt

Posted by Sohrob Kazerounian on May 24, 2018 12:59:06 PM

There are numerous techniques for creating algorithms that are capable of learning and adapting over time. Broadly speaking, we can organize these algorithms into one of three categories – supervised, unsupervised, and reinforcement learning.

Supervised learning refers to situations in which each instance of input data is accompanied by a desired or target value for that input. When the target values are a set of finite discrete categories, the learning task is often known as a classification problem. When the targets are one or more continuous variables, the task is called regression.
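The classification/regression split can be shown with two tiny supervised learners. Both are hypothetical sketches (the 1-nearest-neighbor classifier, the least-squares line, and the sample data are illustrative choices, not the post's methods): one maps inputs to discrete categories, the other to a continuous value.

```python
# Classification: targets are a finite set of discrete categories.
# A 1-nearest-neighbor classifier over 1-D inputs.
def nearest_neighbor(train, x):
    # train is a list of (input, label) pairs; return the label
    # of the training input closest to x.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

labeled = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious")]
print(nearest_neighbor(labeled, 7.5))  # malicious

# Regression: targets are continuous values.
# Ordinary least squares for a line y = a*x + b.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return a, mean_y - a * mean_x

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.0, 8.1]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # slope ~2.01, intercept ~0.0
```

The only difference in the supervision signal is the type of the target: a label from a finite set in the first case, a real number in the second.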

Read More »

Topics: AI, machine learning

AI vs. machine learning

Posted by Sohrob Kazerounian on Apr 26, 2018 2:54:47 PM

“The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” – Alan Turing

Read More »

Topics: AI, machine learning

The rise of machine intelligence

Posted by Sohrob Kazerounian on Apr 10, 2018 8:35:27 AM

Can machines think?

The question itself is deceptively simple, insofar as the human ability to introspect has made each of us intimately aware of what it means to think.

Read More »

Topics: AI, machine learning, alan turing

Alan Turing and the birth of machine intelligence

Posted by Sohrob Kazerounian on Mar 15, 2018 10:32:29 AM

“We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions…” – Alan Turing


It is difficult to tell the history of AI without first describing the formalization of computation and what it means for something to compute. The primary impetus towards formalization came down to a question posed by the mathematician David Hilbert in 1928.

Read More »

Topics: AI, machine learning, alan turing
