Blog

Machine learning: The cornerstone of Network Traffic Analytics (NTA)

Posted by Eric Ogren, 451 Research on Jan 25, 2019 4:18:41 PM

Imagine having a security tool that thinks the way you teach it to think, that takes action when and how you have trained it to act. No more adapting your work habits to generic rules written by a third party and wondering how to fill in security gaps that the rules did not tell you about.

Read More »

Topics: machine learning, Threat Detection


AI and the future of cybersecurity work

Posted by Sohrob Kazerounian on Nov 7, 2018 8:08:00 AM

In February 2014, journalist Martin Wolf wrote a piece for the London Financial Times[1] titled Enslave the robots and free the poor. He began the piece with the following quote:

“In 1955, Walter Reuther, head of the US car workers’ union, told of a visit to a new automatically operated Ford plant. Pointing to all the robots, his host asked: How are you going to collect union dues from those guys? Mr. Reuther replied: And how are you going to get them to buy Fords?”

Read More »

Topics: machine learning, AI, deep learning


Integrating with Microsoft to detect cyberattacks in Azure hybrid clouds

Posted by Gareth Bradshaw on Sep 25, 2018 5:58:37 AM

Microsoft unveiled the Azure Virtual Network TAP, and Vectra announced its first-mover advantage as a development partner and the demonstration of its Cognito platform operating in Azure hybrid cloud environments.

Read More »

Topics: machine learning, cloud, Microsoft, AI, deep learning


Near and long-term directions for adversarial AI in cybersecurity

Posted by Sohrob Kazerounian on Sep 12, 2018 6:00:00 AM

The frenetic pace at which artificial intelligence (AI) has advanced in the past few years has begun to have transformative effects across a wide variety of fields. Coupled with an increasingly (inter)-connected world in which cyberattacks occur with alarming frequency and scale, it is no wonder that the field of cybersecurity has now turned its eye to AI and machine learning (ML) in order to detect and defend against adversaries.

The use of AI in cybersecurity not only expands the scope of what a single security expert is able to monitor, but importantly, it also enables the discovery of attacks that would have otherwise been undetectable by a human. Just as it was nearly inevitable that AI would be used for defensive purposes, it is undeniable that AI systems will soon be put to use for attack purposes.

Read More »

Topics: machine learning, AI, deep learning


Choosing an optimal algorithm for AI in cybersecurity

Posted by Sohrob Kazerounian on Aug 15, 2018 6:00:00 AM

In the last blog post, we alluded to the No-Free-Lunch (NFL) theorems for search and optimization. While NFL theorems are criminally misunderstood and misrepresented in the service of crude generalizations intended to make a point, I intend to deploy a crude NFL generalization to make just such a point.

You see, NFL theorems (roughly) state that given a universe of problem sets where an algorithm’s goal is to learn a function that maps a set of input data X to a set of target labels Y, for any subset of problems where algorithm A outperforms algorithm B, there will be a subset of problems where B outperforms A. In fact, averaging their results over the space of all possible problems, the performance of algorithms A and B will be the same.

With some hand waving, we can construct an NFL theorem for the cybersecurity domain:  Over the set of all possible attack vectors that could be employed by a hacker, no single detection algorithm can outperform all others across the full spectrum of attacks.
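To make the averaging claim concrete, here is a toy sketch of my own (not from the post — the learners and data are invented for illustration): enumerate all 16 Boolean target functions on four input points, train any deterministic learner on three of the points, and test on the held-out fourth. Whatever the learner predicts, it is correct for exactly half of the functions, so every algorithm averages the same accuracy.

```python
from itertools import product

# All "problems" here: the 16 Boolean functions on four input points.
# Train each learner on the first three points, test on the held-out fourth.

def algo_majority(train_labels):
    # predict the majority label seen during training
    return int(sum(train_labels) * 2 >= len(train_labels))

def algo_zero(train_labels):
    # a deliberately naive learner: always predict 0
    return 0

for algo in (algo_majority, algo_zero):
    correct = sum(algo(target[:3]) == target[3]
                  for target in product([0, 1], repeat=4))
    print(algo.__name__, correct / 16)   # both average exactly 0.5
```

For every assignment of training labels, the two target functions extending it disagree on the test point, so any fixed prediction is right exactly once — which is the NFL averaging argument in miniature.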

Read More »

Topics: machine learning, AI, deep learning


Types of learning that cybersecurity AI should leverage

Posted by Sohrob Kazerounian on Jul 18, 2018 6:00:00 AM

Despite the recent explosion in machine learning and artificial intelligence (AI) research, there is no singular method or algorithm that works best in all cases.

In fact, this notion has been formalized and shown mathematically in a result known as the No Free Lunch theorem (Wolpert and Macready 1997).

Read More »

Topics: machine learning, AI, deep learning


Neural networks and deep learning

Posted by Sohrob Kazerounian on Jun 13, 2018 6:00:00 AM

Deep learning refers to a family of machine learning algorithms that can be used for supervised, unsupervised and reinforcement learning. 

These algorithms have become popular after many years in the wilderness. The name comes from the realization that adding more layers to a neural network typically enables a model to learn increasingly complex representations of the data.
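As a minimal illustration of why extra layers matter (a hand-wired sketch of my own, with weights chosen by hand rather than learned): no single linear layer can represent XOR, but one hidden ReLU layer can.

```python
# A hand-wired two-layer network computing XOR.
# A single linear layer cannot represent XOR; one hidden ReLU layer can.

def relu(v):
    return max(0.0, v)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)          # hidden unit 1
    h2 = relu(x1 + x2 - 1.0)    # hidden unit 2: fires only when both inputs are on
    return h1 - 2.0 * h2        # output layer: linear combination of hidden units

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # outputs 0.0, 1.0, 1.0, 0.0
```

Each added layer composes the representations below it, which is the sense in which depth buys expressive power.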

Read More »

Topics: machine learning, AI, deep learning


How algorithms learn and adapt

Posted by Sohrob Kazerounian on May 24, 2018 12:59:06 PM

There are numerous techniques for creating algorithms that are capable of learning and adapting over time. Broadly speaking, we can organize these algorithms into one of three categories – supervised, unsupervised, and reinforcement learning.

Supervised learning refers to situations in which each instance of input data is accompanied by a desired or target value for that input. When the target values are a set of finite discrete categories, the learning task is often known as a classification problem. When the targets are one or more continuous variables, the task is called regression.
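A minimal sketch of the distinction (toy data and function names invented for illustration): the same inputs paired with discrete labels give a classification problem, while pairing them with continuous targets gives a regression problem.

```python
# Toy supervised-learning data: one numeric input per instance.
inputs = [1.0, 2.0, 3.0, 4.0]

class_targets = ["benign", "benign", "malicious", "malicious"]  # finite categories
regr_targets  = [1.9, 4.1, 6.0, 8.2]                            # continuous values

def nearest_neighbor_classify(x):
    # classification: return the label of the closest training input
    i = min(range(len(inputs)), key=lambda i: abs(inputs[i] - x))
    return class_targets[i]

def least_squares_regress(x):
    # regression: fit y = a*x + b by ordinary least squares, then predict
    n = len(inputs)
    mx = sum(inputs) / n
    my = sum(regr_targets) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(inputs, regr_targets)) \
        / sum((xi - mx) ** 2 for xi in inputs)
    b = my - a * mx
    return a * x + b

print(nearest_neighbor_classify(3.4))   # -> malicious
print(least_squares_regress(5.0))
```

Same inputs, different target types — and the target type alone determines whether the task is classification or regression.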

Read More »

Topics: machine learning, AI


AI vs. machine learning

Posted by Sohrob Kazerounian on Apr 26, 2018 2:54:47 PM

“The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” – Alan Turing

Read More »

Topics: machine learning, AI


The rise of machine intelligence

Posted by Sohrob Kazerounian on Apr 10, 2018 8:35:27 AM

Can machines think?

The question itself is deceptively simple in so far as the human ability to introspect has made each of us intimately aware of what it means to think.

Read More »

Topics: machine learning, AI, alan turing


Alan Turing and the birth of machine intelligence

Posted by Sohrob Kazerounian on Mar 15, 2018 10:32:29 AM

“We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions…” – Alan Turing

 

It is difficult to tell the history of AI without first describing the formalization of computation and what it means for something to compute. The primary impetus towards formalization came down to a question posed by the mathematician David Hilbert in 1928.

Read More »

Topics: machine learning, AI, alan turing


A sinuous journey through tensor_forest

Posted by Sophia Lu on Dec 11, 2017 11:45:30 AM

Random forest, an ensemble method

The random forest (RF) model, first proposed by Tin Kam Ho in 1995, is a subclass of ensemble learning methods that is applied to classification and regression. An ensemble method constructs a set of classifiers – a group of decision trees, in the case of RF – and determines the label for each data instance by taking the weighted average of each classifier’s output.

The learning algorithm utilizes the divide-and-conquer approach and reduces the inherent variance of a single instance of the model through bootstrapping. Therefore, “ensembling” a group of weaker classifiers boosts the performance and the resulting aggregated classifier is a stronger model.
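As a rough sketch of the bootstrap-and-vote idea (toy 1-D data and depth-1 "stumps" standing in for full decision trees — these are my own simplifying assumptions, not the `tensor_forest` implementation): each weak classifier is fit on a resampled copy of the training set, and the ensemble predicts by majority vote.

```python
import random

# Toy 1-D training data: (input, label) pairs with classes separated around x ~ 2.
data = [(0.5, 0), (1.0, 0), (1.5, 0), (3.0, 1), (3.5, 1), (4.0, 1)]

def fit_stump(sample):
    # pick the threshold that best separates the two classes in this sample
    best_t, best_err = None, float("inf")
    for t in sorted(x for x, _ in sample):
        err = sum((x > t) != (y == 1) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def fit_ensemble(n_trees=25, seed=0):
    rng = random.Random(seed)
    thresholds = []
    for _ in range(n_trees):
        sample = [rng.choice(data) for _ in data]   # bootstrap resample
        thresholds.append(fit_stump(sample))
    return thresholds

def predict(thresholds, x):
    votes = sum(x > t for t in thresholds)          # each stump votes 0 or 1
    return int(votes * 2 >= len(thresholds))        # majority vote

ensemble = fit_ensemble()
print(predict(ensemble, 0.4), predict(ensemble, 4.5))   # -> 0 1
```

Because each stump sees a different bootstrap sample, their individual errors decorrelate, and averaging the votes reduces the variance a single stump would have — the core of the ensemble argument above.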

Read More »

Topics: Data Science, machine learning, AI, tensor forest, tensorflow


Why it's okay to be underwhelmed by Cisco ETA

Posted by Oliver Tavakoli, CTO, Vectra Networks on Jun 26, 2017 3:59:54 PM

Cisco recently introduced the term “intent-based networking” in a press release that pushes the idea that networks need to be more intuitive. One element of that intuition is for networks to be more secure without requiring a lot of heavy lifting by local network security professionals. And a featured part of that strategy is Cisco ETA:

"Cisco's Encrypted Traffic Analytics solves a network security challenge previously thought to be unsolvable," said David Goeckeler, senior vice president and general manager of networking and security. "ETA uses Cisco's Talos cyber intelligence to detect known attack signatures even in encrypted traffic, helping to ensure security while maintaining privacy."

Read More »

Topics: machine learning, network security, external remote access


Security that thinks is now thinking deeply

Posted by Jacob Sendowski on Apr 26, 2017 8:05:42 AM

Whether the task is driving a nail, fastening a screw, or detecting a hidden HTTP tunnel, it pays to have the right tool for the job. The wrong tool can increase the time to accomplish a task, waste valuable resources, or worse. Leveraging the power of machine learning is no different.

Vectra has adopted the philosophy of implementing the optimal machine learning tool for each attacker behavior detection algorithm. Each method has its own strengths.

Read More »

Topics: machine learning, cybersecurity, deep learning


AI: Is science fiction on a collision course with science fact?

Posted by Chris Morales on Mar 30, 2017 3:48:43 PM

Sometimes science fiction becomes less fantastic over time than the actual reality. Take the film Ghost in the Shell, for example, which hits the big screen this week. It’s an adaptation of the fictional 28-year-old cult classic Japanese manga about human and machine augmentation.

Read More »

Topics: cyber security, machine learning, artificial intelligence


Security automation isn't AI security

Posted by Günter Ollmann on Jan 17, 2017 2:11:52 PM

This blog was originally published on ISACA Now.

In many spheres of employment, the application of Artificial Intelligence (AI) technology is creating a growing fear. Kevin Maney of Newsweek vividly summarized the pending transformation of employment and the concerns it raises in his recent article "How artificial intelligence and robots will radically transform the economy."

In the Information Security (InfoSec) community, AI is commonly seen as a savior – an application of technology that will allow businesses to more rapidly identify and mitigate threats, without having to add more humans. That human factor is commonly seen as a business inhibitor as the necessary skills and experience are both costly and difficult to obtain.

As a consequence, over the last few years, many vendors have re-engineered and re-branded their products as employing AI – both as a hat-tip to their customers’ growing frustrations that combating every new threat requires additional personnel to look after the tools and products being sold to them, and as a differentiator amongst “legacy” approaches to dealing with the threats that persist despite two decades of detection innovation.

The rebranding, remarketing, and inclusion of various data science buzzwords – machine intelligence, machine learning, big data, data lakes, unsupervised learning – into product sales pitches and collateral have made it appear that security automation is the same as AI security.

Read More »

Topics: cyber security, machine learning, cybersecurity, artificial intelligence, security automation


Politics and the bungling of big data

Posted by David Pegna on Nov 17, 2016 12:00:00 PM

We live in an age where big data and data science are used to predict everything from what I might want to buy on Amazon to the outcome of an election.

The results of the Brexit referendum caught many by surprise because pollsters suggested that a “stay” vote would prevail. And we all know how that turned out.

History repeated itself on Nov. 8 when U.S. president-elect Donald Trump won his bid for the White House. Most polls and pundits predicted there would be a Democratic victory, and few questioned their validity.

The Wall Street Journal article, Election Day Forecasts Deal Blow to Data Science, made three very important points about big data and data science:

  • Dark data, data that is unknown, can result in misleading predictions.
  • Asking simplistic questions yields a limited data set that produces ineffective conclusions.
  • “Without comprehensive data, you tend to get non-comprehensive predictions.”
Read More »

Topics: Data Science, cyber security, machine learning


From the Iron Age to the “Machine Learning Age”

Posted by Günter Ollmann on Aug 30, 2016 8:00:00 AM

It is likely self-evident to many that the security industry’s most overused buzzword of the year is “machine learning.” Yet, despite the ubiquity of the term and its presence in company marketing literature, most people – including those working for many of the vendors using the term – don’t actually know what it means.

Read More »

Topics: cyber security, machine learning, cybersecurity


Ransomware, encryption and machine learning – Three key takeaways from Infosecurity 2016

Posted by Matt Walmsley on Jun 15, 2016 3:00:25 AM

Last week was a long one. Vectra participated for the first time at Infosecurity Europe in London. Now that my feet have recovered from our very busy booth, I thought I’d share a few of the recurring themes I noticed at the show.

Ransomware. Definitely the threat de rigueur, with vendors coming at the problem from various angles, including DNS management and client-based solutions. Vectra was part of the buzz too, offering a network-centric approach with our newly announced ransomware file activity detection.

Read More »

Topics: machine learning, Encryption, Ransomware


Automate to optimise your security teams

Posted by Matt Walmsley on Jan 4, 2016 12:56:29 PM

Mind the gap
87% of UK senior IT and business professionals believe there is a shortage of skilled cybersecurity staff, and the same percentage of UK security leaders want to hire CISSP-credentialed staff onto their teams. Nothing surprising in that there’s a gap; let’s fill it with demonstrably high-calibre professionals, right? Well, not quite. That skills gap also includes a “CISSP” gap: with 10,000+ UK security positions open but just over 5,000 UK CISSPs, the math simply doesn’t add up. We should also consider that credentials like the CISSP demonstrate excellent existing domain knowledge but do not help hiring managers understand the soft skills, attitudes and other characteristics that combine to form the overall “talent and capabilities” of a candidate.

A pragmatic approach is therefore to hire on traits such as adaptability, collaboration and innovation alongside evidence of requisite technical capabilities. After all, in a rapidly changing digital landscape you’re hiring for tomorrow’s battle, not yesterday’s, so agility is essential. Today’s security teams need to be ready to handle the new risks, challenges and increased pace of change that the Internet of Things (IoT), cloud, mobility and social media all bring to the security challenge. The talent pool is limited, as are organisations’ overall cyber security resources. It’s time to develop and support from within and broaden recruitment methodologies for those hard-to-fill open positions.

Read More »

Topics: machine learning, Automated Threat Detection


Insider Threats: Spotting “the Inside Job”

Posted by Angela Heindl-Schober on Dec 14, 2015 11:38:29 AM

Incidents of fraud, theft and abuse by rogue insiders present organisations with the ultimate targeted threat: attacks executed by highly motivated actors operating with deep internal organisational knowledge and comparative ease of access. Such threats can create sizable risks to digital assets and are also the most challenging to manage.

Security leaders have to understand their organisation’s context and operations in order to strike a balance between protection, control and creating value.

Users tied up in complex and over-controlling systems are unable to perform. Too light a touch leaves key assets and resources easy to misuse, alter or steal. Blending layers of organisational, physical and technical policy and management can provide a meaningful way of reducing internal cyber attacks, but no solution can be perfect. Organisations must also be able to identify and recognise illegitimate internal actions and make timely interventions.

Read More »

Topics: Insider Threats, machine learning


Automate detection of cyber threats in real time. Why wait?

Posted by Jerish Parapurath on May 15, 2015 10:01:43 AM

Time is a big expense when it comes to detecting cyber threats and malware. The proliferation of new malware variants makes it impossible to detect and prevent zero-day threats in real-time. Sandboxing takes at least 30 minutes to analyze a file and deliver a signature – and by then, threats will have spread to many more endpoints. 

Read More »

Topics: Targeted Attacks, Malware Attacks, Data Science, machine learning


Cybersecurity, data science and machine learning: Is all data equal?

Posted by David Pegna on May 9, 2015 9:00:00 AM

Big data sends cybersecurity back to the future

In big-data discussions, the value of data sometimes refers to the predictive capability of a given data model, and other times to the discovery of hidden insights that appear when rigorous analytical methods are applied to the data itself. From a cybersecurity point of view, I believe the value of data refers first to the “nature” of the data itself. Positive data – i.e., malicious network traffic from malware and cyberattacks – has much more value here than in many other data science problems. To better understand this, let’s start by discussing how a wealth of network traffic data can be used to build network security models through machine learning techniques.

Read More »

Topics: Data Science, cyber security, machine learning

