Being a cybersecurity analyst at a large company today is a bit like looking for a needle in a haystack — if that haystack were hurtling toward you at fiber-optic speed.
Every day, employees and customers generate enormous volumes of data that establish a baseline of normal behavior. An attacker infiltrating the system generates data too, using any number of techniques; the analyst's goal is to find that “needle” and stop it before it does any damage.
The data-heavy nature of that task lends itself well to the number-crunching prowess of machine learning, and AI-powered systems have indeed flooded the cybersecurity market over the years. But such systems can come with their own problems, namely a never-ending stream of false positives that can make them more of a time suck than a time saver for security analysts.
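One reason those alert floods happen is the base-rate problem: when genuinely malicious events are rare, even a detector with seemingly strong accuracy raises alerts that are mostly false positives. The sketch below is a hypothetical illustration of that arithmetic, not any particular vendor's system; the event counts and detection rates are assumptions chosen for the example.

```python
# Illustrative sketch of the base-rate problem behind alert fatigue.
# All numbers here are assumptions chosen for the example, not measurements.

def alert_precision(events_per_day: int,
                    malicious_rate: float,
                    detection_rate: float,
                    false_positive_rate: float) -> None:
    """Print how many alerts a detector raises and what fraction are real."""
    malicious = events_per_day * malicious_rate
    benign = events_per_day - malicious

    true_alerts = malicious * detection_rate      # attacks correctly flagged
    false_alerts = benign * false_positive_rate   # benign events flagged anyway
    total_alerts = true_alerts + false_alerts

    precision = true_alerts / total_alerts
    print(f"{total_alerts:,.0f} alerts/day, "
          f"{false_alerts:,.0f} false positives, "
          f"precision = {precision:.1%}")

# Hypothetical numbers: 10 million events a day, 1 in 100,000 malicious,
# a detector that catches 95% of attacks but misfires on 1% of benign traffic.
alert_precision(10_000_000, 1e-5, 0.95, 0.01)
# -> roughly 100,000 alerts a day, nearly all false positives (precision under 0.1%)
```

Under those assumed rates, analysts would spend their days triaging noise even though the underlying model looks statistically strong, which is exactly the time-suck problem described above.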