Bloomberg Government regularly publishes insights, opinions and best practices from our community of senior leaders and decision makers. This column is written by Oliver Tavakoli, chief technology officer at Vectra Networks.
We have seen federal government agencies express interest in artificial intelligence-based (AI) endpoint security solutions, and they are starting to look at AI-based cybersecurity for the network – but they are early in the process.
As marketers and the media blend artificial intelligence, machine learning and cybersecurity buzzwords into a confusing cocktail of badness-stopping power, government security buyers, like their enterprise counterparts, are swamped with misconceptions and left with little clarity about how products actually differ.
Some marketing of artificial intelligence promises more than it can deliver today. While cybersecurity vendors add AI branding to their products, the reality is that the majority of today’s solutions deliver subsets of AI capability – in particular, machine learning and deep learning.
At this year’s GITEC summit, there were discussions around AI, High Value Assets and data gathering and hoarding. Hot topics also included the shortage of qualified cybersecurity experts and the need to shorten the time it takes to find an attacker in the network. Most attendees felt their endpoints and boundaries were well secured, but recognized there is always the chance that attackers will find their way in. As technology gets smarter, so do the hackers. Government agencies need to carefully evaluate where their efforts to secure High Value Assets and reduce time to detection can be enhanced with technologies like AI. However, they also need to distinguish between actual capability and marketing buzzwords.
The long-term premise of AI is that we’ll be able to take the flexibility and thinking capacity of the human mind, digitize it, and create a system that is faster, more consistent and potentially more capable than any human brain. Such an AI, like the human brain, would be flexible enough to learn any new task and arrive at new conclusions.
While researchers around the world continue to work on AI, the subsets of its math and cognitive capabilities that are mature enough have been readily consumed by the cybersecurity industry. Machine learning is used to create flexible, multi-dimensional decision processes: supervised models capable of rapidly detecting and labeling new classes of threats, and unsupervised systems that learn the behaviors of a system or network over time and alert on attacker behaviors and rare threat events. Meanwhile, deep learning systems plow through colossal mountains of data, often in real time, to uncover new and often unexpected relationships – providing unique insights into threats and empowering improvements to machine learning models and mitigation processes.
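To make the unsupervised idea concrete, here is a minimal sketch that learns a baseline of “normal” network flow behavior and flags rare deviations, using scikit-learn’s IsolationForest. The flow features, sample values and contamination rate are illustrative assumptions, not any particular vendor’s model.

```python
# Sketch: learn "normal" network behavior from historical flow records,
# then flag rare, attacker-like deviations for an analyst.
# Features and thresholds here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_sec, distinct_ports]
baseline_flows = np.array([
    [1200, 3400, 2.1, 1],
    [ 900, 2800, 1.7, 1],
    [1500, 4100, 2.6, 2],
    [1100, 3000, 2.0, 1],
])

# Fit a model of normal behavior; contamination is the assumed
# fraction of anomalous traffic in the training window.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline_flows)

# A beaconing-style flow: tiny payloads, long duration, many ports.
suspect = np.array([[80, 64, 3600.0, 40]])
if model.predict(suspect)[0] == -1:
    print("rare behavior: escalate to an analyst")
```

In practice such a model would be trained on far more data and far richer features, but the shape of the approach is the same: no signatures, just learned deviation from the network’s own baseline.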
As AI, machine learning and deep learning technologies advance and are further incorporated into the security technologies and operational processes of government and enterprise security teams, some may fear or resent that encroachment – in particular, what it means for future employment prospects. The reality is that today’s technologies (and likely those of the coming decade) are primarily focused on removing the repetitive tasks that most security professionals hate in the first place, allowing humans to focus on more stimulating, specialized and demanding security work.
A new generation of security technologies wields machine learning as a flexible automation enabler. Adopting and improving on decades-old “expert system” learning processes, these tools watch a skilled security analyst respond to security anomalies (false positives, true positives and unlabeled alerts) and learn the analyst’s deduction processes and conclusions. Thereafter, if a similar (or highly correlated) event is uncovered, the system can automatically propose a triage solution to the analyst. As trust in such systems grows, the human analyst no longer needs to supervise the easy, known events and can instead focus on the next toughest one – learning and applying that newfound knowledge and skill to the task at hand.
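A minimal sketch of that learned-triage loop might look like the following, assuming alerts can be reduced to simple feature vectors. The features, verdicts and similarity threshold are hypothetical.

```python
# Sketch: store analyst verdicts on past alerts, then propose the
# nearest past verdict for new, similar alerts; defer novel ones
# to a human. Features and threshold are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Feature vectors for past alerts: [severity, num_hosts, port_entropy]
past_alerts = np.array([
    [0.9, 12, 0.8],   # analyst verdict: true positive
    [0.2,  1, 0.1],   # analyst verdict: false positive
    [0.3,  2, 0.2],   # analyst verdict: false positive
])
verdicts = ["true_positive", "false_positive", "false_positive"]

index = NearestNeighbors(n_neighbors=1).fit(past_alerts)

def propose_triage(alert, max_distance=0.5):
    """Suggest the verdict of the closest past alert, or defer to a human."""
    dist, idx = index.kneighbors([alert])
    if dist[0][0] <= max_distance:
        return verdicts[idx[0][0]]   # auto-proposed triage
    return "escalate_to_analyst"     # too novel; keep the human in the loop

print(propose_triage([0.25, 1, 0.15]))  # -> false_positive
```

The key design choice is the fallback: when the new event is not close enough to anything seen before, the system hands it back to the analyst rather than guessing.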
As any security professional will testify, there is no shortage of daily alerts and anomaly events within a large network that need to be investigated. Any technology that lets an analyst or threat responder focus on the half-dozen critical events of the day (rather than sift through the 50,000 mostly erroneous alerts generated each day) is viewed as a gift from above.
Several civilian government agencies have specified use cases for AI to automate functions that people can’t perform fast enough. One CISO related it to the movie ‘Hidden Figures,’ noting that “a computer was once a person until a machine was invented to calculate the math in seconds.” He described a use case for AI to triage a single security alert, a task that can currently take a person several hours: “We need a machine to do it in seconds and keep doing it after 5 pm.”
Another use case, described by a DoD decision maker, is to have AI perform correlations. He said it is common for two security analysts working on separate investigations to uncover related data, such as the same command-and-control server address; AI can work across groups and time zones to correlate that data and speed up threat investigations.
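One way such correlation could work, sketched under the assumption that investigations record indicators of compromise in a shared index, is shown below. The case IDs and addresses are hypothetical.

```python
# Sketch: index indicators of compromise (IOCs) by value so that two
# separate investigations touching the same command-and-control
# address are automatically linked, across teams and time zones.
from collections import defaultdict

ioc_index = defaultdict(set)  # indicator value -> set of case IDs

def record_ioc(case_id: str, indicator: str) -> set:
    """Attach an indicator to a case; return other cases sharing it."""
    related = ioc_index[indicator] - {case_id}
    ioc_index[indicator].add(case_id)
    return related

record_ioc("CASE-041", "203.0.113.77")            # analyst A, day shift
overlap = record_ioc("CASE-118", "203.0.113.77")  # analyst B, night shift
if overlap:
    print(f"shared C2 indicator links CASE-118 to {overlap}")
```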
Machine learning, deep learning, expert systems and other mature mathematical constructs of AI encompass many of the newest technologies designed to aid and augment the security professional – not replace him or her. As government entities battle a tsunami of data, it becomes ever more critical that experts can distill all the actionable intelligence they can from such floods. Machines are adept at the repetitive grunt work.
Cybersecurity analysts and other security professionals can instead focus on areas that require a deep understanding of the organization and make judgment calls based on new “big data” insights.
The flexibility of machine learning techniques – in not only identifying threats but also aiding legal and compliance commitments – can be seen in examples such as spotting attacks within encrypted network traffic, without requiring decryption of the data or deep inspection of its content.
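A rough sketch of that approach appears below: a classifier judges flows purely on metadata that is visible without decryption, such as packet sizes and timing. The features, labels and training values are illustrative assumptions, not a production model.

```python
# Sketch: classify encrypted flows using only metadata visible
# without decryption (packet sizes, inter-arrival timing) and
# never the payload. Features and labels are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean_pkt_size, pkt_size_stddev, mean_interarrival_sec]
flows = [
    [1100.0, 310.0, 0.04],  # benign bulk transfer
    [ 980.0, 280.0, 0.06],  # benign bulk transfer
    [  96.0,   4.0, 60.0],  # periodic tiny packets: beaconing
    [ 102.0,   6.0, 59.5],  # periodic tiny packets: beaconing
]
labels = ["benign", "benign", "malicious", "malicious"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(flows, labels)

# A new TLS flow, judged purely on its shape and timing.
print(clf.predict([[100.0, 5.0, 60.2]])[0])  # -> malicious
```

Because the payload is never touched, the classified or sensitive content inside the stream stays sealed – which is exactly the property the next paragraph turns on.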
For government buyers of security technologies, the prospect of increasing both the breadth and depth of threat detection and mitigation capabilities, without having to deal with the headache of classified content inspection and compartmentalization of traffic streams, is clearly an advantage over legacy, signature-based approaches.
AI-related technologies augment the existing workforce, making it faster and more capable of securing the organization and combatting threats. The AI portrayed in movies is interesting, but today’s buyers should probably think more along the lines of “Iron Man” than “Terminator.”