The average business receives 10,000 alerts every day from the various software tools it uses to monitor for intruders, malware, and other threats. Cybersecurity staff often find themselves inundated with data they need to sort through to manage their cyber defenses.
These challenges underscore the need for better ways to stem the tide of cyber-breaches. Artificial intelligence is particularly well suited to finding patterns in huge amounts of data. As a researcher who studies A.I. and cybersecurity, I find that A.I. is emerging as a much-needed tool in the cybersecurity toolkit.
There are two main ways A.I. is bolstering cybersecurity. First, A.I. can help automate many tasks that a human analyst would often handle manually. These include automatically detecting unknown workstations, servers, code repositories, and other hardware and software on a network. It can also determine how best to allocate security defenses. These are data-intensive tasks, and A.I. has the potential to sift through terabytes of data much more efficiently and effectively than a human could ever do.
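To illustrate the first kind of automation, flagging unknown devices can be framed as comparing hosts observed on the network against a known asset inventory. A minimal sketch in Python (the addresses, hostnames, and inventory format are hypothetical, purely for illustration):

```python
# Minimal sketch: flag devices seen on the network that are missing
# from a known asset inventory. Addresses and names are made up.

known_inventory = {
    "10.0.0.5": "web-server-01",
    "10.0.0.12": "dev-workstation-03",
    "10.0.0.20": "code-repo-server",
}

observed_hosts = ["10.0.0.5", "10.0.0.12", "10.0.0.99", "10.0.0.20"]

def find_unknown_hosts(observed, inventory):
    """Return observed addresses with no matching inventory record."""
    return [ip for ip in observed if ip not in inventory]

print(find_unknown_hosts(observed_hosts, known_inventory))  # ['10.0.0.99']
```

Real systems would work from network scans or traffic logs rather than a hard-coded list, but the core idea is the same: automate the tedious cross-checking an analyst would otherwise do by hand.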
Second, A.I. can help detect patterns within large quantities of data that human analysts can’t see. For example, A.I. could detect the key linguistic patterns of hackers posting emerging threats on the dark web and alert analysts.
More specifically, A.I.-enabled analytics can help discern the jargon and code words hackers develop to refer to their new tools, techniques, and procedures. One example is hackers using the name Mirai to refer to a botnet. They developed the term to hide the botnet topic from law enforcement and cyberthreat intelligence professionals.
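One simple way to surface emerging code words like this is to compare term frequencies in recent forum posts against a historical baseline and flag terms that suddenly spike. A toy sketch of the idea (the posts and threshold are invented for illustration; production systems use far more sophisticated language models):

```python
from collections import Counter

# Toy sketch: flag terms that are frequent in recent posts but absent
# from a historical baseline corpus. All data here is illustrative.

baseline_posts = [
    "selling exploit kit cheap",
    "new phishing templates available",
]
recent_posts = [
    "mirai source updated, mirai config included",
    "mirai setup guide for sale",
]

def tokenize(posts):
    """Count whitespace-separated tokens across a list of posts."""
    return Counter(word for post in posts for word in post.split())

def emerging_terms(recent, baseline, min_count=2):
    """Terms appearing at least min_count times recently, never before."""
    recent_counts = tokenize(recent)
    baseline_counts = tokenize(baseline)
    return sorted(
        term for term, count in recent_counts.items()
        if count >= min_count and baseline_counts[term] == 0
    )

print(emerging_terms(recent_posts, baseline_posts))  # ['mirai']
```

Even this crude frequency comparison captures the intuition: a word that hackers suddenly start using heavily, and that never appeared before, is worth an analyst's attention.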
A.I. has already seen some early successes in cybersecurity. Increasingly, companies such as FireEye, Microsoft, and Google are developing innovative A.I. approaches to detect malware, stymie phishing campaigns, and monitor the spread of disinformation. One notable success is Microsoft’s Cyber Signals program, which uses A.I. to analyze 24 trillion security signals and track 40 nation-state groups and 140 hacker groups, producing cyber threat intelligence for C-level executives.
Federal funding agencies such as the Department of Defense and the National Science Foundation recognize the potential of A.I. for cybersecurity and have invested tens of millions of dollars to develop advanced A.I. tools for extracting insights from data generated from the dark web and open-source software platforms such as GitHub, a global software development code repository where hackers, too, can share code.
Downsides of A.I.
Despite the significant benefits of A.I. for cybersecurity, cybersecurity professionals still have questions and concerns about its role. Companies considering replacing their human analysts with A.I. systems may worry about how far automated systems can be trusted. It’s also not clear whether, and how, the well-documented A.I. problems of bias, fairness, transparency, and ethics will surface in A.I.-based cybersecurity systems.
Also, A.I. is useful not only for cybersecurity professionals trying to turn the tide against cyberattacks but also for malicious hackers. Attackers are using methods like reinforcement learning and generative adversarial networks, which generate new content or software based on limited examples, to produce new types of cyberattacks that can evade cyber defenses.
Researchers and cybersecurity professionals are still learning all the ways malicious hackers are using A.I.
The road ahead
Looking forward, there is significant room for growth for A.I. in cybersecurity. In particular, the predictions A.I. systems make based on the patterns they identify will help analysts respond to emerging threats. A.I. is an intriguing tool that could help curb cyberattacks and, with careful cultivation, could become a required tool for the next generation of cybersecurity professionals.
The current pace of innovation in A.I., however, indicates that fully automated cyber battles between A.I. attackers and A.I. defenders are likely years away.