- The rise in cyberattacks is helping to fuel growth in the market for AI-based security products.
- The global market for AI-based cybersecurity products is estimated to reach $133.8 billion by 2030, up from $14.9 billion in 2021.
- Hackers are taking advantage, too: AI-generated phishing emails are opened at higher rates than manually crafted phishing emails.
Artificial intelligence is playing an increasingly important role in cybersecurity — for both good and bad. Organizations can leverage the latest AI-based tools to better detect threats and protect their systems and data resources. But cyber criminals can also use the technology to launch more sophisticated attacks.
The rise in cyberattacks is helping to fuel growth in the market for AI-based security products. A July 2022 report by Acumen Research and Consulting says the global market was $14.9 billion in 2021 and is estimated to reach $133.8 billion by 2030.
An increasing number of attacks such as distributed denial-of-service (DDoS) and data breaches, many of them extremely costly for the impacted organizations, are generating a need for more sophisticated solutions.
Another driver of market growth was the Covid-19 pandemic and the resulting shift to remote work, according to the report. This forced many companies to put an increased focus on cybersecurity and on AI-powered tools that can find and stop attacks more effectively.
Looking ahead, trends such as the growing adoption of the Internet of Things (IoT) and the rising number of connected devices are expected to fuel market growth, the Acumen report says. The growing use of cloud-based security services could also provide opportunities for new uses of AI for cybersecurity.
AI's security boost
Among the types of products that use AI are antivirus/antimalware, data loss prevention, fraud detection/anti-fraud, identity and access management, intrusion detection/prevention system, and risk and compliance management.
Up to now, the use of AI for cybersecurity has been somewhat limited. "Companies thus far aren't going out and turning over their cybersecurity programs to AI," said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law. "That doesn't mean AI isn't being used. We are seeing companies utilize AI but in a limited fashion," mostly within the context of products such as email filters and malware identification tools that have AI powering them in some way.
"Most interestingly we see behavioral analysis tools increasingly using AI," Finch said. "By that I mean tools analyzing data to determine behavior of hackers to see if there is a pattern to their attacks — timing, method of attack, and how the hackers move when inside systems. Gathering such intelligence can be highly valuable to defenders."
In a recent study, research firm Gartner interviewed nearly 50 security vendors and found a few patterns for AI use among them, said research vice president Mark Driver.
"Overwhelmingly, they reported that the first goal of AI was to 'remove false positives' insofar as one major challenge among security analysts is filtering the signal from the noise in very large data sets," Driver said. "AI can trim this down to a reasonable size, which is much more accurate. Analysts are able to work smarter and faster to resolve cyber attacks as a result."
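The noise-trimming Driver describes can be illustrated with a toy triage function. This is a hypothetical sketch, not any vendor's actual method: it scores per-host failed-login counts (invented data) and keeps only hosts that are statistical outliers, a crude stand-in for the ML-based scoring a real AI security tool would apply to far larger data sets.

```python
import statistics

# Hypothetical alert data: failed-login counts per host over one hour.
# Most hosts generate routine noise; one stands out.
failed_logins = {
    "web-01": 3, "web-02": 5, "db-01": 4,
    "mail-01": 2, "web-03": 6,
    "vpn-01": 240,  # looks like a brute-force attempt
}

def triage(counts: dict[str, int], threshold: float = 2.0) -> list[str]:
    """Return hosts whose count sits more than `threshold` standard
    deviations above the mean -- everything else is treated as noise."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all counts identical: nothing anomalous
    return [h for h, c in counts.items() if (c - mean) / stdev > threshold]

print(triage(failed_logins))  # only vpn-01 survives triage
```

A real system would learn per-host baselines over time rather than use a single snapshot, but the effect is the same: analysts see one alert instead of six.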
In general, AI is used to help detect attacks more accurately and then prioritize responses based on real world risk, Driver said. And it allows automated or semi-automated responses to attacks, and finally provides more accurate modelling to predict future attacks. "All of this doesn't necessarily remove the analysts from the loop, but it does make the analysts' job more agile and more accurate when facing cyber threats," Driver said.
Adding to cyber threats
On the other hand, bad actors can also take advantage of AI in several ways. "For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses," Finch said.
When combined with stolen personal information or collected open source data such as social media posts, cyber criminals can use AI to create large numbers of phishing emails to spread malware or collect valuable information.
"Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — [for example] tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails," Finch said. "AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools."
Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior until it's ready to launch another phase of an attack or send out the information it has collected, with relatively low risk of detection. This is partly why companies are moving toward a "zero trust" model, where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.
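Why a constantly changing payload defeats signature matching can be shown in a few lines. This is a simplified illustration (the payload strings are invented): static defenses often flag files whose hash or byte pattern matches a known-bad list, so mutating even one byte produces a "new" file that the list has never seen.

```python
import hashlib

def signature(payload: bytes) -> str:
    # Static detection frequently keys on a cryptographic hash
    # (or a fixed byte pattern) of known malware samples.
    return hashlib.sha256(payload).hexdigest()

# Blocklist built from a previously observed sample (hypothetical).
known_bad = {signature(b"EVIL_PAYLOAD_v1")}

original = b"EVIL_PAYLOAD_v1"
mutated = b"EVIL_PAYLOAD_v1" + b"\x90"  # one junk byte appended, behavior unchanged

print(signature(original) in known_bad)  # True  -> caught by the blocklist
print(signature(mutated) in known_bad)   # False -> slips past the same defense
```

Behavior-based and zero-trust approaches respond to exactly this gap: instead of asking "have I seen these bytes before?", they ask what the code actually does once it runs.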
But Finch said, "Given the economics of cyberattacks — it's generally easier and cheaper to launch attacks than to build effective defenses — I'd say AI will be on balance more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run of the mill criminals are not going to have access to the greatest AI minds in the world."
Cybersecurity programs might have access to "vast resources from Silicon Valley and the like [to] build some very good defenses against low-grade AI cyber attacks," Finch said. "When we get into AI developed by hacker nation states [such as Russia and China], their AI hack systems are likely to be quite sophisticated, and so the defenders will generally be playing catch up to AI-powered attacks."