AI vs AI: The Great Game

By Ajay Singh, Author of CyberStrong! A Primer on Cyber Risk Management for Business Managers

Artificial Intelligence (AI) has boosted productivity, enhanced the quality of decision making, and provided solutions to many complex problems. AI comprises a wide range of algorithms, models, and analytical technologies working in tandem to enable computers and other machines to sense, evaluate, act autonomously, and even learn with human-like capabilities. While it is considered among the most disruptive technologies to revolutionize the management and business models of organizations, it has also changed the way we live, work, learn, and play. Unwittingly, it has also kick-started a great game in cyberspace between hackers deploying AI for their nefarious goals and defenders who use AI to identify threats and shore up their defenses. It may be early days yet, but the AI-driven arms race between hackers and defenders is truly underway and is expected to go on for a long time to come.

Bruce Schneier, a public-interest technologist and fellow at the Harvard Kennedy School, suggests that ‘artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.’ He goes on to say that ‘Hackers Used to Be Humans. Soon, AIs Will Hack Humanity’ (Hackers Used to Be Humans. Soon, AIs Will Hack Humanity | WIRED, n.d.). Is this an extreme view? A closer look at the issues around the use of artificial intelligence suggests that we are well on our way.

To understand the issues involved, we need to examine the role of AI in offensive maneuvers by hackers and how AI can help the defenders.

AI is a powerful weapon for hackers

The weaponization of AI was inevitable. As defenders got their act together and began to close vulnerability gaps and protect entry points, hackers found AI a useful way not only to find innovative routes past defenses but also to automate much of their manual effort. AI proved a valuable addition to their arsenal, helping them add speed, stealth, and unpredictability to their attacks. Further advances in the form of Machine Learning (ML) enabled them to adapt and boost the chances of their attacks succeeding.

The World Economic Forum, in a report (3 Ways AI Will Change the Nature of Cyber Attacks | World Economic Forum, n.d.), suggests that AI-powered cyberattacks are not a hypothetical threat to be dealt with in the future. They observe that the building blocks required for offensive AI are already in place: highly sophisticated malware; financially motivated, and ruthless, criminals willing to use any means possible to increase their return on investment; and open-source AI research projects that make highly valuable information available in the public domain. They further propose that AI can change the nature of cyberattacks in three ways. Firstly, through impersonation of users, by using AI to capture the characteristics of an individual’s behavior and language from their email and social media communications. Secondly, through stealth, timing, and speed: hackers can use AI to maintain a long-term presence in targeted environments, identify vulnerabilities and attack opportunities by analyzing large volumes of data, evade security controls, and compromise more devices. Thirdly, AI lets hackers incorporate greater levels of sophistication and conduct their operations at great speed and many times the scale. It is worth mentioning that hackers today can cause greater harm by sourcing or renting advanced AI-driven technologies on the darknet and deploying them without having the skill and knowledge to develop them. This easy availability makes the threat of AI-driven attacks a major cause for concern and raises the table stakes in the security game to another level.

Hackers have recognized that AI can boost their efforts and criminal activities in a big way. It allows them to operate on a bigger scale with the added benefits of speed, minimal effort, and low cost. Some of the areas where they have been actively using AI are as follows:

AI-powered Phishing

Harvesting credentials and launching mass bespoke phishing attacks was every hacker’s dream. Today, by deploying AI, they can gather, analyze, and use information about companies, employees, and other targets more easily and quickly. This capability enables them to plan and execute targeted mass spear-phishing attacks on unsuspecting victims with greater chances of success.

AI for Deep Fakes & Deception

Spoofing and impersonation, posing as a company, brand, or known person, are techniques hackers have used for some time. Now they can use deep fakes: audio and/or video that is either entirely created or modified by artificial intelligence or machine learning to plausibly misrepresent someone as doing or saying something the hacker wants to convey. The story of the CEO of a UK-based energy company who was deceived through deep fake audio is an example of the kind of damage a deep fake can cause. In this instance, cybercriminals called the company’s CEO, impersonating the voice of the parent company’s chief executive, and deceived him into making an urgent wire transfer of $243,000.

The threat from such AI-driven deep fakes is set to rise further as cybercriminals take advantage of remote workers and a distributed workforce, which makes their job of manipulation easier. It also enables them to launch more deceptive phishing campaigns via email or business communication platforms, which serve as useful delivery mechanisms for deep fakes. As users are more likely to trust organizational communications from known sources, the hackers’ chances of success are much greater. The next frontier for deep fake technology, sometimes described as ‘AI that deceives’, is to defeat biometric authentication. For now, it is possible in principle but fails in the face of a ‘liveness’ test, which is performed to verify that the biometric traits come from a living person rather than an artificial or lifeless reproduction. Only time will tell whether deep fakes can evolve to defeat this.

AI-powered Malware

Conventional malware is designed to be both deceptive and malicious. Introducing AI into malware can make it much more potent. AI-powered malware can be capable of adapting to existing protection systems and finding ways to bypass them. Some experts believe that AI-powered malware will initially rely on exploiting known vulnerabilities and misconfigurations, which can be detected through security audits and vigilance, so that prompt action can stop the potential threat from materializing. Hackers, meanwhile, are exploring ways in which AI can help them remain untraced in target environments for long periods of time and activate triggers, driven by voice or facial recognition among others, to launch full-scale attacks.

AI-driven Vulnerability Discovery & Automated Hacking

The use of AI and ML has not only made the discovery of vulnerabilities in IT infrastructure, software, and systems easier but has also provided ways of understanding their context, developing risk scores for prioritizing mitigation actions, and correlating them to vulnerability trends. However, the same information in the wrong hands can lead to the exploitation of vulnerabilities in ways that let hackers inflict maximum damage. Today, many hacking tools are easily available that help in finding exploitable weaknesses in computer systems, web applications, servers, and networks. To develop a clear picture of their target networks and devices, hackers can use search engines like Shodan to compile a comprehensive list of internet-connected devices, including web servers, surveillance cameras, webcams, and printers. Once all this information is gathered, hackers use automated hacking tools to minimize human effort and scrutinize large amounts of information, which they can then exploit to their advantage. Examples of automated hacking attacks include brute-force attacks, credential stuffing, hacker bots, scraping, and captcha bypass. These attacks can vary in scale, timing, duration, and frequency, and AI is useful in exercising control over them.

AI-powered botnets

DDoS attacks, which use botnets of compromised ‘zombie’ machines, often involve the use of AI to control attacks and make them more devastating. The cyberattack against TaskRabbit in 2018 is a prime example: a large AI-controlled botnet was used to perform a massive DDoS attack that impacted 3.75 million users. The magnitude of the attack was such that the entire site had to be disabled until security could be restored, which affected an additional 141 million users.

The above modes of AI-powered attack are not the only ones. Hackers have tasted the power of AI and learned to harness it to make their attacks more effective. The induction of AI into the hacker’s arsenal can have wide-ranging consequences for businesses, governments, and society at large. There is a thriving market for tools and services that hackers, regardless of their technical capability, can access and use. AI-based tools are increasingly available to help hackers identify targets and launch attacks in minutes. A greater worry relates to the use of AI in new malware strains that can avoid detection or evade defenses by modifying their behavior based on what they learn about detection mechanisms and controls.

A recent variation of the typical botnet attack is bot extortion. Hackers threatened to launch a negative SEO attack on CheapAir, a flight price comparison website, and when the company refused to pay the money they demanded, they unleashed a torrent of negative reviews via bots (Automated Cyber Attacks Are the Next Big Threat. Ever Hear of “Review Bombing”?, n.d.).

In the future, we can expect hackers to find many more uses for artificial intelligence and machine learning in cyberattacks that are both dramatic in the way they are crafted and massive in scale, causing unprecedented damage to organizations, mission-critical systems, and individuals.

Fighting AI with AI

While hackers can use AI to gain an offensive advantage, defenders can equally use AI to bolster their defenses. The same advantages of speed, the ability to analyze vast amounts of data, and the automation of various aspects of cybersecurity hold great promise for defenders. Security experts and solution providers are actively harnessing the capabilities of AI and incorporating them into security solutions. The rising frequency of cyber threats and attacks leaves organizations (particularly large ones) little choice but to use AI-enabled security tools and products that can detect and respond quickly to cybersecurity incidents with limited or no human intervention.

AI for Security Threat Intelligence

The use of threat intelligence in cyberspace draws its origins from its use by militaries around the world to identify and respond to potential security threats. Cyber threat intelligence refers to information that is collated and analyzed by an organization to identify the cyber threats that they are facing or will face in the future. Threat intelligence today involves the analysis of large amounts of data on an ongoing basis which is where the role of AI and machine learning becomes critical. Several research reports support the view that AI is necessary for making the analysis of threats more efficient as well as in taking preventive and preemptive security decisions and actions.
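To make the prioritization idea concrete, here is a minimal sketch of the kind of risk scoring a threat intelligence platform might apply to rank incoming threat reports. The weights and field names are entirely invented for illustration; real platforms derive scores from far richer data and tuned or learned weights.

```python
def prioritize(threats: list[dict]) -> list[dict]:
    """Rank threat reports so analysts triage the riskiest first.
    Weights are illustrative, not from any real product."""
    def risk(t: dict) -> float:
        return (t["severity"] * 0.5            # impact if it lands (0-10)
                + t["exploit_available"] * 3    # working exploit in the wild?
                + t["asset_criticality"] * 0.3) # value of the targeted asset (0-10)
    return sorted(threats, key=risk, reverse=True)
```

Even a crude ranking like this illustrates why automation matters: with thousands of daily indicators, a consistent machine-applied score decides what a human analyst sees first.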

AI for identifying new threats and malicious activities

Traditional antivirus and threat detection software rely mainly on heuristics and virus signatures for detection. This leaves room for hackers to use new malicious code that can bypass such protection systems. AI and ML models enable threat detection software to gather and process data and draw inferences from it, leading to better threat detection and prediction. Deep learning, which enables learning by example, can further augment threat detection abilities.
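The contrast between the two approaches can be sketched in a few lines. The hash set and behavior list below are invented for illustration: signature matching flags only byte-for-byte known samples, while behavior-based scoring, the style of signal ML detectors generalize from, still fires on a novel variant whose code changed but whose actions did not.

```python
import hashlib

# Illustrative only: the MD5 of the bytes b"hello", standing in for a known sample.
KNOWN_MALWARE_HASHES = {"5d41402abc4b2a76b9719d911017c592"}

def signature_match(payload: bytes) -> bool:
    """Classic signature detection: flags only exact, previously seen samples."""
    return hashlib.md5(payload).hexdigest() in KNOWN_MALWARE_HASHES

# Invented behavior labels; real systems observe API calls, not neat strings.
SUSPICIOUS_BEHAVIORS = {"disable_av", "encrypt_files", "contact_c2", "delete_backups"}

def behavior_score(observed_actions: set[str]) -> float:
    """Behavior-based detection: scores what the program does, not its bytes,
    so a repacked or mutated variant still scores if its actions match."""
    return len(observed_actions & SUSPICIOUS_BEHAVIORS) / len(SUSPICIOUS_BEHAVIORS)
```

A mutated sample defeats `signature_match` simply by changing one byte, but `behavior_score` is unmoved by that change, which is the gap ML-based detection aims to close.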

Email is hackers’ preferred method of delivering malicious links and attachments for launching phishing forays. Surveys indicate that spam accounts for over half of received emails, and much of it may contain malicious links and payloads. AI-enabled email scanning has proved extremely useful in identifying phishing emails and other types of threats. Given the volumes of email involved, it seems the only practical way not only to identify malicious links, messages, and attachments, but also to flag suspicious activities and anomalies.
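A toy scorer illustrates the kind of signals such scanning builds on. Every phrase, weight, and threshold here is invented for the sketch; production filters learn these weights from millions of labeled messages rather than hard-coding them.

```python
import re

# Invented illustrative signals and weights.
SUSPICIOUS_PHRASES = {"verify your account": 3, "urgent wire transfer": 4,
                      "password expired": 3, "click here": 2}

def phishing_score(subject: str, body: str, sender_domain: str,
                   claimed_domain: str) -> int:
    """Return a crude risk score for an email (higher = more suspicious)."""
    text = f"{subject} {body}".lower()
    score = sum(w for phrase, w in SUSPICIOUS_PHRASES.items() if phrase in text)
    if sender_domain != claimed_domain:
        score += 5   # display name claims one domain, mail comes from another
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 4   # links to raw IP addresses are a classic phishing tell
    return score
```

An ML filter replaces the fixed dictionary with learned features, but the pipeline, extract signals and combine them into a score above or below an action threshold, is the same shape.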

Threats can emanate not only from external sources (hackers) but from internal sources (insiders) as well. AI is increasingly being deployed to understand and analyze user behavior, identify patterns, trends, and anomalies, and gain other insights that help implement the necessary security controls and enable appropriate security actions.

AI for better Endpoint Protection

As more devices are connected to the Internet every day, securing these endpoints has become critical. The era of defending a well-defined corporate perimeter is over. Instead, we are faced with protecting a large number of devices distributed across geographies, users, and applications. AI-driven endpoint protection establishes, monitors, and maintains a baseline for endpoint device behavior; deviations from the baseline can be identified and flagged for further action.
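In its simplest statistical form, baselining reduces to summarizing normal behavior and flagging outliers. The sketch below, a deliberately minimal stand-in for the richer models endpoint products use, treats one metric (say, outbound megabytes per hour) and flags anything more than a few standard deviations from its learned mean.

```python
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Summarize normal behavior for one metric as (mean, stdev)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_deviant(value: float, baseline: tuple[float, float],
               threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations
    from the learned baseline."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev
```

Real endpoint agents track many metrics at once and learn per-device, per-time-of-day baselines, but the principle, learn normal, then alert on deviation, is the same.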

AI for combating Bots

Combating bot traffic through manual systems is no longer effective given the volume of bot traffic generated today. AI and ML can help analyze vast amounts of data traffic and distinguish and categorize it, separating malicious bots from legitimate users.
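A rule-based sketch shows the kinds of session features such systems weigh; all the field names and cutoffs below are invented for illustration. ML-based bot management learns these boundaries from labeled traffic instead of fixing them by hand, which is what lets it keep up as bots adapt.

```python
from dataclasses import dataclass

@dataclass
class Session:
    requests_per_minute: float
    avg_ms_between_clicks: float
    user_agent: str
    solved_challenge: bool   # e.g. passed a captcha or similar check

def looks_like_bot(s: Session) -> bool:
    """Crude rule-based triage over illustrative session features."""
    if s.requests_per_minute > 120:      # far faster than human browsing
        return True
    if s.avg_ms_between_clicks < 50:     # faster than human reaction time
        return True
    if "python-requests" in s.user_agent.lower():
        return True                      # self-identifying automation client
    return (not s.solved_challenge) and s.requests_per_minute > 30
```

Sophisticated bots defeat every fixed rule above by throttling and spoofing, which is precisely why the text argues manual systems no longer suffice and learned classifiers are needed.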

While the above represents a few important use cases for AI in bolstering cybersecurity, there are many more. The Capgemini Research Institute in their report (Reinventing Cybersecurity with Artificial Intelligence – Capgemini Worldwide, n.d.) on the role of AI in cybersecurity found that: 

  • The use of AI for cybersecurity is a growing necessity
  • The pace of adoption of AI for cybersecurity is increasing
  • There is a strong business case for using AI for cybersecurity
  • AI enables organizations to respond faster to breaches

Just as the weaponization of AI by hackers was inevitable, so is its increasing use by defenders. As networks grow larger and more distributed, support ever more diverse devices, and carry data traffic that increases manyfold every year, it becomes more difficult to deal with the associated complexities. The use of AI seems like the best way to handle not only threat and security management but also issues such as accountability, transparency, privacy, safety, and fairness.

A peek into the future suggests that innovations in AI may lead to AI systems attacking and defending against each other, making us bystanders hoping for the best outcome. Before the AI vs AI arms race escalates to that level and we possibly lose control over these technologies, we must adopt a hybrid approach that combines the power of AI with human intervention, wherever and whenever required, to identify, analyze, predict, and even resolve new and complex cyber threats. AI is a great source of innovation in many spheres of our existence. While it can provide immense benefits, in the wrong hands it can cause large-scale damage. In times to come, we will see the great game between hackers and defenders play out, with the stakes going up in each round.