Cyberattacks Top Global Risks as AI Fuels Misinformation and Disinformation

By Ajay Singh, author of CyberStrong and Introduction to Cybersecurity: Concepts, Principles, Technologies, and Practices

The recently released World Economic Forum (WEF) Global Risks Report 2024 highlights two important risks that are tightly linked to the cyber world. The report, based on a survey of nearly 1,500 global experts, finds that in the near term (the next two years) AI-generated misinformation and disinformation is the top concern, while cyber insecurity ranks fourth among the top ten risks.

The year 2023 was one of geopolitical uncertainty, economic stress, and accelerated technological change. During the year, generative AI made its way into the mainstream of digital systems and demonstrated the potential to transform the world in several ways. It has made a significant impact across domains: boosting business productivity; transforming writing, research, design, and coding in education; and leaving its mark on culture, politics, the economy, and ethics. A generative AI wave is engulfing the cyber world which, while becoming a source of competitive advantage, also poses serious challenges and risks such as misinformation, disinformation, bias, and privacy violations. We can expect these risks to become more pronounced not only in 2024 but in the years ahead.

Misinformation and disinformation created or spread using artificial intelligence systems can cause massive harm to society at large: manipulating public opinion, influencing behaviour, undermining trust in established democratic processes, institutions, and the media, and endangering lives by violating rights and inciting hatred and violence. In 2024, over 50 countries will hold elections, and the impact of misinformation and disinformation should not be underestimated.

Effectively countering the threat of AI-generated misinformation and disinformation demands high levels of digital literacy, full transparency and accountability from AI systems and platforms, stringent global and local regulations, and independent watchdogs equipped to identify and act against the spread of such content. Unfortunately, the world is at present ill-equipped in every respect to control the negative impact of this technological marvel, which has a monstrous side to it.

Among the most potent manifestations of AI-generated content are deepfakes: synthetic videos, images, or audio created or manipulated by AI systems. Deepfakes are a growing cause for concern because they can make misinformation and disinformation more believable by impersonating or defaming public figures, celebrities, or anyone else. They can be used to fabricate or distort events, facts, and evidence, deceiving audiences, consumers, and voters.

Going by the WEF's Global Risks reports over the past five years (2020 to 2024), cybersecurity has figured among the top risks ten times, with cyberattacks among the most prominent risks in 2024. This year the report uses a new term, ‘cyber insecurity’, and perhaps rightly so: the number of organizations that maintain minimum viable cyber resilience is down by nearly a third compared with the previous year. Another insight from the report is that while large organizations have improved their cyber resilience, small and medium-sized companies have shown a significant decline. Among the causes of this growing inequity are economic stress, new regulations, and, more importantly, the rapid adoption of game-changing technologies such as AI and ML by some organizations, while the vast majority are still struggling to cope with traditional cyber threats, such as ransomware, phishing, social engineering, and malware, that they have been combating for years. The shortage of cyber skills and talent continues to undermine security efforts and is likely to persist in the near future.

AI has already become a major part of the hackers’ arsenal, given its ability to create and deploy more sophisticated, stealthy, and adaptive attacks. Hackers can now generate realistic but fake media content, build new malware that evades detection and adapts to different environments, set up backdoors, extract data, eavesdrop, install ransomware, and automate password cracking.

It is now imperative that cybersecurity responses and solutions also incorporate AI, so that countermeasures and defences can keep up with evolving threats. Even though AI has once again tilted the scales in favour of the attackers, cybersecurity solution providers have leveraged AI and machine learning (ML) in areas such as malware detection, behavioural threat detection, and incident triage and response. AI and ML are by no means silver bullets for cybersecurity, but when deployed in tandem with approaches such as zero trust they can provide effective protection from cyberattacks.
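Behavioural threat detection of the kind mentioned above typically rests on anomaly detection: a model learns a baseline of normal activity and flags deviations from it. The sketch below is a minimal illustration of that idea in Python with scikit-learn; the login features, thresholds, and data are invented for the example and do not reflect any particular vendor's product.

```python
# Illustrative sketch of behavioural anomaly detection on login events.
# Features and values are hypothetical, chosen only to show the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated baseline of normal activity: one row per login event,
# columns = [hour of day, session length (min), MB downloaded].
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),    # most logins happen mid-morning
    rng.normal(45, 10, 500),   # typical session length
    rng.normal(20, 5, 500),    # typical data volume
])

# Train on the baseline; 'contamination' is the assumed share of outliers.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

# Score new events: a 3 a.m. login pulling 500 MB should stand out.
new_events = np.array([
    [11.0, 50.0, 22.0],    # ordinary working-hours session
    [3.0, 240.0, 500.0],   # suspicious off-hours bulk download
])
for event, label in zip(new_events, model.predict(new_events)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"login {event} -> {verdict}")
```

In practice, models like this are one layer among many: their alerts feed into incident triage, where analysts, or further automation, decide whether a flagged event warrants a response.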

It remains a matter of global concern that cyber regulation lags behind the rapid development and evolution of cyber threats and technologies. The World Economic Forum has highlighted on more than one occasion that cyber regulation remains one of the key challenges for improving global cybersecurity. There is a crying need for regulation that addresses emerging cybersecurity challenges, which calls for greater collaboration, coordination, and communication among stakeholders and across levels of governance. The complexity and dynamism of the cyber environment require not only intent and prioritization but also speed and urgency. Balancing the competing interests and needs of stakeholders such as governments, businesses, civil society, and individuals is often cited as an impediment to effective regulation of cyber risks, but it is more often a lack of understanding of, commitment to, and cooperation on the task that leaves us to devise our own responses to existing and emerging cyber risks.
