The cybersecurity threat landscape

This section explores the evolving cybersecurity threat landscape and its key technological and sociopolitical influences.

Learning objectives

  • Describe the changing cybersecurity threat landscape

  • Describe technological and sociopolitical drivers of the change

This section explores cybersecurity threats at the individual, business, and societal levels and key underlying technological and sociopolitical factors.

Topics covered in this section

  • Cybersecurity threats at the societal and individual levels

  • Cybersecurity threats to business/industry

  • Technological drivers of change

  • Sociopolitical drivers of change

Cybersecurity threats at the societal and individual levels

Societal level threats

Cyberwarfare/cyberattacks on critical infrastructure such as power grids, defence facilities, and health services: “As the number and variety of devices used to support, monitor, and control critical infrastructure become more interconnected, the likelihood of cyber threat actors disrupting critical infrastructure has increased” (CSE, 2018, p. 23).

Cyberwarfare/cyberattacks on public institutions and sensitive information: Cyber threat activity “against public institutions—such as government departments, universities, and hospitals—is likely to persist because of the essential nature of the services and the sensitivity of the information they manage” (CSE, 2018, p. 26).

Cyberwarfare: Targeted propaganda/misinformation/disinformation via social media platforms (e.g., to foment unrest/public discord against authorities/government).

Individual level threats

Cybercrime (personal information/identity theft): Canadians face a rising risk of falling victim to cybercrime, especially identity theft. Theft of personal and financial information is lucrative for cybercriminals and is very likely to increase (CSE, 2018). Cybercriminals profit “by obtaining account login credentials, credit card details, and other personal information. They exploit this information to directly steal money, to resell information on cybercrime marketplaces, to commit fraud, or for extortion” (CSE, 2018, p. 11).

Political interference/cyberwarfare (malicious online influence activity): Cyber threat actors can amplify or suppress social media content using botnets, which automate online interactions and share content with unsuspecting users (CSE, 2018). “By spreading their preferred content among large numbers of paid and legitimate users, cyber threat actors can promote their specific point of view and potentially influence Canadians” (CSE, 2018, p. 15).

State and business surveillance of individuals: Cases in point are the NSA’s dragnet surveillance programs and Facebook’s Cambridge Analytica data scandal. The ability to extract value from surveillance data has made privacy and innovation “the duet of the century” (Bains, 2019).

Threat landscape in Canada

Table 1: Cybersecurity Threats Facing Individuals, Businesses, and Society (CSE, 2018)

Cybersecurity threats to business/industry

Businesses face an increasing risk of cybercrime, especially data breaches resulting from commercial espionage, commercial data theft, and social engineering schemes (social engineering is often combined with malware, ransomware, or phishing attacks).

Cybercrime (data breaches)

“Canadian businesses, especially those active in strategic sectors of the economy, are subject to cyber espionage aimed at stealing intellectual property and other commercially sensitive information.” This cyber threat activity “can harm Canada’s competitive business advantage and undermine our strategic position in global markets” (CSE, 2018, p. 19).

Espionage/information theft: Cyber campaigns launched by hackers from one country against firms in another country, resulting in the theft of business information “such as bid prices, contracts and information related to mergers and acquisitions” (Onag, 2018).

“Foreign and domestic adversaries target higher education institutions that have military and government contracts” (McNamara, March 15, 2019).

Political interference/cyberwarfare

Cyberwarfare can involve sabotage (e.g., Stuxnet, attributed to the United States and Israel) and disruptive malware attacks on financial institutions and other organizations (e.g., the WannaCry ransomware, attributed to North Korea, and the NotPetya wiper, attributed to Russia).

Risk in higher education (industry in focus)

According to EDUCAUSE, a U.S.-based nonprofit association that helps higher education elevate the impact of IT, with a community of over 100,000 members spanning 45 countries, information security was the number one IT governance issue in 2016. The top higher education information security risks that were a priority for IT in 2016 were 1) phishing and social engineering; 2) end-user awareness, training, and education; 3) limited resources for the information security program (i.e., too much work and not enough time or people); and 4) addressing regulatory requirements (Grama & Vogel, 2017).

The top higher education information security risks in the U.S. and Canada that were a priority for IT in 2016 (Grama & Vogel, 2017) are summarized as follows.

Information Security Risk in Higher Education (Adapted from EDUCAUSE, 2019)

1) Phishing and Social Engineering

“Over the past two decades, phishing scams have become more sophisticated and harder to detect.” While traditional phishing messages “sought access to an end user’s institutional access credentials (e.g., username and password),” today “ransomware and threats of extortion are common in phishing messages, leaving end users to wonder if they have to actually pay the ransom.”

2) End-User Awareness, Training, and Education

End-user awareness, training, and education “is critical as campuses combat persistent threats and try to make faculty, students, and staff more aware of the current risks.” While “the majority of U.S. institutions (74%) require information security training for faculty and staff, those programs tend to be leanly staffed with small budgets.”

3) Limited Resources for the Information Security Program

The 2015 EDUCAUSE Core Data Service survey, covering U.S. higher education institutions, showed that about 2 percent of total central IT spending is allocated to information security and that there are roughly 0.1 central IT information security FTEs per 1,000 institutional FTEs (full-time equivalents). About 55% of respondents said their security awareness budget for 2016 was less than $5,000; about 25% said they did not know; 15% reported between $5,000 and $25,000; 7% reported between $25,000 and $50,000; and less than 1% reported between $50,000 and $100,000. “With limited resources, higher education institutions must be creative and collaborative in addressing information security awareness needs.” (A rough back-of-the-envelope calculation of these ratios follows after item 4 below.)

4) Addressing Regulatory Requirements

The regulatory environment impacting higher education IT systems is complex. Data protection in higher education IT systems is governed by a patchwork of different federal and/or state laws rather than by one national data protection law. Student data are traditionally protected by the Family Educational Rights and Privacy Act of 1974 (FERPA) “although some types of student data, when it is held in healthcare IT systems, may be protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).” In addition, some types of student and institutional employee financial data may be protected by the Gramm Leach Bliley Act (GLBA). State laws may have data-breach notification requirements, and contractual agreements may have their own list of security technological controls that must be implemented and validated in IT systems. (Grama & Vogel, 2017)
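To make the staffing and budget ratios from item 3 concrete, here is a minimal back-of-the-envelope sketch in Python. The institution size and central IT budget below are hypothetical; only the 2-percent and 0.1-per-1,000 ratios come from the survey figures cited above.

```python
# Back-of-the-envelope estimate using the EDUCAUSE Core Data Service ratios cited
# above: ~2% of central IT spending goes to information security, and roughly
# 0.1 central IT information security FTE per 1,000 institutional FTEs.
# The institution size and budget below are hypothetical examples.

INSTITUTIONAL_FTES = 20_000        # hypothetical institution size (FTE)
CENTRAL_IT_BUDGET = 30_000_000     # hypothetical annual central IT spend (USD)

security_spend = 0.02 * CENTRAL_IT_BUDGET
security_staff = 0.1 * (INSTITUTIONAL_FTES / 1_000)

print(f"Estimated information security spend: ${security_spend:,.0f}")       # $600,000
print(f"Estimated information security staffing: {security_staff:.1f} FTE")  # 2.0 FTE
```

Even for a large institution, these ratios translate into only a handful of dedicated staff and a modest budget, which is why the report stresses creativity and collaboration.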

Technological drivers of change

• Social digitization

• Digital convergence of communications channels

• AI/ML

• Internet of things (IoT)

• A growing spyware industry

Social digitization

Kool, Timmer, Royakkers, and van Est (2017) of the Dutch Rathenau Instituut argue that the digitization of society has entered a cybernetic phase, thanks to a host of emerging innovations in computing and communications that together generate a new wave of digitization. The concept of digitization refers to a large cluster of digital technologies such as robotics, the Internet of Things, artificial intelligence and algorithms, and big data. Artificial intelligence is becoming ubiquitous, finding its way into more and more software applications; it involves giving computer systems a form of intelligence, such as learning and autonomous decision making, and thus supports a myriad of emerging and disruptive technological innovations (e.g., smart environments, robotics, and network monitoring). Their report, “Urgent Upgrade: Protect Public Values in Our Digitized Society,” explores the ethical and societal challenges of digitization and of the governance landscape in the Netherlands: “We investigated which technologies are expected to shape digital society in the coming years, and which social and ethical challenges they will bring” (p. 116). The analysis examined the role of the scientific community and knowledge institutions, institutions responsible for protecting human rights, civil society, and “the roles of policy makers and politicians in agenda setting, in political decision making, and in the implementation of policy” (p. 11).

The analysis investigated the ethical and social issues that arise in the material, biological, socio-cultural, and digital worlds and focused on eight technology areas that “best illustrate a wide range of the impact of the new wave of digitization” (p. 23): IoT and robotics; biometrics and persuasive technology; digital platforms, augmented reality, virtual reality, and social media; and artificial intelligence, algorithms, and big data (see Table 2).

Table 2: Technology Areas in The Four Worlds (Kool et al., 2017, p. 45)

  • Material world: Robotics; Internet of Things

  • Biological world: Persuasive technology; Multimodal biometrics

  • Socio-cultural world: Platforms; VR/AR and social media

  • Digital world: Artificial intelligence; Big data and algorithms

Although “digitization has been going on for decades,” recently it has become “easier to intervene real time in the physical world at an increasingly detailed level.” This “ushered in a new phase in the development of the digital society; a phase in which a cybernetic loop exists between the physical and the digital world” (p. 44). This means,

processes in the physical world are measured, the resulting data is analysed, and then real time intervention takes place based on that data analysis. The impact of the intervention can subsequently be measured, analysed and adjusted, before rejoining the following cybernetic loop cycle. (p. 44)
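The collection-analysis-application cycle can be pictured as a simple sense-analyse-act loop. The sketch below is purely illustrative: the sensor reading, target value, and interventions are hypothetical placeholders, not part of Kool et al.'s analysis.

```python
import random

def measure():
    """Hypothetical sensor: read a value from the physical world."""
    return 20.0 + random.gauss(0, 1)

def analyse(reading, target=21.0):
    """Hypothetical analysis: compare the measurement with a target."""
    return target - reading

def intervene(error):
    """Hypothetical real-time intervention based on the analysis."""
    return "heat" if error > 0 else "cool"

# Cybernetic loop: measure -> analyse -> intervene, then measure the effect
# of the intervention on the next pass through the loop.
for cycle in range(3):
    reading = measure()
    error = analyse(reading)
    action = intervene(error)
    print(f"cycle {cycle}: reading={reading:.2f}, error={error:+.2f}, action={action}")
```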

Kool et al. (2017) see “a return to the so-called ‘cybernetic thinking’ that attracted interest in the 1950s and 1960s.” In cybernetics, “biological, social and cognitive processes can be understood in terms of information processes and systems, and thus digitally programmed and controlled” (p. 44). Based on the various phases in the cybernetic loop--collection, analysis, and application--the authors “see various ethical and social issues emerging” related to the development of technology that require attention in the coming years. The new wave of digitization is “leading to a world in which continuous feedback and realtime management and control are increasingly important principles for a range of services.” This exerts “a strain on important public values” such as privacy, equity and equality, autonomy, and human dignity. These values are clustered into seven topics (see Table 3). Analysis of the scientific literature on these technologies revealed several recurring themes: “privacy, autonomy, security, controlling technology, human dignity, equity and inequality, and power relations” (Kool et al., 2017, p. 47).

Table 3: Overview of Ethical and Societal Issues Related to Digitization (Kool et al., 2017, p. 8)

  • Privacy: Data protection, privacy, mental privacy, spatial privacy, surveillance, function creep

  • Autonomy: Freedom of choice, freedom of expression, manipulation, paternalism

  • Safety and security: Information security, identity fraud, physical safety

  • Control over technology: Control and transparency of algorithms, responsibility, accountability, unpredictability

  • Human dignity: Dehumanization, instrumentalization, deskilling, desocialization, unemployment

  • Equity and equality: Discrimination, exclusion, equal treatment, unfair bias, stigmatization

  • Balances of power: Unfair competition, exploitation, shifting relations between consumers and businesses and between government and businesses

Kool et al. (2017) argue that while initially digitization processes consisted of “the large-scale collection of data on the physical, biological and social world,” the new wave of digitization, characterized by continuous cybernetic feedback loops, is focused on the large-scale analysis and application of that data. Nowadays “we can analyse this data on a large scale and apply the acquired knowledge directly in the real world” (p. 43). On one hand, real-time intervention and cybernetic (re)directing can benefit society in various sectors--e.g., self-driving cars that update their digital maps through experience (learning). On the other hand, the same loop enables manipulation: “Take for example social media users’ newsfeeds, which social media companies are now ‘customizing’ based on their monitoring and analysis of these same users’ surfing behaviour” (Kool et al., 2017, p. 25). Surveillance capitalism “commodifies personal clicking behavior”; “it unilaterally claims private human experience as a free source of raw material” (Thompson, 2019). Social media sites are “calibrated” for user engagement and interaction. Surveillance can influence user behaviour in complex ways, often unconsciously, undermining either the information security or the political autonomy of citizens. Data surveillance “can unconsciously influence a user’s identity, and lead to ‘filter bubbles’, in which the system only suggests news, information and contacts that match the user’s previous behaviour, choices and interests” (Kool et al., 2017, p. 10).

Kool et al. (2017) conclude that “the far-reaching digitization of society is raising fundamental ethical and societal issues.” Government and society “are not adequately equipped to deal with these issues” (p. 26). The governance system “needs to be upgraded” if it is to “safeguard our public values and fundamental rights in the digital age now and in the future.” This upgrading “requires that all parties – government, business and civil society – take action to keep digitization on the right track” (p. 26).

Digital convergence of communications channels

The increasing digitization and convergence of communications channels—spanning personal, business, and government domains—with telecommunications and broadcast industries have significantly expanded the cybersecurity threat landscape. As these traditionally separate sectors migrate toward IP-based networks and cloud platforms, previously isolated systems now interconnect, creating new vulnerabilities (ENISA, 2023). Cyber adversaries exploit this convergence, using weaknesses in one sector (e.g., telecom infrastructure) to attack others (e.g., enterprise VoIP systems or emergency broadcast networks). Threats like SS7/Diameter protocol exploits, large-scale DDoS attacks, and supply chain compromises now propagate more easily across converged digital ecosystems (Kshetri, 2023).

The shift to digital broadcasting and IP-based services has further introduced novel risks, including deepfake-driven disinformation and ransomware attacks targeting live media streams (NIST, 2022). Meanwhile, telecom providers—now acting as hybrid IT and communications operators—face heightened targeting by nation-state actors seeking to disrupt critical services or intercept sensitive data. Legacy systems, such as traditional PSTN networks, remain operational alongside modern 5G and IoT infrastructures, creating security gaps that attackers actively exploit (Lewis, 2023). Addressing these challenges requires adaptive policies, cross-sector collaboration, and updated regulatory frameworks to secure increasingly interconnected digital environments.

Related: Concentration of media ownership.

AI/ML

The integration of artificial intelligence (AI) and machine learning (ML) into cybersecurity has significantly altered the threat landscape, introducing both defensive advancements and sophisticated offensive capabilities. On the defensive side, AI/ML enhances threat detection by analyzing vast datasets to identify anomalies, predict attacks, and automate responses (Buczak & Guven, 2016). For instance, ML algorithms can detect previously unknown malware by recognizing behavioral patterns rather than relying on signature-based methods (Yadav & Rao, 2015). However, adversaries have also leveraged AI to develop more evasive threats, such as polymorphic malware that adapts to bypass traditional security measures (Miller et al., 2020). This dual-use nature of AI/ML has created an ongoing arms race between cyber defenders and attackers.
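As a concrete illustration of behaviour-based detection (as opposed to signature matching), the sketch below trains an unsupervised anomaly detector on traffic assumed to be benign and then flags an unusual flow. The feature values are hypothetical and scikit-learn is assumed to be available; this is a toy illustration, not a production detector.

```python
# Minimal sketch of behaviour-based (anomaly) detection, assuming scikit-learn.
# Each row is a hypothetical summary of a network flow:
# [bytes_sent, bytes_received, duration_seconds, distinct_ports].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Train only on traffic assumed to be benign; no attack signatures are needed.
benign_flows = rng.normal(loc=[5_000, 20_000, 30, 3],
                          scale=[1_000, 5_000, 10, 1],
                          size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(benign_flows)

new_flows = np.array([
    [5_200, 21_000, 28, 3],      # resembles normal behaviour
    [900_000, 1_000, 600, 40],   # exfiltration-like: huge upload, many ports
])
print(detector.predict(new_flows))  # 1 = looks normal, -1 = flagged as anomalous
```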

One of the most concerning developments is the use of AI-driven social engineering attacks, such as deepfake phishing and automated spear-phishing campaigns. Attackers employ natural language processing (NLP) to craft highly personalized messages, increasing the success rate of deception (Aburaddad et al., 2021). Additionally, adversarial machine learning techniques enable attackers to manipulate AI systems by injecting malicious data or exploiting model biases (Biggio & Roli, 2018). For example, evasion attacks can fool ML-based intrusion detection systems by subtly altering input data to avoid classification as malicious (Papernot et al., 2016). These advancements underscore the need for robust, adaptive security frameworks that account for AI-augmented threats.
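The evasion idea can be illustrated with a deliberately simple linear classifier: an attacker who can probe the model nudges a malicious sample across the decision boundary until it is classified as benign. The data, features, and perturbation size below are hypothetical; real detectors and attacks are far more complex.

```python
# Toy evasion-attack sketch against a linear classifier, assuming scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical 2-feature representations of benign (0) and malicious (1) samples.
X = np.vstack([rng.normal([0, 0], 1.0, size=(200, 2)),
               rng.normal([4, 4], 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = np.array([[4.0, 4.0]])           # clearly in the malicious region
print(clf.predict(sample))                # -> [1] (detected)

# Push the sample against the model's weight vector, i.e. toward the boundary.
w = clf.coef_[0]
evading = sample - 3.5 * w / np.linalg.norm(w)
print(clf.predict(evading))               # very likely -> [0] (evades detection)
```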

Despite these challenges, AI/ML also offers promising solutions to enhance cybersecurity resilience. Autonomous response systems powered by reinforcement learning can mitigate attacks in real time, reducing the window of vulnerability (Sarker et al., 2020). Furthermore, AI-driven threat intelligence platforms improve situational awareness by correlating global attack patterns and predicting emerging threats (Mohammed et al., 2021). However, the effectiveness of these defenses depends on continuous model retraining and adversarial testing to prevent exploitation (Carlini & Wagner, 2017). As AI/ML continues to evolve, policymakers and security professionals must prioritize ethical guidelines and collaborative frameworks to mitigate risks while harnessing its defensive potential.

Internet of things (IoT)

The Internet of Things (IoT) has significantly expanded the cybersecurity threat landscape by introducing a vast array of interconnected devices, many of which lack robust security measures. Unlike traditional computing systems, IoT devices often prioritize functionality and cost-efficiency over security, making them vulnerable to exploitation (Kolias et al., 2017). Attack surfaces have grown exponentially as IoT deployments span critical sectors such as healthcare, smart cities, and industrial control systems, providing adversaries with new entry points for breaches, data theft, and large-scale attacks like Distributed Denial-of-Service (DDoS) (Antonakakis et al., 2017). Furthermore, the heterogeneity of IoT ecosystems complicates security standardization, leaving gaps that cybercriminals can exploit.

The proliferation of IoT has also amplified the scale and sophistication of cyber threats. Compromised IoT devices are frequently weaponized in botnets, enabling attacks that disrupt critical infrastructure—exemplified by the Mirai botnet, which harnessed vulnerable IoT devices to launch devastating DDoS attacks (Antonakakis et al., 2017). Additionally, the convergence of IoT with emerging technologies like 5G and edge computing introduces new attack vectors, such as man-in-the-middle attacks and firmware exploits (Kolias et al., 2017). As IoT adoption continues, organizations must prioritize security-by-design principles, threat monitoring, and regulatory frameworks to mitigate these evolving risks.
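Because many IoT compromises (including Mirai) begin with factory-default credentials, a basic defensive step is auditing a device inventory against known defaults. The sketch below is purely illustrative; the inventory and credential list are hypothetical, and a real audit would also check firmware versions and exposed services.

```python
# Illustrative audit of an IoT device inventory for factory-default credentials.
# The inventory and default-credential list below are hypothetical examples.
KNOWN_DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
}

device_inventory = [
    {"host": "camera-01.example.internal", "username": "admin", "password": "admin"},
    {"host": "thermostat-02.example.internal", "username": "ops", "password": "N0t-a-default"},
]

def audit_default_credentials(devices):
    """Return the devices that still use a known factory-default credential pair."""
    return [d for d in devices
            if (d["username"], d["password"]) in KNOWN_DEFAULT_CREDENTIALS]

for device in audit_default_credentials(device_inventory):
    print(f"WARNING: {device['host']} is using factory-default credentials")
```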

A growing spyware industry

Spyware is software designed to secretly monitor, collect, and transmit a user’s activities, personal data, or sensitive information to a third party—often without consent. While commonly associated with cybercriminals, spyware can also be commercially developed and sold for surveillance purposes, blurring the line between malicious hacking and lawful monitoring.

Types of Spyware

1. Malicious Spyware (Illegitimate Use)

  • Keyloggers – Records keystrokes (e.g., passwords, credit card numbers).

    • Examples: Hawkeye, Spyrix

  • Adware with Spyware – Displays ads while secretly harvesting data.

    • Examples: Search Marquis, Fireball

  • Trojans – Disguised as legitimate software but installs spyware.

    • Examples: Emotet, Zeus

  • Mobile Spyware – Infects smartphones (e.g., stalkerware like FlexiSPY).

2. Commercial Spyware (Legal but Controversial)

Developed by private firms and sold to governments, law enforcement, or private entities for surveillance—often marketed as "lawful intercept" tools.

  • NSO Group (Israel) – Known for Pegasus spyware, which infects smartphones, extracts messages, calls, and even activates cameras/microphones.

    • Criticism: Used against journalists, activists, and dissidents.

    • Sanctions: Added to U.S. export blacklist in 2021.

  • Paragon (Israel) – Sells Graphite, a spyware tool targeting iOS and Android.

How Spyware Infects Your Computer

  1. Pirated Software/Cracks – Fake downloads bundle spyware.

  2. Malicious Ads/Pop-ups – Redirects to infected sites.

  3. Phishing Emails – Infected attachments (PDFs, Word files).

  4. Fake Updates – Disguised as Flash Player or browser updates.

  5. Bundled Freeware – Installs spyware if "Advanced" settings are skipped.

  6. Zero-Click Exploits (Advanced Spyware like Pegasus) – No user interaction needed.

Signs Your Device Has Spyware

  • Slow performance, frequent crashes

  • Unusual pop-ups & browser redirects

  • Increased data usage (spyware transmitting data)

  • Unknown programs in Task Manager (see the process-listing sketch after this list)

  • Antivirus suddenly disabled
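One way to triage the "unknown programs" sign above is simply to enumerate running processes and their executable paths for manual review. The sketch below assumes the third-party psutil package is installed (pip install psutil); it only lists information and makes no judgement about whether a process is spyware.

```python
# Minimal triage sketch: list running processes for manual review.
# Assumes the third-party `psutil` package is installed (pip install psutil).
import psutil

for proc in psutil.process_iter(attrs=["pid", "name", "exe", "username"]):
    info = proc.info
    # A missing executable path is not proof of spyware, but it is worth a look.
    flag = "??" if not info.get("exe") else "  "
    print(f"{flag} pid={info['pid']:>6}  user={info.get('username')}  "
          f"name={info['name']}  exe={info.get('exe')}")
```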

The Legal & Ethical Debate on Commercial Spyware

Defenders Argue:

  • Helps governments combat terrorism and crime.

  • Used for lawful surveillance with warrants.

Critics Counter:

  • Enables human rights abuses (targeting activists, journalists).

  • Sold to authoritarian regimes with poor oversight.

  • Should face stricter bans (e.g., U.S. blacklisting NSO Group).

Spyware is evolving—from criminal hacking tools to government-grade surveillance software. While some uses are legal, the lack of regulation raises serious privacy and human rights concerns. Staying informed and practicing good cybersecurity hygiene is crucial in this shifting landscape.

Mercenary spyware

Several companies, often referred to as "cyber-mercenaries" or "private-sector offensive actors" (PSOAs), specialize in developing and selling advanced spyware to governments, law enforcement, and private entities. Many operate in legal gray areas, facing criticism for enabling surveillance abuses.

Notable firms include NSO Group (Israel), Paragon (Israel), Candiru (Israel), Circles (Israel), Intellexa (Greece/Cyprus), Cytrox (North Macedonia, part of Intellexa), FinFisher (Gamma Group, UK/Germany), Wintego (Spain), RCS Labs (Italy, now part of Cy4Gate), BellTroX (India), and Zerodium (France/US).

NSO Group’s Legal Survival Tactics & Rebranding Efforts

  • Financial Distress & Reinvention: After the 2021 U.S. sanctions, NSO Group restructured its ownership and finances, while its co-founder and former CEO launched a separate venture, Dream Security, positioned around "defensive cybersecurity."

  • Lobbying & PR: Hired ex-NSA officials to lobby Western governments, arguing spyware is "essential" against encrypted apps (e.g., WhatsApp, Signal).

  • Ongoing Lawsuits:

    • Apple’s Lawsuit (filed 2021): Accused NSO of violating U.S. laws by targeting iPhone users.

    • Meta (WhatsApp) Lawsuit: Filed in 2019 after NSO allegedly hacked about 1,400 phones via WhatsApp calls; in late 2024 a U.S. court found NSO liable.

Paragon’s Stealthy Business Model & "No Trace" Claims

  • Ghost Infrastructure: Paragon allegedly uses front companies (e.g., "Itervest") to obscure its sales, making accountability difficult.

    • “Forensic Disappearance” Feature: Its Graphite spyware reportedly self-destructs if detected, leaving minimal traces, a selling point for clients seeking to avoid exposure.

  • Controversy: Despite claims of "vetted government clients," leaks suggest deployments in Kazakhstan and Mexico, where spyware targeted opposition figures.

Sociopolitical drivers of change

• U.S.-China rivalry for technological and geopolitical dominance

• Expansion of the military-industrial complex (collusion between Western governments and the dominant media companies)

These two sociopolitical influences are powerful, interconnected drivers shaping the modern cybersecurity threat landscape. Here is a detailed elaboration on each.

U.S.-China Rivalry for Technological and Geopolitical Dominance

This is not merely a trade war; it is a comprehensive strategic competition between a reigning superpower and a rising challenger. This rivalry fundamentally reshapes cybersecurity by blurring the lines between economic competition, espionage, and preparation for potential conflict.

How U.S.-China Rivalry Influences the Cybersecurity Threat Landscape:

  • From Espionage to Sabotage: The primary goal of cyber operations has expanded beyond traditional espionage (stealing blueprints, intellectual property, and government secrets). It now includes sabotage and pre-positioning. For example:

    • Intellectual Property Theft: State-sponsored actors (often categorized as Advanced Persistent Threats or APTs) aggressively target rival companies in key sectors like semiconductors, artificial intelligence, biotechnology, and renewable energy. This targeting aims to accelerate domestic technological development and erode the competitive advantage of the rival powers and their allied firms.

    • Pre-positioning in Critical Infrastructure: Both the U.S. and China (and other nations) are suspected of implanting malware within each other's critical infrastructure—power grids, financial systems, water treatment facilities, and transportation networks. The goal is not to cause immediate damage but to have the capability to disrupt or destroy these systems during a geopolitical crisis or military conflict, acting as a deterrent or a first-strike option. The discovery of Chinese malware implants in U.S. critical infrastructure networks is a stark example (CISA, May 24, 2023).

  • The "Civil-Military Fusion" Doctrine: China's national strategy explicitly mandates that private companies, academic institutions, and tech startups must collaborate with and support the goals of the People's Liberation Army (PLA). This means:

    • A Blurred Line: A Chinese tech company developing AI for facial recognition is also developing technology for military use. This makes almost any Chinese tech firm a potential collaborator with state-sponsored cyber operations.

    • A Vast Ecosystem of Actors: The threat is no longer just from government hackers. It comes from a sprawling, state-directed ecosystem, making attribution and defense more complex.

  • Supply Chain Compromises: The rivalry has turned global technology supply chains into a key battleground.

    • Hardware and Software Dependencies: The U.S. fears that reliance on Chinese-made hardware (e.g., Huawei 5G equipment) or software could create "backdoors" for espionage or sabotage. This has led to bans and "rip and replace" initiatives.

    • Weaponizing Interdependence: The U.S. uses its market power to cut off Chinese companies from critical American technology (like advanced chips from NVIDIA or ASML's EUV lithography machines). In response, China is motivated to copy that technology or develop it independently, increasing incentives for cyber theft.

  • The Battle for Norms and Standards: The rivalry is also about who sets the rules for the future internet and technologies like AI. China promotes a state-controlled, censored internet model, while the U.S. advocates for a more subtly governed and manipulated model. Winning this battle means embedding your technological standards and cybersecurity protocols globally, which grants long-term economic and intelligence advantages.

Expansion of the Military-Industrial Complex (into the Digital Realm)

The traditional concept of the "military-industrial complex" (the symbiotic relationship between a nation's military, its government, and the defense contractors that supply it) has expanded into the digital age. It now includes a new, powerful actor: major technology and media companies. This "collusion" creates a self-reinforcing cycle that amplifies cyber threats and shapes public perception.

How it Influences the Cybersecurity Threat Landscape:

  • The Cyber-Industrial Complex: A vast ecosystem of private cybersecurity firms, defense contractors with cyber units, and threat intelligence companies has emerged. Their business model depends on the existence of a pervasive and evolving threat.

    • Financial Incentive for Threat Inflation: While the threats are very real, these companies have a vested interest in highlighting and even exaggerating the severity of cyber threats. This drives government spending, increases their contracts, and sells their products (e.g., zero-trust architectures, advanced endpoint detection).

    • The "Cyber War" Narrative: This complex benefits from framing cybersecurity through a lens of perpetual "war," requiring a wartime budget and the suspension of certain norms. This mindset encourages more aggressive offensive cyber operations by states, which in turn provokes responses from adversaries, escalating the overall threat level for everyone.

  • The Government-Media-Tech Nexus:

    • Media's Role: Dominant media companies, often reliant on access to government officials for stories, can become conduits for shaped narratives. Leaks about cyber threats (e.g., "Russian hackers targeting the grid") are often strategically released by government agencies to achieve a goal: warn the public, deter an adversary, or justify a new policy or budget request. This creates a cycle of fear and reaction.

    • Tech's Dual Role: Companies like Google, Microsoft, Amazon (AWS), and Meta are now critical infrastructure. They host government data, provide communication platforms, and build the cloud infrastructure that powers the modern economy.

      • Partnerships: Governments are increasingly dependent on them for threat intelligence, and these companies have their own elite security teams that often find and disclose state-sponsored attacks. This is a form of collusion that is often necessary for national security but also concentrates immense power in a few private hands.

      • Surveillance Capitalism: The business models of these companies are based on data collection. This creates massive, lucrative targets for hackers (nation-state and criminal) and raises the stakes of every breach. Furthermore, the data harvesting techniques pioneered by tech companies are often adopted and used by state intelligence agencies.

  • The Privatization of Cyber Conflict: Governments now contract private companies to conduct offensive and defensive cyber operations. Mercenary hacker groups (like the Israeli NSO Group with its Pegasus spyware) sell intrusion capabilities to any government that can pay, dramatically empowering smaller states and autocratic regimes and spreading advanced cyber capabilities globally. This directly fuels a more dangerous and unpredictable threat landscape.

Conclusion: The Interconnection

These two influences are deeply intertwined. The U.S.-China rivalry provides the motivation and justification for massive spending and aggressive action. The expanded military-industrial complex provides the means and the machinery to execute that action, while simultaneously amplifying the threat narrative to sustain its own growth.

This creates a feedback loop: geopolitical tension fuels cyber conflict, which the cyber-industrial complex monetizes and the media amplifies, which in turn leads to greater public and governmental fear, resulting in more funding for cyber capabilities and more aggressive actions that further intensify the geopolitical rivalry. This cycle ensures that the cybersecurity threat landscape will remain dynamic, dangerous, and increasingly central to global politics.

Key takeaways

  • Cybersecurity threats at the societal level include state-sponsored attacks on critical infrastructure (power grids, health services), cyberwarfare targeting public institutions to steal sensitive information, and targeted propaganda or disinformation campaigns on social media designed to foment public discord.

  • Cybersecurity threats at the individual level include cybercrime such as identity theft and financial fraud, political interference through malicious online influence campaigns and botnets, and pervasive surveillance from both state actors and corporations leading to a loss of privacy.

  • Cybersecurity threats at the business level include cyber espionage aimed at stealing intellectual property and sensitive commercial data, disruptive ransomware and malware attacks, and sophisticated social engineering schemes like phishing that target employees.

  • Technological drivers of change include social digitization (creating a cybernetic loop between the physical and digital worlds), digital convergence of communications channels (expanding the attack surface), and the dual-use nature of AI/ML (which enhances both defense and offense capabilities).

  • Society's governance structures are struggling to keep pace. The rapid digitization of society, characterized by cybernetic feedback loops, is straining public values like privacy, autonomy, and equity, and current policies are not adequately equipped to address these challenges.

  • Spyware is a growing industry. The market now includes both malicious software and controversial commercial-grade surveillance tools sold by "Private-Sector Offensive Actors" (PSOAs) to governments, blurring the lines between legal and illegal surveillance and raising serious ethical concerns.

  • Sociopolitical drivers of change include the U.S.-China rivalry for technological and geopolitical dominance (fueling espionage and pre-positioning in critical infrastructure) and the expansion of the military-industrial complex into the digital realm, involving collusion between governments, defense contractors, and dominant media companies to shape threat narratives.

References

Aburaddad, J., et al. (2021). AI-powered cyber threats: A review of offensive AI in cybersecurity.

Antonakakis, M., et al. (2017). Understanding the Mirai botnet. USENIX Security Symposium.

Biggio, B., & Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84, 317-331.

Buczak, A. L., & Guven, E. (2016). A survey of data mining and machine learning methods for cybersecurity intrusion detection. IEEE Communications Surveys & Tutorials, 18(2), 1153-1176.

Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. IEEE Symposium on Security and Privacy.

Cybersecurity & Infrastructure Security Agency (CISA). (2023, May 24). People's Republic of China state-sponsored cyber actor living off the land to evade detection (Cybersecurity Advisory). https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-144a

ENISA. (2023). Threat landscape for converged communications networks. European Union Agency for Cybersecurity. https://www.enisa.europa.eu/publications/threat-landscape-for-converged-communications-networks

Kolias, C., et al. (2017). DDoS in the IoT: Mirai and other botnets. Computer, 50(7), 80-84.

Kool, L., Timmer, J., Royakkers, L. M. M., & van Est, Q. C. (2017). Urgent upgrade: Protect public values in our digitized society. The Hague: Rathenau Instituut.

Kshetri, N. (2023). Cyberthreats in digital convergence: Risks and responses. Telecommunications Policy, 47(4), 102476. https://doi.org/10.1016/j.telpol.2023.102476

Lewis, J. A. (2023). The geopolitics of converged telecommunications. Center for Strategic and International Studies (CSIS). https://www.csis.org/analysis/geopolitics-converged-telecommunications

Miller, B., Kantchelian, A., Afroz, S., Bachwani, R., Dauber, E., Huang, L., ... & Goldberg, A. (2020). Adversarial active learning. USENIX Security Symposium.

Mohammed, N., et al. (2021). AI-based threat intelligence: A systematic review. Computers & Security, 105, 102258.

NIST. (2022). Security challenges in digital broadcasting (NIST Special Publication 1800-32). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1800-32

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. IEEE Symposium on Security and Privacy.

Sarker, I. H., et al. (2020). Cybersecurity data science: An overview from machine learning perspective. Journal of Big Data, 7(1), 1-29.

Yadav, T., & Rao, A. M. (2015). Technical aspects of cyber kill chain. International Journal of Computer Science and Engineering, 3(5), 81-85.
