
The evolving cybersecurity threat landscape

This section explores the evolving cybersecurity threat landscape and its key technological and sociopolitical influences.

Learning objectives

  • Describe the changing cybersecurity threat landscape

  • Describe technological and sociopolitical drivers of the change

This section explores how digital technologies and geopolitical forces are transforming the nature of cyber risk across three interconnected levels: societal, individual, and business. At the societal level, nation-states now conduct cyber operations targeting critical infrastructure—power grids, defense facilities, and health services—while simultaneously weaponizing information through disinformation campaigns designed to undermine public trust and democratic institutions. Individuals face persistent threats ranging from identity theft and financial fraud to sophisticated political influence operations and pervasive surveillance by both state actors and corporations. Businesses contend with industrial espionage targeting intellectual property, ransomware attacks disrupting operations, and complex supply chain compromises that exploit trusted vendor relationships.

Underlying these threats are powerful technological drivers—including digital convergence, artificial intelligence, the Internet of Things, and the commercialization of spyware—that expand attack surfaces and accelerate the global arms race. These technological forces intersect with sociopolitical dynamics, particularly the strategic competition between the United States and China and the expanding military-industrial-digital complex, creating a threat landscape characterized by escalating complexity, blurred boundaries between peace and conflict, and profound implications for privacy, security, and public values in an increasingly digitized world.

Topics covered in this section

  • Cybersecurity threats at the societal level

  • Cybersecurity threats at the individual level

  • Cybersecurity threats to businesses

  • Technological drivers of change

  • Sociopolitical drivers of change

Cybersecurity threats at the societal level

As nations increasingly rely on interconnected digital systems for the management and operation of critical infrastructure and essential services, the potential for cyber operations to disrupt daily life has grown exponentially. Critical infrastructure such as power grids, defense facilities, and health services—once protected by physical isolation—now exists within complex digital ecosystems that span public and private networks. The Canadian Communications Security Establishment (CSE) warns that as "the number and variety of devices used to support, monitor, and control critical infrastructure become more interconnected, the likelihood of cyber threat actors disrupting critical infrastructure has increased" (2018, p. 23). This interconnection means that a vulnerability in a seemingly minor component can potentially cascade into region-wide blackouts, compromised military communications, or the disruption of hospital operations during a public health crisis.

Beyond critical infrastructure, public institutions face persistent targeting by cyber threat actors seeking access to sensitive information. Government departments, universities, and hospitals manage vast repositories of data that are invaluable to adversaries—ranging from confidential communications and defense research to personal health records and academic intellectual property. The CSE notes that cyber threat activity "against public institutions—such as government departments, universities, and hospitals—is likely to persist because of the essential nature of the services and the sensitivity of the information they manage" (2018, p. 26). These institutions are particularly vulnerable because they must balance security with accessibility, maintaining open environments for education, research, and public service while defending against determined adversaries.

A third dimension of societal-level threat operates not in the technical infrastructure but in the information space itself. Cyber warfare now encompasses targeted propaganda, misinformation, and disinformation campaigns conducted via social media platforms. Adversaries exploit the architecture of digital communication to foment unrest, deepen social divisions, and undermine public confidence in democratic institutions. By deploying coordinated networks of automated accounts and manipulating algorithmic content distribution, state-sponsored actors can amplify divisive messages, suppress opposing viewpoints, and create the illusion of widespread support for or against particular policies. These information operations represent a fundamental challenge to societal resilience, as they target the shared understanding and social cohesion upon which democratic societies depend.

Cybersecurity threats at the individual level

At the individual level, cybercrime—particularly personal information and identity theft—represents the most immediate and widespread threat facing Canadians. The digital transformation of banking, commerce, and government services has concentrated unprecedented volumes of personal data within systems that are continuously targeted by criminal actors. The CSE (2018) observes that Canadians face a rising cyber risk of falling victim to cybercrime, especially identity theft, and that the theft of personal and financial information is both lucrative for cybercriminals and very likely to increase. Cybercriminals profit "by obtaining account login credentials, credit card details, and other personal information. They exploit this information to directly steal money, to resell information on cybercrime marketplaces, to commit fraud, or for extortion" (p. 11). The consequences for victims extend beyond financial loss to include reputational damage, emotional distress, and years of effort restoring compromised identities.

Individuals are also targets of political interference and cyber warfare through malicious online influence activities designed to shape perceptions and behavior. Cyber threat actors deploy botnets—networks of compromised devices that automate online interactions—to amplify or suppress social media content, creating artificial trends and manipulating public discourse. "By spreading their preferred content among large numbers of paid and legitimate users, cyber threat actors can promote their specific point of view and potentially influence Canadians" (CSE, 2018, p. 15). These operations blur the line between foreign interference and domestic political discourse, exploiting the very mechanisms that make social media platforms engaging to create echo chambers and filter bubbles that distort individual understanding of public affairs.

A third and increasingly pervasive individual-level threat arises from state and business surveillance of personal activity. High-profile cases including the NSA's dragnet surveillance programs and Facebook's Cambridge Analytica data scandal have exposed the extent to which both governments and corporations collect, analyze, and exploit individual data. The ability to extract economic and political value from surveillance data has made privacy and innovation what former Canadian Minister of Innovation, Science and Industry Navdeep Bains (2019) called "the duet of the century". Unlike the criminal threats described above, surveillance operates within legal frameworks, yet its scale and opacity raise profound questions about consent, autonomy, and the long-term implications of a society in which individual behavior is continuously monitored and monetized.

The Threat Landscape in Canada

Table 1: Cybersecurity Threats Facing Individuals, Businesses, and Society (CSE, 2018)

Cybersecurity threats to businesses

Businesses across all sectors face an escalating risk of cybercrime, with data breaches resulting from commercial espionage, theft of sensitive information, and social engineering schemes representing the most pervasive threats. These attacks frequently combine psychological manipulation with technical exploits—social engineering tactics such as phishing are often the delivery mechanism for malware, ransomware, or unauthorized access that enables data exfiltration. The CSE notes, "Canadian businesses, especially those active in strategic sectors of the economy, are subject to cyber espionage aimed at stealing intellectual property and other commercially sensitive information". This targeted activity "can harm Canada's competitive business advantage and undermine our strategic position in global markets" (2018, p. 19). The stakes extend beyond immediate financial loss to include long-term erosion of competitiveness as proprietary research, manufacturing processes, and business strategies find their way to competitors willing to bypass the costs of legitimate innovation.

Cyber espionage campaigns often transcend borders, with hackers operating from one country targeting firms in another to obtain intelligence that confers commercial advantage. These operations seek information "such as bid prices, contracts and information related to mergers and acquisitions" (Onag, 2018)—data that enables adversaries to outmaneuver targeted companies in competitive bidding or to anticipate strategic moves before they become public. Higher education institutions occupy a particularly vulnerable position: universities conducting advanced research under military and government contracts face persistent targeting from both foreign and domestic adversaries seeking to acquire cutting-edge knowledge before it can be commercialized or translated into defense applications (McNamara, 2019). The convergence of academic openness with national security sensitivity creates unique vulnerabilities that threat actors exploit with increasing sophistication.

Beyond espionage and data theft, businesses must contend with politically motivated cyber operations that blur the line between criminal activity and state-sponsored warfare. Destructive attacks such as the Stuxnet worm—attributed to joint U.S. and Israeli operations—demonstrated that cyber weapons could physically sabotage industrial equipment, causing damage that previously required kinetic military action. Financial institutions have emerged as frequent targets of such campaigns, with ransomware attacks including WannaCry and Petya—attributed to North Korean state actors—disrupting operations globally and causing billions in damages. These incidents illustrate how businesses become collateral damage in geopolitical conflicts, their networks serving as battlegrounds where nations prosecute campaigns of disruption and coercion. The convergence of criminal profit motives with state strategic objectives means that enterprises must defend not only against financially motivated actors but also against sophisticated adversaries whose goals extend far beyond the balance sheet.

Information security risks in higher education (industry in focus)

According to EDUCAUSE, a U.S.-based nonprofit association that helps higher education elevate the impact of IT, with a community of over 100,000 members spanning 45 countries, information security was the number one IT governance issue in 2016. The top higher education information security risks that were a priority for IT in 2016 were: 1) phishing and social engineering; 2) end-user awareness, training, and education; 3) limited resources for the information security program (i.e., too much work and not enough time or people); and 4) addressing regulatory requirements (Grama & Vogel, 2017).

The top higher education information security risks in the U.S. and Canada that were a priority for IT in 2016 (Grama & Vogel, 2017) are summarized as follows.

Information Security Risk in Higher Education (Adapted from EDUCAUSE, 2019)


1) Phishing and Social Engineering

“Over the past two decades, phishing scams have become more sophisticated and harder to detect.” While traditional phishing messages “sought access to an end user’s institutional access credentials (e.g., username and password),” today “ransomware and threats of extortion are common in phishing messages, leaving end users to wonder if they have to actually pay the ransom.”

2) End-User Awareness, Training, and Education

End-user awareness, training, and education “is critical as campuses combat persistent threats and try to make faculty, students, and staff more aware of the current risks.” While “the majority of U.S. institutions (74%) require information security training for faculty and staff, those programs tend to be leanly staffed with small budgets.”

3) Limited Resources for the Information Security Program

The 2015 EDUCAUSE Core Data Service survey covering all U.S. higher education institutions showed that about 2 percent of total central IT spending is allocated for information security and that there are 0.1 central IT information security FTEs per 1,000 institutional FTEs (full-time equivalents). About 55% of surveyed respondents said the security awareness budget for 2016 was less than $5,000; about 25% said they did not know; 15% said between $5,000 and $25,000; 7% said between $25,000 and $50,000; and less than 1% said between $50,000 and $100,000. “With limited resources, higher education institutions must be creative and collaborative in addressing information security awareness needs.”

4) Addressing Regulatory Requirements

The regulatory environment impacting higher education IT systems is complex. Data protection in higher education IT systems is governed by a patchwork of different federal and/or state laws rather than by one national data protection law. Student data are traditionally protected by the Family Educational Rights and Privacy Act of 1974 (FERPA) “although some types of student data, when it is held in healthcare IT systems, may be protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).” In addition, some types of student and institutional employee financial data may be protected by the Gramm Leach Bliley Act (GLBA). State laws may have data-breach notification requirements, and contractual agreements may have their own list of security technological controls that must be implemented and validated in IT systems. (Grama & Vogel, 2017)

Technological drivers of change

  • Social digitization

  • Digital convergence of communications channels

  • AI/ML

  • Internet of things (IoT)

  • A growing spyware industry

Social digitization

Kool, Timmer, Royakkers, and van Est (2017) of the Dutch Rathenau Instituut argue that the digitization of society has entered a cybernetic phase, driven by a host of emergent technological innovations in computing and communications that together generate a new wave of digitization. The concept of digitization refers to a large cluster of digital technologies such as robotics, the Internet of Things, artificial intelligence and algorithms, and big data. Artificial intelligence—which gives computer systems a form of intelligence, such as learning and autonomous decision making—is becoming ubiquitous, increasingly finding its way into more and more software applications, and thus supports a myriad of emerging and disruptive technological innovations (e.g., smart environments, robotics, and network monitoring). Their report “Urgent Upgrade: Protect Public Values in Our Digitized Society” explores the ethical and societal challenges of digitization and the challenges of the governance landscape in the Netherlands. “We investigated which technologies are expected to shape digital society in the coming years, and which social and ethical challenges they will bring” (p. 116). The analysis involved an examination of the role of the scientific community and knowledge institutions, institutions responsible for protecting human rights, civil society, and “the roles of policy makers and politicians in agenda setting, in political decision making, and in the implementation of policy” (p. 11).

The analysis investigated the ethical and social issues that arise in the material, biological, socio-cultural and digital worlds and focused on eight technology areas that “best illustrate a wide range of the impact of the new wave of digitization” (p. 23)--that is, IoT and robotics; biometrics and persuasive technology; digital platforms, augmented reality, virtual reality and social media; and artificial intelligence, algorithms and big data (see Table 2).

Table 2: Technology Areas in The Four Worlds (Kool et al., 2017, p. 45)

Material world        Biological world         Socio-cultural world      Digital world
Robotics              Persuasive technology    Platforms                 Artificial intelligence
Internet of Things    Multimodal biometrics    VR/AR and social media    Big data and algorithms

Although “digitization has been going on for decades,” recently it has become “easier to intervene real time in the physical world at an increasingly detailed level.” This “ushered in a new phase in the development of the digital society; a phase in which a cybernetic loop exists between the physical and the digital world” (p. 44). This means,

processes in the physical world are measured, the resulting data is analysed, and then real time intervention takes place based on that data analysis. The impact of the intervention can subsequently be measured, analysed and adjusted, before rejoining the following cybernetic loop cycle. (p. 44)
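The measure/analyse/intervene cycle described in this quote can be sketched as a minimal control loop. The thermostat scenario below and all names and values in it are illustrative assumptions, not drawn from the source:

```python
# Minimal sketch of a cybernetic loop: measure -> analyse -> intervene,
# with the effect of each intervention feeding the next cycle.
# Hypothetical thermostat example; all names and numbers are illustrative.

def measure(world):
    """Sense the physical world (here: read the current temperature)."""
    return world["temperature"]

def analyse(reading, target):
    """Turn raw data into a decision: how far are we from the target?"""
    return target - reading

def intervene(world, error, gain=0.5):
    """Act on the physical world based on the analysis."""
    world["temperature"] += gain * error  # heating/cooling nudges the state

world = {"temperature": 15.0}
for cycle in range(20):  # each iteration is one pass through the loop
    reading = measure(world)
    error = analyse(reading, target=21.0)
    intervene(world, error)

print(round(world["temperature"], 2))
```

Each pass halves the distance to the target, so the state converges toward it over repeated cycles—the "continuous feedback and real-time management" the authors describe, reduced to its simplest form.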

Kool et al. (2017) see “a return to the so-called ‘cybernetic thinking’ that attracted interest in the 1950s and 1960s.” In cybernetics “biological, social and cognitive processes can be understood in terms of information processes and systems, and thus digitally programmed and controlled” (p. 44). Based on the various phases in the cybernetic loop--collection, analysis, and application--the authors “see various ethical and social issues emerging” related to the development of technology that require attention in the coming years. The new wave of digitization is “leading to a world in which continuous feedback and realtime management and control are increasingly important principles for a range of services.” This exerts “a strain on important public values” such as privacy, equity and equality, autonomy and human dignity. These values are clustered into seven topics (see Table 3: Overview of Ethical and Societal Issues Related to Digitization). Analysis of the scientific literature on technologies revealed several recurring themes – “privacy, autonomy, security, controlling technology, human dignity, equity and inequality, and power relations” (Kool et al., 2017, p. 47).

Table 3: Overview of Ethical and Societal Issues Related to Digitization (Kool et al., 2017, p. 8)

  • Privacy: Data protection, privacy, mental privacy, spatial privacy, surveillance, function creep

  • Autonomy: Freedom of choice, freedom of expression, manipulation, paternalism

  • Safety and security: Information security, identity fraud, physical safety

  • Control over technology: Control and transparency of algorithms, responsibility, accountability, unpredictability

  • Human dignity: Dehumanization, instrumentalization, deskilling, desocialization, unemployment

  • Equity and equality: Discrimination, exclusion, equal treatment, unfair bias, stigmatization

  • Balances of power: Unfair competition, exploitation, shifting relations between consumers and businesses, government and businesses

Kool et al. (2017) argue that while digitization processes initially consisted of “the large-scale collection of data on the physical, biological and social world,” a new wave of digitization characterized by continuous, cybernetic feedback loops is focused on the large-scale analysis and application of that data. Nowadays “we can analyse this data on a large scale and apply the acquired knowledge directly in the real world” (p. 43). On the one hand, real-time intervention and cybernetic (re)directing can benefit society in various sectors—e.g., self-driving cars that update their digital maps through experience (learning). On the other hand, the same loops enable manipulation: “Take for example social media users’ newsfeeds, which social media companies are now ‘customizing’ based on their monitoring and analysis of these same users’ surfing behaviour” (Kool et al., 2017, p. 25). Surveillance capitalism “commodifies personal clicking behavior” -- “it unilaterally claims private human experience as a free source of raw material” (Thompson, 2019). Social media sites are “calibrated” for user engagement and interaction. Surveillance can influence user behavior in complex ways, including unconsciously, affecting either the information security or the political autonomy of citizens. Data surveillance “can unconsciously influence a user’s identity, and lead to ‘filter bubbles’, in which the system only suggests news, information and contacts that match the user’s previous behaviour, choices and interests” (Kool et al., 2017, p. 10).

Kool et al. (2017) conclude that “the far-reaching digitization of society is raising fundamental ethical and societal issues.” Government and society “are not adequately equipped to deal with these issues” (p. 26). The governance system needs to be upgraded if it is to “safeguard our public values and fundamental rights in the digital age now and in the future.” This upgrading “requires that all parties – government, business and civil society – take action to keep digitization on the right track” (p. 26).

Digital convergence of communications channels

The increasing digitization and convergence of communications channels—spanning personal, business, and government domains—with telecommunications and broadcast industries have significantly expanded the cybersecurity threat landscape. As these traditionally separate sectors migrate toward IP-based networks and cloud platforms, previously isolated systems now interconnect, creating new vulnerabilities (ENISA, 2023). Cyber adversaries exploit this convergence, using weaknesses in one sector (e.g., telecom infrastructure) to attack others (e.g., enterprise VoIP systems or emergency broadcast networks). Threats like SS7/Diameter protocol exploits, large-scale DDoS attacks, and supply chain compromises now propagate more easily across these converged digital ecosystems.

This interconnectivity directly enables new classes of attacks. The shift to digital broadcasting and IP-based services has introduced risks such as deepfake-driven disinformation campaigns and ransomware attacks targeting live media streams. Furthermore, telecom providers—now acting as hybrid IT and communications operators—face heightened targeting by nation-state actors seeking to disrupt critical services or intercept sensitive data. The complexity is compounded by the coexistence of legacy systems, such as traditional PSTN networks, alongside modern 5G and IoT infrastructures, creating persistent security gaps that attackers actively exploit. Addressing these challenges requires a strategic shift toward "secure by design" principles for new infrastructure and harmonized international standards to secure these increasingly interconnected digital environments.

This erosion of traditional sector boundaries forces a fundamental re-evaluation of network security architectures. The classic perimeter-based defense model, which assumed a clear demarcation between trusted internal networks (like a corporate LAN) and untrusted external ones (like the PSTN), is no longer viable in a fully converged IP environment. Security teams must now contend with attack surfaces that span heterogeneous infrastructures, from virtualized cloud-native network functions (CNFs) in 5G cores to legacy embedded systems in broadcast hardware. Consequently, defending these environments demands a shift toward zero trust architecture (ZTA) principles, requiring strict verification for every user and device attempting to access resources, regardless of their location or network origin. This is particularly critical as operational technology (OT) used in broadcast and telecom merges with information technology (IT), necessitating unified visibility and security orchestration across previously siloed operational domains.
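The zero-trust principle just described—verify every request on identity and posture, never on network location—can be sketched in a few lines. This is a minimal illustration under assumed policy and field names; real deployments use signed tokens and live device-posture feeds rather than booleans:

```python
# Minimal sketch of a zero-trust access decision: every request is checked
# on identity and device posture; network origin confers no trust by itself.
# Hypothetical policy and field names, for illustration only.

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_patched: bool
    mfa_passed: bool
    source_network: str  # recorded for logging, never used to grant access

ALLOWED_USERS = {"alice", "bob"}

def authorize(req: Request) -> bool:
    # Being on the "internal" network does not bypass any check.
    return req.user in ALLOWED_USERS and req.mfa_passed and req.device_patched

# An internal request with weak device posture is denied...
print(authorize(Request("alice", device_patched=False, mfa_passed=True,
                        source_network="corp-lan")))
# ...while a fully verified request from the public internet is allowed.
print(authorize(Request("bob", device_patched=True, mfa_passed=True,
                        source_network="internet")))
```

The design point is that `source_network` never appears in the decision logic—exactly the inversion of the perimeter model described above.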

AI/ML

The integration of artificial intelligence (AI) and machine learning (ML) into cybersecurity has fundamentally altered the threat landscape, introducing a double-edged sword of defensive advancements and sophisticated offensive capabilities. On the defensive side, AI/ML enhances threat detection by analyzing vast datasets to identify anomalies, predict attacks, and automate responses (Buczak & Guven, 2016). For instance, ML algorithms can detect previously unknown malware by recognizing behavioral patterns rather than relying on signature-based methods (Yadav & Rao, 2015).
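The idea of behavioral rather than signature-based detection can be illustrated with a deliberately simple statistical baseline. This toy sketch (illustrative numbers; real systems use far richer features and models) flags traffic that deviates from what was observed during normal operation:

```python
# Toy sketch of behaviour-based anomaly detection: learn a baseline from
# normal activity, then flag large deviations instead of matching known
# signatures. Illustrative values only.

from statistics import mean, stdev

# "Training": bytes-per-minute observed during normal operation.
baseline = [980, 1010, 995, 1002, 990, 1008, 997, 1003]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag readings more than `threshold` standard deviations from baseline."""
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(1001))    # typical traffic
print(is_anomalous(250000))  # sudden exfiltration-sized burst
```

A previously unseen attack pattern can be caught this way even though no signature for it exists—the property the paragraph above attributes to ML-based detection.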

However, adversaries have rapidly weaponized these same technologies. Attackers leverage AI to develop more evasive threats, such as polymorphic malware that adapts in real-time to bypass traditional security measures (Miller et al., 2014). More concerning is the rise of AI-driven social engineering, where natural language processing (NLP) crafts highly personalized spear-phishing messages and deepfake audio/video content, drastically increasing the success rate of deception. Furthermore, adversaries employ machine learning techniques to directly undermine defensive AI systems. Evasion attacks subtly alter malicious input data to fool ML-based intrusion detection systems, while poisoning attacks inject corrupted data into training sets to corrupt a model's future judgments (Biggio & Roli, 2018; Papernot et al., 2016).
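The poisoning attack mentioned above can be demonstrated on a toy detector. In this illustrative sketch (all numbers assumed), an attacker who can inject crafted "normal" samples into the training data inflates the learned variance until later malicious traffic slips under the threshold:

```python
# Toy illustration of a data-poisoning attack: a detector learns a traffic
# baseline from training data; injected extreme "normal" samples widen the
# learned tolerance so a later attack is not flagged. Illustrative values.

from statistics import mean, stdev

def detector(training, observed, threshold=3.0):
    """Flag `observed` if it lies beyond `threshold` std devs of the baseline."""
    mu, sigma = mean(training), stdev(training)
    return abs(observed - mu) > threshold * sigma  # True = flagged

clean = [1000, 1005, 995, 1002, 998, 1003, 997, 1001]
attack_traffic = 5000

print(detector(clean, attack_traffic))     # flagged against a clean baseline

# Poisoned training set: attacker slips in a few extreme samples.
poisoned = clean + [4000, 4500, 5000]
print(detector(poisoned, attack_traffic))  # the same attack now evades detection
```

The mechanism scales to real ML pipelines: whoever can influence the training distribution can move the decision boundary.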

This arms race necessitates a paradigm shift in defensive strategies. To counter AI-augmented attacks, defenders must deploy autonomous response systems powered by reinforcement learning that can contain threats in real time, drastically reducing the window of vulnerability (Sarker et al., 2020). Furthermore, AI-driven threat intelligence platforms are critical for correlating global attack patterns to predict and preempt emerging campaigns. Crucially, the effectiveness of these defenses is not static; it demands continuous model retraining and rigorous adversarial testing (such as red-teaming AI systems) to identify and patch vulnerabilities before attackers can exploit them (Carlini & Wagner, 2017).

This growing reliance on AI introduces systemic risks that extend beyond the immediate attacker-defender dynamic. The opacity of many advanced ML models—often referred to as the "black box" problem—complicates incident response and forensic analysis, as security teams may struggle to determine why a particular alert was generated or missed. Moreover, the supply chain for AI models has become a critical vulnerability. Organizations increasingly rely on pre-trained models or third-party ML services, inheriting the biases, backdoors, or weaknesses of those foundational components. A compromise of a widely used training dataset or model library could have cascading effects, poisoning security tools across numerous enterprises simultaneously. Addressing these challenges requires a focus on MLSecOps (Machine Learning Security Operations), integrating model validation, version control, and continuous monitoring into the secure software development lifecycle to ensure the integrity of the AI systems we depend on for protection.

Internet of things (IoT)

The Internet of Things (IoT) has fundamentally expanded the cybersecurity threat landscape by introducing a vast and heterogeneous array of interconnected devices, many of which lack robust security measures. Unlike traditional computing systems, IoT devices often prioritize functionality and cost-efficiency over security, creating inherent vulnerabilities that are difficult to patch post-deployment (Kolias et al., 2017). The attack surface has grown exponentially as IoT deployments now permeate critical sectors—including healthcare (e.g., infusion pumps), smart cities (e.g., traffic controllers), and industrial control systems (e.g., remote telemetry units)—providing adversaries with new, often poorly monitored entry points for data theft, network pivoting, and large-scale attacks (Antonakakis et al., 2017). Furthermore, the extreme heterogeneity of IoT ecosystems, spanning diverse protocols, operating systems, and lifecycles, complicates security standardization and leaves persistent gaps that cybercriminals actively exploit.

The scale of IoT adoption has directly amplified the potential impact of cyber threats. Compromised IoT devices are frequently weaponized into botnets, enabling attacks that can disrupt critical infrastructure—exemplified by the Mirai botnet, which harnessed vulnerable cameras and routers to launch devastating terabit-scale DDoS attacks (Antonakakis et al., 2017). More recently, threat actors have evolved beyond simple DDoS, deploying ransomware against IoT-enabled industrial environments and using compromised edge devices as persistent footholds within corporate networks. Additionally, the convergence of IoT with 5G and edge computing introduces new attack vectors, including lateral movement from IT to OT networks and exploits targeting vulnerable firmware in cellular IoT modules. As IoT adoption accelerates, mitigating these risks requires a shift toward security-by-design principles, network micro-segmentation to contain breaches, and the implementation of device behavioral analytics to detect anomalous activity at scale.

A critical yet often underestimated challenge in IoT security is the management of device identity and software lifecycle at scale. Traditional security models rely on human-mediated authentication, but an IoT environment comprising thousands of headless sensors or actuators requires a fundamentally different approach. Public Key Infrastructure (PKI) and machine-to-machine (M2M) authentication protocols, such as those defined by the FIDO Alliance, are essential for establishing trust and ensuring that only authorized devices can communicate with network resources. However, the operational reality is complicated by the long deployment lifecycles of many IoT devices—industrial controllers or smart meters may remain in service for a decade or more, often running outdated firmware with known vulnerabilities. This creates a perpetual challenge of vulnerability management, where patching cycles are infrequent or impossible due to availability requirements. Consequently, organizations must complement preventive controls with robust network visibility and the ability to quarantine or segment compromised devices dynamically, treating the network itself as the primary security control plane when endpoint integrity cannot be guaranteed.
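Treating the network as the control plane, as suggested above, amounts to assigning each device a segment based on its verified identity and posture. A minimal sketch, with hypothetical fields and policy (real deployments drive this from NAC/802.1X and a device PKI rather than in-line booleans):

```python
# Minimal sketch of posture-based network segmentation for IoT fleets:
# unauthenticated devices are quarantined, known-vulnerable firmware gets
# restricted access. Hypothetical policy; names and versions are illustrative.

from dataclasses import dataclass

MIN_FIRMWARE = (2, 4, 0)  # assumed minimum safe firmware version

@dataclass
class Device:
    device_id: str
    firmware: tuple
    cert_valid: bool  # machine-to-machine identity, e.g. from a device PKI

def assign_segment(dev: Device) -> str:
    if not dev.cert_valid:
        return "quarantine"   # no verified identity: isolate entirely
    if dev.firmware < MIN_FIRMWARE:
        return "restricted"   # known-vulnerable firmware: limited access
    return "production"

fleet = [
    Device("meter-01", (2, 5, 1), cert_valid=True),
    Device("cam-07", (1, 9, 0), cert_valid=True),
    Device("sensor-99", (2, 5, 1), cert_valid=False),
]
print([(d.device_id, assign_segment(d)) for d in fleet])
```

Because unpatchable devices may stay in service for years, the `restricted` tier is where the long-lifecycle problem described above lands: the device keeps working, but only within a contained blast radius.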

A growing spyware industry

Spyware encompasses software designed to covertly monitor user activity, collect personal data, and transmit that information to third parties—often without user consent. While historically associated with cybercriminals seeking financial gain, the spyware ecosystem has evolved into a sophisticated industry where commercially developed tools are legally marketed for surveillance—blurring the boundaries between malicious hacking and lawful monitoring.

Parasitic Commercial Spyware

The criminal dimension of spyware includes a range of malicious tools deployed against individuals and organizations. Keyloggers, such as Hawkeye and Spyrix, record keystrokes to capture credentials and financial data. Adware with embedded spyware capabilities, including Search Marquis and Fireball, displays unwanted advertisements while harvesting browsing habits and personal information. Trojans like Emotet and Zeus masquerade as legitimate software but deliver spyware payloads upon installation. Mobile spyware—often termed "stalkerware," with products like FlexiSPY—targets smartphones, extracting messages, call logs, and real-time location data.

Infection vectors for these threats include pirated software cracks, malicious advertisements, phishing emails with infected attachments, fake software updates (e.g., Flash Player or browser updates), and bundled freeware installed when users skip advanced configuration settings.

Mercenary Commercial Spyware

A parallel market has emerged around commercially developed spyware, produced by private firms and sold to governments, law enforcement agencies, and private entities under the rubric of "lawful intercept" or "national security" tools. These vendors—often described as private-sector offensive actors (PSOAs) or cyber-mercenaries—operate in legally ambiguous spaces, facing mounting criticism for enabling surveillance abuses against journalists, activists, and political dissidents. Notable firms in this sector include NSO Group (Israel), Paragon Solutions (Israel), Candiru (Israel), Circles (Israel), the Intellexa Consortium (Greece/Cyprus, incorporating Cytrox of North Macedonia), FinFisher (Gamma Group, UK/Germany), Wintego (Israel), RCS Labs (Italy, now part of Cy4Gate), BellTroX (India), and Zerodium (France/US).

These entities develop advanced capabilities, including zero-click exploits that require no user interaction—exemplified by NSO's Pegasus, which can infiltrate smartphones to extract messages, calls, and activate cameras or microphones remotely.

Related LinkedIn article: Removing firmware spyware from iPhone

Advanced Infection Mechanisms

Commercial spyware leverages sophisticated delivery methods beyond conventional phishing. Zero-click exploits represent the pinnacle of stealth, enabling infection without any target interaction—Pegasus famously exploited WhatsApp's calling protocol to compromise devices. Paragon's Graphite, targeting both iOS and Android, reportedly includes a "forensic disappearance" feature that self-destructs upon detection, minimizing traces for investigators. Some vendors, including Paragon, allegedly employ "ghost infrastructure"—using front companies like Itervest to obscure sales and deployment activity, complicating accountability efforts.

Legal, Ethical, and Regulatory Dimensions

The proliferation of commercial spyware has ignited intense debate. Proponents argue these tools are essential for counterterrorism, crime prevention, and lawful surveillance conducted with proper warrants. Critics counter that insufficient oversight and review enable human rights abuses, particularly when tools are sold to authoritarian regimes. The targeting of journalists, opposition figures, and civil society actors—documented in deployments of Pegasus and Paragon tools in multiple jurisdictions—underscores these concerns.

Regulatory responses have intensified. The U.S. Commerce Department added NSO Group to its export blacklist in 2021, restricting its access to American technology. Legal actions have followed: Apple sued NSO Group in 2021, alleging the company violated federal law by targeting U.S. users, and Meta (WhatsApp) prevailed in its own suit in late 2024, when a U.S. federal court found NSO liable for the hacking of 1,400 devices via WhatsApp calls. In response to sanctions and legal pressures, NSO Group restructured, shifting ownership to a U.K.-backed entity and rebranding as "Dream Security" in 2024, claiming a pivot to defensive cybersecurity. The firm has simultaneously engaged in lobbying efforts, hiring former NSA officials to argue that spyware capabilities are "essential" for law enforcement access to encrypted communications on platforms like WhatsApp and Signal.

As spyware tools grow more sophisticated—incorporating zero-click exploits, forensic evasion, and ghost infrastructure—the gap between legitimate surveillance needs and privacy rights widens. Addressing this challenge requires international regulatory cooperation, robust export controls, and sustained pressure from civil society to ensure that surveillance technologies serve public safety rather than enabling oppression.

Conclusion: The interconnection of technological drivers

These technological drivers do not operate in isolation; they form a deeply interconnected ecosystem where advances in one domain amplify risks across all others. Digital convergence has collapsed the boundaries between telecommunications, broadcasting, and enterprise IT, transforming previously isolated systems into a unified attack surface. The proliferation of inherently insecure IoT devices has expanded this surface to billions of endpoints, providing adversaries with unprecedented entry points into critical networks. Meanwhile, the dual-use nature of AI accelerates both sides of the conflict: defenders leverage automation for threat detection while attackers employ the same technologies to craft evasive malware, deepfake disinformation, and highly personalized phishing campaigns that bypass traditional defenses.

The commercial spyware industry represents the convergence of these technological trends into a marketable product, packaging advanced exploitation techniques—zero-click exploits, forensic evasion, AI-enhanced surveillance—that were once the exclusive domain of elite nation-state actors. This interconnection creates a self-perpetuating cycle of escalation: each defensive innovation is met with an offensive countermeasure leveraging the same foundational technologies. As ever more critical functions migrate to networked, IP-based systems, technological progress itself becomes the primary driver of cyber risk, demanding not merely better security tools but a fundamental reassessment of how technology is designed, deployed, and governed across the interconnected digital ecosystem.

Sociopolitical drivers of change

  • U.S.-China rivalry for technological and geopolitical dominance

  • Expansion of the military-industrial complex (collusion between Western governments and the dominant media companies)

U.S.-China Rivalry for Technological and Geopolitical Dominance

This is not merely a trade war; it is a comprehensive strategic competition between a reigning superpower and a rising challenger. This rivalry fundamentally reshapes the cybersecurity threat landscape by blurring the lines between economic competition, espionage, and preparation for potential conflict.

From Espionage to Sabotage: The Expanding Objective

The primary goal of cyber operations has historically been espionage—the stealthy extraction of blueprints, intellectual property, and government secrets. In the context of U.S.-China strategic competition, this traditional objective remains central but has evolved in scope and intensity. State-sponsored actors—categorized as Advanced Persistent Threats (APTs)—aggressively target rival companies and research institutions to accelerate domestic technological development and erode the competitive advantage of opposing powers. This intellectual property theft focuses on sectors deemed critical for national security and economic dominance, including semiconductors, artificial intelligence, biotechnology, renewable energy, and advanced manufacturing. The scale of this espionage is unprecedented. By stealing proprietary designs, manufacturing processes, and scientific research, rivals seek to close the technological gap between them without bearing the full cost of research and development.

However, the strategic calculus has shifted. The primary goal of cyber operations has expanded beyond traditional espionage to include sabotage and pre-positioning. This evolution reflects a recognition that in a prolonged geopolitical rivalry, the ability to disrupt an adversary's capabilities may prove as valuable as the ability to steal its secrets.

Sabotage involves destructive cyber operations designed to degrade, disrupt, or destroy an adversary's infrastructure, military capabilities, or economic productivity. Unlike espionage, which seeks stealth and persistence, sabotage is often intended to be visible—sending a message of strength, retaliating for perceived aggression, or imposing costs in a conflict short of war. Examples include the NotPetya attack (attributed to Russian military actors), which caused billions in damage to Ukrainian and global companies, and the Stuxnet worm (attributed to U.S. and Israeli military actors), which physically destroyed Iranian centrifuges. In the U.S.-China context, the risk of sabotage extends to undersea communication cables, financial market infrastructure, and manufacturing supply chains. A successful sabotage operation against a semiconductor fabrication plant or a power grid could inflict strategic damage rivaling a kinetic military strike.

Pre-positioning represents a more insidious tactic. Both the U.S. and China—along with other nation-states—are suspected of implanting malware within each other's critical infrastructure: power grids, financial systems, water treatment facilities, and transportation networks. The immediate goal is not to cause damage but to establish persistent access that can be activated during a geopolitical crisis or military conflict. These implants function as a form of cyber deterrence or as a first-strike option, capable of paralyzing an adversary's society at a moment of peak tension. CISA's May 2023 advisory documenting Chinese state-sponsored actors using "living off the land" techniques to maintain undetected footholds in U.S. critical infrastructure networks illustrates this threat. Such pre-positioning effectively extends the battlefield into the civilian domain, ensuring that any future conflict would begin with a cyber dimension aimed at the very systems populations depend upon for daily life.
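Because "living off the land" intrusions abuse built-in administrative binaries rather than dropping malware, defenders hunt for them by flagging suspicious invocations of those binaries in process-creation logs. The following is a deliberately simplified sketch: the watchlist draws on binaries named in public reporting on such campaigns, but the log format and field names are hypothetical, and real hunting would also weigh command-line arguments, parent process, user context, and frequency baselines.

```python
# Binaries commonly abused in "living off the land" intrusions
# (illustrative subset; tune to your environment).
SUSPECT_BINARIES = {"wmic.exe", "ntdsutil.exe", "netsh.exe", "powershell.exe"}

def flag_lolbin_events(events):
    """Return process-creation events whose executable is on the watchlist.

    Each event is a dict with a 'command' field holding the full command
    line; the executable name is extracted from its first token.
    """
    hits = []
    for event in events:
        first_token = event["command"].split()[0]
        binary = first_token.lower().rsplit("\\", 1)[-1]  # strip the path
        if binary in SUSPECT_BINARIES:
            hits.append(event)
    return hits

log = [
    {"host": "dc01", "command": r"C:\Windows\System32\ntdsutil.exe ac i ntds ifm"},
    {"host": "ws07", "command": r"C:\Windows\System32\notepad.exe report.txt"},
]
for hit in flag_lolbin_events(log):
    print(hit["host"], "->", hit["command"])  # flags only the ntdsutil event
```

Even this crude filter captures the detection problem pre-positioning creates: the flagged command is a legitimate administrative tool, so separating abuse from routine operations requires context, not signatures.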

The "Civil-Military Fusion" Doctrine

China's national strategy fundamentally reshapes the threat landscape through its explicit mandate of "civil-military fusion." This doctrine requires private companies, academic institutions, and technology startups to collaborate with and support the objectives of the People's Liberation Army (PLA). The practical consequence is a deliberate blurring of the line between civilian commerce and military capability. A Chinese technology company developing facial recognition algorithms for commercial applications is simultaneously advancing technology that will be integrated into military surveillance systems or employed in state-sponsored cyber operations. This fusion creates a vast, state-directed ecosystem of actors extending far beyond traditional government hacker units. For defenders, this means the threat surface expands dramatically—nearly any Chinese tech firm with sophisticated capabilities becomes a potential collaborator in cyber operations, making attribution, threat modeling, and defensive planning exponentially more complex. The distinction between economic competitor and military adversary dissolves, forcing a reassessment of how nations engage with Chinese technology at every level.

Supply Chain Compromises

The U.S.-China rivalry has transformed global technology supply chains from a matter of economic efficiency into a critical battleground for national security. At the heart of this conflict lies a fundamental fear: that reliance on Chinese-manufactured hardware or software creates inherent vulnerabilities that can be exploited for espionage or sabotage. Concerns over Huawei's 5G telecommunications equipment exemplify this anxiety, with the U.S. government warning that the company's deep integration into foreign networks could provide Beijing with backdoors for surveillance or disruption. These fears have driven aggressive policy responses, including bans on Chinese technology and "rip and replace" initiatives compelling allied nations to remove Chinese equipment from their core infrastructure. Simultaneously, the United States has weaponized its technological advantages, leveraging market dominance and export controls to restrict Chinese access to critical innovations—most notably advanced semiconductors from NVIDIA and, through coordinated pressure on the Netherlands, extreme ultraviolet lithography machines from the Dutch firm ASML. This strategy of denial forces China into a position of technological dependency, which it perceives as an unacceptable strategic vulnerability. The predictable response has been intensified cyber espionage aimed at reverse-engineering and replicating these denied technologies, creating a self-perpetuating cycle of restriction and espionage that fundamentally reshapes the cybersecurity environment.

Expansion of the Military-Industrial Complex (into the Digital Realm)

The traditional concept of the military-industrial complex—the symbiotic relationship between a nation's military, its government, and the defense contractors that supply it—has undergone a profound transformation in the digital age. This expanded complex now incorporates not only major technology and media companies but also the permanent surveillance infrastructure that has become embedded within Western societies in the decades since September 11, 2001. The resulting collusion between government, industry, media platforms, and intelligence agencies creates a self-reinforcing cycle that simultaneously amplifies cyber threats, normalizes mass data collection, and shapes public perception of the digital domain. Understanding this expanded complex is essential for comprehending how sociopolitical forces drive the evolution of the cybersecurity threat landscape.

Rise of the Surveillance Society

In the years following the September 11, 2001 attacks, intelligence agencies in the United States and Canada have steadily acquired unprecedented surveillance powers, fundamentally altering the relationship between citizens and the state. In the United States, this transformation was defended by former CIA and NSA Director Michael Hayden, who argued that the nature of modern communications required new approaches to intelligence collection. The USA PATRIOT Act, renewed and expanded multiple times since 2001, granted intelligence agencies broad authority to access phone records, email communications, and financial data without traditional warrant requirements. These authorities enabled programs such as the NSA's dragnet surveillance initiative, which systematically collected metadata on millions of Americans' phone calls and electronic communications. Subsequent revelations by Edward Snowden in 2013 exposed the scale of these programs, confirming that the agency had built a permanent infrastructure for mass surveillance that captured records of countless individuals entirely unconnected to any terrorism investigation.

Canada followed a similar trajectory, though with less public scrutiny. The Anti-terrorism Act, 2015 (Bill C-51) expanded information sharing among federal agencies and granted the Canadian Security Intelligence Service new threat-disruption powers. More significantly, Bill C-59 (2019) established a standalone statute for the Communications Security Establishment, authorizing foreign intelligence and cybersecurity activities that may incidentally intercept Canadian communications, while creating new oversight mechanisms. These legislative changes have effectively normalized what was once exceptional: the systematic collection and analysis of citizen data by intelligence agencies. The result is a surveillance society in which the technical infrastructure for mass monitoring exists permanently, available for deployment against threats that range from terrorism to economic espionage to cyberattacks—a development that would have been politically inconceivable before the digital age. This permanent surveillance apparatus, justified through national security needs, operates alongside and increasingly intersects with the commercial data collection practices of technology giants, creating a hybrid public-private surveillance ecosystem that touches nearly every aspect of modern life.

The Cyber-Industrial Complex

At the heart of this transformation lies the emergence of what can be termed the "cyber-industrial complex"—a vast ecosystem of private cybersecurity firms, defense contractors with dedicated cyber units, and threat intelligence companies whose business models depend fundamentally on the existence of a pervasive and evolving threat environment. While the threats these companies address are undeniably real, the structure of the industry creates a financial incentive for threat inflation. Highlighting the severity and sophistication of cyber attacks drives government spending, secures lucrative contracts, and sells products ranging from zero-trust architectures to advanced endpoint detection systems. This dynamic encourages framing cybersecurity through a lens of perpetual "cyber war," requiring wartime budgets and the suspension of certain norms around privacy and oversight. The "cyber war" narrative, once embedded in policy discourse, encourages more aggressive offensive cyber operations by states, which in turn provokes responses from adversaries, escalating the overall threat level for everyone caught in this cycle of action and reaction.

The Government-Media-Tech Nexus

Parallel to this industrial dynamic operates what can be described as the government-media-tech nexus. Dominant media companies, often reliant on access to government officials for exclusive stories, can become conduits for strategically shaped narratives. Leaks about cyber threats—reports of Russian hackers targeting the electrical grid or Chinese malware embedded in critical infrastructure—are frequently released by government agencies to achieve specific objectives: warning the public, deterring adversary behavior, or justifying new policy initiatives and budget requests. This creates a cycle of fear and reaction that serves institutional interests on all sides.

Simultaneously, technology giants including Google, Microsoft, Amazon Web Services, and Meta have become critical infrastructure in their own right. They host government data, provide communication platforms, and operate the cloud infrastructure upon which the modern economy depends. Governments increasingly rely on these companies for threat intelligence, and the tech firms employ security teams that routinely discover and disclose state-sponsored attacks. This partnership, while often necessary for national security, concentrates immense power in a handful of private corporations with their own commercial and geopolitical interests. Compounding this concentration is the reality of surveillance capitalism—the business model of these companies depends on extensive data collection, creating massive and lucrative targets for both nation-state and criminal hackers. Every breach of these platforms exposes unprecedented volumes of personal and corporate information, while the data harvesting techniques pioneered by tech companies are increasingly adopted and adapted by state intelligence agencies for their own purposes.

The Privatization of Cyber Conflict

The logical extension of these trends is the privatization of cyber conflict itself. Governments now routinely contract private companies to conduct both offensive and defensive cyber operations. Mercenary hacker groups—exemplified by Israel's NSO Group with its Pegasus spyware—develop and sell intrusion capabilities to any government capable of paying, regardless of that government's human rights record or adherence to international norms. This commercialization of cyber weapons dramatically empowers smaller states and autocratic regimes that lack the capacity to develop such sophisticated capabilities independently. The global spread of advanced intrusion tools, from zero-click exploits to forensic evasion techniques, directly fuels a more dangerous and unpredictable threat landscape. When nation-state capabilities become commercially available commodities, the barrier to entry for conducting devastating cyber operations plummets, and the number of actors capable of threatening critical infrastructure expands. This privatization represents a fundamental shift in the nature of conflict, transferring powers traditionally reserved for sovereign states into the hands of corporate entities accountable only to their shareholders and the highest bidder.

Conclusion: The interconnection of sociopolitical drivers

These two sociopolitical influences are powerful, interconnected drivers shaping the modern cybersecurity threat landscape. The U.S.-China rivalry provides the motivation and justification for massive spending and aggressive action. The expanded military-industrial complex provides the means and the machinery to execute that action, while simultaneously amplifying the threat narrative to sustain its own growth. This creates a feedback loop: geopolitical tension fuels cyber conflict, which the cyber-industrial complex monetizes and the media amplifies, which in turn leads to greater public and governmental fear, resulting in more funding for cyber capabilities and more aggressive actions that further intensify the geopolitical rivalry. This cycle ensures that the cybersecurity threat landscape will remain dynamic, dangerous, and increasingly central to global politics.

Key takeaways

  • Cybersecurity threats at the societal level include state-sponsored attacks on critical infrastructure (power grids, health services), cyberwarfare targeting public institutions to steal sensitive information, and targeted propaganda or disinformation campaigns on social media designed to foment public discord.

  • Cybersecurity threats at the individual level include cybercrime such as identity theft and financial fraud, political interference through malicious online influence campaigns and botnets, and pervasive surveillance from both state actors and corporations leading to a loss of privacy.

  • Cybersecurity threats at the business level include cyber espionage aimed at stealing intellectual property and sensitive commercial data, disruptive ransomware and malware attacks, and sophisticated social engineering schemes like phishing that target employees.

  • Technological drivers of change include social digitization (creating a cybernetic loop between the physical and digital worlds), digital convergence of communications channels (expanding the attack surface), and the dual-use nature of AI/ML (which enhances both defense and offense capabilities).

  • Society's governance structures are struggling to keep pace. The rapid digitization of society, characterized by cybernetic feedback loops, is straining public values like privacy, autonomy, and equity, and current policies are not adequately equipped to address these challenges.

  • Spyware is a growing industry. The market now includes both malicious software and controversial commercial-grade surveillance tools sold by "Private-Sector Offensive Actors" (PSOAs) to governments, blurring the lines between legal and illegal surveillance and raising serious ethical concerns.

  • Sociopolitical drivers of change include the U.S.-China rivalry for technological and geopolitical dominance (fueling espionage and pre-positioning in critical infrastructure) and the expansion of the military-industrial complex into the digital realm, involving collusion between governments, defense contractors, and dominant media companies to shape threat narratives.

References

Antonakakis, M., April, T., Bailey, M., Bernhard, M., Bursztein, E., Cochran, J., ... & Zhou, Y. (2017). Understanding the Mirai botnet. Proceedings of the 26th USENIX Security Symposium, 1093-1110.

Biggio, B., & Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84, 317-331.

Buczak, A. L., & Guven, E. (2016). A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials, 18(2), 1153-1176.

Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. Proceedings of the 2017 IEEE Symposium on Security and Privacy, 39-57.

Communications Security Establishment. (2018). National Cyber Threat Assessment 2018. Canadian Centre for Cyber Security. Retrieved August 1, 2019, from https://www.cyber.gc.ca/en/guidance/national-cyber-threat-assessment-2018

Cybersecurity & Infrastructure Security Agency. (2023, May 24). People's Republic of China state-sponsored cyber actor living off the land to evade detection (Cybersecurity Advisory AA23-144A). https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-144a

ENISA (European Union Agency for Cybersecurity). (2023). Threat landscape for converged communications networks. European Union Agency for Cybersecurity. https://www.enisa.europa.eu/publications/threat-landscape-for-converged-communications-networks

Kolias, C., Kambourakis, G., Stavrou, A., & Voas, J. (2017). DDoS in the IoT: Mirai and other botnets. Computer, 50(7), 80-84.

Kool, L., Timmer, J., Royakkers, L. M. M., & van Est, Q. C. (2017). Urgent upgrade: Protect public values in our digitized society. The Hague, Rathenau Instituut.

Miller, B., Kantchelian, A., Afroz, S., Bachwani, R., Dauber, E., Huang, L., ... & Tygar, J. D. (2014, November). Adversarial active learning. In Proceedings of the 2014 workshop on artificial intelligent and security workshop (pp. 3-14).

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. Proceedings of the 2016 IEEE European Symposium on Security and Privacy, 372-387.

Sarker, I. H., Kayes, A. S. M., Badsha, S., Alqahtani, H., Watters, P., & Ng, A. (2020). Cybersecurity data science: An overview from machine learning perspective. Journal of Big Data, 7(1), 1-29.

Yadav, T., & Rao, A. M. (2015). Technical aspects of cyber kill chain. International Journal of Computer Science and Engineering, 3(5), 81-85.
