The cybersecurity threat landscape

This section explores the evolving cybersecurity threat landscape and its key technological and sociopolitical influences.

Learning objectives

• Describe the changing cybersecurity threat landscape

• Describe the technological and sociopolitical drivers of that change

This section explores cybersecurity threats at the individual, business, and societal levels and key underlying technological and sociopolitical factors.

Topics covered in this section

  • Cybersecurity threats at the societal and individual levels

  • Cybersecurity threats to business/industry

  • Technological drivers of change

  • Sociopolitical drivers of change

Cybersecurity threats at the societal and individual levels

Societal level threats

Cyberwarfare/cyberattacks on critical infrastructure such as the power grid, defence facilities, and health services: “As the number and variety of devices used to support, monitor, and control critical infrastructure become more interconnected, the likelihood of cyber threat actors disrupting critical infrastructure has increased” (CSE, 2018, p. 23).

Cyberwarfare/cyberattacks on public institutions and sensitive information: Cyber threat activity “against public institutions—such as government departments, universities, and hospitals—is likely to persist because of the essential nature of the services and the sensitivity of the information they manage” (CSE, 2018, p. 26).

Cyberwarfare: Targeted propaganda/misinformation/disinformation via social media platforms (e.g., to foment unrest/public discord against authorities/government).

Individual level threats

Cybercrime (personal information/identity theft): Canadians face a rising risk of falling victim to cybercrime, especially identity theft. Theft of personal and financial information is lucrative for cybercriminals and is very likely to increase (CSE, 2018). Cybercriminals profit “by obtaining account login credentials, credit card details, and other personal information. They exploit this information to directly steal money, to resell information on cybercrime marketplaces, to commit fraud, or for extortion” (CSE, 2018, p. 11).

Political interference/cyberwarfare (malicious online influence activity): Cyber threat actors can amplify or suppress social media content using botnets, which automate online interactions and share content with unsuspecting users (CSE, 2018). “By spreading their preferred content among large numbers of paid and legitimate users, cyber threat actors can promote their specific point of view and potentially influence Canadians” (CSE, 2018, p. 15).

State and business surveillance: Cases in point are the NSA’s dragnet surveillance programs and Facebook’s Cambridge Analytica data scandal.

Threat landscape in Canada

Table 1: Cybersecurity Threats Facing Individuals, Businesses, and Society (CSE, 2018)

Cybersecurity threats to business/industry

Businesses face an increasing risk of cybercrime, especially data breaches arising from commercial espionage, commercial data theft, and social engineering schemes (social engineering is often combined with malware/ransomware or phishing attacks).

Cybercrime (data breaches)

“Canadian businesses, especially those active in strategic sectors of the economy, are subject to cyber espionage aimed at stealing intellectual property and other commercially sensitive information.” This cyber threat activity “can harm Canada’s competitive business advantage and undermine our strategic position in global markets” (CSE, 2018, p. 19).

“Foreign and domestic adversaries target higher education institutions that have military and government contracts” (McNamara, 2019).

Political interference/cyberwarfare

Cyberwarfare can involve sabotage (e.g., Stuxnet, attributed to the United States and Israel) and malware attacks on financial institutions and other businesses (e.g., the WannaCry ransomware, attributed to North Korea, and the NotPetya malware, attributed to Russia).

Surveillance

Business surveillance (innovation vs. privacy): The ability to extract value from surveillance data has made privacy and innovation “the duet of the century” (Bains, 2019).

Espionage/information theft: Cyber campaigns launched by hackers from one country targeting firms of another country resulting in the theft of business information “such as bid prices, contracts and information related to mergers and acquisitions” (Onag, 2018).

Risk in higher education (industry in focus)

According to EDUCAUSE, a U.S.-based nonprofit association with a community of over 100,000 members spanning 45 countries that helps higher education elevate the impact of IT, information security was the number one IT governance issue in 2016. The top higher education information security risks that were a priority for IT in 2016 were 1) phishing and social engineering; 2) end-user awareness, training, and education; 3) limited resources for the information security program (i.e., too much work and not enough time or people); and 4) addressing regulatory requirements (Grama & Vogel, 2017).

These top U.S. and Canadian higher education information security risks (Grama & Vogel, 2017) are summarized below.

Information Security Risk in Higher Education (Adapted from EDUCAUSE, 2019)


1) Phishing and Social Engineering

“Over the past two decades, phishing scams have become more sophisticated and harder to detect.” While traditional phishing messages “sought access to an end user’s institutional access credentials (e.g., username and password),” today “ransomware and threats of extortion are common in phishing messages, leaving end users to wonder if they have to actually pay the ransom.”

2) End-User Awareness, Training, and Education

End-user awareness, training, and education “is critical as campuses combat persistent threats and try to make faculty, students, and staff more aware of the current risks.” While “the majority of U.S. institutions (74%) require information security training for faculty and staff, those programs tend to be leanly staffed with small budgets.”

3) Limited Resources for the Information Security Program

The 2015 EDUCAUSE Core Data Service survey of U.S. higher education institutions showed that about 2 percent of total central IT spending is allocated to information security and that there is roughly 0.1 central IT information security FTE per 1,000 institutional FTEs (full-time equivalents). About 55% of surveyed respondents said their security awareness budget for 2016 was less than $5,000; about 25% did not know; 15% said between $5,000 and $25,000; 7% said between $25,000 and $50,000; and less than 1% said between $50,000 and $100,000. “With limited resources, higher education institutions must be creative and collaborative in addressing information security awareness needs.”

4) Addressing Regulatory Requirements

The regulatory environment impacting higher education IT systems is complex. Data protection in higher education IT systems is governed by a patchwork of different federal and/or state laws rather than by one national data protection law. Student data are traditionally protected by the Family Educational Rights and Privacy Act of 1974 (FERPA) “although some types of student data, when it is held in healthcare IT systems, may be protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).” In addition, some types of student and institutional employee financial data may be protected by the Gramm Leach Bliley Act (GLBA). State laws may have data-breach notification requirements, and contractual agreements may have their own list of security technological controls that must be implemented and validated in IT systems. (Grama & Vogel, 2017)


Technological drivers of change

• Social digitization

• Digital convergence of communications channels

• AI/ML

• Internet of things (IoT)

• A growing spyware industry

Social digitization

Kool, Timmer, Royakkers, and van Est (2017) of the Dutch Rathenau Instituut argue that the digitization of society has entered a cybernetic phase, thanks to a host of emergent technological innovations in computing and communications that together are generating a new wave of digitization. The concept of digitization refers to a large cluster of digital technologies such as robotics, the Internet of Things, artificial intelligence and algorithms, and big data. Artificial intelligence, which gives computer systems a form of intelligence such as learning and autonomous decision making, is becoming ubiquitous, finding its way into more and more software applications and supporting a myriad of emerging and disruptive technological innovations (e.g., smart environments, robotics, and network monitoring). Their report, “Urgent Upgrade: Protect Public Values in Our Digitized Society,” explores the ethical and societal challenges of digitization and the challenges facing the governance landscape in the Netherlands: “We investigated which technologies are expected to shape digital society in the coming years, and which social and ethical challenges they will bring” (p. 116). The analysis examined the role of the scientific community and knowledge institutions, institutions responsible for protecting human rights, civil society, and “the roles of policy makers and politicians in agenda setting, in political decision making, and in the implementation of policy” (p. 11).

The analysis investigated the ethical and social issues that arise in the material, biological, socio-cultural, and digital worlds, focusing on eight technology areas that “best illustrate a wide range of the impact of the new wave of digitization” (p. 23): IoT and robotics; biometrics and persuasive technology; digital platforms, augmented reality, virtual reality, and social media; and artificial intelligence, algorithms, and big data (see Table 2).

Table 2: Technology Areas in The Four Worlds (Kool et al., 2017, p. 45)

Although “digitization has been going on for decades,” recently it has become “easier to intervene real time in the physical world at an increasingly detailed level.” This “ushered in a new phase in the development of the digital society; a phase in which a cybernetic loop exists between the physical and the digital world” (p. 44). This means,

processes in the physical world are measured, the resulting data is analysed, and then real time intervention takes place based on that data analysis. The impact of the intervention can subsequently be measured, analysed and adjusted, before rejoining the following cybernetic loop cycle. (p. 44)
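To make the loop concrete, here is a minimal Python sketch of one measure-analyse-intervene cycle. The sensor reading, the target temperature, and the "heating" adjustment are invented placeholders for illustration; they are not drawn from Kool et al. (2017).

```python
# Minimal sketch of a cybernetic loop: measure -> analyse -> intervene -> repeat.
# Sensor readings, the target value, and the "actuator" are invented placeholders.
import random
import time

TARGET_TEMP = 21.0  # desired state of the physical process

def measure() -> float:
    """Stand-in for a physical sensor reading."""
    return TARGET_TEMP + random.uniform(-3.0, 3.0)

def analyse(reading: float) -> float:
    """Compare the measurement with the target and compute an adjustment."""
    return TARGET_TEMP - reading

def intervene(adjustment: float) -> None:
    """Stand-in for acting back on the physical world (e.g., a thermostat)."""
    print(f"adjust heating by {adjustment:+.1f} degrees")

for cycle in range(3):
    reading = measure()            # collection
    adjustment = analyse(reading)  # analysis
    intervene(adjustment)          # application / real-time intervention
    time.sleep(0.1)                # the next cycle measures the effect of the intervention
```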

Kool et al. (2017) see “a return to the so-called ‘cybernetic thinking’ that attracted interest in the 1950s and 1960s.” In cybernetics, “biological, social and cognitive processes can be understood in terms of information processes and systems, and thus digitally programmed and controlled” (p. 44). Based on the various phases in the cybernetic loop (collection, analysis, and application), the authors “see various ethical and social issues emerging” related to the development of technology that require attention in the coming years. The new wave of digitization is “leading to a world in which continuous feedback and realtime management and control are increasingly important principles for a range of services.” This puts “a strain on important public values” such as privacy, equity and equality, autonomy, and human dignity. These values are clustered into seven topics (see Table 3: Overview of Ethical and Societal Issues Related to Digitization). Analysis of the scientific literature on these technologies revealed several recurring themes: “privacy, autonomy, security, controlling technology, human dignity, equity and inequality, and power relations” (Kool et al., 2017, p. 47).

Table 3: Overview of Ethical and Societal Issues Related to Digitization (Kool et al., 2017, p. 8)

Kool et al. (2017) argue that while digitization processes initially consisted of “the large-scale collection of data on the physical, biological and social world,” the new wave of digitization, characterized by continuous cybernetic feedback loops, is focused on the large-scale analysis and application of that data. Nowadays “we can analyse this data on a large scale and apply the acquired knowledge directly in the real world” (p. 43). On the one hand, real-time intervention and cybernetic (re)directing can benefit society in various sectors, for example self-driving cars that update their digital maps through experience (learning). On the other hand, “Take for example social media users’ newsfeeds, which social media companies are now ‘customizing’ based on their monitoring and analysis of these same users’ surfing behaviour” (Kool et al., 2017, p. 25). Surveillance capitalism “commodifies personal clicking behavior”; “it unilaterally claims private human experience as a free source of raw material” (Thompson, 2019). Social media sites are “calibrated” for user engagement and interaction. Surveillance can influence user behavior in complex ways, including unconsciously, undermining either the information security or the political autonomy of citizens. Data surveillance “can unconsciously influence a user’s identity, and lead to ‘filter bubbles’, in which the system only suggests news, information and contacts that match the user’s previous behaviour, choices and interests” (Kool et al., 2017, p. 10).

Kool et al. (2017) conclude that “the far-reaching digitization of society is raising fundamental ethical and societal issues” and that government and society “are not adequately equipped to deal with these issues” (p. 26). The governance system needs to be upgraded if it is to “safeguard our public values and fundamental rights in the digital age now and in the future.” This upgrading “requires that all parties – government, business and civil society – take action to keep digitization on the right track” (p. 26).

Digital convergence of communications channels

The increasing digitization and convergence of communications channels—spanning personal, business, and government domains—with telecommunications and broadcast industries have significantly expanded the cybersecurity threat landscape. As these traditionally separate sectors migrate toward IP-based networks and cloud platforms, previously isolated systems now interconnect, creating new vulnerabilities (ENISA, 2023). Cyber adversaries exploit this convergence, using weaknesses in one sector (e.g., telecom infrastructure) to attack others (e.g., enterprise VoIP systems or emergency broadcast networks). Threats like SS7/Diameter protocol exploits, large-scale DDoS attacks, and supply chain compromises now propagate more easily across converged digital ecosystems (Kshetri, 2023).

The shift to digital broadcasting and IP-based services has further introduced novel risks, including deepfake-driven disinformation and ransomware attacks targeting live media streams (NIST, 2022). Meanwhile, telecom providers—now acting as hybrid IT and communications operators—face heightened targeting by nation-state actors seeking to disrupt critical services or intercept sensitive data. Legacy systems, such as traditional PSTN networks, remain operational alongside modern 5G and IoT infrastructures, creating security gaps that attackers actively exploit (Lewis, 2023). Addressing these challenges requires adaptive policies, cross-sector collaboration, and updated regulatory frameworks to secure increasingly interconnected digital environments.

Related: Concentration of media ownership.

AI/ML

The integration of artificial intelligence (AI) and machine learning (ML) into cybersecurity has significantly altered the threat landscape, introducing both defensive advancements and sophisticated offensive capabilities. On the defensive side, AI/ML enhances threat detection by analyzing vast datasets to identify anomalies, predict attacks, and automate responses (Buczak & Guven, 2016). For instance, ML algorithms can detect previously unknown malware by recognizing behavioral patterns rather than relying on signature-based methods (Yadav & Rao, 2015). However, adversaries have also leveraged AI to develop more evasive threats, such as polymorphic malware that adapts to bypass traditional security measures (Miller et al., 2020). This dual-use nature of AI/ML has created an ongoing arms race between cyber defenders and attackers.
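To make the contrast with signature-based detection concrete, the following minimal Python sketch trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on simulated traffic features and flags flows that deviate from the learned baseline. The feature names, values, and thresholds are illustrative assumptions, not data or methods from any cited study.

```python
# Minimal sketch: behaviour-based anomaly detection with an Isolation Forest.
# The "network flow" features (bytes sent, duration, distinct ports) are
# illustrative assumptions, not a real dataset or a vendor API.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline traffic: [bytes_sent_kb, session_seconds, distinct_ports]
normal = rng.normal(loc=[200, 30, 3], scale=[50, 10, 1], size=(1000, 3))

# A few flows that behave unlike the baseline (e.g., bulk exfiltration).
suspicious = np.array([[5000, 2, 40], [4500, 1, 35]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 for inliers and -1 for anomalies.
for flow, label in zip(suspicious, model.predict(suspicious)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{flow} -> {status}")
```

Because the model learns what "normal" looks like rather than matching known signatures, it can flag previously unseen behaviour; the trade-off is false positives, which is why such detectors are usually one layer among several.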

One of the most concerning developments is the use of AI-driven social engineering attacks, such as deepfake phishing and automated spear-phishing campaigns. Attackers employ natural language processing (NLP) to craft highly personalized messages, increasing the success rate of deception (Aburaddad et al., 2021). Additionally, adversarial machine learning techniques enable attackers to manipulate AI systems by injecting malicious data or exploiting model biases (Biggio & Roli, 2018). For example, evasion attacks can fool ML-based intrusion detection systems by subtly altering input data to avoid classification as malicious (Papernot et al., 2016). These advancements underscore the need for robust, adaptive security frameworks that account for AI-augmented threats.
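As a toy illustration of such an evasion attack (a simplified sketch under invented data, not a reproduction of the methods in Papernot et al., 2016), the snippet below nudges a "malicious" sample's features in small steps until a simple linear classifier stops flagging it.

```python
# Toy evasion attack: perturb a malicious sample until a simple classifier
# misclassifies it as benign. Purely illustrative; real intrusion-detection
# models and real attacks are far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
malicious = rng.normal(loc=3.0, scale=1.0, size=(200, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 1 = malicious
clf = LogisticRegression().fit(X, y)

sample = malicious[0].copy()
step = 0.1
# Move against the direction of the "malicious" decision (here, simply the
# sign of the model's weights) until the predicted label flips or we give up.
for _ in range(100):
    if clf.predict(sample.reshape(1, -1))[0] == 0:
        break
    sample -= step * np.sign(clf.coef_[0])

print("Evaded detection:", clf.predict(sample.reshape(1, -1))[0] == 0)
print("Total perturbation:", np.round(sample - malicious[0], 2))
```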

Despite these challenges, AI/ML also offers promising solutions to enhance cybersecurity resilience. Autonomous response systems powered by reinforcement learning can mitigate attacks in real time, reducing the window of vulnerability (Sarker et al., 2020). Furthermore, AI-driven threat intelligence platforms improve situational awareness by correlating global attack patterns and predicting emerging threats (Mohammed et al., 2021). However, the effectiveness of these defenses depends on continuous model retraining and adversarial testing to prevent exploitation (Carlini & Wagner, 2017). As AI/ML continues to evolve, policymakers and security professionals must prioritize ethical guidelines and collaborative frameworks to mitigate risks while harnessing its defensive potential.

Internet of things (IoT)

The Internet of Things (IoT) has significantly expanded the cybersecurity threat landscape by introducing a vast array of interconnected devices, many of which lack robust security measures. Unlike traditional computing systems, IoT devices often prioritize functionality and cost-efficiency over security, making them vulnerable to exploitation (Kolias et al., 2017). Attack surfaces have grown exponentially as IoT deployments span critical sectors such as healthcare, smart cities, and industrial control systems, providing adversaries with new entry points for breaches, data theft, and large-scale attacks like Distributed Denial-of-Service (DDoS) (Antonakakis et al., 2017). Furthermore, the heterogeneity of IoT ecosystems complicates security standardization, leaving gaps that cybercriminals can exploit.

The proliferation of IoT has also amplified the scale and sophistication of cyber threats. Compromised IoT devices are frequently weaponized in botnets, enabling attacks that disrupt critical infrastructure—exemplified by the Mirai botnet, which harnessed vulnerable IoT devices to launch devastating DDoS attacks (Antonakakis et al., 2017). Additionally, the convergence of IoT with emerging technologies like 5G and edge computing introduces new attack vectors, such as man-in-the-middle attacks and firmware exploits (Kolias et al., 2017). As IoT adoption continues, organizations must prioritize security-by-design principles, threat monitoring, and regulatory frameworks to mitigate these evolving risks.
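As a small defensive illustration (the device names, request counts, and threshold multiplier are assumptions, not an industry standard), the sketch below flags IoT devices whose outbound request rate suddenly dwarfs their own baseline, the kind of behaviour a Mirai-style bot exhibits while participating in a DDoS attack.

```python
# Minimal sketch: flag IoT devices whose outbound request rate far exceeds
# their own baseline, a crude signal of possible botnet/DDoS participation.
# Device names, counts, and the threshold multiplier are invented examples.
from statistics import mean

# Requests per minute observed over the last ten minutes, per device.
traffic = {
    "thermostat-01": [4, 5, 3, 4, 6, 5, 4, 5, 4, 5],
    "camera-07":     [20, 22, 19, 21, 20, 23, 900, 950, 1020, 980],
    "doorbell-02":   [2, 1, 2, 3, 2, 2, 1, 2, 2, 3],
}

BASELINE_WINDOW = 5   # first N samples define the device's normal rate
MULTIPLIER = 10       # alert if the current rate exceeds baseline * MULTIPLIER

for device, samples in traffic.items():
    baseline = mean(samples[:BASELINE_WINDOW])
    current = samples[-1]
    if current > baseline * MULTIPLIER:
        print(f"ALERT: {device} at {current} req/min (baseline {baseline:.0f})")
```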

A growing spyware industry

Spyware is software designed to secretly monitor, collect, and transmit a user’s activities, personal data, or sensitive information to a third party—often without consent. While commonly associated with cybercriminals, spyware can also be commercially developed and sold for surveillance purposes, blurring the line between malicious hacking and lawful monitoring.

Types of Spyware

1. Malicious Spyware (Illegitimate Use)

  • Keyloggers – Record keystrokes (e.g., passwords, credit card numbers).

    • Examples: Hawkeye, Spyrix

  • Adware with Spyware – Displays ads while secretly harvesting data.

    • Examples: Search Marquis, Fireball

  • Trojans – Disguised as legitimate software but install spyware.

    • Examples: Emotet, Zeus

  • Mobile Spyware – Infects smartphones (e.g., stalkerware like FlexiSPY).

2. Commercial Spyware (Legal but Controversial)

Developed by private firms and sold to governments, law enforcement, or private entities for surveillance—often marketed as "lawful intercept" tools.

  • NSO Group (Israel) – Known for Pegasus spyware, which infects smartphones, extracts messages, calls, and even activates cameras/microphones.

    • Criticism: Used against journalists, activists, and dissidents.

    • Sanctions: Added to U.S. export blacklist in 2021.

  • Paragon (Israel) – Sells Graphite, a spyware tool targeting iOS and Android.

    • Controversy: Allegedly used for unauthorized surveillance.

How Spyware Infects Your Computer

  1. Pirated Software/Cracks – Fake downloads bundle spyware.

  2. Malicious Ads/Pop-ups – Redirects to infected sites.

  3. Phishing Emails – Infected attachments (PDFs, Word files).

  4. Fake Updates – Disguised as Flash Player or browser updates (see the checksum-verification sketch after this list).

  5. Bundled Freeware – Installs spyware if "Advanced" settings are skipped.

  6. Zero-Click Exploits (Advanced Spyware like Pegasus) – No user interaction needed.
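One practical mitigation for several of these vectors, particularly pirated downloads and fake updates, is to verify a file's published checksum before running it. The following minimal sketch assumes the vendor publishes a SHA-256 hash; the file name and expected hash shown are placeholders.

```python
# Minimal sketch: verify a downloaded installer against a published SHA-256
# checksum before running it. The expected hash below is a placeholder.
import hashlib
import sys

EXPECTED_SHA256 = "c0ffee..."  # hash published by the vendor (placeholder)

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(sys.argv[1])  # e.g., python verify.py installer.exe
    if actual == EXPECTED_SHA256:
        print("Checksum matches the published value.")
    else:
        print("WARNING: checksum mismatch; do not run this installer.")
```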

Signs Your Device Has Spyware

  • Slow performance, frequent crashes

  • Unusual pop-ups & browser redirects

  • Increased data usage (spyware transmitting data)

  • Unknown programs in Task Manager (see the process-listing sketch after this list)

  • Antivirus suddenly disabled
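A rough way to investigate the "unknown programs" symptom is to list running processes and compare them against a baseline you maintain for your own machine. The following minimal sketch uses the third-party psutil package; the allow-list of process names is a placeholder, not a recommended configuration.

```python
# Minimal sketch: list running processes and flag any that are not on a
# hand-maintained allow-list. The allow-list below is a placeholder; a real
# baseline would be built per machine. Requires the third-party psutil package.
import psutil

KNOWN_PROCESSES = {"explorer.exe", "chrome.exe", "python.exe", "svchost.exe"}

for proc in psutil.process_iter(attrs=["pid", "name", "exe"]):
    name = (proc.info["name"] or "").lower()
    if name and name not in KNOWN_PROCESSES:
        print(f"unrecognized process: {name} (pid {proc.info['pid']}) "
              f"path={proc.info['exe']}")
```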

The Legal & Ethical Debate on Commercial Spyware

Defenders Argue:

  • Helps governments combat terrorism and crime.

  • Used for lawful surveillance with warrants.

Critics Counter:

  • Enables human rights abuses (targeting activists, journalists).

  • Sold to authoritarian regimes with poor oversight.

  • Should face stricter bans (e.g., U.S. blacklisting NSO Group).

Spyware is evolving—from criminal hacking tools to government-grade surveillance software. While some uses are legal, the lack of regulation raises serious privacy and human rights concerns. Staying informed and practicing good cybersecurity hygiene is crucial in this shifting landscape.

Mercenary spyware

Several companies, often referred to as "cyber-mercenaries" or "private-sector offensive actors" (PSOAs), specialize in developing and selling advanced spyware to governments, law enforcement, and private entities. Many operate in legal gray areas, facing criticism for enabling surveillance abuses.

Notable firms include NSO Group (Israel), Paragon (Israel), Candiru (Israel), Circles (Israel), Intellexa (Greece/Cyprus), Cytrox (North Macedonia, part of Intellexa), FinFisher (Gamma Group, UK/Germany), Wintego (Spain), RCS Labs (Italy, now part of Cy4Gate), BellTroX (India), Zerodium (France/US), and DarkMatter (UAE).

NSO Group’s Legal Survival Tactics & Rebranding Efforts


  • Bankruptcy & Reinvention: After U.S. sanctions, NSO Group shifted ownership (backed by a U.K. firm) and rebranded as "Dream Security" (2024), claiming a focus on "defensive cybersecurity."

  • Lobbying & PR: Hired ex-NSA officials to lobby Western governments, arguing spyware is "essential" against encrypted apps (e.g., WhatsApp, Signal).

  • Ongoing Lawsuits:

    • Apple’s Lawsuit (2021): Accused NSO of violating U.S. laws by targeting iPhone users.

    • Meta (WhatsApp) Lawsuit: In late 2024, a U.S. court found NSO liable for hacking approximately 1,400 phones via WhatsApp calls.

Paragon’s Stealthy Business Model & "No Trace" Claims

  • Ghost Infrastructure: Paragon allegedly uses front companies (e.g., "Itervest") to obscure its sales, making accountability difficult.

  • "Forensic Disappearance" Feature: Its Graphite spyware, a spyware tool targeting iOS and Android, reportedly self-destructs if detected, leaving minimal traces—a selling point for clients avoiding exposure.

  • Controversy: Despite claims of "vetted government clients," leaks suggest deployments in Kazakhstan and Mexico, where spyware targeted opposition figures.

Sociopolitical drivers of change

• U.S. - China competition for technological and geopolitical dominance

• Expansion of the military-industrial complex (collusion between Western governments and the dominant media companies)

Key takeaways

• Cybersecurity threats at the societal level include cyberattacks on critical infrastructure

• Cybersecurity threats at the individual level include identity theft

• Cybersecurity threats at the business level include data breaches

• Technological drivers of change include social digitization, digital convergence, AI/ML, the IoT, and a growing spyware industry

• Sociopolitical drivers of change include U.S.-China rivalry and collusion between Western intelligence agencies and dominant media companies

References

Aburaddad, J., et al. (2021). AI-powered cyber threats: A review of offensive AI in cybersecurity.

Antonakakis, M., et al. (2017). Understanding the Mirai botnet. USENIX Security Symposium.

Biggio, B., & Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84, 317-331.

Buczak, A. L., & Guven, E. (2016). A survey of data mining and machine learning methods for cybersecurity intrusion detection. IEEE Communications Surveys & Tutorials, 18(2), 1153-1176.

Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. IEEE Symposium on Security and Privacy.

ENISA. (2023). Threat landscape for converged communications networks. European Union Agency for Cybersecurity. https://www.enisa.europa.eu/publications/threat-landscape-for-converged-communications-networks

Kolias, C., Kambourakis, G., Stavrou, A., & Voas, J. (2017). DDoS in the IoT: Mirai and other botnets. Computer, 50(7), 80–84.

Kool, L., Timmer, J., Royakkers, L. M. M., & van Est, Q. C. (2017). Urgent upgrade: Protect public values in our digitized society. The Hague: Rathenau Instituut.

Kshetri, N. (2023). Cyberthreats in digital convergence: Risks and responses. Telecommunications Policy, 47(4), 102476. https://doi.org/10.1016/j.telpol.2023.102476

Lewis, J. A. (2023). The geopolitics of converged telecommunications. Center for Strategic and International Studies (CSIS). https://www.csis.org/analysis/geopolitics-converged-telecommunications

Miller, B., Kantchelian, A., Afroz, S., Bachwani, R., Dauber, E., Huang, L., ... & Goldberg, A. (2020). Adversarial active learning. USENIX Security Symposium.

Mohammed, N., et al. (2021). AI-based threat intelligence: A systematic review. Computers & Security, 105, 102258.

NIST. (2022). Security challenges in digital broadcasting (NIST Special Publication 1800-32). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1800-32

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. IEEE Symposium on Security and Privacy.

Sarker, I. H., et al. (2020). Cybersecurity data science: An overview from machine learning perspective. Journal of Big Data, 7(1), 1-29.

Yadav, T., & Rao, A. M. (2015). Technical aspects of cyber kill chain. International Journal of Computer Science and Engineering, 3(5), 81-85.
