What the Salt Typhoon Hack Means for the Future of Global AI
The New AI Arms Race Affects AI Regulatory Systems, Laws, and Treaties
Election fatigue has caused me to spend less time on the news, so I felt blindsided today as I listened to The Daily's podcast episode about the Salt Typhoon breach. I highly recommend listening to it. If you don't have time for that, and you feel as blindsided as I was, here's a brief rundown of the facts based on The Daily and other news outlets, such as Reuters and Business Insider. For additional information and background, check out this article by the Congressional Research Service.
Highlights of the Salt Typhoon Hack
Hackers working for China's Ministry of State Security broke into America's telecommunications systems, compromising major carriers like AT&T and Verizon as well as smaller providers.
The breach is not new: the hackers appear to have been roaming around U.S. telecommunications systems for perhaps two years. The telecom companies were unaware of the intrusion until Microsoft alerted them to it.
The U.S. telecommunications systems and utility grids are somewhat cobbled together, with 40-year-old technology paired with new technology.
As David Sanger, The New York Times’ National Security correspondent, put it, “they looked for the seams between that old equipment and the new equipment because they knew the older equipment was gonna be their way inside.”
Hackers could intercept and read unencrypted texts, such as messages sent from an iPhone to an Android phone or from one Android phone to another.
The hackers were able to find out whose conversations and text messages the U.S. government had obtained warrants to monitor.
This also allowed the Chinese government to figure out which suspected Chinese spies the U.S. was targeting.
The Chinese government targeted specific national security officials and politicians, including President-elect Trump.
It appears the Chinese hackers were not able to easily decrypt encrypted calls and messages. For example, they could not easily decrypt a call made by someone using WhatsApp to someone on an iPhone; or someone on Signal calling someone using WhatsApp. They could determine that a call was taking place but not listen in.
The Chinese hackers were able to take advantage of antiquated systems that are decades old, slipping in, wandering around, and then removing their code and slipping back out.
China has stepped up its technology game since Xi Jinping came to power in 2012, investing millions of dollars and countless hours in training hackers and building up cyber capabilities.
A different group of Chinese-government-backed hackers, known as Volt Typhoon, has been working on getting into the utilities that power Guam and Hawaii. That campaign came to light in 2023.
The Salt Typhoon hackers are still lurking in U.S. telecommunication systems.
U.S. officials seem to be conceding off the record that China's hackers are as accomplished as those of the U.S. National Security Agency.
Although the hackers almost certainly aren't targeting everyday citizens, U.S. officials recommend using end-to-end encrypted apps to make calls and send texts.
What This Means for the Future of AI
From the beginning of the podcast, I was thinking about what this news meant for AI and GenAI; in fact, I started writing this article before the episode even mentioned AI. The Salt Typhoon hack serves as a wake-up call. The U.S. has yet to adopt a federal AI law, leaving our AI development ecosystem vulnerable to foreign exploitation.
The hack highlights a critical advantage that nations like China and Russia have in AI development: unrestricted access to training data. While U.S. companies face legal challenges over data usage - with ongoing lawsuits against companies like OpenAI over copyright infringement - our adversaries face no such constraints. This creates an asymmetric playing field where U.S. companies must either limit themselves to synthetic data (which produces lower-quality AI models) or risk legal exposure, while state-sponsored actors in other countries can freely harvest data - including through cyber intrusions like the Salt Typhoon hack.
This disparity in data access could have serious implications for national security. As AI becomes more central to military and intelligence operations, the quality and quantity of training data will increasingly determine technological superiority. When combined with China's Military-Civil Fusion strategy, their unrestricted access to data - both legally acquired and stolen - could accelerate their AI capabilities beyond those of the United States unless we find ways to balance innovation, legal compliance, and national security.
AI as a Cybersecurity Weapon
These breaches highlight an emerging reality: cybersecurity is not just a technical issue but a geopolitical battleground where AI serves as both sword and shield. The U.S. must view such incursions as part of a larger strategy by adversaries to assert dominance in AI and digital infrastructure. With China now demonstrating sophisticated cyber capabilities, integrating AI into their cyber operations dramatically raises the stakes.
AI-enhanced cyber operations represent a fundamental shift in both scale and sophistication. While traditional cyber attacks might target specific vulnerabilities, AI systems can continuously analyze vast networks to identify and exploit weaknesses across multiple systems simultaneously. These AI systems can adapt their attack patterns in real time, learning from defensive responses and evolving to bypass security measures. More concerning, AI enables attackers to conduct precise, targeted operations at an unprecedented scale - what once required teams of human operators can now be automated and multiplied across thousands of targets.
The marriage of AI and cyber warfare creates particularly insidious threats to national security. Advanced AI systems can analyze intercepted communications to understand not just the content, but the patterns and relationships they reveal. This capability enables adversaries to map out organizational structures, identify key personnel, and predict strategic decisions. Furthermore, AI-powered systems can generate highly convincing spoofed communications or manipulated data, potentially undermining the integrity of military and intelligence operations.
Most concerning is how AI can be used to develop "low-and-slow" attacks that remain undetected for months or years. These AI systems can learn to mimic normal network behavior while gradually expanding their access and control. By the time such intrusions are detected, adversaries may have already mapped entire networks, extracted sensitive data, or established permanent backdoors. The Salt Typhoon hack demonstrates this capability - the attackers maintained access for years while evading detection by some of the world's most sophisticated security systems.
The New Arms Race
The Daily's David Sanger raised the idea that this pulls us into a new arms race. China and Russia are working together as global cyber powers, as we witnessed in Russia's war against Ukraine. We're operating by our rules, and China and Russia by an entirely different set of rules. To mitigate the threat of China leveraging its cyber capabilities offensively against the United States, a multi-faceted approach is required, blending technology, diplomacy, and defense strategy.
China's AI Strategy: Military-Civil Fusion
Understanding China's Military-Civil Fusion strategy is crucial for grasping the full implications of the Salt Typhoon hack. Unlike the United States, China requires its civilian companies and military to work in close coordination. Any technological breakthrough made by a Chinese company must be shared with the military if it could benefit defense or intelligence operations. This is especially significant for artificial intelligence, where Chinese tech giants like Baidu, Alibaba, and Tencent are required to share their AI advances with military and intelligence agencies.
This strategy gives China's cyber operations a unique advantage. When Chinese hackers access U.S. telecommunications systems, the data they collect can be immediately shared with Chinese AI companies for analysis and AI model training. For example, intercepted communications could help train more sophisticated AI systems for pattern recognition, behavior prediction, or language processing. The strategy ensures that every piece of stolen data can potentially enhance both civilian AI products and military capabilities.
AI makes everything faster and broadens its reach. With China now demonstrating superpower-level cyber capabilities, the question is how the Chinese government will use AI in its cyber operations going forward. AI isn't just used by the good guys to protect computers and networks - it can also be used by hackers and cybercriminals to make their attacks smarter and harder to stop.
Phishing and Deepfakes
One way AI is used offensively is by creating fake messages or emails, called phishing attacks, that trick people into sharing their passwords or other private information. AI can study how people write and talk, so these fake messages look real and are more convincing. For example, an AI could send an email that looks like it’s from your boss or someone you trust, asking you to click on a link or share a secret code.
In some cases, AI can also create deepfake videos or audio recordings. These are fake but very realistic videos or voices that can trick people into believing something that isn’t true. For example, a deepfake could be used to create a video of someone important, like a CEO or government leader, saying something they never actually said. This can create confusion, damage reputations, or even cause panic. Offensive uses of AI in cybersecurity are a serious threat, which is why experts are working hard to stop them before they cause harm.
Stealthy Attacks
AI can also help hackers hide their attacks. Normally, when malware (harmful software) is put into a system, security programs can often spot it because it behaves in a suspicious way. But with AI, the malware can track patterns to “learn” to act like a normal program, making it harder to detect. This makes it significantly easier for hackers to steal data, damage systems, or even just passively surveil, without being detected right away.
Another deeply concerning aspect of offensive AI is how it can be used to attack many systems at once. Hackers can use AI to look for weak points in thousands of computers or websites at the same time. Once the AI finds these weak spots, the hackers can break in quickly, before anyone even realizes there's a problem. This can cause significant disruptions, like shutting down websites or stealing information from lots of people all at once.
A Strong Offense is the Best Defense
The U.S. and other like-minded countries must focus on offensive cyber capabilities as a deterrent. A well-publicized capacity to respond proportionally to cyberattacks can discourage adversaries like China from acting aggressively in cyberspace. This includes developing capabilities to disrupt or disable malicious activities before they reach U.S. targets, as well as conducting controlled demonstrations of strength to signal readiness and resolve. Through a combination of defense, collaboration, and strategic deterrence, the U.S. can mitigate the risks posed by China's cyber strategies and protect its national security.
Stronger Cyber Infrastructure
The U.S. must prioritize strengthening its cyber infrastructure to withstand sophisticated attacks. This involves adopting advanced threat detection systems powered by artificial intelligence and machine learning, which can identify and neutralize threats in real time. Further, implementing quantum-resistant encryption methods can ensure that critical communications and data remain secure, even against emerging quantum computing capabilities that adversaries like China may develop.
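To make the idea of AI-powered threat detection concrete, here is a minimal sketch of the kind of anomaly detection such systems build on, using scikit-learn's Isolation Forest on synthetic network-flow features. The feature set, values, and threshold are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch: flagging anomalous network flows with an Isolation Forest.
# Features and values are synthetic placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_sec, distinct_ports]
normal_traffic = rng.normal(loc=[5_000, 8_000, 30, 3],
                            scale=[1_500, 2_000, 10, 1],
                            size=(5_000, 4))

# Train only on traffic presumed benign; the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new flows: -1 means anomalous, 1 means consistent with the baseline.
new_flows = np.array([
    [4_800, 7_900, 28, 3],        # looks like ordinary traffic
    [900_000, 1_200, 7_200, 45],  # slow, high-volume pattern worth a second look
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "ANOMALY - escalate for review" if label == -1 else "normal"
    print(f"{flow.tolist()} -> {status}")
```

Real deployments layer many such models over streaming telemetry and pair them with human analysts, but the core idea is the same: learn a baseline of normal behavior and escalate whatever deviates from it.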
International Collaboration
International collaboration is another vital component. The U.S. can work with like-minded countries and global partners to establish collective defense mechanisms and share intelligence on cyber threats. Organizations like NATO's Cooperative Cyber Defence Centre of Excellence and the Five Eyes Intelligence Alliance can facilitate the development of coordinated responses to potential Chinese cyber operations. By fostering global norms and treaties discouraging offensive cyber activities, like-minded countries can also apply diplomatic pressure to isolate and penalize nations that exploit cybersecurity for harmful purposes.
Regulatory Frameworks for AI Security
The Salt Typhoon hack exposes critical gaps in our current regulatory framework for AI security. While the U.S. has powerful regulations for traditional national security threats, our regulatory approach to AI security remains fragmented and inadequate at best. We need comprehensive federal legislation that addresses both defensive and offensive AI capabilities, particularly where they intersect with critical infrastructure. This legislation should establish clear security standards for AI systems deployed in sensitive sectors, mandate regular security audits, and create reporting requirements for AI-related security breaches.
Beyond domestic regulation, we need new international frameworks that address the unique challenges of AI security in an interconnected world. Current international agreements on cybersecurity and intelligence gathering were largely written before the rise of sophisticated AI systems. We need updated treaties and agreements that specifically address AI-enabled cyber operations, establish clear rules for the use of AI in intelligence gathering, and create mechanisms for international cooperation in defending against AI-enhanced cyber threats. These frameworks must also address the challenges posed by nations like China, where the line between civilian AI development and military applications has been deliberately blurred through strategies like Military-Civil Fusion.
Public-Private Partnership
Investing in public-private partnerships is equally critical. Many of the technologies and infrastructures at risk - utility grids, the electrical power system, and telecommunications networks - are owned by private companies, making their cooperation essential to fortifying cybersecurity defenses. The U.S. government can offer incentives and guidelines to encourage businesses to adopt best practices, such as zero-trust architectures and regular penetration testing. Addressing the most terrifying potential breakdowns requires comprehensive collaboration across sectors.
When telecommunications systems are compromised, they become a potential goldmine for AI development in ways that go far beyond traditional espionage. Access to vast amounts of real-world communications data could dramatically enhance an adversary's AI capabilities. For instance, intercepted conversations and messages can be used to train large language models to better understand natural language patterns, regional dialects, and cultural nuances - making their AI systems more sophisticated at tasks ranging from translation to impersonation. This kind of extensive, real-world data is particularly valuable because it captures authentic human communication patterns that are difficult to synthesize.
Even more concerning is how compromised telecom infrastructure could enable targeted collection of specialized communications. Hackers could potentially intercept technical discussions between AI researchers, communications about AI system architectures, or details about model training techniques. This technical intelligence could help adversaries understand and replicate cutting-edge AI developments. Furthermore, by monitoring communication patterns between AI companies, research institutions, and government agencies, adversaries could map out the entire AI development ecosystem of a target country, identifying key players, partnerships, and technological capabilities.
The infrastructure itself also presents a strategic vulnerability for AI deployment. Modern AI systems often rely on distributed computing and real-time data processing across telecommunications networks, creating multiple potential points of failure. If these networks are compromised, adversaries could interfere with AI model training, inject poisoned data, or even manipulate the behavior of deployed AI systems. For example, autonomous defense systems that depend on real-time communications for decision-making could be fed manipulated data, leading to dangerously incorrect assessments or harmful actions. Maybe this sounds a bit far-fetched. And perhaps it is - but for most of his life, cell phones and the "watch phones" comic-strip character Dick Tracy wore were science fiction to my dad, and now we take them for granted.
These possibilities create a complex challenge: securing AI systems requires protecting not just the models themselves, but the integrity of the entire communications infrastructure they operate on. Our AI systems are only as strong as the telecommunications backbone that supports them - a scary thought in light of how easily the Salt Typhoon hackers moved through that backbone for years without being detected.
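As one narrow, concrete defense against the poisoned-data scenario described above, here is a minimal sketch of verifying the integrity of a training-data file before it enters a pipeline, using Python's standard hmac and hashlib modules. The shared key and file name are hypothetical, and this only detects tampering in transit; it does nothing about data that was poisoned at its source.

```python
# Minimal sketch: verifying that a training-data file arriving over the network
# has not been altered in transit. Assumes sender and receiver share a secret
# key distributed out of band; the file name below is hypothetical.
import hashlib
import hmac
from pathlib import Path

SHARED_KEY = b"replace-with-a-key-from-a-real-secrets-manager"

def sign_dataset(path: Path, key: bytes = SHARED_KEY) -> str:
    """Compute an HMAC-SHA256 tag over the dataset file's bytes."""
    return hmac.new(key, path.read_bytes(), hashlib.sha256).hexdigest()

def verify_dataset(path: Path, expected_tag: str, key: bytes = SHARED_KEY) -> bool:
    """Reject the file if its tag does not match the one published by the sender."""
    return hmac.compare_digest(sign_dataset(path, key), expected_tag)

# Usage: the data producer publishes the tag alongside the file; the training
# pipeline refuses to ingest anything whose tag fails verification.
dataset = Path("telemetry_batch_0142.parquet")  # hypothetical file name
if dataset.exists():
    published_tag = sign_dataset(dataset)  # stand-in for the producer's published tag
    if not verify_dataset(dataset, published_tag):
        raise RuntimeError("Dataset failed integrity check - do not train on it.")
```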
Quantum Computing: The Next Frontier in AI Security
Rather than tanks and missiles, the U.S. government needs to prioritize investments in quantum computing and quantum-resistant systems. Quantum computing represents both a transformative opportunity and an existential threat to our current AI and security infrastructure. To understand the stakes, imagine a computer so powerful it can solve complex mathematical puzzles that would take today's fastest computers millions of years to complete. This isn't science fiction - it's quantum computing, the next evolution in computing technology.
On the defensive side, we urgently need quantum-resistant cryptography - new encryption and authentication methods based on mathematical problems that remain difficult even for quantum computers to solve. Unlike current encryption, which quantum computers could potentially crack in minutes, quantum-resistant encryption relies on different classes of mathematical problems that would remain secure even against quantum computing capabilities. This is crucial for protecting everything from military communications to AI systems against future quantum-enabled attacks.
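For a sense of what this looks like in practice, here is a minimal sketch of post-quantum key establishment using ML-KEM (Kyber), the key-encapsulation mechanism standardized in NIST FIPS 203. It assumes the open-source liboqs-python bindings (imported as oqs) are installed; the exact algorithm name depends on the liboqs version.

```python
# Minimal sketch: establishing a shared secret with a post-quantum KEM
# (ML-KEM / Kyber, NIST FIPS 203), assuming the liboqs-python bindings.
# The algorithm name may vary by version (e.g., "Kyber768" vs "ML-KEM-768").
import oqs

KEM_ALG = "ML-KEM-768"

# The "client" wants to receive data securely; the "server" wants to send it.
with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    # Client generates a keypair and shares only the public key.
    client_public_key = client.generate_keypair()

    # Server encapsulates: derives a shared secret plus a ciphertext to send back.
    ciphertext, server_secret = server.encap_secret(client_public_key)

    # Client decapsulates the ciphertext with its private key.
    client_secret = client.decap_secret(ciphertext)

    # Both sides now hold the same secret, usable as a symmetric key (e.g., AES-GCM),
    # without ever exposing it to an eavesdropper on the wire.
    assert client_secret == server_secret
```

In practice, early deployments run a post-quantum KEM in "hybrid" mode alongside a classical key exchange such as X25519, so the connection stays secure even if one of the two schemes is later broken.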
Current encryption methods could become obsolete when quantum computers reach their full potential. An adversary's sufficiently powerful quantum computer could crack today's encryption in minutes, potentially exposing sensitive government data, AI model architectures, and critical infrastructure controls. This vulnerability extends to our AI systems, where quantum computers could potentially decrypt proprietary AI models that took years and billions of dollars to develop, or tamper with AI systems without detection.
However, quantum computing also offers powerful new defensive capabilities. Quantum-enhanced AI could perform threat detection and system monitoring at unprecedented speeds, identifying sophisticated cyber threats that currently go undetected. More importantly, quantum-resistant cryptography could protect AI systems from both classical and quantum attacks through new encryption methods that even quantum computers can't crack. These quantum-resistant protocols would secure AI model architectures, protect training data, and ensure safe communications between distributed AI systems.
Perhaps most promising is how quantum computing could revolutionize secure AI development. Quantum machine learning algorithms might be able to train on encrypted data without ever decrypting it, allowing organizations to develop AI systems that process sensitive information while maintaining complete privacy and security. This capability would be especially valuable for government agencies and contractors handling classified information, enabling them to harness the power of AI while maintaining the highest levels of security.
Liability and Privacy Implications
The Salt Typhoon hack raises complex questions about liability and accountability in AI-enabled cyber breaches. Who bears liability when hackers compromise telecommunications systems and potentially access AI training data or model architectures? Telecommunications companies may face lawsuits from customers whose data was exposed, but are they truly liable? And the ripple effects extend much further. AI companies whose systems were compromised through telecom vulnerabilities could face claims from shareholders, clients, and users. Traditional theories of liability may prove inadequate when dealing with AI systems that make autonomous decisions based on compromised data or tampered algorithms.
Privacy implications are equally challenging. Current privacy frameworks like GDPR and CCPA weren't designed with AI-enabled cyber threats in mind. Moreover, the use of quantum-resistant encryption and quantum computing in AI systems will require updates to privacy laws that currently assume classical encryption methods. As AI systems become more sophisticated and potential threats more complex, our legal frameworks for privacy and liability must evolve accordingly.
Conclusion
As AI becomes even more integrated into our infrastructure and defenses, nations must balance innovation with security. By investing in quantum-resistant systems, fostering international collaboration, and maintaining a strong offensive deterrent, the U.S. and its allies can succeed in this complex and rapidly evolving digital landscape.
© 2024 Amy Swaner. All Rights Reserved. May use with attribution and link to article.