What D-Day Can Teach Us About AI and Cyberattacks
This anniversary of the Allied invasion of Normandy should remind us that warfare, especially cyberwarfare, is always evolving. The good news? Surprise attacks aren’t always so surprising.
(This article was originally published on June 6, 2023. It was updated June 6, 2024.)
When I think about how artificial intelligence (AI) will impact cyberwarfare, I think about the Allied invasion of Normandy. Or at least I should, as I was reminded this morning on this, the 80th anniversary of that fateful day.
The thing that has always struck me about that sneak attack in France on June 6, 1944, is that it wasn’t so surprising. The U.S. had announced its entry into the war years before and had been steadily building up its munitions and mobilizing troops. The assault itself involved nearly 160,000 Allied troops (73,000 from the U.S. and 83,000 from Great Britain and Canada, among others). The Germans had known the attack was coming for months. They just weren’t sure where or when.
The same can be said for AI on the battlefield. And by battlefield, I mean any attackable beach, hill, Maginot Line, munitions depot, or computer network near you. Like the national security threats President Biden referred to in his D-Day remarks Thursday – the “autocrats of this world” watching for cracks in the transatlantic NATO defense alliance – cyber attackers are lurking, and AI enables their methods.
Such attacks could be launched by a nation-state, cyber gang, or some former employee with a gripe. And it’s up to everyone – military experts, politicians, enterprise leaders, and anyone with a laptop or cellphone – to be ready.
Just as Normandy opened the long-awaited Western Front in World War II, heralding the eventual defeat of the Nazi regime and the start of a new world order, the entrance of AI into our military planning will have an extraordinary impact.
“Today we are undergoing the most significant and most fundamental change in the character of war,” said Army Gen. Mark A. Milley, then–chairman of the Joint Chiefs of Staff, on a podcast for the Eurasia Group Foundation last year. “This time,” he added, “[it’s] being driven by technology.”
Like Biden today, Milley delivered a speech in 2023 at a service held at the Normandy American Cemetery and Memorial in France, where more than 9,000 service members are buried just above Omaha Beach. Both spoke of the valiant sacrifice made by the soldiers who stormed those cold, chaotic beaches, and the new yet similar era we find ourselves in today.
What they didn’t mention is that the coming change in today’s fight will affect businesses as much as bunkers.
Do we need a “Geneva Convention” on AI?
AI – whether you’re fearing it or drooling over it – is top of mind for legislators, business leaders, and scientists alike. A consortium of scientists called for a pause in AI development last year, concerned that technologies like OpenAI’s ChatGPT are being released to the public in a mad dash for profits and power before we fully understand their potential. That letter sparked controversy – and not an iota of change – though even its critics admitted we don’t really get exactly how AI works and how it might (make that will) evolve of its own accord. More recently, the European Union adopted the world’s first law governing AI, Congress has debated banning TikTok, and President Biden issued an AI executive order that seeks to set guardrails while revving up AI analysis at federal agencies that often lag in tech development.
The question of how fast AI can safely be developed and deployed is, of course, made all the more complicated by the pedal-to-the-metal approach of competitors like China, and the fact that “AI could be as dangerous as nuclear weapons,” wrote U.S. Rep. Seth Moulton (D-Mass.), a former Marine who served in Iraq (and was twice decorated for valor), in an attention-getting op-ed that ran in the Boston Globe last year.
AI could also be dangerous with nuclear weapons, as a sobering new report in Foreign Affairs points out. According to the summary of various war games utilizing large language models (LLMs), the LLMs “exhibited a preference toward arms races, conflict, and even the use of nuclear weapons.” As one LLM explained to researchers, “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it.”
Personally, I can’t stop staring at that exclamation point.
Moulton has called for a “Geneva Convention on AI,” so that world leaders can try to establish some rules of the road while the technology is still nascent. He discussed this in a “Future of Defense Task Force Report” he co-authored in 2020, and he’s frustrated that the Pentagon “has done almost nothing” in the years since.
[Read also: What businesses need to know about Biden’s national cybersecurity strategy]
He has also chided the Defense Department’s overall sluggish approach to AI, noting China’s aggressive investments in AI tech. “China’s doing this today,” he said at a recent hearing of the U.S. House Armed Services Committee. “They have their top AI companies working on their military problems. So this is another thing where we’ve got to have the urgency to get this done.”
Whether or not world leaders conduct a Geneva Convention on AI, there are ways that enterprise leaders can fortify their own technological infrastructure, starting now.
Turing, technology, and what enterprise leaders should know
Technology played a key role in the plans for D-Day. Germany’s state-of-the-art cipher machine, Enigma, once considered unbreakable, was the preeminent tool for encrypting military communications – until Polish cryptologists and, later, Alan Turing and his British colleagues at Bletchley Park cracked the code in a manner that laid the foundation for the modern computer.
That code-cracking – depicted in the 2014 film The Imitation Game, with Benedict Cumberbatch as Turing – came early in the war, but in a controversial move, the Allies kept it a secret. That allowed them to listen in on German and Japanese military officials, decrypting vital information. German messages intercepted in the run-up to D-Day gave the Allies precise, near-real-time visibility into the locations of German fighting units in and around Normandy. And on D-Day itself, Allied commanders listened to German communications, which often reported the Allied troops’ progress more quickly and accurately than the Allies’ own channels did.
That kind of visibility and the evolving mindset it ushered in are essential for today’s enterprise leaders who are combating a tech war all their own.
First, visibility, as in endpoint visibility: Knowing the number and location of all desktops, laptops, tablets, servers, and other endpoints, and the speed with which they are being (or need to be) patched, is a key element in any robust cybersecurity strategy, as important as knowing the location of soldiers on every hill in France was back in the 1940s. It’s your starting point. You can’t move on to more sophisticated strategies without that first step.
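To make that starting point concrete, here is a minimal sketch – using hypothetical inventory data and field names, not any particular vendor’s API – of the kind of patch-age check an endpoint inventory enables:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory records; in practice these would come from an
# endpoint-management or asset-discovery tool, not be hand-entered.
@dataclass
class Endpoint:
    hostname: str
    kind: str          # e.g. "laptop", "server", "tablet"
    last_patched: date

def patch_report(endpoints, today, max_age_days=30):
    """Split an inventory into up-to-date and overdue endpoints,
    based on how many days have passed since each was last patched."""
    overdue = [e for e in endpoints
               if (today - e.last_patched).days > max_age_days]
    current = [e for e in endpoints if e not in overdue]
    return current, overdue

fleet = [
    Endpoint("hq-laptop-01", "laptop", date(2024, 5, 30)),
    Endpoint("db-server-02", "server", date(2024, 3, 1)),
]
current, overdue = patch_report(fleet, today=date(2024, 6, 6))
print(f"{len(current)} current, {len(overdue)} overdue")
# The overdue list is what drives the next patching cycle.
```

The point isn’t the code itself but the discipline it represents: you cannot compute “overdue” for machines you don’t know you have.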
[Read also: Here’s your Benchmarking 101, or why it pays to know how your cybersecurity stacks up]
As for mindset? The myth of Enigma’s unbreakability, and the hubris that fueled that misguided assumption, played as great a role in Germany’s eventual defeat as all the tanks, bombs, and bullets. No technology is impregnable. Thinking otherwise (see also “Titanic, unsinkable”) is a recipe for disaster.
That’s a useful reminder for today’s C-suite execs and board members who are (or should be) hearing their security leaders talk about the need for increasing tech budgets and amping up defenses. Some enterprise chiefs might still think we can absolutely prevent cyberattacks if we just find the right technology. Savvy business leaders now accept that it’s not a matter of if but when attacks will happen.
But there are effective ways – through robust threat hunting and incident-response plans, to name just two – to limit the damage.
Slow down
When it comes to bringing AI – and smart AI governance – into business systems, learning to walk before we run is also prudent. But slowing down doesn’t mean stopping.
Moulton, for instance, endorses a pause but not anything like across-the-board interference.
“There’s a lot of AI development that we don’t want to slow down because we want to get cures for cancer as quickly as we can,” Moulton said in a radio interview for NPR’s Morning Edition.
But regulations are essential, he notes, if we’re considering AI’s use in warfare, or its ability to promulgate disinformation at an alarming rate. “These are the places where I think Congress needs to focus its regulatory oversight, not to just try to regulate AI overall but just prevent the worst-case scenarios from happening,” he said.
So how likely is a Geneva Convention for AI?
“We had a lot of nuclear arms agreements during the Cold War,” Moulton told Politico in a 2023 interview. “The Geneva Conventions were negotiated with a lot of tensions in the world. I think that this is hard, but it’s absolutely worth trying.”
President Biden unintentionally underscored that idea in his 80th anniversary remarks at Normandy, noting “America’s unique ability to bring countries together.” So did Secretary of Defense Lloyd J. Austin III in his D-Day speech last year.
“We must meet today’s challenges with our full strength – soldier and civilian alike,” said Austin, referring to the vulnerability of democracy. “If the troops of the world’s democracies could risk their lives for freedom, then surely the citizens of the world’s democracies can risk our comfort for freedom now.”
They weren’t speaking of AI, per se. But… they kind of were.