A Comprehensive Guide to Disinformation: From Definitions to the Best Defense Strategies
Now armed with GenAI, a wave of bad actors – cyber gangs, nation-states, bored teens, overzealous celebs and influencers – have the power to inflict swift and serious damage on a business’s reputation and bottom line. Here’s how to identify and shore up your weak spots.
It’s time for organizations to get informed – and proactive – about disinformation. That’s the bottom-line message from the World Economic Forum, which last month cited disinformation and misinformation as the top global risk for the immediate future.
Its annual “Global Risks Report” tallies insights from a survey of more than 900 experts worldwide, prioritizing current trends and challenges for businesses and organizations. For the second year in a row, it gave the top spot of concern to disinformation (lies deliberately told) and misinformation (honest misunderstandings that still spread like wildfire), noting that both can be used by foreign entities to sway voter intentions, sow doubt about activities in conflict zones, and “tarnish the image of products or services from another country, [which] could lead to more frequent consumer boycotts of products.”
It’s that last one that has a growing number of enterprise leaders concerned, so much so that in September 2024, the global public relations and marketing powerhouse Edelman launched a Counter-Disinformation Unit to address these risks and stem the spread of fake information about its clients.
“We believe our Counter-Disinformation Unit is the first of its kind, from several angles – the focused team, our fully integrated capabilities, and our approach to technology,” says Dave Fleet, head of global digital crisis at Edelman. He thinks other PR firms will follow suit as the disinformation problem grows. Edelman’s CDU is already expanding quickly, with the company hiring leadership for a dedicated EMEA operation in January.
Focal Point has covered disinformation and misinformation as a unit (both in terms of the enterprises most at risk and the old-school cybersecurity tactics that can fight back). In recent months, however, disinformation has arguably been edging out misinformation as the topmost concern, given the onslaught of AI-fueled technology that cyber gangs are using to spread lies faster than ever, and social media companies’ retreat from fact-checking and oversight at the worst possible time.
“The technology has become so much better that you can’t keep up,” says Jevin D. West, associate professor in the Information School at the University of Washington. Not only has social media allowed rumors to spread more quickly, but its algorithms, designed to engage audiences at all costs, feed them the most outrageous and shocking content they crave – whether it’s true or not.
With that in mind, we thought it was time to turn up the spotlight on disinformation – providing a brief history, today’s most pressing challenges, and the most effective ways to prep your org for this hostile new form of competition.
- What is disinformation? Definitions and past examples
- Why disinformation may be (slightly) more threatening than misinformation
- Disinformation is coming for the corporate sector
- How does AI impact and fuel the spread of disinformation?
- What puts you most at risk of disinformation?
- How to fight disinformation with fact-checking – keep it real
- How to fight disinformation with old-school cybersecurity tactics
- How to fight disinformation with…lawyers? Tread carefully
- The ultimate antidote to disinformation: trust
What is disinformation? Definitions and past examples
First, let’s further clarify some key terms.
DISINFORMATION is false or inaccurate information that is amplified intentionally by bad actors looking to manipulate public opinion, gain influence, or make money – and often all three.
MISINFORMATION refers to false, inaccurate, or incomplete content generally spread by those who don’t realize that the info they’re sharing is incorrect.
MALINFORMATION is the most nuanced – content that may have a kernel of truth to it but is skewed, exaggerated, or taken out of context in order to mislead others.
Fake news about companies isn’t new. In 1977, Life Savers had to publish a full-page rebuttal in the New York Times dispelling rumors of spider eggs in its Bubble Yum chewing gum. Two years later, General Foods battled stories that its Pop Rocks carbonated candy would cause children’s stomachs to explode. Both Wendy’s and McDonald’s fought allegations that they used worm meat as filler in their burgers (they don’t). And let’s not forget the accusations a decade later that McDonald’s funded the IRA, the Irish paramilitary group. This turned out to be misinformation – a misunderstanding of a cable news clip that praised the company for contributing to its employees’ Individual Retirement Accounts.
[Read also: Here’s why enterprises – big and small – need to take this threat seriously]
Those mistruths spread like a physical virus, at the watercooler and social gatherings. Almost half a century later, the internet wants you to hold its beer. Lies can now spread free of physical constraints.
Why disinformation may be (slightly) more threatening than misinformation
We know falsehoods travel farther, faster, and deeper than the truth. A groundbreaking 2018 study by MIT researchers, analyzing more than a decade’s worth of data, found that false information was 70% more likely to be retweeted on Twitter than the truth, and that false news reached 1,500 people about six times faster than the truth.
While misinformation is troubling, disinformation is particularly nasty because “it’s meant to cause damage,” says Tiffany Kwok, a policy analyst at Toronto Metropolitan University’s Dais public policy and leadership think tank.
Disinformation-as-a-service is also on the rise – and remarkably affordable: Just $15 to $45 to create a 1,000-character article, $65 to contact a media source to spread the article, and $10 per post on a news story, notes a review of costs conducted by PwC.
A June 2024 Dais report about the effect of disinformation targeting Canadian companies highlights several forms it can take. These include impersonations of executives or the businesses themselves. We’ve seen these in fraudulent cryptocurrency giveaways where scammers impersonate the likes of Elon Musk. Misattributing statements to company executives can also cause damage. In 2016, PepsiCo faced consumer boycotts after false reports that then-CEO Indra Nooyi had told Donald Trump’s supporters to “take their business elsewhere.”
Bored, disaffected teens can target companies with disinformation, as can activist groups looking to spread a narrative – a particular issue that companies must increasingly beware of in a politically charged environment.
Other bad actors include nation-states looking for political leverage. In 2022, researchers identified a disinformation campaign targeting U.S. midterm elections, engineered via China’s Dragonbridge disinformation network, which includes fake social media accounts.
Disinformation is coming for the corporate sector
But it’s more than just politics – it’s increasingly a profits and influence issue, as cybercriminals and brand competitors use disinformation to sully company reputations or manipulate stock prices.
That same China-backed group, Dragonbridge, instigated a disinformation campaign targeting U.S., Canadian, and Australian mining companies that were looking to break into the rare-earth metals market, which is currently dominated by China. The disinformers flooded Facebook and Twitter with posts that linked environmental and health concerns to proposed new mining projects by China’s competitors.
In another instance, a semiconductor giant’s acquisition of a tech company was delayed – and both firms’ stocks fell – when a forged U.S. Department of Defense memo alleged the deal posed national security concerns. And the S&P 500 took a dip after images of a supposed attack on the Pentagon surfaced on social media in 2023.
Sometimes, the enemy is within. In 2018, Ryanair sacked six workers for staging a photo of themselves being forced to sleep on the ground, a potent reminder that insider threats remain a vexing problem for enterprises in an increasingly viral world.
How does AI impact and fuel the spread of disinformation?
The fact that all these groups can now access generative AI to fuel more powerful and faster-spreading campaigns makes disinformation more dangerous than ever before, warns Fleet, of Edelman.
“Generative AI is likely to cause a step change in this area,” says Fleet. Aside from business email compromise attacks, where scammers impersonate executives to trigger fraudulent payments, deepfake attacks on corporations are expected to rise exponentially. In the recent past, deepfakes have been limited to impersonations of well-known figures, like fake Elon Musks shilling scam crypto coins, or a fake Tom Cruise narrating a faux documentary designed to scare people away from attending the Paris Olympics.
Deepfakes are now surging in the corporate realm as fraudsters target lesser-known executives at enterprises large and small. In the past year, deepfakes have impersonated executives at the luxury carmaker Ferrari, the major UK energy supplier Octopus, and the world’s biggest advertising group WPP. And lest you think only the big guns are targets: a principal at a Baltimore high school was put on leave over an audio recording of him making racist comments. (The recording was an AI fake created by a colleague, who was later arrested.)
What puts you most at risk of disinformation?
Dealing with disinformation is complex, says Fleet. He advises a multi-layered approach to disinformation defense, beginning with assessing your weak spots.
One weakness could lie in your product category. For example, the public might not understand how complex products work, opening up a vacuum for conspiracy theories – like the ones alleging that a global elite was using smart meters as directed energy weapons to cause wildfires, or that governments were using 5G towers to spread coronavirus as part of a mass depopulation exercise. Technology companies might conclude that the difficulty of educating the public about complex technology topics is a potential vulnerability that needs more investment.
Geopolitical vulnerability could be another weak spot. If you’ve got a strong presence in a tense region, it’s worth gaming out disinformation scenarios by assessing who might target you with disinformation if a significant geopolitical event occurred there.
“Map out where you’re vulnerable reputationally,” Fleet says. “Identify the chinks in your armor and be proactive in inoculating and building resilience around those areas.” For example, pharmaceutical companies or personal protective equipment (PPE) suppliers might keep a close eye on anti-vax disinformation – most of which researchers say was spread by 12 individuals in the early days of the pandemic. Disinformation about the efficacy or risks of your vaccine could damage your revenue and disrupt your contribution to public health efforts.
Companies must look everywhere, including social media, for early signs of disinformation, Fleet advises. Finding it as early as possible is critical to monitoring its spread.
The speed and scope of fake news depends on several factors, warns West. “It depends on which platform you’re on, what kind of rumor, and what the event is.” It also depends on what else is happening at the time; a rumor stands a greater chance of going viral on a slow news day.
Organizations should also watch for amplifiers. Disinformation about a company posted by someone who doesn’t have much of a following might die on its own, unless an influencer with a big platform and a narrative to support picks it up. That can trigger a viral uptake.
Cases in point: Celebrities including John Cusack and Woody Harrelson, rapper Wiz Khalifa, and boxer Amir Khan amplified the spurious claims in 2020 linking 5G wireless technology to COVID-19. In the UK, that conspiracy theory run amok triggered more than 30 acts of arson and vandalism against wireless telecom towers and some 80 other incidents of telecom technicians being harassed on the job. Roseanne Barr has accused Monsanto of spraying chemicals from the back of jets (for the record, contrails are clouds of humidity formed in an aircraft’s wake), and rap megastar Nicki Minaj spread lies about the COVID vaccine causing male impotence (again, no).
How to fight disinformation with fact-checking – keep it real
When disinformation does go viral, fact-checking it with your own statement is a wise choice. But keep in mind you’re competing with the engaging energy of a disinformation campaign, Fleet warns.
“They can be very dramatic, and they travel fast because of that,” Fleet says. “Comparatively, the truth is usually pretty boring.” Social media algorithms exacerbate the problem because they drip-feed drama. “What we’re trying to share when it comes to fact-checking is encumbered by that reality, which means it doesn’t travel as far or as fast,” Fleet continues. “It never actually catches up with the lie.”
Companies must respond to that by creating richer, more engaging content, he advises, including videos, infographics, and most importantly, real human stories.
Companies that know a disinformation offensive is likely might get ahead of the game, “prebunking” or inoculating audiences against it before it emerges, advises Kwok. This is where understanding your vulnerabilities in advance is key.
How to fight disinformation with old-school cybersecurity tactics
Like cybersecurity attacks, disinformation campaigns may soon be unavoidable. But the pain they inflict can be mitigated with some traditional cybersecurity tools, including:
- Incident response playbooks: Dust off and update security guides by adding strategies to quell disinformation. From the chief communications officer down through marketing, public relations and social media departments, employees must be well-versed in social engineering techniques and media manipulation tactics. They should implement third-party monitoring, craft different narratives to counter various forms of attack, and practice the recovery plans.
- Cybersecurity staff training: Don’t stop at phishing tutorials. Provide employees with detailed and engaging explanations of deepfakes and the ways cybercriminals are leveraging AI to create remarkably lifelike images and recordings.
- Employee experience improvements: Employee satisfaction is a significant measure of a company’s overall brand reputation, and consumers will be more skeptical of disinformers’ claims if the firm is renowned for the way it treats its workers. How it handles AI and data also counts: smart GenAI governance and transparency about data privacy go a long way toward boosting consumer confidence.
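The third-party monitoring mentioned above often boils down to something simple: counting brand mentions over time and flagging unusual spikes that could signal a coordinated campaign. Here is a minimal sketch of that idea, assuming a feed of (timestamp, text) posts has already been collected; the `BRAND_TERMS` set, the baseline volume, and the spike multiplier are all illustrative placeholders, not any vendor’s API.

```python
from collections import Counter
from datetime import datetime

# Hypothetical keywords a monitoring team might track for its brand
BRAND_TERMS = {"acme", "acmecorp"}

def hourly_mentions(posts, terms):
    """Count posts mentioning any brand term, bucketed by hour."""
    buckets = Counter()
    for ts, text in posts:
        lowered = text.lower()
        if any(term in lowered for term in terms):
            # Truncate the timestamp to the top of the hour
            buckets[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return buckets

def spike_hours(buckets, baseline, factor=3):
    """Flag hours where mention volume exceeds `factor` x the normal baseline."""
    return sorted(hour for hour, count in buckets.items()
                  if count > factor * baseline)

# Example: ten mentions in one hour against a baseline of two per hour
posts = [(datetime(2024, 6, 1, 9, m), "another wild ACME rumor")
         for m in range(0, 50, 5)]
posts.append((datetime(2024, 6, 1, 10, 0), "acme doing fine"))

buckets = hourly_mentions(posts, BRAND_TERMS)
print(spike_hours(buckets, baseline=2))  # the 9:00 hour is flagged
```

Real deployments would pull from platform APIs or a media-monitoring vendor and use a rolling statistical baseline rather than a fixed one, but the core logic – bucket, compare to normal, alert early – is the same.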
How to fight disinformation with…lawyers? Tread carefully
Companies might consider more aggressive (as in legal) action. However, mounting a case against a misinformation or disinformation campaign is fraught with danger, experts warn. For one thing, you have to find the “superspreaders” who are circulating the disinformation systematically. Then you must navigate legal regimes not always built with such cases in mind.
In the U.S., for example, the First Amendment makes it more difficult to prosecute someone for spreading disinformation, points out Steve Kuncewicz, partner and head of creative, digital, and marketing at UK-based law firm Glaisyers.
Even in the UK, where libel laws are stronger, legal cases against disinformation spreaders are expensive. “It’s very much a large corporation or rich person’s pursuit,” Kuncewicz says.
A lawsuit also invites the Streisand effect, the phenomenon in which an attempt to suppress information generates more interest in it, amplifying the information instead of silencing it. It’s named after a case in which music legend Barbra Streisand’s lawyers attempted to block a picture of her Malibu home from publication; as news of the attempt spread, people reacted by seeking out the picture.
The last thing a company wants is the disinformation it is addressing to become an online meme, so if you’re taking a legal approach, think carefully about the subject matter and your legal target, Kuncewicz warns. If the offending disinformation spreader is a malcontent with a small following and shallow pockets, think twice.
[Read also: This CISO has a cure for boring cybersecurity training]
Conversely, if you’re Dominion Voting Systems, upset that Fox News alleged your voting machines were rigged to steal the 2020 election, a lawsuit might be advisable. In 2023, Fox settled, paying the company $787.5 million and acknowledging that it had spread false information.
The ultimate antidote to disinformation: trust
The disinformation problem will probably get worse before it gets better. As companies grapple with the issue, one currency stands out above all others: trust.
“Trust is a function of whether people you know believe and anticipate that you’re going to do the right thing in the future,” says Fleet. “So every action you take is going to reinforce or detract from that belief.”
Companies that have consistently proven to be transparent and non-manipulative with their stakeholders stand the best chance of retaining their goodwill as the fake news flows. If your industry has been proven to lie about the effects of tobacco on human health or the effects of fossil fuels on the climate, you’re already in a trust deficit.
Data suggests that authority figures at large have some work to do here. Edelman’s annual Trust Barometer, which measures the global trust landscape and was released last month, found declining trust in business leaders this year, with 68% of respondents believing that business leaders lie to them, up from 56% in 2021. For companies wanting to stay ahead of the disinformation deluge, now is the time to build solid, meaningful relationships with stakeholders.