It can be tempting to think that the fear-inducing, hyper-partisan, misleading, and outright false content pervasive today is an exclusively modern problem. Yet for thousands of years, our Jewish and Christian ancestors have taught that deception is as old as humanity itself. In Genesis 3, the serpent manipulates Eve with a series of misleading and half-true statements into eating the forbidden fruit; Adam then eats as well because the fruit is offered to him by a trusted source. Sound like anything that has crossed your social media feed recently?
As Christians, we are not called to a life of half-truths and deception. We are called to follow a God who is “the way, the truth, and the life” (John 14:6). The Prayer Book also teaches that among our duties to our neighbors is “to be honest and fair in our dealings” and to “speak truth, and not mislead others by our silence” (p. 848). Let us therefore examine our own conduct to limit the spread of deceitful information and call upon our leaders to work towards the same.
The rapid expansion of digitalization and online platforms has enabled deceitful content to spread more rapidly and disguise itself more effectively. The nonprofit First Draft News has excellent language describing what information manipulation looks like today:
"The term ‘fake news’ doesn’t begin to cover all of this. Most of this content isn’t even fake; it’s often genuine, used out of context and weaponized by people who know that falsehoods based on a kernel of truth are more likely to be believed and shared. And most of this can’t be described as ‘news’. It’s good old-fashioned rumors, it’s memes, it’s manipulated videos and hyper-targeted ‘dark ads’ and old photos re-shared as new.
At First Draft, we advocate using the terms that are most appropriate for the type of content; whether that’s propaganda, lies, conspiracies, rumors, hoaxes, hyperpartisan content, falsehoods or manipulated media. We also prefer to use the terms disinformation, misinformation or malinformation. Collectively, we call it information disorder."
Table of Contents
- Understanding Information Disorder and Disinformation Campaigns
- On Elections
- On the U.S. 2020 Census
- What Can I Do?
- Other Things to Consider
- Further Resources
- Resolutions by General Convention and Executive Council
Information Disorder: a term coined by First Draft News to encompass the spectrum of misinformation, malinformation, and disinformation
Some disinformation is entirely false and fabricated, like this “news” article claiming Pope Francis has coronavirus. As this Twitter user points out, the domain was registered several years ago in China, and its content suddenly changed just days before the article appeared.
Bots can be used to amplify fringe messages to mainstream audiences. Russian trolls, sophisticated bots, and “content polluters” tweet pro- and anti-vaccine messages like this one at significantly higher rates than average users. An estimated 25% of climate denial tweets are spread by bots.
One example of particular concern involves Russian intelligence services using paid advertisements during the 2016 U.S. election to send different targeted messages to different audiences. While national governments have long used misinformation against enemies, social media has fundamentally changed the scope and reach of these campaigns. The goal of the ads was to widen existing divisions in the U.S., not simply to promote contradictory messages. Inflammatory language and images and deliberately misleading Facebook page names added to the confusion, and nothing in the ads disclosed that they were paid for by foreign actors.
Understanding Information Disorder and Disinformation Campaigns
Roughly 4 in 10 Americans say they often come across made-up news and information. Although the emerging field of social cybersecurity is just now starting to gauge how information disorder affects individuals and society, we have a fairly good understanding of how manipulated information is spread.
Disinformation campaigns are deliberately crafted to spread false or misleading information. However, the campaign message itself may not be the actual goal. A common tactic is to first identify two pro/con groups on a divisive issue (abortion, vaccinations, climate change, and political ideology are prime examples). An effective disinformation campaign infiltrates both sides, backs group leaders, and helps each group develop echo chamber qualities. In echo chambers, group members sideline outside information, pass internal information extremely quickly, and make decisions based on emotion and “what everyone knows.” Campaigns use this emotion-based decision-making to incite feelings like dismay or excitement in both groups, then pit the two sides against each other. Ultimately, both sides suffer from a lack of cross-issue communication and lose even more trust in “the other,” in short, enlarging the divide between them.
Researchers are extremely concerned that disinformation campaigns undermine democratic processes by fostering doubt and destabilizing the common ground that democratic societies require. “[It’s like] listening to static through headphones,” says Dr. Kate Starbird, professor at the University of Washington. “It is designed to overwhelm our capacity to make sense of information, to push us into thinking that the healthiest response is to disengage. And we may have trouble seeing the problem when content aligns with our political identities.”
Elections and politics have always involved disinformation and manipulation. Often, a politician’s ability to effectively use and counter such strategies is a mark of political competency. Consider Odysseus, “the man of twists and turns,” whose cleverness and trickery were praised by men and gods in the Greek epics The Iliad and The Odyssey. Yet democratic societies rely on fair and free elections to ensure that government derives its authority from the will of the people. Disinformation campaigns aimed at voters undermine the ability of a country to hold fair and free elections. There are a number of tactics used for this goal.
Microtargeting of communities is particularly concerning: how can an election be fair if one community receives highly targeted, misleading messages urging them to vote for or against a candidate? Or worse, what happens when targeted messages advertise the incorrect time, place, or method of voting to a particular group, like the “Text to Vote for Hillary” ads? Even the threat of such actions undermines confidence in democratic systems.
We now know that for the past few years, targeted international digital campaigns in the U.S. and around the world have worked to spread intentionally inaccurate content, undermine faith in election procedures, and widen existing fissures in multiple countries. Yet even U.S. domestic organizations are increasingly using these same disinformation techniques for short-term electoral or politically motivated gains. Ultimately, election disinformation pushed by any actor weakens the democratic system.
The Episcopal Church recognizes the process of voting and political participation is an act of Christian stewardship, and that such processes must be fair, secure, and just (see resolutions EC022020.16 and 2018-D096). Since misinformation threatens this process, The Episcopal Church calls upon all its members to be vigilant when engaging with online information and encourages the use of fact-checking and source identification to limit misinformation’s spread. Further, we urge Episcopalians to hold government officials accountable for limiting the spread of information that is false and designed to cause harm.
On the U.S. 2020 Census
Every 10 years, the U.S. government undertakes a massive effort to count all individuals living in the country. This count is critically important: it determines representation in Congress, is used to allocate federal funds for the next decade, and provides valuable information for state and local community officials, service providers, and private businesses. Misinformation about the census is easily spread and incredibly damaging. Communities where census misinformation is most rampant are often ones with “hard-to-count” subgroups who have the most to gain from accurate population counts.
Common subjects of census misinformation include:
- Data privacy and financial scams. What you need to know: The Census Bureau will never ask for your Social Security number, credit card or bank account numbers, or a financial donation.
- In-person census takers. What you need to know: During the spring and summer of the 2020 Census, in-person census takers will visit homes to follow up with individuals who have not yet responded. All workers carry an ID badge with their photograph, a U.S. Department of Commerce watermark, and an expiration date. If you have questions about their identity, you can call +1-844-330-2020 to speak with a Census Bureau representative.
- Data privacy and protection guarantees. What you need to know: Under Title 13 of the U.S. Code, census data may ONLY be used for statistical purposes. The Census Bureau cannot release any identifiable information about you, your home, or your business, even to law enforcement agencies.
You can learn more about census misinformation and how to counter it on the official U.S. Census website. Also, don’t miss the Office of Government Relations’ Census Series and census engagement toolkit!
What Can I Do?
Misinformation often spreads faster than real news and reaches a wider audience. It’s also becoming increasingly difficult to identify. The first step in addressing misinformation is acknowledgement: all of us contribute to the problem, and we must all take ownership to stop it. As long as misinformation remains an issue for “the other” to solve—Gen Z, Boomers, Facebook, Millennials, in-laws—it will persist.
We won’t catch all the misinformation streaming past us. But before you re-share that tweet, or tell a friend about that surprising headline you saw, ask yourself three questions:
- Where’s it from? Look for the source and be careful of fake copycat websites.
- What’s missing? Do the headline and article match? Are other news organizations talking about it?
- How do you feel? If a headline or article sparks an intense emotion like fear, anger, or vindication, be watchful. That’s a common tactic from someone trying to manipulate you, not from someone trying to spread reputable news.
Other things to consider:
- Learn who to trust. An unfortunate consequence of disinformation vigilance can be censorship through noise. If vigilance leads us to distrust every headline, then those promoting disinformation are succeeding. This means we are less likely to receive information that is accurate and informative. Learning who generally produces accurate information is as important as carefully examining unknown sources.
- Genre matters. It’s not just satirical Onion articles that get shared as “news.” Be mindful of the differences in presentation, fact-checking protocol, and accountability standards between peer-reviewed research, fact-checked news articles, personal opinion pieces and talk shows, and various forms of satire, propaganda, and gossip.
- One effective way to counter disinformation campaigns is to label them. While you might not want to wade into comment thread debates on social media, consider making a comment or sending a private message to friends and family members when they share a post that you suspect is false or misleading. And be responsive to the same feedback from others!
- Communicate to elected officials that protection from disinformation campaigns is important to you.
- Consider asking your members of Congress to support election security. Bills debated in the 116th Congress include the DETER Act (S. 1060), the Honest Ads Act (S. 1356/H.R. 2592), and the SHIELD Act (H.R. 4617).
- Develop a nuanced understanding of the relationship between free speech and disinformation. Consider: Does (or should) the Constitution offer paid commercial or political ads the same free speech protections as individuals? Does freedom of speech also include the freedom to receive information? If so, does disinformation threaten that right? Who (if anyone) should be responsible for tracking/tagging false information? Should there be limits to web anonymity or author disclosure requirements?
If you want to learn more about information disorder, here are some recommendations:
- Understanding Information Disorder – a thorough but very readable examination of the modern information disorder landscape
- The Full Fact Toolkit – How to quickly spot possible misinformation before sharing it. Also includes links to numerous fact-checking websites
- Foreign Interference and the 2020 Election
- Bot Sentinel – a resource that shows which topics are trending on suspected bot Twitter accounts by day and hour. You can also enter any Twitter handle to get a score indicating whether it looks like a bot account.
- Quizzes to see how well you spot misinformation!
Resolutions by General Convention and Executive Council
- MB 016 – Misinformation and Elections
- EXC062016.07 – Support for Campaign Finance Reform
- Resolution 2018-D096 – Urge Advocacy for Good Governance and Fair Participation
Work on this resource was led by Rebecca Cotton, policy intern, Office of Government Relations