Fake news, including online misinformation and disinformation, is dangerous in a myriad of ways both in the United States and abroad.
People got sick and avoided vaccinations based on false information, while groups like QAnon have spread viral misinformation in recent years, sometimes leading to violence.
More recently, the Justice Department and local officials have expressed concern about potential voter intimidation at the ballot box ahead of the 2022 US midterm elections.
“We are deeply concerned about the safety of individuals exercising their constitutional right to vote and lawfully delivering their early ballot to a mailbox,” Arizona officials said in a statement after two armed individuals in tactical gear showed up at a ballot drop box in Mesa. “Do not put on body armor to intimidate voters as they are legally returning their ballots.”
The drop box surveillance effort isn’t limited to Arizona, The New York Times reports, and conspiracy theories about drop boxes have circulated online for a few years. People are falsely told that “ballot mules” stuff drop boxes with fake ballots or tamper with the boxes themselves. So far, there is no evidence this is the case.
We’ve been wary of misleading information on the internet for years, but it remains a problem, especially at election time. Why does it spread so far? How can we do better at not sharing it ourselves? There are best practices, but no easy solutions.
How to check online information
Online misinformation abounds, especially in an election year. Professor Yu of Syracuse University’s School of Information Studies urges people to slow down before sharing.
“When I share something on my social network, the first thing I train myself to ask is, ‘Is this information useful for my friend or relative?’ I think it’s a very good exercise for me, and I also find that I share less afterward,” she says.
In the moments before you share an article or social media post, consider the following:
1. Be aware of strong emotional triggers
Do you feel outraged by the content? Scared? Upset? Chances are it was designed exactly so you’d share it and spread the word without even thinking. In a video titled “This Video Will Make You Angry,” YouTuber CGP Grey likens emotionally charged messages to “thought germs” looking for new brains to infect. Sharing a viral post spreads its message the way a sneeze spreads the flu.
Research shows that emotional messages spread further within our networks because they get more engagement. When those messages touch on divisive issues such as gun control, abortion, or COVID, they tend to stay within networks of like-minded people, creating an echo chamber of increasingly extreme rhetoric. It also means the people who most need to see fact checks can be almost entirely cut off from them.
Angry messages are widely shared, but so are “feel good” posts designed to play on impulses other than outrage. That’s because the reasons people share bad information range from indignation to burnishing one’s self-image to informing others. So we might share a story that seems warm and fuzzy because we want others to know about it, or because we think it makes us look better to people in our network. But these posts can be far more complicated than they appear.
2. Pay attention to who is sharing the information
The Interactive Media Bias Chart shows where your preferred outlet sits on the political spectrum. (Image credit: Ad Fontes Media)
Look beyond what is being shared to see who does the sharing. Just because you trust the person sharing a piece of news doesn’t mean they’ve done their due diligence. Before you click share, double-check the information, especially if it’s particularly controversial or outrageous.
This applies to individuals and media outlets alike. Everyone makes mistakes, but when inaccurate information gets posted, do the outlets you share issue corrections, or do they double down on bad information? If you’re unsure where an outlet falls on the political spectrum, the media bias chart can be a useful tool.
3. Try to find corroborating stories
If something seems particularly outrageous, seek additional coverage and read multiple sources. If the article you see cites only one source, dig deeper into the publication; there is a chance the story is not true. Do a web search for the author of the article. Read the publication’s About page. Look up the site’s publisher to see what others say about it. You may find strong evidence of bias, either in the article itself, in the site information, or both.
4. Rule out satire or parody
Check whether the article is satire. You don’t want to be the person sharing something from The Onion as if it’s fact. Check the site’s About page and the comments for clues that an article that sounds particularly ridiculous or outrageous was written by a comedy writer rather than a journalist. Also keep an eye out for parody Twitter accounts.
TrustServista (Image credit: Lance Whitney)
There are a number of free tools you can use to examine the veracity of a story. For example, install a browser extension like TrustServista, which uses artificial intelligence and other analytics to measure the trustworthiness of a news article.
Alex Mahadevan, who leads Poynter’s MediaWise project, also recommends MediaWise en Español and Factchequeado for Spanish-language fact-checking, because “disinformation targeting Latino communities is a big problem that still doesn’t get enough attention,” he says.
You can also go a step further and brush up on your critical thinking skills. Take a class like Calling Bullshit or Poynter’s How to Spot Misinformation Online. There are even games, like Fakey, designed to help you spot fake news.
Is content moderation a losing battle?
Social media platforms have built-in systems for flagging content, but it can feel like playing whack-a-mole. Ban a word or phrase, and people will come up with another term. Throw a group off a platform, and another will soon take its place. Flag a tweet or post as false or misleading, and the account owner will cry censorship.
Then again, these are billion- and trillion-dollar companies we’re talking about. They have the resources to address the issue. Critics argue that giants like Facebook and Instagram put profit over safety, which the companies deny, but much of their action has been reactive and aimed at staying out of lawmakers’ crosshairs.
However, experts are skeptical that letting social media companies police themselves will ever work. “I believe in separating content moderation from the platform because I believe platforms doing their own moderation poses a conflict of interest,” says Syracuse Professor Yu. “I think it should be done by a third party.”
Cailin O’Connor, co-author of The Misinformation Age, agrees. She says we need an outside body to regulate the platforms we use every day, like an “EPA for the internet.”
“Social media platforms are removing endless amounts of bots and sock puppets… but I think we need regulation to push them that extra step,” says O’Connor, especially when it comes to accounts “that have a massive amount of engagement,” meaning platforms “have an incentive to leave [them] up even though they are misleading.”
There is no magic bullet that will completely wipe bad information off the internet, but we are not helpless. The same experts believe that more friction should be added to the process of sharing information. Social media companies may agree. When Twitter delivered a prompt asking people to read stories before retweeting them, it resulted in 40% more articles being opened, the company said in 2020. Facebook tested something similar last year.
But as PCMag’s Max Eddy recently argued, Twitter in particular knows a lot about its misinformation problem but isn’t necessarily equipped to deal with it.
Coordinated disinformation efforts will always change tactics to evade detection, and we must adapt. “Many solutions don’t last forever,” says O’Connor. “Maybe the big picture is that we’re constantly trying to solve this problem, and that’s fine.”