Fake news has been a buzzword for a while now, but when it comes to actually combating the associated culture of misinformation, the phrase doesn’t prove very useful.
After all, it’s easy to label anything you don’t agree with as “fake,” and a dichotomy of “fake” versus “true” ignores the complexities that make this such a difficult problem in the social media age.
Over the last three years I’ve encountered quite a bit of this complexity first-hand as I’ve adapted my approach to this crucial issue of information literacy.
As misinformation continues to grow more sophisticated in its ability to mislead and divide, we need solutions that actually work.
As an academic librarian, I teach sessions for college students on how to evaluate media in the age of fake news. This has led me to pore over a variety of philosophies and lesson plans.
Faculty in librarianship, journalism, communication, and other fields, as well as entities like the social media platforms themselves, have given plenty of guidance on “how to combat fake news.”
But is any of it helping?
Weaknesses of “Checklist” Strategies for Evaluating Sources
When I first taught one of these fake news sessions, I gave what I thought to be a good presentation on the business of fake news, how stories spread through social media, and a variety of techniques for evaluating information.
I led an interesting discussion with students who seemed eager to contribute their thoughts. The course instructor also liked it and asked me to repeat it for other classes.
But privately I wondered, “will this have any lasting impact?”
We were respectfully discussing these issues within the laboratory context of a college classroom, but would those lessons carry over to the “real world,” when we see provocative headlines, vouched for by our families and closest friends, that speak to the core of our political beliefs and worldviews?
Techniques I covered in my class sessions included evaluating the author, the sponsoring organization, and the content itself; determining the purpose of the information, whether to educate, persuade, or sell; looking for clues in the domain name; checking whether the main claims are backed up with evidence; and so on.
Most of these criteria are similar to the CRAAP Test, a well-known framework among academic librarians, which asks students to evaluate each source’s Currency, Relevance, Authority, Accuracy, and Purpose, each with specific sub-questions.
Slight variations have also given us RADCAB, ESCAPE, and even CARS to evaluate sources for quality.
Does “Fake News” Instruction Have Any Impact Beyond the Classroom?
But whatever these specific acronyms stand for, the larger problem is the same: each criterion on its own is valid, but collectively they form a dizzying array of questions to ask of every source you encounter. As you might guess, that doesn’t actually work in practice.
Who’s going to pull up a list of 20 or more things to check when scrolling through social media on their phone?
And even if you do, these exhaustive lists of conflicting criteria overload us, causing us to take shortcuts and make even worse decisions about source quality than if we hadn’t used them at all.
So if following a checklist isn’t the answer, what is?
A big issue with the checklist approach is that it takes strategies originally developed for print books and journals and applies them directly to today’s social web, where the stakes are much higher and tech-savvy bad actors are everywhere. It follows that an evaluation strategy for fake news needs to be suited specifically to these online settings.
Back in 2017, Facebook, under fire for its role in the spread of fake news ahead of the 2016 Presidential Election, launched a media literacy campaign titled “Tips to Spot False News.”
But it, like CRAAP and other checklists, failed to get to the root of the problem in the web environment.
Mike Caulfield, head of the Digital Polarization Initiative, argued that its suggestions to read about pages and study site layouts, among other tips, are not only unhelpful, but actually make the problem worse:
This Facebook advice? It’s indistinguishable, for the most part, from what you would have told students in 1995. And beyond the ineffectiveness of it, it has potential to do real harm. It was precisely these impulses — to judge resources by look and feel and what they say about themselves — that propagandists played on so expertly in 2016.
Fact Checking Methods for Identifying Misinformation
Research from Stanford University suggests a more effective approach to verifying information online.
Their study examined the ability of three groups (history faculty, Stanford undergraduates, and professional fact checkers) to complete several tasks assessing the reliability of online sources. Because all of the subjects were “skilled” information users, the researchers hypothesized that all three groups would do well.
In reality, the fact checkers quickly and effectively distinguished the legitimate sources, while both the students and professors were tricked by misleading websites.
But beyond the surprise at how both the students and professors were so easily fooled, the larger point is how the fact checkers arrived at their conclusions.
Co-authors Sam Wineburg and Sarah McGrew explain:
Landing on an unfamiliar site, the first thing checkers did was to leave it. If undergraduates read vertically, evaluating online articles as if they were printed news stories, fact-checkers read laterally, jumping off the original page, opening up a new tab, Googling the name of the organization or its president … They don’t evaluate a site based solely on the description it provides about itself. If a site can masquerade as a nonpartisan think tank when funded by corporate interests and created by a Washington public relations firm, it can surely pull the wool over our eyes with a concocted “About” page.
Wineburg and McGrew also made a comparison to navigating in the physical world:
Dropped in the middle of a forest, hikers know they can’t divine their way out by looking at the ground. They use a compass. Similarly, fact-checkers use the vast resources of the Internet to determine where information is coming from before they read it.
The professors and students in the study were, in a sense, unwitting casualties of the widespread failures of the checklist approach: they were duped by criteria that bad actors can easily fake, “such as official-looking logos and domain names.”
One of the tasks in the study was to determine the credibility of two pediatrics websites.
Over half of the students rated an article from the American College of Pediatricians, a fringe organization “that ties homosexuality to pedophilia and which the Southern Poverty Law Center labeled a hate group,” as more reliable than an article from the American Academy of Pediatrics, which has 66,000 members and publishes the flagship journal Pediatrics.
But, crucially, “even students who preferred the entry from the American Academy of Pediatrics never uncovered the differences between the two groups,” the researchers continued.
“Instead, they saw the two organizations as equivalent and focused their evaluations on surface features of the websites. As one student put it: ‘They seemed equally reliable to me. … They are both from academies or institutions that deal with this stuff every day.'”
Caulfield’s work not only advocates for this idea of lateral reading, but also demonstrates that it is much faster than checklist approaches.
He illustrates how a laundry list of criteria like CRAAP fails in the real world, echoing the findings of the Stanford study, with potentially dangerous consequences:
When you push students to apply it in a real world situation they get overloaded and apply it mechanically in these reductive ways. So you give students a link to a natural healing center proposing to cure your cancer with baking soda IVs instead of drugs. And you ask hey what do we think about this? Well, it’s a dot org, they say, which is good. The person writing it has an NMD, that’s probably a doctor’s degree, that’s good. It’s a medical center, that’s good. Now if you do a thirty second search on this center you’ll find that the center has been criticized directly by multiple academics as being quackery, that the American Cancer Society has debunked the treatment, that the inventor of the treatment was sentenced to five years for manslaughter after a patient died in his care.
Caulfield has distilled this approach into an actionable teaching plan, which he refers to as SIFT (Stop, Investigate the source, Find better coverage, and Trace claims and media to the original context).
At first glance that may sound an awful lot like a new acronym in the style of CRAAP. But the important difference is that it is a short list of things to do, not a long list of criteria to look for and weigh against each other.
Applying Lateral Reading to the Misinformation Classroom
Armed with these findings, I knew that my own “fake news” instruction needed some tweaks.
I didn’t think “lateral reading” was a phrase that would really resonate with my students, so instead I encouraged them to “get off the page” to verify a claim or source.
I incorporated Caulfield’s SIFT methodology and presented its steps as habits, or “shortcuts to credibility,” that save time as well as sharpen evaluative skills.
Even in my original class sessions I did advocate Googling authors and verifying organizations on Wikipedia, but I interspersed that advice with so many other tips and criteria that it was never the emphasis.
By incorporating Caulfield’s work as well as the findings of the Stanford study, I’m much more confident that my students are now taking away something that will actually stick in the “real world.”
Of course, the problem of “fake news” also runs deeper than individual sources that pop up and create chaos.
To give one example, bad actors take advantage of what researchers term a “data void” and encourage people to search for very specific loaded keywords used only by conspiracy theorists, white supremacists, or whatever group is peddling a specific brand of misinformation.
The danger here is that the messaging appeals to critical thinking by encouraging searchers to draw their own conclusions, even though the entire set of results is stacked with the same kind of garbage.
Fortunately, the habits of going off the page and cross-checking information against other sources are general heuristics that can be applied to new contexts. We just need to make sure our specific approaches to instruction stay flexible enough to keep up with the purveyors of fake news, who are constantly innovating.