
Preparing College Students for a Digital Age: A Survey of Instructional Approaches to Spotting Misinformation

Misinformation has become a regular feature of the Internet. Research suggests that everyone, including young people who have grown up with digital devices, struggles to differentiate fact from fiction online because they read closely rather than turning to external sources. We analyzed the resources students find when they seek advice from college or university websites on evaluating the credibility of online information. In a random sample of 50 universities, nearly all institutions advised students to engage in close reading to determine credibility. We conclude by recommending that institutions overhaul how they teach students to evaluate online sources.

Introduction

Today’s college students are often referred to as digital natives: their fluency in operating devices is also assumed to imply fluency in sorting through the information these devices provide.1 The truth is more complicated.2 Studies have shown that college students struggle to search for and evaluate the credibility of online information. In a study of 1,060 first-year college students, Hargittai demonstrated that digitally wired students are less than digitally savvy.3 They use the order of search results to determine trustworthiness, unaware that Google’s algorithm does not always elevate credible sources to the top of the Search Engine Results Page (SERP).4

After selecting a website, college students are often unable to effectively evaluate it. When assessing credibility, they rarely consider the source of the website or scrutinize the author’s credentials.5 Students typically rely on heuristics such as site design and relevance to search needs to decide whether to trust a website.6 A study of 7,804 middle school, high school, and college students showed that they evaluate websites using superficial features such as site design, logos, a dot-org top-level domain, and whether a website has references—even if those references are to sources that do not support the claims being made.7

Colleges and universities are designed, in part, to help students meet the challenges they will encounter beyond graduation. The internet’s centrality in modern life has added a new role for colleges and universities: providing students with the tools needed to navigate the web safely and reach sound decisions. With this in mind, we set out to examine the instructional resources students find when seeking guidance from their college and university websites on how to evaluate online information.

Conceptual Background

Most web users, including college students, employ heuristics to assess a website’s credibility.8 Two early theories suggested that the prominence of information on a website is the primary factor in a user’s evaluation. Information-foraging theory argued that internet users choose information based on what they notice and its relevance to their search.9 B.J. Fogg’s prominence-interpretation theory further posited that prominence, defined as the likelihood that information on a website will be observed by users, directly affected how people judged that information.10

People mainly employ surface features such as length, references, and writing style to quickly evaluate whether a website is professional.11 In typical web evaluations, users remain on the website they are investigating to determine credibility and often rely on their background knowledge to assess whether a website should be trusted.12 In sum, people often trust their ability to spot misleading information through a website’s surface-level features and their preexisting knowledge. They also evaluate credibility based on the relevance of a given site to their search needs.

Popular news and media literacy approaches designed to support students in becoming better fact-checkers are consistent with the strategies that college students use. One such digital literacy approach roots itself in propaganda inoculation developed in response to print texts. It asks readers to analyze a page’s content to determine the author’s purpose and biases.13 This kind of close reading carries over to how people currently approach web evaluation.14

The Checklist Approach

College librarians have employed numerous checklists designed to help students evaluate content on the internet. These resources often incorporate advice originally conceived for print sources. The CRAAP test, an acronym that stands for Currency, Relevance, Authority, Accuracy, and Purpose, was developed at California State University, Chico, and has been adopted by librarians across the country.15 Mike Caulfield, a research scientist at the University of Washington, traces the ubiquitous CRAAP checklist to 1978, when it was initially developed as a tool for selecting library materials.16

Checklists largely focus on a website’s internal features. These include the presence or absence of a contact person, whether a website has references with working links, and the grammatical correctness of a website, among other criteria. The underlying assumption checklists share is that students can judge credibility by carefully inspecting the target site they are investigating.17 Checklist approaches ask students to determine whether a website is trustworthy after they have spent considerable time on it.

Signaling theory describes how signals mediate a relationship between signaler and receiver: “Signals are any observable features of an agent which are intentionally displayed for the purpose of raising the probability the receiver assigns to a certain state of affairs.”18 In other words, a signaler intentionally presents a signal to increase the likelihood that the receiver of that signal will act in a certain way. On the internet, the signaler is the website creator and the receiver is the user. The internet user benefits if and only if they find credible information. The website creator benefits by gaining support, votes, adherence, or (in the case of Russian disinformation) confusion. Thus, website creators have an incentive to signal to users that their website is credible, whether or not this is the case.

In the early days of the internet, signals such as banner ads, misspellings, and amateurish graphics indicated unprofessionalism and cast doubt on a website’s reliability. Lower barriers to the production of information have democratized the internet and empowered marginalized voices. But they have also made it easier to spread misleading information. With little effort, website creators can intentionally infuse weak signals of credibility to increase the likelihood that a user will spend more time on their website and trust it.19

The website of the Employment Policies Institute, or EPI (epionline.org), illustrates the ease with which signalers can manipulate weak signals to deceive users. The website is designed to seem professional and unbiased: it has a dot-org domain, a heading that supposedly answers research questions with evidence, and an About page that describes the organization’s managing director as an esteemed researcher who worked for the Bureau of Labor Statistics and has been published by Forbes and The Washington Post. In reality, EPI is funded by the restaurant industry. It offers misleading information about the risks of raising the minimum wage. But in one study, 90 percent of students were unable to identify EPI’s source of funding and why it might be problematic, even though such information could be found with a quick Google search.20 The students who struggled were the ones who stayed on the webpage and evaluated its internal signals carefully. Meola argues that this kind of approach, which he identifies as common to checklists, “rests on faulty assumptions about the nature of information available through the Web.”21

The Networked Approach

The networked approach to determining a website’s credibility begins with different assumptions from the checklist approach. The checklist approach is in many ways a carryover from traditional analog-based vetting of texts, designed when sources were scarce, and therefore each had to be carefully mined and checked.22 In contrast, the networked approach was designed in our current age of the internet: Sources today are abundant and, in many cases, overabundant. Each source is part of a network of information. To understand a single node in the network, one must place it in the context of other networked sources. On the internet, an individual website—a node—is best understood in relation to what other internet sources have to say about it. To uncover the connection one node has to others on the web, a user enters keywords from the website, such as the name of its sponsor, into a search engine. The resulting SERP reveals the node in context: how other nodes relate to it and, thus, how it can be best understood. Therefore, the networked approach harnesses the power of the web to help internet users evaluate the credibility of a given website.

The Checklist and Networked Approaches Compared

Checklist and networked approaches mainly differ in how they approach the moment when a web user decides whether to engage with a website. The networked approach separates assessments of credibility into two decisions. First, is the website worth further examination? Second, if so, how should one interpret the information on the site? The networked approach recognizes that it is not worthwhile, and in fact actively harmful, to engage with information prior to determining that a site is worthy of further examination. For example, spending time on a misleading website may result in indoctrination into conspiracy theories.23 On the other hand, the checklist approach treats credibility assessment as a continuous process: One determines trustworthiness through close reading rather than making an intentional choice about whether such attention is warranted.

Simon argues that “a wealth of information creates a poverty of attention” and necessitates decisions about how to allocate that attention among numerous sources.24 Such decisions are especially important in today’s saturated information environment. Kozyreva et al. say that, “to manage information overload, one must ignore a large amount of incoming material and separate useful information from noise, false news, or harmful advice.”25 Therefore, in the context of the modern-day internet, a user makes a critical decision when they determine whether a website is sufficiently trustworthy to merit further consideration. A networked approach focuses on this moment of engagement.

Research has shown that a networked approach leads to substantially different web evaluation strategies and outcomes. Wineburg and McGrew found that professional fact-checkers unanimously came to the right answer on tasks with which students struggled.26 What did they do differently? They turned to the broader web. Instead of remaining on the website they were investigating for a prolonged period of time, fact-checkers opened up new tabs at the moment of engagement to determine whether the original site should be trusted. Only after determining the credibility of a website did fact-checkers return to the original site to glean information from it, a strategy called “lateral reading.” Educational interventions teaching students to read laterally have yielded positive results.27

To illustrate, the background of the Employment Policies Institute (EPI) can be ascertained quickly through the networked approach. Using lateral reading, a web user would open up a new tab and search “Employment Policies Institute.” Skipping the first link on the SERP, which is often to the organization being investigated, one would find multiple sources flagging EPI’s bias as a front group for the restaurant lobby, a group with a vested interest in keeping the minimum wage low. In other words, by harnessing the power of the internet, the user can map the way this particular node—epionline.org—connects to many other nodes, thus revealing its character. Checklists, on the other hand, prompt the user to undertake a careful examination of the website to determine credibility. As mentioned above, students who carefully examined EPI were also the ones who came to the wrong conclusion.

Among advice offered by librarians, checklists appear more frequently than suggestions to read laterally.28 Lim’s recent investigation found that, in a largely purposive sample of academic library guides, checklists were the most common tool librarians used to address fake news.29 The ubiquity of checklists, along with the increasing evidence that a networked approach is more effective in an era of information overabundance, prompted us to examine the prevalence of checklist versus networked approaches when students search for advice from their institution on how to evaluate online sources.

Lim distinguishes checklists by their purpose, such as evaluating academic resources versus evaluating news sources. We classify guides by their general approach to initial assessments of credibility on the web. Thus, we focus on the distinction between internal evaluation of a website’s signals and external evaluation that situates a website in a broader matrix of information. As suggested earlier, recent studies point to internal versus external evaluation as determinative of student success in evaluating online information. Students who stay on a webpage struggle.30 Students who leave that webpage, open a new tab, and see what other, credible sources have to say arrive at better answers in a fraction of the time. Our study, therefore, prioritizes process-oriented aspects of information literacy such as turning toward external sources prior to close reading.

Having tools to interpret information, such as data analysis skills or an ability to spot bias, is critical. But it is important to know when and where to apply those tools. Just as one must choose a restaurant to dine at before the skill of using utensils becomes relevant, so too must one choose a source of information to consume before interpretive information literacy becomes valuable. We are concerned with this initial choice of consumption, that is, whether a source of information deserves further interpretation. Wherever we use the phrase “evaluate content,” we specifically refer to this kind of initial determination of credibility rather than the more comprehensive view of credibility, which includes information interpretation and deeper analysis. The process of turning to external sources to evaluate credibility is consistent across social media, traditional webpages, and more. We therefore treat these subcategories as united under the umbrella of “online information.” For the purposes of this paper, online information can be thought of as any piece of information, in any medium, that a student encounters on the internet and whose veracity the student is unsure of.

Overall, given what the research suggests about the difference in effectiveness of networked and checklist approaches, we asked the following research question: When students try to find advice from their academic institution on how to approach information on the internet, to what extent do they find networked versus checklist approaches? We then analyzed the distribution of networked versus checklist approaches in light of universities’ role in preparing students for an increasingly digital age.

Methodology

Sampling Strategy

This study sampled the websites of 50 leading colleges and universities in the United States, equally dividing our sample between 25 private and 25 public institutions. We restricted the sample to public student-facing resources, excluding advice about web credibility specifically aimed at college instructors.

We included only institutions that provided web credibility advice, easily accessible through Google, on a dedicated library page, in a general university guide, or integrated into a course guide. In setting these criteria, the guiding principle was the visibility of the resources to students and the relevance of the resources to web credibility. Most of the sample consists of libraries’ websites. However, we did not preclude other sources of advice because our aim was to examine the prevalence of networked versus checklist approaches among institutions rather than solely among libraries. Harvard University, for example, was included in the sample even though the guide came from the college’s writing program. We made this choice because this advice was the most visible resource students would find when searching for guidance on the open web. Overall, our sample indicates that librarians are the ones who most frequently provide advice on evaluating information. However, this was not exclusively the case.

Generating the Sample

We generated the sample by copying into an Excel spreadsheet the names of the top 100 ranked private and top 100 ranked public universities from the Times Higher Education/Wall Street Journal (2019) rankings.31 We applied a randomization algorithm to choose data points from the list and repeated the process until we had 25 unique private and 25 unique public institutions (see table 1 for the final list of institutions and the appendix for links to their resources). Institutions were excluded if we could not find information they provided specifically for evaluating internet sources. For example, while Williams College offered an “Evaluating Sources Page” with advice on “what to think about when assessing your sources,” there was no indication that this guide applied to evaluating web sources.32 It was thus excluded.
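As a rough illustration of the randomized, stratified selection described above (and not the authors’ actual spreadsheet procedure), the sketch below draws 25 unique institutions per stratum. The ranking lists, the seed, and the meets_inclusion_criteria check are placeholders for the manual steps described in this section.

```python
import random

def draw_sample(ranked_institutions, meets_inclusion_criteria, k=25, seed=0):
    """Randomly select k unique institutions that pass the inclusion check.

    ranked_institutions stands in for a top-100 list copied from the
    Times Higher Education/WSJ rankings; meets_inclusion_criteria stands in
    for the manual check that an institution offers easily found,
    student-facing advice on evaluating internet sources.
    """
    rng = random.Random(seed)   # fixed seed only so the sketch is reproducible
    pool = list(ranked_institutions)
    rng.shuffle(pool)           # random order; each institution drawn at most once
    sample = []
    for institution in pool:
        if meets_inclusion_criteria(institution):
            sample.append(institution)
        if len(sample) == k:
            break
    return sample

# Hypothetical usage: one draw per stratum (25 private, 25 public).
# private_sample = draw_sample(private_top_100, has_visible_guide)
# public_sample = draw_sample(public_top_100, has_visible_guide)
```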

TABLE 1

Institutions Included in Final Data Sample

Private Colleges | Public Colleges
Yale University | Rutgers University
Carleton College | Stony Brook University
Brandeis University | Pennsylvania State University
Washington University in St. Louis | Stockton University
University of Richmond | Binghamton University, State University of New York
Boston College | University of Delaware
Northwestern University | Virginia Commonwealth University
Cornell University | Indiana University (Bloomington)
Drexel University | The College of New Jersey
Dickinson College | University of Colorado Denver
Stanford University | University of Texas at Austin
Wesleyan University | University of Washington—Bothell
University of Notre Dame | Rowan University
Creighton University | University of Pittsburgh
University of Denver | University of Wisconsin
Grinnell College | San Diego State University
Middlebury College | Miami University
Bucknell University | United States Military Academy (West Point)
Massachusetts Institute of Technology | Temple University
Saint Louis University | George Mason University
Hamilton College | University of California, San Diego
Duke University | Oregon State University
Santa Clara University | University of Tennessee
Princeton University | University of California Santa Barbara
Harvard University | University of Cincinnati

We used a large random sample to gain a broader picture of the advice students find from their academic institution when they seek guidance for how to evaluate online information. The size of our sample (n = 50) and our method of random sampling make it more likely that our results are representative and free from third-variable influence than those of smaller, mostly purposive samples. In addition, we focus on the kind of advice students most easily find from their institution rather than trying to catalogue the entirety of a university’s resources.

We used multiple strategies to find what information an institution provided students about web credibility. We first searched the name of the institution with the key phrase “source evaluation.” Often these keywords returned relevant search results with links from the given college or university about online source evaluation. However, when there were no relevant search results, we varied terms as needed, replacing “source evaluation” with “fake news,” “how to evaluate sources,” or “source credibility.” In the cases where keyword manipulation still did not lead us to relevant resources, we went directly to the university’s library website and navigated within the site itself. Of the 50 resources in the sample, 43 were general guides provided by libraries, four were integrated into course guides, and three were published by an institution’s English or composition department.
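The keyword-variation strategy above amounts to a simple fallback loop. The sketch below is illustrative only: web_search is a stand-in for the manual Google searches we performed, and the fallback terms are the ones named in the text.

```python
FALLBACK_TERMS = [
    "source evaluation",        # key phrase tried first
    "fake news",                # substitutes tried when no relevant results appeared
    "how to evaluate sources",
    "source credibility",
]

def find_guide(institution_name, web_search):
    """Return the first relevant result for an institution, varying terms as needed."""
    for term in FALLBACK_TERMS:
        results = web_search(f"{institution_name} {term}")
        if results:             # a relevant hit pointing to the institution's advice
            return results[0]
    return None                 # fall back to navigating the library website directly
```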

Our focus on student-facing guides meant we did not reach out to any institution to request resources. It is possible that, in at least some cases, we did not find what an institution would describe as its best or what is objectively the most recent advice it gives on web credibility. However, resources that are not easily surfaced via Google or the institution’s web page are also less likely to be seen and used by students: Research shows that internet users tend to look at the first link on the Google SERP.33 We mainly conceived of visibility as the highest link on the SERP that provided an institution’s advice on evaluating online sources. When we navigated within an institution’s web page to find a guide, visibility meant choosing the highest relevant link from an internal site search or accessing resources that were prominently displayed on the library homepage. Our prioritization of visibility and accessibility to students in such cases does not discount the potentially great resources offered by institutions elsewhere. Rather, it recognizes that students cannot be properly guided by advice that they cannot easily find.

Coding Scheme and Reliability Testing

After an initial survey of the institutions, we developed a coding scheme that focused on internal versus external evaluation via an adaptation of open coding.34 Any kind of advice that directed students to look at a website’s internal features prior to external examination qualified as “internal evaluation.” This included, but was not limited to, advice to evaluate a website’s design, domain, About page, or links and references. External evaluation was any kind of advice that directed students to leave the website they were evaluating to ascertain its credibility. This included, but was not limited to, advice to see what other sources have to say about the organization being investigated, advice to investigate the author’s reputation, and advice to search for more information on the specific claims being made. We also coded for common resources. All institutions were evaluated between May 2019 and April 2020. As such, our study offers a snapshot of the resources students would find in this period. Backups of institutions’ advice can be found through the Internet Archive or via screenshots taken by the authors.

In the process of developing a coding scheme, two coders completed two practice rounds of coding to test reliability, sharpen coding criteria, and discuss borderline cases. One round involved five institutions from within the sample of 50. Another round involved five institutions that were chosen randomly and not included in the final sample of the study. Following these two practice rounds, the two coders independently evaluated 20 percent (n = 10) of the sample for a formal reliability test, reaching 100 percent agreement on characterizing the type of web credibility advice and its consistency or inconsistency.
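For reference, the reliability figure reported above is simple percent agreement, the share of items on which the two coders assigned the same label. The sketch below illustrates the calculation on made-up labels; it is not the study’s data or coding software.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same label."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical labels for the 20% reliability subsample (n = 10 institutions).
coder_a = ["checklist", "inconsistent", "networked", "checklist", "inconsistent",
           "checklist", "inconsistent", "checklist", "inconsistent", "checklist"]
coder_b = list(coder_a)   # identical labels reproduce the reported 100% agreement

print(percent_agreement(coder_a, coder_b))   # 1.0
```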

Results

Nearly every institution (48/50, or 96%) featured checklist approaches either on their landing page or in links to other sites. Checklist approaches shared a common orientation toward the nature of online information and web credibility, namely an emphasis on internal evaluation of a website’s signals. However, they differed in the amount and types of resources offered.

Our coding scheme focused on the extent to which college and university resources in our sample offered networked and checklist advice for how to initially approach an unfamiliar site. Institutions that featured solely checklist or networked approaches for the moment of engagement were in the minority. Most colleges and universities featured a combination of both approaches. When both networked and checklist advice was present, we examined whether institutions differentiated when to employ which approach.

TABLE 2

Summary of Results Comparing Networked and Checklist Approaches

Category | Percentage of Sample within Category
Consistent Checklist | 40%
Inconsistent (Checklist and Networked) | 56%
Consistent Networked Approach | 4%

Consistent Checklist Approach

Forty percent of college and university websites only provided students with checklist strategies to determine a website’s credibility.

For example, Northwestern University’s library website presented students with two checklists to use in evaluating sources. The first checklist, ACT UP (Authority, Currency, Truth, Unbiased, Privilege), offered 15 questions a student should consider. These ranged from “who (person, organization, company) created the source?” to “does the point of view appear objective or biased?” and “is there a bibliography?”35 Northwestern stated that these criteria “work for all formats,” including books, websites, articles, and more.

Yale University also adopted a checklist approach. Their checklist, drawing on content from the University of Maryland and University of Dallas, had students check off the domain of the website they were investigating, such as dot-com (a company), dot-edu (academic institution), and dot-org (nonprofit organization). Other criteria directed students to the site’s design, the organization of the webpage’s features, the frequency of updates, and whether the site provides “any contact information or means of communicating with the author or webmaster.” Yale did not provide a rubric to translate the above features into a final credibility assessment.36

Consistent checklist institutions generally adopted an approach similar to Northwestern’s and Yale’s. While the precise wording of the questions might differ, checklists across institutions emphasized on-the-page evaluation.

Dangers of the Internet

Besides offering checklists for students to evaluate the credibility of information online, some institutions emphasized the dangers of the internet. Harvard University, in a section titled “What’s Wrong with Wikipedia?” urged students to be leery of the free encyclopedia because “information on Wikipedia is contributed by anyone who wants to post material” and noted that, instead, “Harvard librarians can point you to specialized encyclopedias in different fields.”37

Yale, too, exhorted students to use databases and print resources. At the top of their guide on web credibility, Yale contrasted library databases with the open web, pushing students to rely on the former. Yale suggested that using the internet to search was more trouble than it was worth, leaving a student with “lots of junk to wade through.” A database, however, would give students prevetted results, a valuable resource that was available free of charge for the remainder of their time at the institution.38

West Point contrasted how print sources “go through an extensive publication process that includes editing and article review,” while online, “anyone with a computer and access to the Internet can publish a Web site or electronic document.” West Point’s “Online Sources” tab did not include specific strategies for determining a particular website’s credibility in the kind of information landscape that they warn about.39 This is similar to Northwestern’s ACT UP checklist, which advised students to consider “How accurate is the information?” but did not explain to students how to make this determination on the Web.40

Relevance vs. Reliability

Finally, some checklists, such as Northwestern’s version of CRAAP, suggested that the relevance of information to a student’s research project is a key consideration in determining credibility. Similarly, Wesleyan University provided a list of 29 bulleted questions to help students “evaluate how relevant and reliable [their sources] are.”41 Wesleyan was coded as inconsistent because it offered some networked advice, but the conflation of relevance and reliability fits under the umbrella of the checklist approach.

Inconsistent Approach

The majority of institutions in our sample (56%) provided students with a combination of networked and checklist approaches. Some colleges and universities presented both approaches on their landing pages or within a checklist. For others, the inconsistency was the result of mixed messages provided by advice on their landing page and the resources to which they linked.

Several institutions presented conflicting approaches on the same page. The University of Texas at Austin’s landing page explained lateral reading and reminded students that “sometimes you can find out more about a website by leaving the site itself” and that “just because a website looks credible doesn’t mean that it is.”42 The librarian provided students with the key points from Wineburg and McGrew’s scholarly article.43 However, below the section on lateral reading, UT Austin’s page offered “Evaluation Criteria” containing the CRAAP test.

Some institutions embedded networked advice within their checklists. Wesleyan University, for example, presented students with a checklist similar to the CRAAP test.44 The checklist told students to examine the URL of the page to identify the type of site and thereby make an inference as to its credibility. At the same time, it included two questions that prompted students to leave a website to determine its trustworthiness. Wesleyan’s section “Who is the author?” included the following prompt: “For more information on an author, ask your professor, do an Internet search, or look in the database Contemporary Authors or some other biographical reference source.” This prompt was embedded within a checklist, but it incorporated a networked orientation to the Web. That said, networked prompts represented only two of the 29 questions in the checklist.

Colleges and universities that were consistent in offering a checklist or a networked approach on their landing page often linked to strategies inconsistent with their chosen approach. The College of New Jersey’s landing page coached students to “look at the top-level domain” and the author/About Page to determine a website’s credibility.45 However, they also linked to a resource from the International Federation of Library Associations (IFLA) that presented a networked approach to web evaluation (see “Other Resources” section). The graphic directed students to navigate away from the initial website to “investigate the site, its mission, and its contact info” as well as to determine the author’s credentials.

Another institution, Middlebury College, featured Caulfield’s SIFT technique (Stop, Investigate the Source, Find better coverage, Trace claims, quotes, media to original context), a networked approach, at the top of its “Techniques for Evaluating the Web” landing page.46 But there was also a link to the checklist-style CRAAP Test from CSU Chico. Colleges and universities in this category did not offer guidance on when to use SIFT or lateral reading versus when to use CRAAP.

Consistent Networked Approach

A networked approach gave students strategies on how to leverage the broader web to evaluate credibility. For an institution to be considered consistently networked, it had to provide exclusively networked advice on its landing page and in links to other institutions. Among the sample of 50, only two, Rowan University and the University of Tennessee, Knoxville, offered students consistently networked advice.

Drawing on SIFT and lateral reading, Rowan’s guide, created by Andrea Baer and Dan Kipnis, helped students determine if a source was worth their time before they read it carefully.47 Rowan positioned lateral reading as the necessary precursor to the close, interpretive reading that forms the bulk of traditional information literacy advice. It advised students to investigate an unknown site by checking for previous work, finding the original source of information, reading laterally, and circling back. After giving advice on how to determine if a site is worth their time, Rowan offered students tools to engage with the information that the site provides. Rowan differentiated between the information literacy strategies to use when landing on an unfamiliar site and the ones to use after that site’s credibility has been determined externally. UT Knoxville qualified for a consistently networked approach by embedding the IFLA Infographic on its source evaluation page.48

Most Used Resources

The diversity and breadth of resources that institutions incorporated demonstrate a couple of facts: 1) universities compiled a mixed bag of resources, both checklist and networked, often without differentiating when to use which; and 2) these resources were often dated.

Mixed Bag

A total of 18 percent of institutions linked to or embedded the IFLA graphic, which offers networked advice and is one-third as long as CSU Chico’s CRAAP test. However, the context in which this graphic was presented mattered. For example, some institutions, such as the University of Tennessee, Knoxville, offered IFLA as the primary source of advice for students. Others, such as The College of New Jersey, offered IFLA alongside numerous other sources of information.49 Nearly one-third (32%) of institutions linked to fact-checking resources such as PolitiFact or Snopes. Rarely, however, were these resources prioritized. For example, Cornell University included links to “Four Reliable News Fact-Checking Sites” under a tab on how to “Be an Active News User.” However, this was one of 16 tabs on the landing page, each of which contained multiple links.50 While CRAAP checklists and the IFLA infographic were often central to a college or university’s instructional approach, fact-checking resources were frequently supplemental.

Outdated Resources

Only 14 percent of institutions in our sample linked to the CRAAP test developed by a librarian at Meriam Library at Cal State University, Chico. CRAAP and CSU Chico’s influence are likely greater than our sample revealed. Many colleges and universities featured near-identical checklists without direct attribution. Stockton University, for example, had a modified version of the CRAAP Test without any citations to CSU Chico.51 Other institutions whose resources were frequently linked to include Cornell, Berkeley, and Stony Brook.

This happened with resources other than CRAAP as well. Colleges and universities often embedded sources from other organizations or institutions onto their landing pages, sometimes without attribution. For example, Stony Brook University and the University of Delaware featured an identical screenshot of a webpage with tips on source evaluation that was developed by Indiana University East. But only the University of Delaware had a citation. Because citations are often absent, the influence of all of the resources in this section may be greater than the percentages reported in our sample suggest.

Eight percent of our sample included links to or citations of dated research articles from the internet’s early days that reinforce a checklist approach. A Princeton University guide to web credibility, for example, cited a 1998 article by Jim Kapoun entitled “Teaching Undergrads WEB Evaluation: A Guide for Library Instruction” as the basis for its suggestions.52

Sixteen percent of institutions linked to Melissa Zimdars’ “False, Misleading, Clickbait-y, and Satirical ‘News’ Sources,” an effort to develop lists of sites that offer poor-quality information.53

Zimdars, a professor of communication and media at Merrimack College, encouraged students to check the URL of the site they are investigating against an extensive—though not necessarily exhaustive—list of fake news sites. She broke down fake news into several categories and gave a table of more than 100 sites of which students should be wary. The list was last updated in 2016, and several of the sites it names, such as “abcnews.com.co” or “70news.wordpress.com,” are no longer active, which Zimdars acknowledges.

In several cases, such as Binghamton University, Oregon State University, and Hamilton College, links to other resources were broken.54

TABLE 3

Summary of Most Used Resources

Name of Resource | Percent of Sample that Links to Resource
IFLA Infographic | 18%
Fact-Checking Websites | 32%
CRAAP Test from Meriam Library at Cal State University, Chico | 14%
Dated Research Articles | 8%
Melissa Zimdars’ “False, Misleading, Clickbait-y, and Satirical ‘News’ Sources” | 16%

Discussion

The internet is an indispensable feature of college life, but ample research shows that many college students need help distinguishing quality information from dubious information.55 Our results indicate that the web credibility advice students find often does not reflect emerging best practices of turning to external sources before engaging in close reading.

Most institutions in our sample prompted students to determine credibility by evaluating a website internally. However, internal signals such as website domain, contact information, or design can be manipulated. Advice to read a website closely reflects longstanding approaches to evaluating print sources. While many institutions acknowledged the unique dangers of the internet, not all of them offered students specific strategies for navigating those dangers.

Overall, our research suggests that colleges and universities need to do more to help students learn how to evaluate the credibility of online information. In particular, advice about web evaluation should differentiate between initial assessments of credibility via external sources and subsequent close reading. Institutions must ensure the resources they create for students communicate when each approach is warranted. More than half (56%) of institutions included networked approaches. But only 4 percent advised students to look externally before close reading.

Internet-Specific Advice

Colleges and universities frequently tried to tailor their advice to unique aspects of the internet. For example, several institutions offered students guidance on the use of Wikipedia. The University of Colorado Denver told students that “there’s no ranking system which lets certain authors edit some pages and not others.”56 In fact, Wikipedia maintains different kinds of protected pages: those that are most trafficked and most subject to vandalism can be changed only by the highest-ranking Wikipedia editors.57 Professional fact-checkers frequently turn to Wikipedia as a resource to determine the credibility of a particular website or organization.

Students are also taught to place trust in dot-org websites. Many institutions suggested students should pay attention to a site’s domain. Some, including Harvard University and Yale University, said dot-orgs are nonprofits, which could have the unintended consequence of making students think they are trustworthy. In reality, anyone can acquire a dot-org domain; indeed, 49 percent of hate groups have one. Nor does nonprofit status guarantee that an organization provides credible information.58

Students who internalize the idea that dot-orgs can be trusted on faith make mistakes with serious consequences. A recent study asked a nationally representative sample of 3,446 high school students to evaluate co2science.org, a site that denies human-induced climate change and is funded by the fossil fuel industry. Nearly all (96%) of them failed to uncover the site’s ties to its corporate sponsors.59 In many cases, the power of the dot-org domain swayed their decisions. “This page is a reliable source to obtain information from, you see in the URL that it ends in .org as opposed to .com,” one student wrote.60 Other signals that influenced students’ decisions were a site’s design and graphics, the presence or absence of contact information, and the accuracy of spelling and grammar. In 2021, however, these signals are easily manipulated. Anyone can create a professional-looking website that is easy to use and features contact information. Therefore, teaching these signals as indicators of credibility does not help students make good decisions on the internet.

Dated Information

Research has shown that the web demands a new kind of reading that prioritizes external verification over the internal close reading employed when evaluating traditional print sources. Institutions in our sample did not always convey this new approach. For example, one of the pieces of advice in an extensive guide offered by Massachusetts Institute of Technology was that evaluating the credibility of information online requires close reading skills: “While you may not feel qualified to judge research in areas that are unfamiliar to you, evaluating information involves little more than being critical of what you read and using a little common sense.”61 However, even very smart people can be fooled by dwelling too long on an unfamiliar website.62 Asked to determine which of two sites gave better advice on adolescent bullying, the nationally recognized American Academy of Pediatrics or the fringe, anti-LGBT American College of Pediatricians, 64 percent of Stanford undergraduates thought the College gave better advice. Fully 40 percent of PhD historians, drawn from five different institutions, equivocated when trying to make a determination. These intelligent people didn’t struggle because they failed to read closely. They struggled because they did.63

Professional fact-checkers, however, unanimously identified the American Academy of Pediatrics as credible and the American College of Pediatricians as suspicious. Unlike the historians or students, professional fact-checkers turned to the network, leaving the organizations’ landing pages and opening new tabs across the horizontal axis of their browser window to see what other trustworthy sites had to say about each group. They leveraged Wikipedia as a resource. And they did not automatically click on the first link on the Google SERP, instead engaging in “click restraint” and making an intentional and intelligent choice about which resources to open first.64

Networked interventions that taught Caulfield’s Four Steps, click restraint, and lateral reading, and that encouraged the use of Wikipedia, yielded substantial improvements in students’ ability to evaluate the credibility of information online. An experimental curriculum run by the Digital Polarization Initiative resulted in “significant gains in [the use of] fact-checking strategies, including greater use of Wikipedia to verify information” compared to a control group.65 A study that the Stanford History Education Group ran at San Jose State University, comparing students who were taught a networked approach to a control group, achieved similar gains.66 In our review of existing research, we were unable to surface any comparable interventions that improved students’ online reasoning based on the CRAAP test and other checklist-based approaches.67

Instructional Design, the Problem with Checklists, and Librarians’ Role

We found that the majority of colleges and universities (56%) combined checklist approaches with networked ones without saying when to use which. When networked advice is presented alongside close reading strategies, it becomes difficult for a student unfamiliar with best practices to know the appropriate time to employ a given approach. The same is true when resources such as fact-checking websites are included among dozens of other links. Instructional design should help students understand the purpose and limitations of the different resources colleges and universities provide.

Limitations and Areas for Future Research

Our study has limitations that constrain the scope of the conclusions drawn. It consisted exclusively of institutions within the United States drawn from the Times Higher Education/Wall Street Journal (2019) rankings of top private and public institutions. Without international comparisons or a broader sample, it is not possible to conclusively determine whether the same trends apply to colleges and universities writ large. Nor do we have data on the usage of these web-based materials or the extent to which college students internalize advice if and when they interact with them. Finally, by limiting our study to student-facing resources available on the open web, we leave out other curriculum interventions students may receive on evaluating the credibility of information on the internet, as well as resources that are not easily found through a Google search.

Opportunities for future studies include researching how professionals make decisions about what to include in guides for students, in particular how they adapt to evolving best practices and update their guides over time. In addition, it would be worthwhile to observe how students interact with these online resources, seeing what they pay attention to in the hopes of clarifying how updated content can be combined with effective instructional design to produce useful guides. Finally, because institutions periodically update the advice students find, it may be valuable to repeat a similar study in the future to identify trends in the data.

Conclusion

By virtue of their inclusion in our sample, all of the colleges and universities we studied made some attempt at preparing students to sort fact from fiction online. But our results suggest that the status quo of web credibility instruction needs to be reimagined.

The internet is where students turn for the information they use to make personal, familial, and political decisions. Their ability to evaluate credibility on the web should therefore be a priority—especially as a global pandemic forced an even stronger pivot toward technology in every aspect of students’ lives.

It is encouraging that a majority of institutions featured some sort of networked advice, even if that advice was presented in conjunction with checklist-style approaches. Librarians, teaching faculty, and every member of the university community need to collaborate to ensure that the next generation of leaders have the tools they need to be effective consumers of online information. To this end, there are several immediate steps that institutions and librarians might take to help students better discern fact from fiction online.

First, institutions should remove advice that is either incorrect or no longer applies to the internet of 2022. Suggesting that a dot-org domain indicates social good is not sound advice. Nor is it wise to examine a site’s design in an age when it is easy to produce a good-looking website. These kinds of directives must be removed to avoid misleading students.

Second, institutions should follow the example of Rowan University and sequence networked and checklist approaches. Institutions should make clear that initial, external evaluation of a website’s credibility must precede internal, close-reading evaluation. Both approaches—external evaluation and close reading—are important. But they are only effective when properly sequenced.

Lateral reading, the key mechanism of external evaluation, is an effective and flexible heuristic. It does, however, presume that when students conduct an internet search, they know which sources they can trust to triangulate. Therefore, preparing students for a digital age will require instruction on what makes a source credible.

There may also be room for pedagogical experimentation by librarians. We know that the networked approach is superior in terms of outcomes in evaluating information. Recent studies have shown that small interventions in the classroom setting can yield substantial improvements in students’ digital savvy. However, there is less certainty about best practices in teaching the networked approach via online resources, as well as about the durability of improvements from targeted interventions. Librarians can take up this still-emerging field of research to try different ways of teaching students the networked approach, adjusting instructional time, format, follow-up, and more. For example, researchers could examine how students interact with and learn from resource guides online. This experimentation may prove crucial in translating a research-based understanding of the networked approach into practical application in colleges and universities.

Acknowledgments

We would like to express our deep appreciation to Professor Sam Wineburg for his continuous, invaluable mentorship along every step of this paper. We would not have been able to complete this project without him.

APPENDIX. Links to Institutions in Sample

Note on Links: The nature of the internet means data is prone to rapid change. Most of the sites remain the same or nearly the same as when we collected them. However, several are no longer up or have been revised since the time of data collection. Internet Archive WayBack Machine links are provided when possible to illustrate the data included in the sample.

APPENDIX

Links to Institutions Included in Final Data Sample

Institution

Link

Yale University

https://web.archive.org/web/20200414172811/https://www.library.yale.edu/researcheducation/pdfs/Searching_Evaluating_Resources.PDF

Carleton College

http://web.archive.org/web/20200407210856/https://gouldguides.carleton.edu/currentevents

Brandeis University

http://web.archive.org/web/20200407210932/https://guides.library.brandeis.edu/evaluatinginfo/web-and-social-media

Washington University in St. Louis

https://libguides.wustl.edu/c.php?g=46980&p=301909

University of Richmond

http://web.archive.org/web/20200407211040/http://libguides.richmond.edu/c.php?g=260944&p=1743264

Boston College

http://web.archive.org/web/20200407211210/https://libguides.bc.edu/c.php?g=44018&p=279570

Northwestern University

http://web.archive.org/web/20200407211523/https://libguides.northwestern.edu/evaluatingsources

Cornell University

http://web.archive.org/web/20200318124050/http://guides.library.cornell.edu/c.php?g=32334&p=203767; http://web.archive.org/web/20200422112431/http://guides.library.cornell.edu/evaluate_news/fakenews; http://web.archive.org/web/20200418084251/http://guides.library.cornell.edu/critically_analyzing

Drexel University

http://web.archive.org/web/20200407211732/https://libguides.library.drexel.edu/fake_news

Dickinson College

http://web.archive.org/web/20200407211806/http://libguides.dickinson.edu/researchprocess/websiteeval

Stanford University

http://web.archive.org/web/20200407211827/https://www.youtube.com/watch?v=bZ122WakNDY

Wesleyan University

http://web.archive.org/web/20200407211937/https://libguides.wesleyan.edu/c.php?g=393439&p=2672641

University of Notre Dame

http://web.archive.org/web/20200407212100/https://potofgold.library.nd.edu/evaluating/

Creighton University

http://web.archive.org/web/20200407212125/http://www.creighton.edu/reinert/researchtoolbox/tutorialsandguides/thefivews/

University of Denver

http://web.archive.org/web/20200408003843/http://libguides.du.edu/c.php?g=622586&p=4336814

Grinnell College

https://www.grinnell.edu/academics/libraries/students/doing-research?v2node

Middlebury College

https://middlebury.libguides.com/internet/techniques-web

Bucknell University

http://web.archive.org/web/20200408004115/https://researchbysubject.bucknell.edu/evaluatingnews

Massachusetts Institute of Technology

http://web.archive.org/web/20200408004317/https://libguides.mit.edu/c.php?g=382302&p=2590435

Saint Louis University

https://libguides.slu.edu/c.php?g=185593&p=1227639

Hamilton College

http://web.archive.org/web/20200408005148/http://libguides.hamilton.edu/c.php?g=622975&p=4339597; http://web.archive.org/web/20200414184027/https://libguides.hamilton.edu/c.php?g=622975&p=4339599

Duke University

https://guides.library.duke.edu/c.php?g=902788&p=6497823

Santa Clara University

http://web.archive.org/web/20200408004520/https://scufactchecking.wixsite.com/home

Princeton University

http://web.archive.org/web/20200408004728/https://libguides.princeton.edu/c.php?g=84018&p=664970

Harvard University

http://web.archive.org/web/20200408004752/https://usingsources.fas.harvard.edu/evaluating-web-sources

Rutgers University

http://web.archive.org/web/20200408005513/https://libguides.rutgers.edu/fake_news

Stony Brook University

http://web.archive.org/web/20200408005552/https://guides.library.stonybrook.edu/fakenews/resources

Pennsylvania State University

http://web.archive.org/web/20200408005722/https://libraries.psu.edu/services/research-help/evaluating-information

Stockton University

http://web.archive.org/web/20200408005740/https://library.stockton.edu/c.php?g=830109&p=5926889

Binghamton University, State University of New York

http://web.archive.org/web/20200408005847/https://www.binghamton.edu/libraries/research/guides/web-page-checklist.html

University of Delaware

http://web.archive.org/web/20200408005722/https://libraries.psu.edu/services/research-help/evaluating-information

Virginia Commonwealth University

http://web.archive.org/web/20200408010127/https://guides.library.vcu.edu/evaluate

Indiana University (Bloomington)

http://web.archive.org/web/20200408010132/https://iupui.libguides.com/howtoresearch/evaluate-sources

The College of New Jersey

http://web.archive.org/web/20200408010156/https://libguides.tcnj.edu/evaluate

University of Colorado Denver

http://web.archive.org/web/20200408010415/https://guides.auraria.edu/evaluatingsources

University of Texas at Austin

http://web.archive.org/web/20200408010441/https://guides.lib.utexas.edu/c.php?g=539372&p=6876271

University of Washington—Bothell

http://web.archive.org/web/20200408010545/https://guides.lib.uw.edu/bothell/evaluatingsources

Rowan University

http://web.archive.org/web/20200408010600/https://libguides.rowan.edu/c.php?g=942045&p=6792400

University of Pittsburgh

http://web.archive.org/web/20200408010623/https://www.library.pitt.edu/evaluating-web-resources

University of Wisconsin

http://web.archive.org/web/20200408010847/https://cms.library.wisc.edu/www/wp-content/uploads/sites/2/2020/03/Evaluation_Tip_Sheet.pdf; http://web.archive.org/web/20200408010901/https://mediaspace.wisc.edu/media/Identifying+Fake+News/1_30oihj1f/26292342; https://web.archive.org/web/20200816232706/https://researchguides.library.wisc.edu/c.php?g=640444&p=4485002

San Diego State University

http://web.archive.org/web/20200408010959/https://library.sdsu.edu/research-services/news/evaluate-your-sources

Miami University

http://web.archive.org/web/20200408011001/https://www.ham.miamioh.edu/library/start-researching/research-tips/evaluating-websites/

United States Military Academy (West Point)

http://web.archive.org/web/20200408011030/https://usma.libguides.com/workingwithsources/evaluatesources

Temple University

http://web.archive.org/web/20200408011036/https://guides.temple.edu/c.php?g=646455&p=4534968

George Mason University

http://web.archive.org/web/20200408011133/https://vle.credoreference.com/george-mason/evaluating-sources

University of California, San Diego

http://web.archive.org/web/20200408011153/https://ucsd.libguides.com/preuss/webeval

Oregon State University

http://web.archive.org/web/20200408011222/https://guides.library.oregonstate.edu/c.php?g=286081&p=1904942

University of Tennessee

http://web.archive.org/web/20200408011353/https://libguides.utk.edu/c.php?g=988050&p=7156151

University of California Santa Barbara

http://web.archive.org/web/20200408011459/http://transcriptions-2008.english.ucsb.edu//resources/guides/learning/evaluating_citing.asp

University of Cincinnati

http://web.archive.org/web/20200408015625/https://guides.libraries.uc.edu/engl1001/evaluate; http://web.archive.org/web/20200408015654/https://guides.libraries.uc.edu/c.php?g=222564&p=1472911

Notes

1. Marc Prensky, “Digital Natives, Digital Immigrants Part 1,” On the Horizon (2001), https://doi.org/10.1108/10748120110424816.

2. Sarah McGrew et al., “Can Students Evaluate Online Sources? Learning from Assessments of Civic Online Reasoning,” Theory and Research in Social Education (2018), https://doi.org/10.1080/00933104.2017.1416320; Urs Gasser et al., “Youth and Digital Media: From Credibility to Information Quality,” SSRN Electronic Journal (2012), https://doi.org/10.2139/ssrn.2005272.

3. Eszter Hargittai, “Digital Na(t)Ives? Variation in Internet Skills and Uses among Members of the ‘Net Generation,’” Sociological Inquiry (2010), https://doi.org/10.1111/j.1475-682X.2009.00317.x.

4. Axel Westerwick, “Effects of Sponsorship, Web Site Design, and Google Ranking on the Credibility of Online Information,” Journal of Computer-Mediated Communication (2013), https://doi.org/10.1111/jcc4.12006; Kristin Yates et al., “Trust Online: Young Adults’ Evaluation of Web Content,” International Journal of Communication 4, no. 1 (2010): 468–94.

5. Jamie Bartlett and Carl Miller, “Truth, Lies, and the Internet: A Report into Young People’s Digital Fluency,” Demos (2011); Sarit Barzilai and Anat Zohar, “Epistemic Thinking in Action: Evaluating and Integrating Online Sources,” Cognition and Instruction (2012), https://doi.org/10.1080/07370008.2011.636495; Miriam J. Metzger, Andrew J. Flanagin, and Ryan B. Medders, “Social and Heuristic Approaches to Credibility Evaluation Online,” Journal of Communication (2010), https://doi.org/10.1111/j.1460-2466.2010.01488.x; Amber Walraven, Saskia Brand-Gruwel, and Henny P.A. Boshuizen, “How Students Evaluate Information and Sources When Searching the World Wide Web for Information,” Computers and Education (2009), https://doi.org/10.1016/j.compedu.2008.08.003; Yates et al., “Trust Online.”

6. Denise E. Agosto, “A Model of Young People’s Decision-Making in Using the Web,” Library and Information Science Research (2002), https://doi.org/10.1016/S0740-8188(02)00131-7; Barzilai and Zohar, “Epistemic Thinking in Action”; Marie K. Iding et al., “Web Site Credibility: Why Do People Believe What They Believe?” Instructional Science (2009), https://doi.org/10.1007/s11251-008-9080-7; Heidi Julien and Susan Barker, “How High-School Students Find and Evaluate Scientific Information: A Basis for Information Literacy Skills Development,” Library and Information Science Research (2009), https://doi.org/10.1016/j.lisr.2008.10.008; Walraven, Brand-Gruwel, and Boshuizen, “How Students Evaluate Information and Sources When Searching the World Wide Web for Information.”

7. Stanford History Education Group et al., “Evaluating Information: The Cornerstone of Civic Online Reasoning,” Stanford Digital Repository (2016); McGrew et al., “Can Students Evaluate Online Sources?”

8. Andrew J. Flanagin and Miriam J. Metzger, “The Role of Site Features, User Attributes, and Information Verification Behaviors on the Perceived Credibility of Web-Based Information,” New Media and Society (2007), https://doi.org/10.1177/1461444807075015; B.J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do (2003), https://doi.org/10.1016/B978-1-55860-643-2.X5000-8; Peter Pirolli and Stuart Card, “Information Foraging in Information Access Environments,” in Conference on Human Factors in Computing Systems: Proceedings (1995), https://doi.org/10.1145/223904.223911.

9. Flanagin and Metzger, “The Role of Site Features, User Attributes, and Information Verification Behaviors on the Perceived Credibility of Web-Based Information”; Fogg, Persuasive Technology; Pirolli and Card, “Information Foraging in Information Access Environments.”

10. Fogg, Persuasive Technology.

11. Teun Lucassen and Jan Maarten Schraagen, “Factual Accuracy and Trust in Information: The Role of Expertise,” Journal of the American Society for Information Science and Technology (2011), https://doi.org/10.1002/asi.21545.

12. Renee Hobbs and Richard Frost, “Measuring the Acquisition of Media-Literacy Skills,” Reading Research Quarterly (2003), https://doi.org/10.1598/rrq.38.3.2; Seth Ashley, Adam Maksl, and Stephanie Craft, “Developing a News Media Literacy Scale,” Journalism and Mass Communication Educator (2013), https://doi.org/10.1177/1077695812469802.

13. Hobbs and Frost, “Measuring the Acquisition of Media-Literacy Skills”; Ashley, Maksl, and Craft, “Developing a News Media Literacy Scale.”

14. Sam Wineburg and Sarah McGrew, “Lateral Reading and the Nature of Expertise: Reading Less and Learning More When Evaluating Digital Information,” Teachers College Record (2019).

15. Sarah Blakeslee, “The CRAAP Test,” LOEX Quarterly (2004).

16. Mike Caulfield, “A Short History of CRAAP,” Hapgood (December 17, 2019), https://hapgood.us/2018/09/14/a-short-history-of-craap/.

17. “Latest News from The Media Education Lab,” Media Education Lab (March 2, 2021), https://mediaeducationlab.com/.

18. Diego Gambetta, “Signaling,” in The Oxford Handbook of Analytical Sociology (2017), 170, https://doi.org/10.1093/oxfordhb/9780199215362.013.8.

19. Laura Sbaffi and Jennifer Rowley, “Trust and Credibility in Web-Based Health Information: A Review and Agenda for Future Research,” Journal of Medical Internet Research (2017), https://doi.org/10.2196/jmir.7579; Joel Breakstone et al., “Students’ Civic Online Reasoning: A National Portrait,” Stanford History Education Group & Gibson Consulting (2019).

20. Sam Wineburg et al., “Educating for Misunderstanding: How Approaches to Teaching Digital Literacy Make Students Susceptible to Scammers, Rogues, Bad Actors, and Hate Mongers” (Working Paper A-21322, Stanford History Education Group, Stanford University, Stanford, CA, 2020), https://purl.stanford.edu/mf412bt5333.

21. Marc Meola, “Chucking the Checklist: A Contextual Approach to Teaching Undergraduates Web-Site Evaluation,” portal: Libraries and the Academy (2004), 331, https://doi.org/10.1353/pla.2004.0055.

22. Mike Caulfield, “Yes, Digital Literacy. But Which One?” Hapgood (December 22, 2016), https://hapgood.us/2016/12/19/yes-digital-literacy-but-which-one/.

23. Charlie Warzel, “Don’t Go Down the Rabbit Hole,” The New York Times (2021), https://www.nytimes.com/2021/02/18/opinion/fake-news-media-attention.html.

24. Herbert Simon, “Designing Organizations for an Information-rich World,” in Computers, Communications, and the Public Interest, ed. M. Greenberger (Baltimore, MD: Johns Hopkins University Press, 1971), 40–41.

25. Anastasia Kozyreva et al., “Citizens versus the Internet: Confronting Digital Challenges with Cognitive Tools,” Psychological Science in the Public Interest (2020), 135, https://doi.org/10.1177/1529100620946707.

26. Wineburg and McGrew, “Lateral Reading and the Nature of Expertise.”

27. Sarah McGrew, “Learning to Evaluate: An Intervention in Civic Online Reasoning,” Computers and Education (2020), https://doi.org/10.1016/j.compedu.2019.103711; Angela M. Kohnen, Gillian E. Mertens, and Shelby M. Boehm, “Can Middle Schoolers Learn to Read the Web like Experts? Possibilities and Limits of a Strategy-Based Intervention,” Journal of Media Literacy Education (2020), https://doi.org/10.23860/jmle-2020-12-2-6; Mike Caulfield, “Web Literacy for Student Fact-Checkers,” Pressbooks (2017); Sam Wineburg et al., “Preparing Students for Civic Life in a Digital Age: A Curriculum Intervention in High School Classrooms” (Working Paper A-202101, Stanford History Education Group, Stanford University, Stanford, CA, 2021), https://ssrn.com/abstract=3936112.

28. Sook Lim, “Academic Library Guides for Tackling Fake News: A Content Analysis,” Journal of Academic Librarianship 46, no. 5 (2020): 102195, https://doi.org/10.1016/j.acalib.2020.102195.

29. Lim, “Academic Library Guides for Tackling Fake News.”

30. Breakstone et al., “Students’ Civic Online Reasoning: A National Portrait.”

31. “Wall Street Journal/Times Higher Education College Rankings 2019,” Times Higher Education (THE) (August 1, 2019), https://www.timeshighereducation.com/rankings/united-states/2019#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/scores.

32. “Evaluating Sources: Overview,” Research Guides (October 1, 2020), https://libguides.williams.edu/evaluating-sources.

33. Matt Southern, “Over 25% of People Click the First Google Search Result,” Search Engine Journal (2020), https://www.searchenginejournal.com/google-first-page-clicks/374516/.

34. Kathy Charmaz, Constructing Grounded Theory: A Practical Guide through Qualitative Analysis (Thousand Oaks, CA: SAGE Publications, 2006).

35. “Evaluating Sources (Archive): Getting Started,” Research Guides (April 16, 2020), https://libguides.northwestern.edu/evaluatingsources.

36. “Searching and Evaluating Resources,” https://www.library.yale.edu/researcheducation/pdfs/Searching_Evaluating_Resources.pdf.

37. “What’s Wrong with Wikipedia?” Harvard Guide to Using Sources, https://usingsources.fas.harvard.edu/whats-wrong-wikipedia [accessed 20 August 2020].

38. “Searching and Evaluating Resources,” https://www.library.yale.edu/researcheducation/pdfs/Searching_Evaluating_Resources.pdf [accessed 24 April 2020].

39. “Working with Sources,” https://usma.libguides.com/workingwithsources/evaluatesources [accessed 20 August 2019].

40. “Evaluating Sources (Archive): Getting Started,” Research Guides (April 16, 2020), https://libguides.northwestern.edu/evaluatingsources/gettingstarted.

41. “How to Find and Evaluate Sources: Evaluating What You Find,” How to Find and Evaluate Sources (October 20, 2020), https://libguides.wesleyan.edu/c.php?g=393439&p=2672641.

42. “Evaluate Sources: Lateral Reading,” Lateral Reading (August 5, 2020), https://guides.lib.utexas.edu/c.php?g=539372&p=6876271.

43. Wineburg and McGrew, “Lateral Reading and the Nature of Expertise.”

44. “How to Find and Evaluate Sources: Evaluating What You Find,” How to Find and Evaluate Sources (October 20, 2020), https://libguides.wesleyan.edu/c.php?g=393439&p=2672641.

45. “Website Evaluation: Purpose,” Website Evaluation: Purpose (December 6, 2020), https://libguides.tcnj.edu/c.php?g=351730&p=2371780.

46. “Website Evaluation: Purpose,” Website Evaluation: Purpose (December 6, 2020), https://libguides.tcnj.edu/c.php?g=351730&p=2371780; Mike Caulfield, “Web Literacy for Student Fact-Checkers,” Pressbooks (2017).

47. “Evaluating Online Sources: A Toolkit: Evaluating Online Sources: Initial Moves,” Research Guides (December 2, 2020), https://libguides.rowan.edu/c.php?g=942045&p=6790649.

48. “Consumer Literacy and Responsible Consumption: Evaluating Source Credibility,” Research Guides (July 20, 2020), https://libguides.utk.edu/c.php?g=988050&p=7156151 [site discontinued].

49. “Website Evaluation: Purpose,” Website Evaluation: Purpose (December 6, 2020), https://libguides.tcnj.edu/c.php?g=351730&p=2371780.

50. “Fake News, Propaganda, and Misinformation: Learning to Critically Evaluate Media Sources: What Is Fake News?” (December 9, 2020), https://guides.library.cornell.edu/evaluate_news/fakenews.

51. “Student Resources: Evaluating Information,” Richard E. Bjork Library, https://library.stockton.edu/c.php?g=830109&p=5926889 [accessed 14 May 2019].

52. “Evaluating Websites: Philosophy: Research Guides at Princeton University,” Princeton University (The Trustees of Princeton University, October 9, 2020), https://libguides.princeton.edu/c.php?g=84018&p=664970.

53. Melissa Zimdars, “False, Misleading, Clickbait-y, and/or Satirical ‘News’ Sources” (2016), https://d279m997dpfwgl.cloudfront.net/wp/2016/11/Resource-False-Misleading-Clickbait-y-and-Satirical-%E2%80%9CNews%E2%80%9D-Sources-1.pdf.

54. “Webpage Checklist,” https://www.binghamton.edu/libraries/research/guides/web-page-checklist.html [accessed 14 May 2019; site discontinued]; “PS 300: Research Design: Evaluating Sources,” LibGuides at Oregon State University (December 10, 2020), https://guides.library.oregonstate.edu/c.php?g=286081&p=1904942; “Help With ~ Evaluating Sources: Fake News,” Guides (November 2, 2020), https://libguides.hamilton.edu/c.php?g=622975&p=4339599.

55. Sam Wineburg and Nadav Ziv, “Op-Ed: Why Can’t a Generation That Grew Up Online Spot the Misinformation in Front of Them?” Los Angeles Times (November 6, 2020), https://www.latimes.com/opinion/story/2020-11-06/colleges-students-recognize-misinformation; Wineburg et al., “Educating for Misunderstanding.”

56. “Evaluating Sources: Wikipedia vs. Google vs. Databases,” Research Guides (October 2, 2019), https://guides.auraria.edu/evaluatingsources/google.

57. “Protection Policy,” Wikipedia (Wikimedia Foundation, January 7, 2021), https://en.wikipedia.org/wiki/Wikipedia:Protection_policy.

58. Sam Wineburg and Nadav Ziv, “The Meaninglessness of the .Org Domain,” The New York Times (December 5, 2019), https://www.nytimes.com/2019/12/05/opinion/dot-org-domain.html.

59. Jay Mathews, “Perspective | You Can’t Believe Everything You Read Online. Many Students Don’t Seem to Know That,” The Washington Post (WP Company, November 19, 2019), https://www.washingtonpost.com/local/education/you-cant-believe-everything-you-read-online-many-students-dont-seem-to-know-that/2019/11/17/06a171f2-0670-11ea-ac12-3325d49eacaa_story.html.

60. Wineburg and Ziv, “The Meaninglessness of the .Org Domain.”

61. “6.UAR: Undergraduate Research: Evaluate Information,” Evaluate Information, https://libguides.mit.edu/c.php?g=382302&p=2590435 [accessed 14 May 2019].

62. Katy Steinmetz, “How Your Brain Tricks You into Believing Fake News,” Time (August 9, 2018), https://time.com/5362183/the-real-fake-news-crisis/.

63. Wineburg and McGrew, “Lateral Reading and the Nature of Expertise.”

64. Wineburg and McGrew, “Lateral Reading and the Nature of Expertise.”

65. Jessica E. Brodsky et al., “Teaching College Students the Four Moves of Expert Fact-Checkers” (paper presentation, Technology, Mind, & Society, an American Psychological Association Conference, Washington, DC, October 3–5, 2019).

66. Sarah McGrew et al., “Improving University Students’ Web Savvy: An Intervention Study,” British Journal of Educational Psychology 89, no. 3 (2019): 485–500, https://doi.org/10.1111/bjep.12279.

67. Sarah McGrew and Virginia L. Byrne, “Who Is behind This? Preparing High School Students to Evaluate Online Content,” Journal of Research on Technology in Education (2020), https://doi.org/10.1080/15391523.2020.1795956; McGrew, “Learning to Evaluate”; Kohnen, Mertens, and Boehm, “Can Middle Schoolers Learn to Read the Web like Experts?”

* Nadav Ziv and Emma Bene are part of the Stanford History Education Group in the Graduate School of Education at Stanford University; emails: nadavziv@stanford.edu and ebene@stanford.edu. ©2022 Nadav Ziv and Emma Bene, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.

