
News Credibility: Adapting and Testing a Source Evaluation Assessment in Journalism

This paper discusses the development of a source evaluation assessment, and presents the results of using this instrument in a one-semester information literacy course for journalism students. The assessment was developed using the threshold concept perspective, the “authority is constructed and contextual” frame, and an established source evaluation rubric. As formative assessment, the instrument showed that students’ source evaluations lacked evidence and included ritualized language. As summative assessment, it showed that students used a greater range of indicators of authority than they used initially, and used evidence more frequently. The assessment can measure students’ source evaluations across the disciplines.

Introduction

Source evaluation is a fundamental information literacy skill for all undergraduate students and is indispensable for aspiring journalists and other communication professionals. A journalist’s credibility and livelihood depend on their ability to locate, evaluate, verify, and accurately report credible sources,1 as illustrated by the fates of disgraced journalists like Jayson Blair, Stephen Glass, and Brian Williams, who fabricated or used inappropriate sources.2 Accreditation requirements for departments and schools of journalism include demonstrating that graduates can “evaluate information by methods appropriate to the communications professions in which they work” and “critically evaluate their own work and that of others for accuracy and fairness.”3 According to a survey of journalism faculty, most journalism students need greater proficiency in evaluating and selecting quality information sources.4

Although the literature contains a number of published information literacy assessments,5 journalistic writing uses unique sources and treats sources differently from many other academic disciplines, justifying the need for a specialized source evaluation assessment. Journalism students learn to use not only scholarly research as sources but also news reports, official statements, public records, and interview subjects, among others. Unlike writers following traditional academic conventions, journalists generally attribute their sources directly within their articles, by name or anonymously, and do not produce works cited or reference lists. Accordingly, an Association of College and Research Libraries (ACRL) working group mapped ACRL’s Information Literacy Competency Standards (Standards) to undergraduate journalism education in 2011.6 This document includes the learning outcomes that journalism students and professionals should achieve to be able to evaluate the credibility of their sources. There has been little subsequent published work at the intersection of information literacy and journalism education, particularly since the revision of ACRL’s Standards to the Framework for Information Literacy for Higher Education (Framework).7 Likewise, there is, as yet, no published Framework-based source evaluation assessment that fits journalism education. Thus, despite a history of discipline-specific information literacy recommendations, collaborating librarians and journalism instructors do not have standardized and reliable assessment tools, such as rubrics, for assessing their students’ source evaluations under the Framework. In her assessment of high school students, for example, Sarah McGrew cautioned that evaluation checklists mislead students into using superficial features of websites, such as spelling and grammar, to judge the credibility of information.8 McGrew’s related rubric, however, was not based on the Framework.9 The need to fully integrate information literacy into a learner-centered journalism course motivated this article’s authors to develop the Framework-based source evaluation assessment presented here.

At the University of Kansas, the ability to evaluate and determine source credibility is a central learning outcome in a one-semester course titled Infomania: Information Management, which is required of all students majoring or minoring in journalism and mass communication. This is the second course that students take in the journalism sequence, following a large introductory survey course, and before or concurrently with a media writing course. The source credibility skills that students are expected to develop in this course are meant to prepare them to identify and use credible sources accurately in their writing. The course has been delivered in 30-student sections by four or five independent instructors each semester. Most instructors collaborated with the university’s librarians to deliver some of the course content. Prior to 2017, these collaborations were limited to Standards-based one-shot instructional sessions focused on using the library catalog or specialized databases accessible through the library website. When conducting instructional sessions in subsequent courses, however, the librarian observed inconsistencies in students’ abilities to identify indicators of credibility in information sources and to argue how these indicators contribute to or diminish the credibility of sources. The librarian and the lead instructor of the Infomania course—this article’s authors—thus determined to integrate information literacy instruction more uniformly in the Infomania course. The source evaluation assessment discussed here stands at the core of the resulting multisemester course redesign. The redesign eventually encompassed the development of an OER textbook, common assignments across all sections, and a shift in how the course and information literacy instruction are delivered. This article discusses the process used to develop the source evaluation assessment, as well as the initial results generated from its implementation.

The article’s literature review and research framework sections discuss research that predicated the development of the assessment and present the assessment’s conceptual framework. In short, the threshold concepts perspective shaped the authors’ understanding of the assessment’s role, and ACRL’s information literacy frame “authority is constructed and contextual” best described the source evaluation skills and thinking to be assessed.10 Aligning this frame with the journalistic concept of credibility, two learning outcomes were developed: 1) students can identify indicators of credibility in an information source; and 2) students can argue how these indicators contribute to or diminish the credibility of the source. Research questions articulated at the conclusion of the research framework section guided the deployment of the assessment as a formative and summative assessment tool11 and the analysis of the assessment’s scores.

The methods section details how the article’s authors adapted Erin Daniels’ assessment and rubric12 to the parameters and needs of the Infomania course, as well as the first instance they used it to measure students’ source evaluations. The assessment, which asks students to evaluate a news source as they will in future professional settings,13 generates scores on two dimensions, which align with the two learning outcomes. The assessment evaluates students on their ability to justify their source evaluations and prioritizes reasoning over “correctness,” allowing instructors to rate the degree of student understanding of credibility.14

The article’s results section presents the findings of the assessment’s initial deployment. As formative assessment, the results quantify the characteristics of students’ source evaluations at the beginning of the information literacy course. As summative assessment, end-of-semester results show both students’ progress and lack of progress over the duration of the course and thus quantify the effectiveness of source evaluation instruction in the course. In the article’s discussion, the authors reflect on these results and report how they informed modifications to information literacy instruction in the course. Despite its initial application in a journalism course, this assessment can be adapted across the disciplines to measure and track students’ source evaluation efficacy.

Literature Review

The path toward developing a source evaluation assessment began with a review of published research on college students’ source evaluation skills. University and college students’ shortcomings in evaluating the information they encounter are well documented. Alison Head and Michael Eisenberg of Project Information Literacy found that students struggle to evaluate the credibility of information, which they typically find using strategies seemingly “learned by rote” instead of through innovation, experimentation, and developmental approaches to seeking and evaluating information.15 Subsequent studies have shown that college students acknowledge the need to evaluate the credibility of the sources they use,16 but a majority evaluate sources against memorized norms like timeliness, author expertise, and a website’s top-level domain (such as .org, .gov, .com) or rely on the advice of instructors or friends with trusted expertise.17 Students generally do not navigate the internet effectively or efficiently to assess information credibility18 and are unable to determine the authorship of digital sources, assess the expertise of authors, and establish the objectivity of information.19 Students also admit to relying on superficial evaluation cues such as the graphic design of a digital source, their familiarity with the source, or its inclusion in a research database.20 To complicate matters further, the existence of fake news and the speed of the news cycle have negatively affected students’ ability to evaluate news credibility.21 Although students tend to be satisfied with their evaluation skills,22 in practice, many foreclose the source evaluation process in favor of easily available information.23

While information literacy research studies are calibrated to detect deficiencies in students’ source evaluations, these deficiencies can be obscured from disciplinary instructors because, ostensibly, students know the language of source evaluation.24 The following quote from one Infomania student’s source evaluation illustrates that, while a student may know to focus on the author of an information source and to seek evidence outside the source in question to determine the author’s authority, the student may lack the context and reasoning skills to fully evaluate this authority. This student wrote:

My conclusion about the article is that it is not credible because the author is not credible. She made valid points[;] however, she is not a journalist. She has a background in technology and not in writing. I looked her up online and she did not seem like a credible source. She has done research for three months[;] therefore[,] she is not an expert in this area.

An instructor may score this student well on a source evaluation because this student appears to know some source evaluation criteria. Reflecting prior research,25 however, the student applies these criteria superficially and incompletely in the evaluation. Specifically, the student knows to research the author but fails to critically argue why the author’s lack of journalism experience and background in technology negate her ability to write about the topic at hand. In other words, it appears that the student is well practiced in deploying buzz words such as “credible” and “expert” but does not yet fully understand how to critically and accurately apply these words in a source evaluation.

Because students such as this one use source evaluation language to mask their difficulties navigating and evaluating information, source evaluation skills can be classified as troublesome knowledge, and, more specifically, as ritual and conceptually difficult knowledge.26 Ritual knowledge is part of a social routine and is “rather meaningless,” while conceptually difficult knowledge results from a “mix of misunderstandings and ritual knowledge.”27 An effective source evaluation assessment can expose superficial and ritually foreclosed evaluations, identifying where in the evaluation process students are succeeding and falling short. This article’s authors used the threshold concept perspective as the organizing principle for developing such an assessment.

Research Framework

The threshold concepts perspective28 suggests that students fall back on ritualized language when completing source evaluation tasks because they have not crossed a key threshold that informs source evaluations. The threshold concept theory describes the moment a learner is transformed by a “shift in perception” that awakens them to a new way of thinking about a particular concept or even about an entire discipline.29 Having successfully crossed a conceptual threshold, students cannot unlearn their new knowledge but integrate it into and develop a deeper understanding of interrelated concepts.30

A threshold concept is not necessarily bound to a discipline and leads to new conceptual frontiers.31 Subject experts in several fields have identified and used threshold concepts to improve instruction and student learning.32 In electrical engineering, for instance, instructors correlated transparent instruction of threshold concepts with students’ improved comprehension and lower attrition.33 A business instructor showed that “power” as a threshold concept helped students better understand how political institutions and actors influence business knowledge, attitudes, and skills.34 A journalism instructor used a threshold concepts approach to increase students’ data confidence, quantitative literacy, and data journalism skills.35

In information and library science, ACRL based its six frames on threshold concepts and linked source evaluation with the “authority is constructed and contextual” frame.36 ACRL defines authority as “a type of influence recognized or exerted within a community.”37 The rationale supporting the notion of constructed authority is that “various communities,” as well as individual learners and their needs, will have different standards for what constitutes a trusted source.38 This means that, in the process of determining the authority of a source, learners must “use research tools and indicators of authority to determine the credibility of sources” and understand “the elements that might temper this credibility,” as ACRL detailed in an example of a knowledge practice relative to this frame.39 The concept of authority is not bound to the field of information science;40 in journalism, it is analogous to the concept of credibility.41 Communications librarian Margy MacMillan, for example, argued that “authority is contextual and constructed” is the information literacy concept with which journalism students contend when they learn to fact-check information by consulting multiple sources.42

Approaching and crossing a threshold is not easy, however, and the “authority is constructed and contextual” concept can be troublesome for novices. While progressing toward a threshold, students can engage in mimicry instead of authentically embracing the threshold concept.43 While experts may detect authority by critiquing a source’s expertise and experience in light of the “societal structures of power” of time and place,44 novices often lack such a nuanced understanding of authority. Instead, they may rely on “basic indicators of authority, such as type of publication or author credentials.”45 Indeed, MacMillan acknowledged that the journalism students in her study of student source evaluation skills may have relied on some “performativity” that prevented a precise and “objective measure” of students’ abilities to evaluate sources.46

In sum, identifying a threshold concept that underlies source evaluation skills can facilitate the development of an assessment of these skills that detects students’ masking and mimicking language. The “authority is constructed and contextual” threshold concept was used as a foundation for an effective source evaluation assessment.

Threshold Concepts and Assessment

Following the introduction of ACRL’s Framework, library instruction and assessment expert Megan Oakleaf urged librarians and instructors to tackle information literacy frames with measurable learning outcomes, authentic assessment activities, and new or adapted rubrics.47 Following Oakleaf’s advice, this article’s authors reasoned that the frame “authority is constructed and contextual” suggests that evaluating a source entails understanding what constitutes authority or credibility within a discipline and accepting or challenging the constructed and contextual nature of this authority or credibility.48 The authors coupled Oakleaf’s guidance with extant research, particularly Lea Currie and colleagues’ call for course-integrated instruction to provide students with a sense of credibility criteria and a deeper understanding and context for evaluating information.49 This led to the formulation of the following two learning outcomes for credibility evaluation: 1) students can identify indicators of credibility in an information source; and 2) students can argue how these indicators contribute to or diminish the credibility of the source. In terms of assessment format, these learning objectives dictated using an open-ended assessment in which students demonstrate their reasoning rather than an adherence to a set of rules.50

To develop an assessment that matched these learning objectives and assessment characteristics, the authors reviewed several published information literacy assessment strategies and rubrics.51 A number of these tools proved unsuitable for this project because they are based on the superseded Standards, assess the entire suite of information literacy outcomes broadly, and would not facilitate the type of open-ended assessment that the learning objectives of the Infomania course necessitate.52 Other published rubrics are focused narrowly on specific information literacy elements like search or citation, not source evaluation.53 Several rubrics that do focus on source evaluation, meanwhile, evaluate the sources that students cite in their research papers or portfolios but do not use open-ended prompts to probe students’ arguments for selecting these sources.54

Erin Daniels’ assessment stands out among the reviewed assessments for being narrowly tailored to source evaluations and for its open-ended nature, which facilitates assessing students’ reasoning.55 This tool also aligns with the two learning outcomes of the Infomania course. Daniels’ assessment expects students to identify one or more credibility cues in an information source. In the language of this assessment, a credibility cue is any element of an information source (such as author, publisher, tone, or sources cited) that points to a source’s credibility. After identifying a cue, a student is expected to collect and present evidence about whether or not the cue contributes to a source’s credibility. A student’s response about an information source is assessed based on how well the student uses credibility cues and associated evidence to articulate an argument about the overall credibility of the information source.

Research Questions

Having identified an open-ended assessment focused on source evaluation, the authors proceeded to adapt it to the parameters of the Infomania course. In addition to aligning with the course learning outcomes, the authors aimed for the assessment to identify students’ abilities and difficulties with source evaluation at a single time point (in other words, at the beginning of a semester), and over a period between two time points (that is to say, over the course of a semester). The authors thus used the assessment as an instrument of formative and summative assessment.56 The summative assessment would yield information about the effectiveness of the course to advance students’ source evaluation knowledge and skills. The authors thus identified the following research questions:

RQ1: Early in the semester, (a) how well do students identify indicators of credibility in an information source, and (b) what indicators of credibility do they identify?

RQ2: Early in the semester, how well do students argue about the credibility of an information source?

RQ3: Late in the semester, compared to early in the semester, (a) how well do students identify indicators of credibility in an information source, and (b) what indicators of credibility do they identify?

RQ4: Late in the semester, compared to early in the semester, how well do students argue about the credibility of an information source?

Method

Adapting the Assessment

Daniels’s original assessment consists of evaluating students’ annotated bibliographies in which students are expected to judge the credibility of each source they list. Each annotation receives a score on a seven-point rubric (see table 1).57 This article’s authors implemented four modifications to the original assessment to address differences between its original context and how it would be used in the Infomania course. The first modification is discipline-specific. Recall that journalism students typically do not produce bibliographies but instead write news articles, broadcast scripts, or news releases that identify sources in text only. Instead of asking journalism students to compile annotated bibliographies, the revised assessment uses the discipline-appropriate strategy of asking students to determine the credibility of an article as a news source.58

TABLE 1

Levels and Definitions of Erin Daniels’s Original Seven-point Scoring Criteria

Level 1: Does not address credibility at all.
Level 2: Uses terms related to credibility (such as reliable, biased, and so on), but the usage does not make sense.
Level 3: Does not identify credibility cues, but still attempts assessment of credibility.
Level 4: Identifies credibility cues, but does not attempt interpretation of those cues.
Level 5: Identifies credibility cues, but makes generic assessments of credibility (as opposed to interpretation of specific cues).
Level 6: Identifies credibility cues and attempts specific interpretation of those cues.
Level 7: Identifies credibility cues and interprets those cues, including how the cues affect their understanding of the information source within the context of the topic being researched.

The second modification reflects the authors’ desire to compare assessment scores both at a single time point and between time points (that is, beginning and end of the semester). In the original assessment, student scores are not comparable because each student’s annotated bibliography features a different number of entries and a corresponding different number of scores.59 This is because Daniels’ original assessment functions “as a feedback mechanism to students … rather than as a firm grading system.”60 To generate comparable scores, the revised assessment asks all students to evaluate the same article, which is presented in the assessment prompt. Instead of evaluating a variable number of bibliography sources, as called for in the original assessment, students in the revised assessment evaluate only one article.

The next modification was motivated by the need for different raters to score students’ work with consistency. If the assessment was to be used in the Infomania course over successive semesters, it needed to be replicable by the instructors assigned to the course. The rating criteria were simplified to help each independent rater apply the scoring criteria the same way (that is, to increase the criteria’s reliability).61 The original scoring scheme was first divided into two dimensions: breadth and depth. Scoring a student’s response in the revised assessment proceeds as follows (see figure 1). A rater first identifies if a student’s response contains any credibility cues. Recall that a credibility cue is any element of an information source that indicates whether or not the information source is credible (such as publisher, author, date, or sources). The rater assigns a breadth score, representing the number of credibility cues in the evaluation (range: 0 to n, where n is the number of credibility cues identified in the response). If the evaluation does not identify any credibility cues, the breadth score is 0, and the depth dimension is not scored.

FIGURE 1

Illustration of the Assessment Coding Scheme, Showing the Range of Possible Breadth and Depth Scores for Each Credibility Cue

If the evaluation does contain one or more credibility cues, the evaluation receives a score on the depth dimension for each identified cue. Depth is scored using a three-point scale, which was derived from the original seven-point scale (see table 1), using a survey design best practice of asking about only one concept per question.62 The depth score criteria are as follows:

  • 1 means that the cue is identified in the evaluation, but that there is no evaluation argument associated with it (this corresponds to 4 in the original assessment)
  • 2 means that a cue is used to articulate an evaluation argument, but no evidence is provided to support this argument (this corresponds to 5 in the original assessment)
  • 3 means that a cue is used to articulate an evaluation argument, and evidence is presented that supports this argument (this corresponds to 6 in the original assessment)

For reliability and redundancy reasons, the revised rubric omits the original rubric’s last level.63 See table 2 for examples of statements scored at each of the three levels of depth.
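
For instructors who want to record this correspondence when scoring responses in a spreadsheet or script, a minimal sketch follows (in Python; the variable name is illustrative and not part of the published rubric):

```python
# Correspondence between Daniels' original seven-point levels and the revised
# three-point depth scale (illustrative only; original levels 1-3 receive no
# depth score in the revised scheme, and original level 7 is omitted).
ORIGINAL_TO_REVISED_DEPTH = {
    4: 1,  # cue identified, no evaluation argument
    5: 2,  # evaluation argument offered, but no supporting evidence
    6: 3,  # evaluation argument supported by evidence
}
```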

TABLE 2

Definitions and Examples of Depth Scoring Criteria in the Revised Assessment

Evaluation depth 1. Identifies this cue but does not attempt an evaluation of the cue.
Example: The author made sure to cite his sources.

Evaluation depth 2. Evaluates the cue but does not provide evidence to support the evaluation.
Example: Refinery 29 isn’t the most credible website, similar to BuzzFeed. The recommended articles to read next at the bottom of the page are more editorial and not so much reporting news.

Evaluation depth 3. Evaluates the cue and supports the evaluation with evidence.
Example: First, I searched the author’s name, “Christopher Luu.” He seems like a professional writer with a BA degree, and he had been engaging in media writing for a decade since 2007. Also, I found he wrote many different types of articles for “Refinery 29.” Then, I searched for “Refinery 29,” and I find out it is a modern style media for women’s entertainment, like lifestyle. Even this article looks like a little bit off-topic about women’s fashion industry, it is relative to their main topic area. The author is a professional writer and editor, and I do find the information that was provided in the article, therefore, I think it is credible.

The last modification concerns the scores that each student’s evaluation receives. In the original assessment, each source in a student’s bibliography receives one score, regardless of how many credibility cues a student articulates for that source. This procedure potentially masks information when a student considers more than one indicator of authority for an information source. In the original assessment, students also receive as many scores as they have annotations in their bibliographies. In the revised assessment, students evaluate only one source, which is equivalent to one annotation in an annotated bibliography. For this one evaluation, however, a student receives two scores: a breadth score, which shows how many indicators of authority (that is to say, credibility cues) they consider in their evaluation; and a depth score, indicating how much evidence they use in their evaluation. Each student’s evaluation generally receives one breadth score and several depth scores. The depth scores can be averaged for analysis purposes.
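
As a minimal illustration of how these two scores can be tallied once a rater has coded a response, the following Python sketch (with hypothetical function and variable names not drawn from the article) takes a rater’s per-cue depth ratings for one student response and returns the breadth score and the average depth described above.

```python
# Sketch of the revised scoring scheme (hypothetical names, not the authors' code).
def score_response(cue_depths):
    """cue_depths maps each credibility cue a rater identified in the response
    (e.g., 'author', 'publisher', 'sources') to its depth rating on the 1-3 scale."""
    breadth = len(cue_depths)                     # number of credibility cues identified
    if breadth == 0:                              # no cues identified: depth is not scored
        return {"breadth": 0, "mean_depth": None}
    mean_depth = sum(cue_depths.values()) / breadth
    return {"breadth": breadth, "mean_depth": mean_depth}

# Example: a response that evaluates three cues and supports only one with evidence
print(score_response({"author": 2, "publisher": 2, "sources": 3}))
# {'breadth': 3, 'mean_depth': 2.333...}
```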

Sample

Having modified the original assessment to fit the needs and goals of the course, the authors used the assessment as an in-class activity at two time points during the same semester to address this project’s research questions. The initial assessment took place in the second week of the semester, before instruction on credibility evaluation began. The end-of-semester assessment took place during the last week of class. A total of 152 students, out of 164 enrolled (93%), completed the assessment at both time points. These students’ classifications ranged from sophomore to senior.

Procedure and Materials

The authors introduced the assessment to students in the course of a regular class meeting. Students completed the assessment for class credit, and were given the option to participate in the research study for extra credit. Research participation consisted of allowing researchers to access and evaluate the assessment assignment. These procedures were approved by the university’s human subjects protection program. All but two students in the course consented to participate in the study.

Using the Qualtrics online platform, students were presented with a news article and asked to evaluate its credibility. Students were provided with an online link to the article, and a paper copy of it. The prompt read as follows:

In the space below, write an evaluation of the article’s credibility as a news source. You may use any source at your fingertips to evaluate this article. Your evaluation should include:
  1. Your overall conclusion about the article’s credibility;
  2. A list of the article elements you used in examining its credibility;
  3. Evidence about these elements that explains how you arrived at your conclusion.

To prevent a familiarity effect on the end-of-semester assessment, students did not evaluate the same article at the two time points. To ensure that the beginning- and end-of-semester assessments presented similar conditions, two articles matched on the quality of their credibility cues were used. Both articles represented a genre of information that students likely come across in their social media feeds. Both articles were published by nonlegacy news sources (such as BuzzFeed, Refinery29), were recent to the date of each assessment, were written by individuals who were not staff writers at each publication, focused on timely topics (such as political echo chambers, the Twitter verification process), used other news articles and social media as sources, cited these sources inconsistently, were written in a casual tone, and included both factual and opinion-based statements.

Coding

This article’s two authors trained together to apply the coding scheme using a set of responses from a previous class, in which the assessment was pilot-tested. The authors also developed a grid to score each student response (see figure 2). Each author then scored the same 20 percent of the responses, arriving at an acceptable level of intercoder reliability. Percent agreement between the two authors was 91 percent, meaning that each author scored about 9 out of every 10 response elements the same way. Because some of the agreements may have been due to chance, Cohen’s kappa, a metric that adjusts for such chance agreement, was also calculated.64 This value was .82, which falls in the “almost perfect” category of interrater agreement.65 Having established good reliability, each author then coded half of the remaining responses.
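
As an illustration of this reliability check, the sketch below computes percent agreement and Cohen’s kappa for two raters; the parallel lists of per-element codes and the use of scikit-learn are assumptions, since the article does not say how the codes were stored or which software was used.

```python
# Hypothetical reliability check for two raters' codes (the article reports
# 91% agreement and kappa = .82 but does not describe its tooling).
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 2, 2, 3, 1, 2, 3, 3, 2, 1]   # illustrative per-element codes
rater_b = [1, 2, 2, 3, 1, 2, 3, 2, 2, 1]

percent_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)  # agreement corrected for chance

print(f"Percent agreement: {percent_agreement:.0%}; Cohen's kappa: {kappa:.2f}")
```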

FIGURE 2

Assessment Grid Used to Score Each Student Response

Results

The first two research questions addressed the formative assessment conducted early in the semester. RQ1a asked how well students identified indicators of credibility in the article they were presented and asked to evaluate. Breadth scores (that is, the number of credibility cues that students identified, such as author or date) were used to address this question. On average, students evaluated 3.47 credibility cues in their early-semester responses. Overall, students’ responses featured between 1 and 6 credibility cues.

RQ1b asked about what indicators of credibility (that is, credibility cues) students identified in their evaluations. Figure 3 (light-colored bars) illustrates the percentages of students who identified each credibility cue. Early in the semester, most students identified as a credibility cue an article’s content (86%) or author (84%). Fewer, but still a majority, identified an article’s sources (66%) and publisher (57%). Just over a third of the students identified an article’s writing style (35%), and few identified its publication date (4%).

FIGURE 3

Percentages of Student Evaluations Containing Each Cue Category in Early- and Late-semester Assessments*

*Nonoverlapping error bars (95% confidence intervals) indicate statistically different proportions.

RQ2 asked how well students argue about the credibility of an information source. The depth score, which is a measure of argument quality, was used to address this question. Students evaluated a majority (58%) of the cues they identified at level 2, which means that they primarily relied on their personal opinions to support credibility arguments. They evaluated about a third (35%) of the cues at level 1, which means that they identified these cues without making any evaluation argument about them. Students evaluated only 7 percent of the cues at level 3, which means that they rarely supported their credibility arguments with external evidence. On average, students evaluated a credibility cue at a depth of 1.73.

The remaining research questions addressed summative assessment (that is, the differences in assessment scores between early and late in the semester). RQ3a concerned the difference in how well students identified indicators of credibility, indicated by how many credibility cues students identified early versus late in the semester. As figure 4 illustrates, on average, students identified 3.45 cues late in the semester, which was essentially equal to the number of cues they had identified early in the semester, which was 3.47 (see RQ1a). The range of the cues that students identified in their responses late in the semester was also the same as early in the semester: between 1 and 6 cues.

FIGURE 4

Average Breadth of Cues in Early- and Late-semester Assessments*

*Nonoverlapping error bars (95% confidence intervals) would indicate a statistical difference between these averages.

To evaluate statistically the summative assessment results, an independent-samples t-test with 95% confidence intervals was used. This test indicates whether there is a statistically significant difference between two averages. Each set of early- and late-semester scores was tested to determine if they were statistically different. There was no statistically significant difference on breadth—the number of cues that students identified—early and late in the semester, t(272) = .16, p = .88.
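
The sketch below shows how such a comparison might be run, assuming the early- and late-semester breadth scores are available as two arrays; the data and the use of SciPy are illustrative, since the article reports only summary statistics.

```python
# Hypothetical independent-samples t-test comparing early- and late-semester
# breadth scores (simulated data; the article does not publish raw scores).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
early_breadth = rng.integers(1, 7, size=137)  # simulated breadth scores, range 1-6
late_breadth = rng.integers(1, 7, size=137)

t_stat, p_value = stats.ttest_ind(early_breadth, late_breadth)
df = len(early_breadth) + len(late_breadth) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.2f}")
```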

RQ3b asked about differences in the categories of cues that students used between early- and late-semester evaluations. The dark-colored bars in figure 3 illustrate the percentage of students identifying each cue category at the end of the semester. A majority of the students evaluated an article’s sources (84%), which represented a statistically significant increase of 18 percentage points from early in the semester, t(272) = 3.42, p = .001. A majority of the students also identified the article’s author (69%), but this represented a significant decrease of 15 percentage points from early in the semester, t(272) = 3.02, p = .003. There was only a 2-percentage-point increase in the proportion of students who evaluated the article’s publisher (59%) late in the course. This difference was not statistically significant, t(272) = .37, p = .72.

Significantly fewer students evaluated an article’s content late in the semester (55%), a decrease of 31 percentage points, t(272) = 6.04, p < .001. Finally, about the same percentages of students evaluated the article’s visuals (29%), style (25%), and date (25%) at the end of the semester. These values represented significant increases for visuals (15 percentage points), t(272) = 3.00, p = .003; and date (20 percentage points), t(272) = 4.99, p = .001. The frequency with which writing style was evaluated late in the semester was not significantly different from early in the semester, t(272) = 1.85, p = .07.

RQ4 asked about the difference in how well students argued about the credibility of the source, that is, the depth of students’ evaluations. While level 2 remained the most common rating late in the semester (43% of cues), this was a decrease from early in the semester. Significantly more cues were evaluated at level 3 (34%), and significantly fewer cues were evaluated at level 1 (23%). As figure 5 illustrates, there was a significant increase of .40 in average evaluation depth, with the average cue being evaluated at a depth of 2.13 late in the semester, t(272) = 7.77, p < .001.

FIGURE 5

Average Evaluation Depth in Early- and Late-semester Assessments*

*Nonoverlapping error bars (95% confidence intervals) indicate a statistical difference between these averages.

Discussion

When viewed as an instrument of formative assessment, assessment results showed students’ baseline knowledge. As summative assessment, the results documented students’ progress in source evaluation over the semester. Positioned between the two assessments, the Infomania course functions as an intervention aimed at developing students’ abilities to identify, research, and contextualize markers of credibility. The authors used the assessment results to gauge how efficacious the course was in meeting these learning objectives, and to identify opportunities for tailoring instruction in subsequent iterations of the course. The following sections discuss the key insights that emerged from the two administrations of the assessment.

Formative Assessment

At the beginning of the semester, most students were novices at determining source credibility because they failed to offer evidence-based evaluations. Some students also showed ritual knowledge (that is, rehearsed evaluation language that did not match the source under consideration). In their evaluations, most students identified at least one of these four cues as indicators of credibility: author, article’s argument, publisher, or an article’s sources.

A majority of students (more than 80%; see figure 3) referenced the author of the article they were evaluating, and thus earned a point on breadth. Students then scored either 1 on the depth of their evaluation for indicating the existence of an author, 2 for voicing their opinion of the author’s credibility, or 3 if they supported their evaluation with evidence from other sources. Only 14 students (9%), however, reached the third level. Instead, students typically either mentioned the existence of the author or offered their opinion of the author without providing supporting evidence. The following excerpt from a student’s evaluation of an article about echo chambers written by a fellow in BuzzFeed’s Open Lab for Journalism, Technology, and the Arts illustrates the latter: “The writer’s job title is ‘BuzzFeed Open Lab Fellow,’ I am not sure what that position is or what is required to have that title, therefore this also takes away credibility.” The author’s title was presented in the article’s byline and was mentioned in the text of the article. This student’s evaluation thus indicates that while this student read the article, they did not advance an evaluation beyond what appears to be a superficial opinion.

A comparable majority of students scored a point on the breadth dimension for mentioning in their evaluations the argument presented in the article. Hardly any of these students, however, provided research as evidence of their credibility determinations. One student, for instance, wrote: “I agree with the author’s standpoint but I don’t believe it is a credible article.” This student, along with many peers, did not summarize or otherwise express their understanding of the author’s argument, that echo chambers limit a person’s diversity of information online, and thus scored 1 on the depth dimension.

The article that students evaluated at the beginning of the semester was published by BuzzFeed. Perhaps because BuzzFeed is a popular information source among undergraduates,66 students drew more on their own experiences with this website in their credibility calculations than they did for any other credibility cue. One student’s response exemplifies this practice:

While looking for credibility in anything I look where it came from, and who wrote it. In this case it comes from BuzzFeed, an online website with quizzes to tell what kind of cupcake you are, with the occasional reporting on big events happening around the world. I find it hard to separate fact from [opinion] on this site. I think that a majority of the content is biased journalism, instead of a trusted new [sic] source. This alone leads me to think it is not credible.

This student’s experience with BuzzFeed’s entertainment section colored their perception of BuzzFeed’s news section, highlighting an inability to disambiguate entertainment from news stories. Students rarely substantiated their claims of bias or trust by researching BuzzFeed’s editorial standards or publication processes. Instead, many students garnered a depth score of 2 for the publisher by stating the opinion that “anyone” could post to the website.

A majority of students (more than 60%) also scored a point on the breadth dimension by noting the existence of sources in the article. A mere 20 percent of these students, however, discussed validating or researching the sources that were either hyperlinked or mentioned in the article. Another 25 percent of the students who mentioned sources scored 2 on the depth dimension for offering their general opinions of citation practices or markers of credibility. The following excerpt represents a typical 2-point statement about sources: “The article used sources with facts and statistics and cited them correctly. Not only that but they cited it within the article using hyper links [sic] making it very easy for us to check the sources.” The student noticed the author’s use of statistics to discuss the 2016 U.S. presidential election results but apparently did not click on any of the hyperlinked sources to evaluate their credibility or relevancy to the article.

Students’ rare use of external evidence in their evaluations at the beginning of the semester reflected prior research findings. Studies indicate that undergraduates typically evaluate sources against established norms like timeliness, author expertise, and a website’s top-level domain67 but that they fail to validate a source’s claims, authorship, and sources of information.68 Students’ ability to do so may be further complicated by undergraduates’ broader attitudes toward, and misunderstandings about, news. Recent Project Information Literacy research has found that embarrassed students may go with their “gut feeling” to determine the legitimacy of a news source when lacking proper source evaluation skills.69 Such “gut feelings” may be clouded by an idealization of news as an “objective reporting of facts” or by disillusionment that news sources cannot be trusted or discerned from “fake news.”70 Within this greater context, overly skeptical and underresearched student opinions may be understood as a sign that students default to preconceptions about the news because they lack source evaluation skills.

In addition to the absence of external evidence, some evaluations also exhibited students’ use of ritual knowledge (that is, learned phrases that did not fit the work being evaluated). This tendency was most evident when students wrote about the sources of the article they were evaluating. Using ritual knowledge, some students held journalistic writing to the same source and citation standards as scholarly research. One student, for instance, wrote: “I look to see if any of the information in the article is backed up with any other sources or sightings. [sic] There [are] no footnotes or bibliography, which again leads me to believe in a lack of credibility.” Other students faulted the article for lacking specific source types that they, evidently, had been taught were components of evaluation checklists. The following excerpt illustrates this tendency:

I do not think this article is very credible. The author did not cite the information she uses for data which makes me wonder if it was made up. Citing is an element I used to examine its credibility. If there were citations from a scholarly journal or a gov. [sic] website I would think the data used is credible.

Responses such as this suggested that students were mimicking the use of basic indicators of authority, such as top-level domains, to determine credibility of sources and were unable to use the context of the source under consideration to formulate a more nuanced evaluation.71

It is possible that the ritual knowledge students used in early-semester evaluations resulted from their prior reliance on information evaluation checklists, which are promoted in some high school and university information literacy programs and are easily findable online.72 Such checklists, however, can fail to prepare students to properly evaluate sources or the news on the social web in these “post-truth”73 times. It is possible that many of the students who either missed the target in their evaluations or failed to support their evaluations with evidence relied on such limited evaluation tools from their repository of ritualized knowledge to mimic responses they believed to be appropriate.74 It was evident that students needed to develop a more nuanced understanding of how to weigh, contextualize, and judge the credibility of a source.75 That is to say, students needed to move beyond checklists and their personal opinions to develop a process for critically researching, evaluating, and contextualizing the credibility of sources.76

Summative Assessment

The information literacy course for journalism students that served as the context for this assessment focused on finding and accessing information using a variety of source types (such as public records, news archives, business filings, scholarly literature), and on evaluating the credibility of this information. By the end of the semester, students were expected to show some improvement in their source evaluation skills. The summative assessment illustrated how much students learned during the semester and how effective course instruction was in advancing this learning.

The end-of-semester assessment revealed little change in the breadth of students’ evaluations (that is, the average number of credibility cues that students identified as indicators of credibility in the article they were evaluating). In both assessments, students averaged between three and four cues per evaluation. Students did identify a greater variety of credibility cues at the end of the semester, however, citing author, argument, and style less often but noting date, sources, and visuals more frequently. This suggests that, during the course of the semester, students expanded their repertoire of what constitutes an indicator of credibility in journalism.

The clearest difference between the two assessments was the increase in the depth of students’ evaluations. At the end of the semester, a greater proportion of cues received scores of 3, and a smaller proportion received scores of 2, than at the beginning of the semester. This means that students used more external evidence to support their evaluation arguments at the end of the semester than they did initially. Many students improved their reasoning in the evaluations, going beyond simply identifying cues or offering instinctive opinions about the cues.

The following evaluation exemplifies how some students validated their opinions through research in the end-of-semester assessment. In the assessment, students had been asked to evaluate a Refinery29 article about Twitter ceasing to verify accounts. The freelance reporter who authored the article largely based it on The Hollywood Reporter’s coverage and referenced administrative messages from Twitter. One student began their argument with external evidence about the publication: “Refinery 29 is a relatively new entertainment company that began as a startup but is now worth more than $100 million.” Next, the student used the website Media Bias Fact Check to look up two sources cited in the article, which said that these sources typically were accurate, but that they had a liberal bias. The student then reflected on the timeliness of the article, which covered a news event that occurred the same day that the article was published. The student cited the exact times when an official tweet about the event was posted and when the article was published. The student concluded with a summation of the author’s experience and recent publication history drawn from LinkedIn.

Focusing on the article’s publication, sources, date, and author, this student researched and reasoned about the credibility of this article within the context of its creation. First, the student consulted and referenced external sources. While it was common for students to discuss bias or the alleged political leanings of a publication absent evidence, this student cited information from the website Media Bias Fact Check in evaluating the article’s sources. Likewise, the student used information from the article author’s LinkedIn account to evaluate the author’s expertise. Finally, the student considered the timeliness of the article by placing the article’s publication within the daily news cycle. In all, this student’s response demonstrates progress toward using external evidence in support of source evaluation arguments.

While students tended to provide more researched answers at the end of the semester than they did initially, they did not abandon unsubstantiated opinions altogether. This tendency was particularly evident in students’ evaluations of the article’s writing style, a category that included biased writing. During both assessments, most writing style comments scored a 1 or 2 on depth, indicating that students only noted the existence of writing style or offered an unsupported opinion of it. Some students struggled with the concept of bias, using it to dismiss elements of an article that could have been better understood if researched. One student, for instance, wrote, “I think that this article is not credible because it is written with an opinion about twitter [sic]. Although they cite some of their sources, they still seem like they have a bias towards twitter [sic].” Given that the article under review was about Twitter, the author’s discussion of the company may not have implied biased reporting; without further evidence in the evaluation, it is impossible to know what the student perceived as biased. Such superficial responses and the consistency in student scores on writing style between the early- and late-semester assessments suggest that writing style and bias were not addressed adequately in the information literacy course.

In all, the summative assessment results suggest that, during the information literacy course, students advanced their ability to seek and articulate evidence in support of their source evaluations. While they did not rely on more credibility cues at the end of the semester than they did initially, students did appear to use a greater variety of these cues at the end of the semester. The course did not fully inoculate students against flawed reasoning and unsupported opinions, but it did appear to help many of them think more substantively about the credibility of a source.

Implications

Undergraduates’ struggle to successfully evaluate some of the cues as indicators of credibility (such as author, article’s argument, publisher, or article’s sources) seeded a revised information literacy instruction session for the course. To combat the historical problem with inconsistent library instruction in independent course sections, the authors mandated an information literacy instruction session across all sections to provide instructional consistency and to better address the source evaluation learning outcomes.

The session focused on teaching the “lateral reading” approach77 to evaluating the overall credibility of a news article. In addition to concentrating on the frame “authority is constructed and contextual,” the session sparked conversation related to other ACRL information literacy frames. Using a New York Times article about Serena Williams’s loss at the 2009 US Open, students were prompted to examine such cues as the journalist’s reporting expertise, sources, and argument, focusing specifically on the language used to describe Williams and her opponent. Researching these cues allowed students to experience “research as inquiry” and a “strategic exploration” that may have specific goals but also allows for serendipity in finding the best information. The students’ research processes involved watching replays of the match on YouTube, exploring a black feminist blog penned by academics, and skimming scholarly sources about the depiction of African American women, particularly Williams, in the media. Debating the difference between a scholarly blog and a journal article granted students the opportunity to better understand and question the creation process behind the different formats and how that process and its timeline factored into the scholarly and popular value of YouTube videos, news articles, scholarly blogs, and journal articles, depending on the information need at hand. Turning to the topic of “scholarship as conversation,” the class discussed how they could use the sources they had found to support their evaluation of the article and to challenge the authority of The New York Times, the journalist, and her argument. After successfully critiquing one of the most established newspapers in the country, students reported feeling empowered to evaluate a source’s credibility, despite their previous acceptance of the source’s authority. Student feedback indicated that the session equipped them with some of the skills and authority needed to enter a professional and scholarly conversation, which many undergraduates lack.78

Future Considerations

The successful use of the assessment as a source of formative and summative data suggests future uses and informs instruction. It may be beneficial to use the assessment on an ongoing basis throughout the semester. Ongoing formative assessment would supply more frequent feedback to students and better reveal the ebbs and flows of student understanding and misunderstanding.79 Armed with this information, instructors could better scaffold the various credibility cues and evaluation methods such as “lateral reading” throughout the semester and beyond.80 Doing so also may enable instructors to better locate students’ “stuck places” and provide responsive instruction to advance students beyond their “epistemological obstacles.”81 Instructors, for example, could offer responsive instruction in how to properly evaluate writing style, especially as it pertains to bias, and the possible social factors that influence students’ distrust of news.82

The assessment also can be used early in a curriculum to allow disciplinary and library instructors to scaffold instruction on specific information literacy skills throughout the remainder of the curriculum. The authors plan to use this assessment’s results to inform information literacy sessions in journalism courses that follow the information literacy course, such as media writing, research methods for strategic communications, and special topics. The assessment can be used in these subsequent courses to continually gauge student development. In addition, while the assessment discussed here focused on the ACRL frame “authority is constructed and contextual,” the redesigned information literacy session guided students through interrelated ACRL information literacy frames, suggesting that this assessment may be useful for determining student comprehension of information literacy concepts beyond “authority is constructed and contextual.”

A limitation of the assessment presented here is that it does not account explicitly for the accuracy of students’ evaluations. An evaluation’s accuracy is assumed to emerge in the process of researching and articulating the credibility of individual cues. The assessment, however, does not interrogate the completeness of the research that students conduct on each cue, nor does it include a score for the accuracy of an evaluation at the cue or overall source level. As some of the excerpts from student evaluations illustrate, evaluation accuracy is not guaranteed, even when students provide evidence for their credibility arguments. In the future, it may be necessary to expand the assessment to include dimensions of accuracy and research depth.

Conclusion

This paper discusses the process used to develop a source credibility assessment for a journalism information literacy course and reports the results from using this instrument as formative and summative assessment in the one-semester course. Despite being developed for a journalism course, the assessment has utility beyond this discipline. Because it is rooted in the universal frame of “authority is constructed and contextual,” the assessment can be adapted to any setting in which students are expected to perform source evaluation by articulating what constitutes disciplinary authority and how well a source reflects this authority. While news articles served as the stimuli for students’ source evaluations in the instance reported here, nonjournalism instructors can ask their students to evaluate materials commonly used as information sources in their disciplines. Erin Daniels’ rubric and the derivative assessment presented here involve a general process of identifying indicators of authority within a source (called credibility cues here) and evaluating whether each indicator contributes to or detracts from the overall credibility of the source. This general process should be transferable across the disciplines, such that its use can inform instructors and improve information literacy instruction beyond journalism education.
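To make that transferability concrete, the following is a minimal, hypothetical sketch (in Python) of how an instructor might tabulate the cue-level judgments this process produces. The cue labels, the +1/0/-1 weighting, and the evidence flag are illustrative assumptions for this sketch only; they are not elements of Daniels’ rubric or of the assessment described in this paper.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CueEvaluation:
    """One credibility cue identified by a student, with the student's judgment."""
    cue: str            # e.g., "reporter's beat expertise" (hypothetical label)
    judgment: int       # +1 contributes to credibility, -1 detracts, 0 neutral
    has_evidence: bool  # whether the student supported the judgment with evidence


def summarize(evaluations: List[CueEvaluation]) -> Dict[str, float]:
    """Tally cue-level judgments into simple summary measures for one evaluation."""
    total = len(evaluations)
    supported = sum(1 for e in evaluations if e.has_evidence)
    net_credibility = sum(e.judgment for e in evaluations)
    return {
        "cues_identified": total,
        "share_with_evidence": supported / total if total else 0.0,
        "net_credibility": net_credibility,
    }


# Hypothetical example: a student evaluates a news article on three cues.
student_work = [
    CueEvaluation("reporter's beat expertise", +1, True),
    CueEvaluation("reliance on unnamed sources", -1, False),
    CueEvaluation("publication's track record", +1, True),
]
print(summarize(student_work))
# -> cues_identified: 3, share_with_evidence: ~0.67, net_credibility: 1
```

Such a tally mirrors the two dimensions discussed above, the range of cues students identify and how often they support their judgments with evidence, and could be extended with an accuracy dimension if the assessment were expanded as suggested.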

Notes

1. Matt Carlson and Bob Franklin, Journalists, Sources, and Credibility: New Perspectives (New York, NY: Routledge, 2011).

2. Jefferson Spurlock, “Why Journalists Lie: The Troublesome Times for Janet Cooke, Stephen Glass, Jayson Blair, and Brian Williams,” ETC: A Review of General Semantics 73, no. 1 (2016): 71–76.

3. Accrediting Council on Education in Journalism and Mass Communications, “ACEJMC Accrediting Standards,” section 3, last modified September 2013, http://acejmc.ku.edu/PROGRAM/STANDARDS.SHTML.

4. Annmarie B. Singh, “A Report on Faculty Perceptions of Students’ Information Literacy Competencies in Journalism and Mass Communication Programs: The ACEJMC Survey,” College & Research Libraries 66, no. 4 (2005): 294–311.

5. Katelyn Angell and Eamon Tewell, “Teaching and Un-Teaching Source Evaluation: Questioning Authority in Information Literacy Instruction,” Communications in Information Literacy 11, no. 1 (2017): 95–121; Erin Daniels, “Using a Targeted Rubric to Deepen Direct Assessment of College Students’ Abilities to Evaluate the Credibility of Sources,” College & Undergraduate Libraries 17, no. 1 (2010): 31–43, https://doi.org/10.1080/10691310903584767; Karen R. Diller and Sue F. Phelps, “Learning Outcomes, Portfolios, and Rubrics, Oh My! Authentic Assessment of an Information Literacy Program,” portal: Libraries and the Academy 8, no. 1 (2008): 75–89, https://doi.org/10.1353/pla.2008.0000; Jos van Helvoort, “A Scoring Rubric for Performance Assessment of Information Literacy in Dutch Higher Education,” Journal of Information Literacy 4, no. 1 (2010): 22–39, https://doi.org/10.11645/4.1.1256; Debra Hoffmann and Kristen LaBonte, “Meeting Information Literacy Outcomes: Partnering with Faculty to Create Effective Information Literacy Assessment,” Journal of Information Literacy 6, no. 2 (2012), 70–85, https://doi.org/10.11645/6.2.1615; Iris Jastram, Danya Leebaw, and Heather Tompkins, “Situating Information Literacy within the Curriculum: Using a Rubric to Shape a Program,” portal: Libraries and the Academy 14, no. 2 (2014): 165–86, https://doi.org/10.1353/pla.2014.0011; Lorrie A. Knight, “Using Rubrics to Assess Information Literacy,” Reference Services Review 34, no. 1 (2006): 43–55, https://doi.org/10.1108/00907320610640752; Davida Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” Journal of Academic Librarianship 33, no. 4 (2007): 462–78, https://doi.org/10.1016/j.acalib.2007.03.005; Lara Ursin, Elizabeth Blakesley Lindsay, and Corey M. Johnson, “Assessing Library Instruction in the Freshman Seminar: A Citation Analysis Study,” Reference Services Review 32, no. 3 (2004): 284–92, https://doi.org/10.1108/00907320410553696; Dorothy Anne Warner, “Programmatic Assessment of Information Literacy Skills Using Rubrics,” Journal on Excellence in College Teaching 20, no. 1 (2009): 149–65.

6. Association of College & Research Libraries [ACRL], “Information Literacy Competency Standards for Journalism Students and Professionals,” American Library Association, last modified October 2011, http://www.ala.org/acrl/sites/ala.org.acrl/files/content/standards/il_journalism.pdf.

7. Adam J. Kuban and Laura MacLeod Mulligan, “Screencasts and Standards: Connecting an Introductory Journalism Research Course with Information Literacy,” Communication Teacher 28, no. 3 (2014): 188–95, https://doi.org/10.1080/17404622.2014.911335; Margy Elizabeth MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice: A Long-Term Study of Journalism Students,” Journal of Information Literacy 8, no. 2 (2014): 3–12, https://doi.org/10.11645/8.2.1941; Carol Perruso Brown and Barbara Kingsley‐Wilson, “Assessing Organically: Turning an Assignment into an Assessment,” Reference Services Review 38, no. 4 (November 16, 2010): 536–56.

8. Sarah McGrew, “Learning to Evaluate: An Intervention in Civic Online Reasoning,” Computers & Education 145 (February 2020): 144–45, https://doi.org/10.1016/j.compedu.2019.103711.

9. Sarah McGrew et al., “Can Students Evaluate Online Sources: Learning from Assessments of Civic Online Reasoning,” Theory & Research in Social Education 46, no. 2 (January 8, 2018): 165–93, https://doi.org/10.1080/00933104.2017.1416320.

10. ACRL, Framework for Information Literacy for Higher Education; Amy R. Hofer, Lori Townsend, and Korey Brunetti, “Troublesome Concepts and Information Literacy: Investigating Threshold Concepts for IL Instruction,” portal: Libraries & The Academy 12, no. 4 (2012): 398–99, https://doi.org/10.1353/pla.2012.0039; Lori Townsend, Korey Brunetti, and Amy R. Hofer, “Threshold Concepts and Information Literacy,” portal: Libraries and the Academy 11, no. 3 (2011): 17–19, https://doi.org/10.1353/pla.2011.0030; Lori Townsend et al., “Identifying Threshold Concepts for Information Literacy: A Delphi Study,” Communications in Information Literacy 10, no. 1 (2016): 33–34, https://files.eric.ed.gov/fulltext/EJ1103398.pdf.

11. Wynne Harlen and Mary James, “Assessment and Learning: Differences and Relationships Between Formative and Summative Assessment,” Assessment in Education 4, no. 3 (1997): 365–79, https://doi.org/10.1080/0969594970040304; Mantz Yorke, “Formative Assessment in Higher Education: Moves Toward Theory and the Enhancement of Pedagogic Practice,” Higher Education 45 (2003): 477–501.

12. Daniels, “Using a Targeted Rubric,” 34–38.

13. Grant Wiggins and Jay McTighe, Understanding by Design, 2nd ed. (Alexandria, VA: Association for Supervision and Curriculum Development, 2005), 152–57.

14. Wiggins and McTighe, Understanding by Design, 183.

15. Alison J. Head and Michael B. Eisenberg, “Lessons Learned: How College Students Seek Information in the Digital Age” (Project Information Literacy Progress Report, University of Washington Information School, December 1, 2009): 32–35, http://www.projectinfolit.org/uploads/2/7/5/4/27541717/pil_fall2009_finalv_yr1_12_2009v2.pdf.

16. Lea Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” New Library World 111, no. 3/4 (2010): 113–24, https://doi.org/10.1108/03074801011027628.

17. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 95–121; Alison J. Head and Michael B. Eisenberg, “Truth Be Told: How College Students Evaluate and Use Information in the Digital Age” (Project Information Literacy Progress Report, University of Washington Information School, November 1, 2010), http://www.projectinfolit.org/uploads/2/7/5/4/27541717/pil_fall2010_survey_fullreport1.pdf.

18. Sam Wineburg et al., “Evaluating Information: The Cornerstone of Civic Online Reasoning” (Stanford History Education Group, Graduate School of Education Open Archive, November 22, 2016), http://purl.stanford.edu/fv751yt5934.

19. Arthur Taylor and Heather A. Dalal, “Information Literacy Standards and the World Wide Web: Results from a Student Survey on Evaluation of Internet Information Sources,” Information Research 19, no. 4 (2014).

20. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 104–07; Head and Eisenberg, “Truth Be Told,” 10.

21. Alison J. Head et al., “How Students Engage with News: Five Takeaways for Educators, Journalists, and Librarians” (Project Information Literacy Research Institute, October 16, 2018): 13–16, http://www.projectinfolit.org/uploads/2/7/5/4/27541717/newsreport.pdf.

22. J. Patrick Biddix, Chung Joo Chung, and Han Woo Park, “Convenience or Credibility? A Study of College Student Online Research Behaviors,” The Internet and Higher Education 14, no. 3 (July 2011): 175–82, https://doi.org/10.1016/j.iheduc.2011.01.003.

23. Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 5; Jason Martin, “The Information Seeking Behavior of Undergraduate Education Majors: Does Library Instruction Play a Role?” Evidence Based Library and Information Practice 3, no. 4 (2008), https://journals.library.ualberta.ca/eblip/index.php/EBLIP/article/view/1838/3696.

24. Head and Eisenberg, “Truth Be Told,” 10–12; Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 122–23.

25. Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 122–23.

26. David Perkins, “The Many Faces of Constructivism,” Educational Leadership 57, no. 3 (1999): 6.

27. Perkins, “The Many Faces of Constructivism,” 8–10.

28. Jan H.F. Meyer and Ray Land, “Threshold Concepts and Troublesome Knowledge 1: Linkages to Ways of Thinking and Practising within the Disciplines,” in Improving Student Learning: Ten Years On, ed. Chris Rust (Oxford, England: Centre for Staff & Learning Development, 2003): 1–16, https://www.dkit.ie/system/files/Threshold_Concepts__and_Troublesome_Knowledge_by_Professor_Ray_Land_0.pdf; Jan H.F. Meyer and Ray Land, “Threshold Concepts and Troublesome Knowledge (2): Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” Higher Education 49, no. 3 (2005): 373–88, https://doi.org/10.1007/s10734-004-6779-5.

29. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 1–5.

30. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 1–5.

31. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 6.

32. Glynis Cousin, “Threshold Concepts: Old Wine in New Bottles or a New Form of Transactional Curriculum Inquiry?” in Threshold Concepts within the Disciplines, eds. Ray Land, Jan H.F. Meyer, and Jan Smith (Rotterdam, The Netherlands: Sense Publishers, 2008); Mick Flanagan, “Threshold Concepts: Undergraduate Teaching, Postgraduate Training, Professional Development and School Education: A Short Introduction and a Bibliography,” last modified October 10, 2018, https://www.ee.ucl.ac.uk/~mflanaga/thresholds.html; Threshold Concepts and Transformational Learning, eds. Jan H.F. Meyer, Ray Land, and Caroline Baillie (Rotterdam, The Netherlands: Sense Publishers, 2010); Threshold Concepts within the Disciplines, eds. Ray Land, Jan Meyer, and Jan Smith (Rotterdam, The Netherlands: Sense Publishers, 2008).

33. Ann Harlow et al., “‘Getting Stuck’ in Analogue Electronics: Threshold Concepts as an Explanatory Model,” European Journal of Engineering Education 36, no. 5 (2011): 435–47, https://doi.org/10.1080/03043797.2011.606500.

34. Paul D. Williams, “What’s Politics Got to Do with It? ‘Power’ as a ‘Threshold’ Concept for Undergraduate Business Students,” Australian Journal of Adult Learning 54, no. 1 (2014): 8–29, http://files.eric.ed.gov/fulltext/EJ1031000.pdf.

35. Glen Fuller, “Enthusiasm for Making a Difference: Adapting Data Journalism Skills for Digital Campaigning,” Asia Pacific Media Educator 28, no. 1 (2018): 112–23, https://doi.org/10.1177/1326365X18768134.

36. ACRL, Framework; Hofer, Townsend, and Brunetti, “Troublesome Concepts and Information Literacy,” 398–99; Townsend, Brunetti, and Hofer, “Threshold Concepts and Information Literacy,” 17–19; Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 33–34.

37. ACRL, Framework.

38. ACRL, Framework.

39. ACRL, Framework.

40. Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 34.

41. Alyssa Appleman and S. Shyam Sundar, “Measuring Message Credibility: Construction and Validation of an Exclusive Scale,” Journalism and Mass Communication Quarterly 93, no. 1 (2015): 59–79, https://doi.org/10.1177/1077699015606057.

42. MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice,” 3–12.

43. Meyer and Land, “Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” 377–83.

44. Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 33.

45. ACRL, Framework.

46. MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice,” 18.

47. Megan Oakleaf, “A Roadmap for Assessing Student Learning Using the New Framework for Information Literacy for Higher Education,” Journal of Academic Librarianship 40, no. 5 (2014): 510–14, https://doi.org/10.1016/j.acalib.2014.08.001.

48. Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 13; Townsend et al., “Identifying Threshold Concepts for Information Literacy,” 34; Townsend, Brunetti, and Hofer, “Threshold Concepts and Information Literacy,” 18–19.

49. Currie et al., “Undergraduate Search Strategies and Evaluation Criteria,” 122–23.

50. Oakleaf, “A Roadmap for Assessing Student Learning,” 513.

51. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 95–121; Daniels, “Using a Targeted Rubric,” 31–43; Diller and Phelps, “Learning Outcomes, Portfolios, and Rubrics, Oh My!” 75–89; van Helvoort, “A Scoring Rubric for Performance Assessment of Information Literacy,” 22–39; Hoffmann and LaBonte, “Meeting Information Literacy Outcomes,” 70–85; Jastram, Leebaw, and Tompkins, “Situating Information Literacy within the Curriculum,” 165–86; Knight, “Using Rubrics to Assess Information Literacy,” 43–55; Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” 462–78; Ursin, Lindsay, and Johnson, “Assessing Library Instruction in the Freshman Seminar,” 284–92; Warner, “Programmatic Assessment of Information Literacy Skills Using Rubrics,” 149–65.

52. ACRL, “Information Literacy Competency Standards for Higher Education,” American Library Association, last modified January 18, 2000, http://www.ala.org/Template.cfm?Section=Home&template=/ContentManagement/ContentDisplay.cfm&ContentID=33553; Knight, “Using Rubrics to Assess Information Literacy,” 47–48; Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” 473–75; Warner, “Programmatic Assessment of Information Literacy Skills Using Rubrics,” 151.

53. Katelyn Angell, “Using Quantitative Methods to Determine the Validity and Reliability of an Undergraduate Citation Rubric,” Qualitative and Quantitative Methods in Libraries 4 (2015): 755–65; Laura W. Gariepy, Jennifer A. Stout, and Megan L. Hodge, “Using Rubrics to Assess Learning in Course-Integrated Library Instruction,” portal: Libraries and the Academy 16, no. 3 (2016): 491–509, https://doi.org/10.1353/pla.2016.0043.

54. van Helvoort, “A Scoring Rubric for Performance Assessment of Information Literacy,” 38–39; Jastram, Leebaw, and Tompkins, “Situating Information Literacy within the Curriculum,” 181–83.

55. Daniels, “Using a Targeted Rubric,” 34–38.

56. Harlen and James, “Assessment and Learning,” 370–75; Yorke, “Formative Assessment in Higher Education,” 478–80.

57. Daniels, “Using a Targeted Rubric,” 35.

58. MacMillan, “Fostering the Integration of Information Literacy and Journalism Practice,” 8–14.

59. Daniels, “Using a Targeted Rubric,” 35–36.

60. Daniels, “Using a Targeted Rubric,” 36.

61. Megan Oakleaf, “Using Rubrics to Assess Information Literacy: An Examination of Methodology and Interrater Reliability,” Journal of the American Society for Information Science and Technology 60, no. 5 (2009): 969–83; Wiggins and McTighe, Understanding by Design, 188–89.

62. Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski, The Psychology of Survey Response (New York, NY: Cambridge University Press, 2000), 9–61.

63. Daniels, “Using a Targeted Rubric,” 38.

64. Oakleaf, “Using Rubrics to Assess Information Literacy,” 971–72.

65. Mary L. McHugh, “Interrater Reliability: The Kappa Statistic,” Biochemia Medica 22, no. 3 (2012): 276–82.

66. Head et al., “How Students Engage with News,” 19.

67. Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 104–07; Head and Eisenberg, “Truth Be Told,” 9–12; Head et al., “How Students Engage with News,” 24–28.

68. Wineburg et al., “Evaluating Information”; Sam Wineburg and Sarah McGrew, “Lateral Reading: Reading Less and Learning More When Evaluating Digital Information,” SSRN Scholarly Paper No. ID 3048994 (Rochester, NY: Social Science Research Network, 2017), https://papers.ssrn.com/abstract=3048994.

69. Head et al., “How Students Engage with News,” 20–22, figure 7.

70. Head et al., “How Students Engage with News,” 13–15, figure 4.

71. ACRL, Framework; Angell and Tewell, “Teaching and Un-Teaching Source Evaluation,” 104–07; Head and Eisenberg, “Truth Be Told,” 9–18; Head et al., “How Students Engage with News,” figure 7.

72. Sarah Blakeslee, “The CRAAP Test,” LOEX Quarterly 31, no. 3 (2004): 6–7, https://commons.emich.edu/loexquarterly/vol31/iss3/4/; Mike Caulfield, “A Short History of CRAAP,” Hapgood, last modified September 15, 2018; Maddie Crum, “After Trump Was Elected, Librarians Had to Rethink Their System for Fact-Checking,” Huffington Post, March 9, 2017; Kevin Seeber, “Wiretaps and CRAAP,” Kevin Seeber / MLIS, last modified March 18, 2017; Wineburg and McGrew, “Lateral Reading,” 44–46; Head et al., “How Students Engage with News,” 24–28, 31–35.

73. Head et al., “How Students Engage with News,” quote, 24, 24–28, 31–35.

74. Perkins, “The Many Faces of Constructivism,” 8–9; Meyer and Land, “Linkages to Ways of Thinking and Practising within the Disciplines,” 6–7; Caulfield, “A Short History of CRAAP.”

75. Wiggins and McTighe, Understanding by Design, 39–40, 340; Yu-Mei Wang and Marge Artero, “Caught in the Web: University Student Use of Web Resources,” Educational Media International 42, no. 1 (2005): 71–82, https://doi.org/10.1080/09523980500116670; Wineburg and McGrew, “Lateral Reading.”

76. Head et al., “How Students Engage with News,” 31–35; Alison King, “From Sage on the Stage to Guide on the Side,” College Teaching 41, no. 1 (1993): 30–35, https://www.jstor.org/stable/27558571; Wineburg and McGrew, “Lateral Reading,” 39–46.

77. Wineburg et al., “Evaluating Information”; Wineburg and McGrew, “Lateral Reading.”

78. Gloria J. Leckie, “Desperately Seeking Citations: Uncovering Faculty Assumptions about the Undergraduate Research Process,” Journal of Academic Librarianship 22, no. 3 (1996): 201–08, https://doi.org/10.1016/S0099-1333(96)90059-2.

79. Wiggins and McTighe, Understanding by Design, 169.

80. Wineburg and McGrew, “Lateral Reading.”

81. Meyer and Land, “Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” 377.

82. Meyer and Land, “Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” 377–79; Head et al., “How Students Engage with News,” 13–15, figure 4.

*Piotr S. Bobkowski is Associate Professor at the University of Kansas; email: bobkowski@ku.edu. Karna Younger is Open Pedagogy Librarian and Assistant Librarian at University of Kansas Libraries; email: karna@ku.edu. ©2020 Piotr S. Bobkowski and Karna Younger, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.
