
Making Sense of Student Source Selection: Using the WHY Method to Analyze Authority in Student Research Bibliographies

In a follow-up to a pilot study published in 2019, the authors collected student research papers from English Composition II courses at three public comprehensive universities from different regions in the United States to classify and compare the sources selected by students at each institution. Working with a representative sample of 712 bibliographic references, the authors used a research-tested taxonomy called The WHY Method to classify each source by three key attributes—Who wrote each source, How it was edited, and whY it was published. The results of this cross-institutional study indicate that student source selection is affected most powerfully by the variables of which institution a student attends, student age, and whether the student is a first-generation university student. Student GPA, gender, class ranking (freshman, sophomore, and so on), and ethnicity were not statistically predictive factors. This study establishes the importance of institutional context in how students construct authority and provides librarians with a tool that enables them to better understand and describe that context.

Introduction

For at least the past two decades, a dominant discussion among librarians has been how to adequately prepare our students to navigate and engage in an increasingly complex information environment. In 2015, the Association of College and Research Libraries (ACRL) established a set of core ideas to guide information literacy education efforts, known as the ACRL Framework for Information Literacy for Higher Education.1 Among the core ideas of the Framework is the statement that “Authority Is Constructed and Contextual.”2 In other words, a credential or editorial process that might be considered authoritative in one setting might not be considered authoritative in another.

Since 2015, librarians have continued to teach source authority in response to the Framework and to the perceived needs of their students, though at times without a thorough understanding of the information landscape as seen through students’ eyes. The Framework acknowledges the gap that exists between how expert and novice researchers construct source authority. As librarians, our professional expertise involves identifying, using, and promoting sources we have deemed credible, especially academic sources. Moreover, we are at least several years removed from the experience of being novice researchers ourselves.

Viewing the information landscape through the eyes of a novice researcher has vital implications for our practice. Achieving a more granular and consistent understanding of the sources students cite in their bibliographies helps us to identify student information needs and design more responsive collections. Furthermore, the Framework calls on librarians and university instructors to support college students in developing knowledge practices that can be adapted flexibly to new contexts.3 Knowledge of the first-year student’s research environment allows us to use their subsequent time as university students to prepare them to select both institutionally recognized authorities and credible nontraditional sources online. Most of our students will graduate and leave the academy, losing access to our curated, subscription-based resources. Our students require informed instructors who can provide guidance in how to select credible resources from the myriad sources they will encounter online.

This study aims to expand the profession’s understanding of how novice researchers construct authority. It puts forward The WHY Method as a reliable, research-validated, and format-neutral tool that other librarians might use to examine the kinds of authorities that students at their own institutions are selecting.4 The research team has found that The WHY Method is particularly helpful in making sense of the many nontraditional online sources that students encounter in their research and elect to cite in their research papers. This application of The WHY Method sheds light on some of the unexpected ways in which students confer authority through their research papers.

Background

This research project began several years ago when a team of three librarians wished to add to the profession’s understanding of the student research experience and, more specifically, the kinds of resources students view as authoritative. To pursue this research interest, the team gathered student research papers and their respective bibliographies written for numerous sections of a freshman-level English composition course from a public master’s-level university in the Midwestern United States. The team then analyzed the bibliographies using a validated, Framework-friendly classification scheme originally developed by Leeder, Markey, and Yakel.5 The Leeder, Markey, and Yakel taxonomy was selected because it provided simple, value-neutral criteria to classify all manner of sources, especially nontraditional sources available online. The taxonomy also has the benefit of being research-tested, both in Leeder, Markey, and Yakel’s original research and in subsequent studies that have used the taxonomy to classify sources discovered by students in response to predesigned search tasks.6

After selecting this taxonomy, the research team chose to make some adaptations to the method, in keeping with the intention of Leeder, Markey, and Yakel that the taxonomy be “flexible enough to be modified in many small ways without losing its overall integrity.”7 To minimize bias, the research team chose not to employ the taxonomy’s subjective category scoring calculations. The research team also set aside facets describing source format and genre, which conflicted with the Framework’s position that authoritative content “may include sources of all media types.”8 The research team left the subfacet categories from the original taxonomy mostly unaltered, although a Category Z was added to each facet to designate sources that the research team was unable to verify, either because of an incomplete student citation or because an online source could not be found.

The research team’s use of this modified taxonomy to analyze student bibliographies in the pilot study produced a number of promising results.9 Perhaps the most significant of those results was a high degree of homogeneity in source selection by freshmen in the sample population.10 This homogeneity was consistent in the sample regardless of student demographics, such as age, gender, class ranking, or student GPA.11 However, as this was a pilot study, the results were not conclusive, thus spurring the research team to engage in the subsequent research published in this study.

Following the publication of the pilot study’s results, the research team made further modifications to the taxonomy in an attempt to improve interrater reliability and to make the taxonomy easier to apply in a variety of instructional and assessment contexts. Scope notes for many of the subfacets were made more detailed and explicit. The facets were renamed from Leeder, Markey, and Yakel’s 3, 4, and 5 to W, H, and Y, given the research team’s interest in developing the model to be of use in instructional contexts with students, as well as being valuable for assessment and research efforts outside the classroom. The research team then named the resulting modified taxonomy The WHY Method for easy reference.

The WHY Method allows librarians to describe a wide variety of nontraditional sources with precision and replicability. The WHY Method focuses on three objective building blocks of source authority: 1) the professional or academic credentials of the person who wrote a piece in relation to its subject matter (Who); 2) the process by which a piece was edited and the professional or academic credentials of the editorial team (How); and 3) the reason a piece was published (whY). Each building block (author identity, editorial process, publication purpose) is divided into seven categories. The three building blocks in combination provide a description of source authority for analysis and discussion. The complete list of attributes in the classification system is available in the appendix.

Encouraged by the findings of the pilot study and by the development of The WHY Method, the research team gathered and analyzed a new set of English composition papers from that same Midwestern university, as well as papers from two other institutions from across the United States. This article will share broader and more generalizable observations about the kinds of authorities lower-level undergraduate students select for their research assignments.

Literature Review

Given that the Framework for Information Literacy for Higher Education sets forth the idea that “Authority Is Constructed and Contextual” as one of its six core concepts, the research team remained interested in exploring student constructions of authority.12 Although most published studies that have examined student source selection have chosen to use surveys or questionnaires for data collection, the team also chose to continue its focus on bibliographic analysis of authentic student work. Ivins argues that “the non-intrusive method of bibliographic analysis provides well-thought-out student data free from the influence of the researcher.”13 The research team concurs in that judgment, preferring to analyze student understandings of authority by examining the sources they include in the final draft of a research paper.

Researchers have conducted bibliographic analysis of student assignments in the past, using different methodologies than the approach employed by the authors of this study. Some have elected to sort sources into broad categories without implementing the kind of taxonomy developed by Leeder, Markey, and Yakel.14 Sources from the Web are often lumped together in these category schemes: Clarke and Oppenheim use a single category for “Internet and websites”; Flaspohler, Rux, and Flaspohler distinguish simply between “Internet: good” and “Internet: bad” without clarifying how those labels were determined; and Ivins focuses solely on the use of periodicals, placing all other source types into a single category together.15 For the research team’s purposes, this kind of analysis was not fine-grained enough to distinguish between the wide variety of authorities and sources available.

Studies that examine authentic student work have thus far paid little attention to student demographic categories such as age, race, gender, and first-generation status. Although the data they collected are not directly analogous to the data collected in this present study, Soria, Nackerud, and Peterson did examine age, gender, race, and first-generation status in their study of first-year college students.16 In their analysis of student use of books, databases, and electronic journals, they did not report significant effects associated with age or race.17 The effect of gender was significant in only the area of book and e-book use: female students were far more likely than male students to check out books or access e-books.18 First-generation status had more wide-ranging effects, as Soria, Nackerud, and Peterson reported that self-identified first-generation students were less likely to borrow books, read e-books, or access electronic journals.19 One of the aims of this present study, then, was to examine whether the trends apparent in their research, which focused on tracking student access to materials, would be evident in the bibliographies of student papers.

Some recently published work has demonstrated the value of analyzing student writing using a Framework-grounded qualitative approach: Hosier’s study and Dempsey and Jagman’s study have each helped to establish a clearer insight into the approach of the student writer.20 The research team felt that the strengths of The WHY Method taxonomy warranted its use in the current study, which could add to the qualitative findings of these previous studies in developing a Framework-centric understanding of student writing behavior.

Given the Framework’s focus on supporting “novice learners,” the research team has chosen to continue to work with student papers from first-year English composition courses. Material of this kind has been analyzed in several previous journal articles—in 2011, Cooke and Rosenthal’s study, and Watson’s study, each categorized student sources in first-year composition papers broadly by format.21 More recently, Chisholm and Spencer employed a Framework-grounded rubric to measure source engagement in first-year composition papers.22 Chisholm and Spencer’s analysis of their data suggested that finding relevant sources is not an area of significant student need and that the focus for librarians and instructors therefore needs to be on teaching students how to engage with the sources they have found, although Chisholm and Spencer acknowledge that the size of their sample limits generalizability.23

This research team’s pilot study reached a similar conclusion, noting that academic materials were widely used by students, while cautioning that students still made extensive use of nontraditional sources and suggesting that librarians and instructors would need to teach students how to assess the authority value of a source.24 That pilot study, though, focused on a smaller population of students at a single institution, which likewise limited the generalizability of the findings.25 One limitation of these studies, and of all the studies in this literature review, is that data collection was limited to a single college or university. In identifying the limitations of their own pilot study, the research team felt that broadening the scope of analysis to collect student work from multiple universities was critically important. Therefore, one aim of the current study is to see what new insights can be gained from cross-institutional comparison.

Research Questions

Given this analysis of the available literature, and taking into consideration the research team’s pilot study, the current study was designed to answer the following research questions:

  • How do students in first-year English composition classes construct and contextualize authority as expressed in their respective papers’ sources?
  • To what extent does this construction and contextualization of authority differ, if at all, among 4-year postsecondary institutions?
  • To what extent does this construction and contextualization of authority differ, if at all, among student demographic categories?

Study Methodology

This study’s methodology was reviewed and approved by the Institutional Review Board at each author’s university. As recruitment required extensive coordination with English composition programs, the respective universities were selected for study based on the research team’s strong pre-existing relationships with these institutions. Midwest University (MWU), also the sole research site for the pilot study, has an undergraduate enrollment of roughly 5,600 FTE and has a Carnegie classification of M1 (Master’s Colleges & Universities—Larger programs). Southeast University (SEU) has an undergraduate population of roughly 14,600 FTE and is classified as a D/PU (Doctoral/Professional Universities). Pacific Coast University (PCU) has an undergraduate enrollment of roughly 8,700 FTE and, like MWU, is classified as M1.

All three universities have in place a two-course composition sequence intended for first-year students, with an emphasis on research writing in the second course. At all three institutions, English composition is part of a centralized first-year writing program, under the purview of one or more faculty coordinators. MWU, SEU, and PCU all standardize certain elements of instruction, such as learning goals and objectives, while leaving individual section instructors with varying degrees of freedom to select course topics, set assignment requirements, and so forth. All three universities list the use of “credible” sources as a goal for student papers: SEU’s expectations ask only for students to use sources evaluated for their credibility, while MWU’s standard rubric calls for students to use a “wide variety” of credible sources, and PCU’s assignment description lists both “scholarly” and “credible” sources as options and indicates that the final paper should include at least 8 scholarly sources out of a minimum total of 10–12 scholarly and credible sources. None of these universities supply students with a standard description of what constitutes a “credible” (or “scholarly”) source, although individual section instructors at each institution likely gave additional instructions and descriptions beyond the respective university’s baseline standard.

Each author recruited student participants from face-to-face sections of the second English composition course in the spring of 2019: recruitment was conducted in person, either at the beginning or end of class by arrangement with the instructor. Students were informed about the study and its purpose and were given time to read, sign, and return the consent form if they chose to do so.

Students who elected to participate completed a short demographic survey self-identifying their gender, ethnicity, and status as a first-generation student. They also agreed to give the research team access to other demographic data already held by their university—their age, their cumulative GPA, and their class standing—as well as the final draft of the research paper they submitted for the English composition course. At the end of the academic term, composition instructors supplied the research team with an ungraded electronic copy of the participants’ final research papers. To protect student and instructor privacy, the research team redacted all personally identifiable information from each paper, replacing that information with anonymous section and student identifiers.

Sampling

The research team selected digitized copies of the anonymized papers through systematic sampling, a probability sampling method that approximates simple random sampling while being more efficient and less laborious.26 To ensure student anonymity, all papers from each university were organized in order based on a special code that identified the university, the respective English composition class section, and the order in which the paper was received during collection efforts. Once the papers were placed in order based on these codes, every nth paper was selected.

The references in each of the collected English composition papers are the primary unit of analysis, as the references, and not the papers themselves, are the entity coded using The WHY Method’s coding facets. The research team collected 167 English composition papers from PCU, 19 papers from SEU, and 53 papers from MWU. Ten papers from each university were selected randomly to calculate an average number of references per paper at each institution. There were, on average, 9.1 references per PCU paper, 7 references per SEU paper, and 14.7 references per MWU paper. This means that there would be an estimated 1,519.7 PCU references, 133 SEU references, and 779.1 MWU references in the population of references, totaling approximately 2,431, from which the research team could draw its sample.

As researching nearly 2,500 references would be prohibitively time-consuming given the coding rigor applied by the research team, the authors determined that a representative systematic sample of references with 95% confidence and a +/–5% margin of error would suffice to address the team’s research questions. To reach this level of confidence, PCU would need to contribute at least 307 references from 35 of its English composition papers (= 8.77 references per paper), SEU 99 references from 15 papers (= 6.6 references per paper), and MWU 258 references from 20 papers (= 12.9 references per paper). The final contribution numbers for the sample slightly exceeded the original sampling size goal, with 318 references for PCU (= 9.08 references per paper), 100 for SEU (= 6.67 references per paper), and 294 for MWU (= 14.7 references per paper) collected. The average number of references per paper used for the research team’s final analysis was quite close to the average calculated from the original 10 papers sampled from each university, suggesting that the sampling method introduced minimal sampling error.
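The per-institution targets above are consistent with Cochran’s sample-size formula with a finite population correction, although the article does not name the formula used; a minimal sketch under that assumption:

```python
import math

def sample_size(population, z=1.96, p=0.5, moe=0.05):
    """Cochran sample size with finite population correction.

    z: z-score for 95% confidence; p: assumed proportion (0.5 is the
    most conservative choice); moe: margin of error.
    """
    n0 = (z ** 2) * p * (1 - p) / moe ** 2          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Estimated reference populations from the ten-paper averages
print(sample_size(1519.7))  # PCU -> 307
print(sample_size(133))     # SEU -> 99
print(sample_size(779.1))   # MWU -> 258
```

Run against the estimated reference populations, the formula reproduces the 307, 99, and 258 per-institution minimums reported above.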

Coding

Coding the 712 references in the sample was a two-part process. First, two members of the research team attempted to locate each reference source online or, for print materials, by locating the appropriate WorldCat record. The members then sought and recorded documentation establishing the credentials of authors, the editorial process and the credentials of editorial staff, and the publication purpose for each reference. Frequently used sources during this process included the Wayback Machine, biographies on ResearchGate and LinkedIn, dissertation records in WorldCat, publication mastheads, and 501(c)(3) databases such as GuideStar.org. The research team accepted every claim of authority at face value (such as self-reported work history on LinkedIn) in order to describe the information landscape as it presents itself to students. They then recorded the documented evidence of authority (such as a link to a dissertation, the webpage of an editorial board, or a donation form with a claim of 501(c)(3) status) as persistent links. With the use of online tools such as the Wayback Machine and WorldCat, the two team members were able to document the authority of 98.88 percent of the sample.

Once relevant information for each source had been documented, the two team members proceeded to the second step of the process and classified each of the 712 student references according to The WHY Method taxonomy (see appendix). This process involved viewing the gathered documentation, comparing this information to the definitions and scope notes in the taxonomy, and selecting the appropriate subfacet for each of the three categories. The two team members met regularly through video conferences to discuss their completed codes, make refinements to the scope notes, and reach 100 percent agreement before they forwarded their documentation and the updated taxonomy to the third team member and a graduate assistant to test coding validity.

A second systematic sample of the references was selected for coding by the third author and by a student graduate assistant who had no formal library science training. Every 7th reference was selected from the sample of references, meaning that 100 references were coded a second time. Krippendorff’s alpha was used to calculate the rate of intercoder reliability between the original codes assigned by the two authors and the codes assigned by the third author and the graduate assistant, each of whom coded alone. For attribute W (Who/Author), the agreement coefficient was 0.8825; attribute H (How/Editorial process) was 0.9496; and attribute Y (whY/Publication purpose) was 0.9035. A coefficient of 0.7 is considered an acceptable cut-off of reliability.27
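For nominal data such as these codes, Krippendorff’s alpha can be computed directly from a coincidence matrix. The sketch below illustrates the statistic itself and is not the research team’s actual tooling:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(*coders):
    """Krippendorff's alpha for nominal data, one rating per coder per unit.

    Assumes at least two distinct categories appear across the data.
    """
    units = list(zip(*coders))                  # one tuple of codes per reference
    coincidence = Counter()
    for unit in units:
        m = len(unit)
        for a, b in permutations(unit, 2):      # ordered pairs within a unit
            coincidence[(a, b)] += 1 / (m - 1)
    n = sum(coincidence.values())
    totals = Counter()
    for (a, _), w in coincidence.items():
        totals[a] += w
    d_observed = sum(w for (a, b), w in coincidence.items() if a != b) / n
    d_expected = sum(totals[a] * totals[b]
                     for a, b in permutations(totals, 2)) / (n * (n - 1))
    return 1 - d_observed / d_expected

# One disagreement across five doubly coded references
print(round(krippendorff_alpha_nominal("AABBA", "AABBB"), 4))  # -> 0.64
```

Perfect agreement yields an alpha of 1.0, and values fall as disagreement grows relative to chance, which is what makes the reported coefficients of 0.88 to 0.95 strong results against the 0.7 cut-off.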

Findings

Below are the descriptive and inferential statistical findings of the demographic data collected from the English composition students along with the reference source data from the students’ respective course papers. Unless specified otherwise, inferential statistical tests are parametric, with a focus on the mean as a measure of central tendency for all normally distributed interval and ratio variables.

The overall mean age of students in the sample enrolled in all English composition classes participating in the study was 20.33 years old. The majority, 73.1 percent, of participants were freshmen and 19.1 percent were sophomores, while the remainder were juniors and seniors. Based on gender, 70 percent of the student respondents identified as female and 30 percent as male. Of these students, 40 percent identified themselves as first-generation university students: none of the universities in the study had significantly higher or lower populations of first-generation students. The calculated mean university grade point average (GPA) for all participants was 3.35. An ANOVA test, which compares means across multiple independent samples at once, thereby increasing the power of the statistical test, reveals a significant difference in the mean GPAs of these students across the three universities participating in this study (F = 3.63, p < 0.05). The English composition students from SEU had, on average, significantly higher GPAs (x̅ = 3.57) than those at either PCU (x̅ = 3.2574) or MWU (x̅ = 3.1445): none of the other demographic variables collected offer any apparent explanation for this difference. Table 1 shows the ethnic origin of all student participants in the sample.
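The ANOVA comparison above can be illustrated with a minimal F-statistic calculation; the GPA values below are synthetic, not the study data:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group variance over within-group."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Three small synthetic GPA samples, one per hypothetical institution
print(round(one_way_anova_f([3.5, 3.6, 3.7], [3.1, 3.3, 3.2], [3.0, 3.2, 3.4]), 2))
```

A larger F indicates that the variation between institutional means outweighs the variation within each institution, which is what drives the significant result reported above.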

TABLE 1

Ethnicity of English Composition Student Participants

Ethnicity                 | Frequency | Valid Percent | Cumulative Percent
White                     | 41        | 59.4          | 59.4
Hispanic                  | 13        | 18.8          | 78.3
Asian                     | 7         | 10.1          | 88.4
Two or more ethnicities   | 4         | 5.8           | 94.2
Black or African American | 3         | 4.3           | 98.6
Pacific Islander          | 1         | 1.4           | 100.0
Total                     | 69        | 100.0         |

English Composition Papers’ Characteristics

The mean number of references per paper is highly variable. On average, each paper across institutions has 10.1 references (sd = 6.438) with a heavy positive skew, and the median number of references per paper is 9. This non-normal distribution of references requires a focus on the median as the measure of central tendency and the use of nonparametric tests for analysis. A Kruskal-Wallis test shows a significant difference among the institutions in the median number of references per English composition paper (H = 15.086, p < 0.01). MWU (x̅ = 14.7, Mdn = 11.5) and PCU (x̅ = 8.94, Mdn = 9) had significantly higher median numbers of references per paper than SEU (x̅ = 6.67, Mdn = 7) (PCU vs. SEU, H = 15.243, p < 0.05; MWU vs. SEU, H = –26.875, p = 0.00) (see figure 1). There was no significant difference in the median number of references per paper between PCU and MWU. Nevertheless, MWU had the greatest number of outlier papers in terms of total references, as figure 1 demonstrates.
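Because the Kruskal-Wallis test operates on pooled ranks rather than raw values, it is robust to the positive skew described above. A minimal sketch of the H statistic (no tie correction), using synthetic per-paper reference counts:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction), computed on pooled ranks."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # assumes no tied values
    n = len(pooled)
    rank_sums = [sum(rank[x] for x in g) for g in groups]
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Synthetic per-paper reference counts for three hypothetical institutions
print(round(kruskal_wallis_h([12, 15, 18], [8, 9, 11], [5, 6, 7]), 4))  # -> 7.2
```

Because only ranks enter the calculation, the outlier papers visible in figure 1 cannot dominate the statistic the way they would dominate a mean-based test.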

FIGURE 1

Boxplot of Number of References per Paper across Universities


Table 2 reveals all of the attribute combination types that resulted from coding efforts in the 70 papers used for the sample. The appendix may be used to interpret these codes (for example, WFHFYF is a resource identified to have been written by an author with a relevant academic credential that has been peer reviewed and published in a higher education resource such as a journal). There are only 60 different attribute combinations (such as WXHXYX) that represent all of the references in the sample. Note that only eight attribute combinations represent 75 percent of all source types.

TABLE 2

Reference Attribute Combinations from the English Composition Papers from All Three Universities

Attribute Combination | Frequency | Percent | Cumulative Percent
WFHFYF | 287 | 40.3 | 40.3
WEHEYB | 104 | 14.6 | 54.9
WBHEYB | 37  | 5.2  | 60.1
WFHEYB | 31  | 4.4  | 64.5
WCHAYC | 21  | 2.9  | 67.4
WFHEYF | 21  | 2.9  | 70.4
WCHAYB | 17  | 2.4  | 72.8
WEHEYC | 16  | 2.2  | 75.0
WCHAYE | 13  | 1.8  | 76.8
WFHDYC | 12  | 1.7  | 78.5
WBHDYB | 11  | 1.5  | 80.1
WEHEYF | 9   | 1.3  | 81.3
WEHFYF | 9   | 1.3  | 82.6
WFHEYC | 9   | 1.3  | 83.8
WFHDYE | 8   | 1.1  | 85.0
WFHDYF | 8   | 1.1  | 86.1
WBHFYF | 7   | 1.0  | 87.1
WCHEYC | 6   | 0.8  | 87.9
WBHAYF | 5   | 0.7  | 88.6
WBHDYF | 5   | 0.7  | 89.3
WCHEYB | 5   | 0.7  | 90.0
WBHEYC | 4   | 0.6  | 90.6
WCHAYF | 4   | 0.6  | 91.2
WDHFYF | 4   | 0.6  | 91.7
WZHZYZ | 4   | 0.6  | 92.3
WCHDYB | 3   | 0.4  | 92.7
WEHDYC | 3   | 0.4  | 93.1
WEHEYD | 3   | 0.4  | 93.5
WFHDYB | 3   | 0.4  | 94.0
WZHAYF | 3   | 0.4  | 94.4
WAHFYF | 2   | 0.3  | 94.7
WBHAYA | 2   | 0.3  | 94.9
WBHEYF | 2   | 0.3  | 95.2
WCHEYD | 2   | 0.3  | 95.5
WCHEYF | 2   | 0.3  | 95.8
WCHFYF | 2   | 0.3  | 96.1
WDHDYC | 2   | 0.3  | 96.3
WDHEYB | 2   | 0.3  | 96.6
WFHAYB | 2   | 0.3  | 96.9
WFHEYD | 2   | 0.3  | 97.2
WAHAYB | 1   | 0.1  | 97.3
WAHCYC | 1   | 0.1  | 97.5
WAHDYB | 1   | 0.1  | 97.6
WAHDYF | 1   | 0.1  | 97.8
WBHAYB | 1   | 0.1  | 97.9
WBHDYC | 1   | 0.1  | 98.0
WBHEYA | 1   | 0.1  | 98.2
WBHEYD | 1   | 0.1  | 98.3
WCHEYE | 1   | 0.1  | 98.5
WDHDYB | 1   | 0.1  | 98.6
WDHEYC | 1   | 0.1  | 98.7
WDHEYD | 1   | 0.1  | 98.9
WEHAYA | 1   | 0.1  | 99.0
WEHAYB | 1   | 0.1  | 99.2
WEHDYE | 1   | 0.1  | 99.3
WEHDYF | 1   | 0.1  | 99.4
WEHEYE | 1   | 0.1  | 99.6
WEHFYC | 1   | 0.1  | 99.7
WFHEYE | 1   | 0.1  | 99.9
WZHEYB | 1   | 0.1  | 100.0
Total  | 712 | 100.0 |

Table 3 presents the eight most frequently occurring attribute combinations across all three universities, and it includes a translation of each combination. Tables 4, 5, and 6 show each individual university’s eight most frequently occurring attribute combinations, also with translations included. Because only eight attribute combinations represent more than 75 percent of all references in the sample, the analysis that follows focuses primarily on these data points.
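The translations shown in tables 3 through 6 follow mechanically from the appendix’s subfacet labels. A small sketch using only the labels that appear in those tables (the appendix defines the full set of seven categories per facet, including Category Z for unverifiable sources):

```python
# Partial subfacet labels drawn from tables 3-6; the appendix defines the
# complete set of seven categories for each facet.
WHO = {"B": "Layperson", "C": "Corporate author",
       "E": "Applied professional", "F": "Academic professional"}
HOW = {"A": "Self-published", "E": "Editor and editorial staff",
       "F": "Peer-reviewed"}
WHY = {"B": "Commercial", "C": "Nonprofit", "F": "Higher education"}

def decode(code):
    """Translate a six-character combination like 'WFHFYF' into its labels."""
    w, h, y = code[1], code[3], code[5]
    return "; ".join((WHO[w], HOW[h], WHY[y]))

print(decode("WFHFYF"))  # Academic professional; Peer-reviewed; Higher education
```

The same lookup can be run in reverse, which is what makes the combinations compact enough to compare across institutions.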

TABLE 3

Top 75%, Most Frequently Occurring Attribute Combination of References, All Universities

Source Attribute Combination | Combination Translation of Source Type | Frequency | Percent of Top 75% | Percent of All References | Cumulative Percent
WFHFYF | Academic professional; Peer-reviewed; Higher education | 287 | 53.7 | 40.3 | 40.3
WEHEYB | Applied professional; Editor and editorial staff; Commercial | 104 | 19.5 | 14.6 | 54.9
WBHEYB | Layperson; Editor and editorial staff; Commercial | 37 | 7.0 | 5.2 | 60.1
WFHEYB | Academic professional; Editor and editorial staff; Commercial | 31 | 5.8 | 4.4 | 64.5
WCHAYC | Corporate author; Self-published; Nonprofit | 21 | 4.0 | 2.95 | 67.45
WFHEYF | Academic professional; Editor and editorial staff; Higher education | 21 | 4.0 | 2.95 | 70.4
WCHAYB | Corporate author; Self-published; Commercial | 17 | 3.2 | 2.4 | 72.8
WEHEYC | Applied professional; Editor and editorial staff; Nonprofit | 16 | 3.0 | 2.2 | 75.0
All others in sample | | 178 | N/A | 25.0 | 100.0
Total | | 712 | | 100.0 |

TABLE 4

Most Frequently Occurring Attribute Combination of References, Pacific Coast University (PCU), of Top 75% References in Table 3

Source Attribute Combination | Combination Translation of Source Type | Frequency | Percent of All References | Cumulative Percent
WFHFYF | Academic professional; Peer-reviewed; Higher education | 193 | 60.7 | 60.7
WEHEYB | Applied professional; Editor and editorial staff; Commercial | 18 | 5.7 | 66.4
WFHEYF | Academic professional; Editor and editorial staff; Higher education | 10 | 3.1 | 69.5
WFHEYB | Academic professional; Editor and editorial staff; Commercial | 8 | 2.5 | 72.0
WBHEYB | Layperson; Editor and editorial staff; Commercial | 7 | 2.2 | 74.2
WEHEYC | Applied professional; Editor and editorial staff; Nonprofit | 6 | 1.9 | 76.1
WCHAYB | Corporate author; Self-published; Commercial | 4 | 1.3 | 77.4
WCHAYC | Corporate author; Self-published; Nonprofit | 2 | 0.6 | 78.0
All others in sample | | 70 | 22.0 | 100.0
Total | | 318 | 100.0 |

TABLE 5

Most Frequently Occurring Attribute Combination of References, Southeast University (SEU), of Top 75% References in Table 3

Source Attribute Combination | Combination Translation of Source Type | Frequency | Percent | Cumulative Percent
WFHFYF | Academic professional; Peer-reviewed; Higher education | 21 | 21.0 | 21.0
WEHEYB | Applied professional; Editor and editorial staff; Commercial | 20 | 20.0 | 41.0
WFHEYB | Academic professional; Editor and editorial staff; Commercial | 12 | 12.0 | 53.0
WBHEYB | Layperson; Editor and editorial staff; Commercial | 6 | 6.0 | 59.0
WFHEYF | Academic professional; Editor and editorial staff; Higher education | 6 | 6.0 | 65.0
WCHAYC | Corporate author; Self-published; Nonprofit | 5 | 5.0 | 70.0
WCHAYB | Corporate author; Self-published; Commercial | 3 | 3.0 | 73.0
WEHEYC | Applied professional; Editor and editorial staff; Nonprofit | 3 | 3.0 | 76.0
All others in sample | | 24 | 24.0 | 100.0
Total | | 100 | 100.0 |

TABLE 6
Most Frequently Occurring Attribute Combination of References, Midwest University (MWU), of Top 75% References in Table 3

| Source Attribute Combination | Attribute Combination Translation of Source Type | Frequency | Percent | Cumulative Percent |
|---|---|---|---|---|
| WFHFYF | Academic professional; Peer-reviewed; Higher education | 73 | 24.8 | 24.8 |
| WEHEYB | Applied professional; Editor and editorial staff; Commercial | 66 | 22.4 | 47.2 |
| WBHEYB | Layperson; Editor and editorial staff; Commercial | 24 | 8.2 | 55.4 |
| WCHAYC | Corporate author; Self-published; Nonprofit | 14 | 4.8 | 60.2 |
| WFHEYB | Academic professional; Editor and editorial staff; Commercial | 11 | 3.7 | 63.9 |
| WCHAYB | Corporate author; Self-published; Commercial | 10 | 3.4 | 67.3 |
| WEHEYC | Applied professional; Editor and editorial staff; Nonprofit | 7 | 2.4 | 69.7 |
| WFHEYF | Academic professional; Editor and editorial staff; Higher education | 5 | 1.7 | 71.4 |
| All others in sample | | 84 | 28.6 | 100.0 |
| Total | | 294 | 100.0 | |

When the authors focus their analysis on the top 75 percent most frequently occurring references, a Chi-square test finds a significant difference in the use of resources among the three universities (χ² = 139.552, df = 16, p < .001). WFHFYF resources (Academic professional; Peer-reviewed; Higher education) account for the bulk of this difference in the Chi-square calculation, but other resources contribute to this result as well. For example, WEHEYB resources (Applied professional; Editor and editorial staff; Commercial) are relied upon considerably more by SEU (20% of references) and MWU (22.4% of references) English Composition students in their respective papers than by their counterparts at PCU (5.7% of references). The same is true for resources coded as WFHEYB (Academic professional; Editor and editorial staff; Commercial).
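This institutional comparison can be checked directly from the frequencies reported in tables 4 through 6: the nine categories (the eight shared attribute combinations plus each university's "all others" remainder) give df = (3 - 1)(9 - 1) = 16. A minimal pure-Python sketch of the test statistic:

```python
# Reproduce the cross-institutional Chi-square test from the
# frequencies reported in tables 4-6 (pure-Python sketch).

observed = {
    # Columns: WFHFYF, WEHEYB, WFHEYF, WFHEYB, WBHEYB, WEHEYC,
    # WCHAYB, WCHAYC, and the "all others" remainder.
    "PCU": [193, 18, 10,  8,  7, 6,  4,  2, 70],  # n = 318
    "SEU": [ 21, 20,  6, 12,  6, 3,  3,  5, 24],  # n = 100
    "MWU": [ 73, 66,  5, 11, 24, 7, 10, 14, 84],  # n = 294
}

rows = list(observed.values())
row_totals = [sum(r) for r in rows]
col_totals = [sum(col) for col in zip(*rows)]
grand = sum(row_totals)  # 712 references in the sample

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / grand_total.
chi2 = sum(
    (rows[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(rows))
    for j in range(len(col_totals))
)
df = (len(rows) - 1) * (len(col_totals) - 1)

print(f"chi2 = {chi2:.3f}, df = {df}")  # reported: 139.552, df = 16
```

Running the sketch recovers the reported statistic, confirming that the published frequency tables and the test result are internally consistent.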

The difference in source selection based on institution is also expressed through a variety of student demographic variables. Because the attribute combination variable is nominal, Chi-square was determined to be the best inferential test for identifying which other variables would impact source selection. To ensure the Chi-square test results were valid, variables such as student GPA and student age had to be converted from ratio to ordinal variables. The results of this analysis revealed that while gender, class ranking (freshman, sophomore, and so on), and ethnicity had no association with the types of resources found in the papers’ bibliographies, student age (χ² = 34.369, p < 0.01) and whether the student is a first-generation university student (χ² = 19.509, p < 0.05) were significant associative variables. Younger students (in the 18–19 age range) were especially likely to select resources with attribute combinations such as WFHFYF (Academic professional; Peer-reviewed; Higher education). Older students (20–21 and 22+) tended to use other types of resources, such as WBHEYB (Layperson; Editor and editorial staff; Commercial), WCHAYB (Corporate author; Self-published; Commercial), and WEHEYB (Applied professional; Editor and editorial staff; Commercial), more frequently than their younger counterparts.
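The ratio-to-ordinal conversion described above amounts to binning each continuous value into ordered categories. A minimal sketch, using the age cohorts reported in table 7; the GPA cut points are illustrative assumptions, not the study's:

```python
# Bin continuous (ratio) variables into ordered categories so they can
# enter a Chi-square test. Age cohorts follow table 7; the GPA cut
# points are illustrative assumptions, not the study's actual bins.
from bisect import bisect_right

def to_ordinal(value, lower_bounds, labels):
    """Map a numeric value to an ordered bin label.

    lower_bounds[i] is the smallest value that falls in labels[i + 1].
    """
    return labels[bisect_right(lower_bounds, value)]

AGE_CUTS, AGE_LABELS = [20, 22], ["18-19", "20-21", "22+"]
GPA_CUTS, GPA_LABELS = [2.0, 3.0], ["<2.0", "2.0-2.9", "3.0+"]  # assumed

print(to_ordinal(18, AGE_CUTS, AGE_LABELS))   # 18-19
print(to_ordinal(21, AGE_CUTS, AGE_LABELS))   # 20-21
print(to_ordinal(25, AGE_CUTS, AGE_LABELS))   # 22+
print(to_ordinal(3.4, GPA_CUTS, GPA_LABELS))  # 3.0+
```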

The same general pattern appears when comparing first-generation university attendees with students who are not the first generation of their families to attend university. First-generation students were more likely to use resources such as WFHFYF (Academic professional; Peer-reviewed; Higher education), WFHEYF (Academic professional; Editor and editorial staff; Higher education), and WCHAYC (Corporate author; Self-published; Nonprofit), whereas non–first-generation students were more likely to choose resources with attribute combinations such as WBHEYB (Layperson; Editor and editorial staff; Commercial), WCHAYB (Corporate author; Self-published; Commercial), and WEHEYB (Applied professional; Editor and editorial staff; Commercial). Table 7 displays the major source attribute combinations broken down by age cohort and first-generation status.

TABLE 7
Top 75% WHY Attribute Combinations by Age, Family University Attendance Generation, and University

| Attribute Combination | Attribute Combination Translation of Source Type | Age 18–19 | Age 20–21 | Age 22+ | 1st Gen | Not 1st Gen |
|---|---|---|---|---|---|---|
| WFHFYF | Academic professional; Peer-reviewed; Higher education | 47.5%* | 30.3% | 39% | 46.2%* | 36.8% |
| WEHEYB | Applied professional; Editor and editorial staff; Commercial | 10.1% | 18.6%* | 19.1%* | 9.8% | 17.5%* |
| WCHAYC | Corporate author; Self-published; Nonprofit | 3.5%* | 2.6% | 2.2% | 3.8%* | 2.5% |
| WBHEYB | Layperson; Editor and editorial staff; Commercial | 2.6% | 8.2%* | 6.6%* | 3.8% | 6.1%* |
| WCHAYB | Corporate author; Self-published; Commercial | 2% | 3%* | 2.2% | 0.8% | 3.4%* |
| WEHEYC | Applied professional; Editor and editorial staff; Nonprofit | 2.3%* | 2.6%* | 1.5% | 2.6%* | 2.0% |
| WFHEYB | Academic professional; Editor and editorial staff; Commercial | 3.2% | 6.5%* | 3.7% | 3.8% | 4.7%* |
| WFHEYF | Academic professional; Editor and editorial staff; Higher education | 3.2%* | 3%* | 2.2% | 3.8%* | 2.5% |

*Indicates where observed count > expected count from Chi-square test.

Discussion

Reliability of The WHY Method

While the CRAAP Test and other traditional source evaluation methods require evaluators to make highly subjective decisions about authority, this follow-up study demonstrates that The WHY Method can be used to classify resources with a high degree of confidence. The agreement coefficient—that is, the rate at which the first two authors selected the same classification facets as the third author and a graduate student—greatly exceeded the general standard of reliability.28 This is especially notable given that the graduate student had no formal training or experience in library science and received only cursory instruction in applying The WHY Method. Furthermore, the coefficients of agreement are considerably higher in this study than those calculated for the pilot study, which involved only librarians as coders, demonstrating that the coding tool has become more rigorous and reliable.29
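The reliability standard cited in note 28 is Krippendorff's alpha. A minimal sketch of the nominal-data computation for two coders with no missing values; the codings below are hypothetical, not drawn from the study's data:

```python
# Krippendorff's alpha for nominal data, two coders, no missing
# values: alpha = 1 - D_observed / D_expected, computed from a
# coincidence matrix of paired codings.
from collections import Counter

def krippendorff_alpha_nominal(coder_a, coder_b):
    n = 2 * len(coder_a)          # number of pairable values
    coincidences = Counter()      # ordered value pairs within each unit
    for a, b in zip(coder_a, coder_b):
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    totals = Counter()            # n_c: total occurrences of each value
    for (c, _k), count in coincidences.items():
        totals[c] += count
    d_obs = sum(v for (c, k), v in coincidences.items() if c != k) / n
    d_exp = sum(totals[c] * totals[k]
                for c in totals for k in totals if c != k) / (n * (n - 1))
    return 1.0 if d_exp == 0 else 1.0 - d_obs / d_exp

# Hypothetical Who-facet codings of 8 references by two coders:
first  = ["WF", "WF", "WE", "WB", "WC", "WF", "WE", "WB"]
second = ["WF", "WF", "WE", "WB", "WC", "WF", "WE", "WC"]
print(round(krippendorff_alpha_nominal(first, second), 3))  # 0.839
```

Unlike raw percent agreement, alpha discounts the agreement two coders would reach by chance given how often each category occurs, which is why it is the measure Hayes and Krippendorff recommend for coding data.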

Demographic Effects on Source Selection

Based on previous studies, the research team did not anticipate that either age or first-generation status would affect student source selection. In the pilot study conducted at MWU, the data indicated that student demographics, including age, gender, GPA, class ranking, and ethnicity, had no demonstrable effect on student behavior.30 Yet, as the research team expanded the sample to two additional universities and collected newer data, findings emerged that contradicted these earlier assumptions: although many demographic characteristics may have no significant effect on student source selection, students who are less familiar with the college environment select different kinds of sources than their more experienced peers.

As shown in the findings, younger students (18–19 years old) tended to select more peer-reviewed scholarly pieces written by credentialed academics in the field (WFHFYF) than their older classmates. The types of sources referenced in student bibliographies appear to diversify with student age (20–21 and 22+ years old). This finding may dispel the preconception that our youngest students are the least likely to use traditional library materials. It also raises the possibility that the current population of incoming college students engages differently with material available online than their predecessors did, whether influenced by new approaches to source evaluation in American high schools or by a broader generational shift in how critically young people engage with the Internet.

The effect of first-generation status on student source selection was likewise surprising. First-generation students, regardless of their age, use more peer-reviewed scholarly pieces written by credentialed academics in the field (WFHFYF) and more traditionally edited scholarly books written by credentialed academics (WFHEYF) than their non–first-generation classmates. While the research team’s pilot did not collect demographic data such as first-generation status, the study by Soria, Nackerud, and Peterson discussed in the literature review found that this population was less likely than their first-year peers to access e-books and e-journals through the library.31 This finding poses particularly challenging questions for librarians: Is it possible that first-generation students spend less time accessing library resources but trust them more in their written work? Might university programming designed to support first-generation students have recently succeeded in raising their comfort level with more traditional scholarly materials? No easily resolved narratives are evident in the data.

Prevalence of Scholarly and Journalistic Materials

Students from all three universities rely heavily on two types of information sources: WFHFYF (Academic professional; Peer-reviewed; Higher education) and WEHEYB (Applied professional; Editor and editorial staff; Commercial). These two categories consist primarily of peer-reviewed journal articles and journalistic work that appears in periodicals. Combined, just these two resource types account for 55 percent of all information source materials for all papers collected across all three universities.

These results are consistent with the findings from the research team’s pilot study, in which these same two source types were the most prevalent in student bibliographies.32 In the pilot study conducted at MWU, material classified as WFHFYF (Academic professional; Peer-reviewed; Higher education) comprised 15.8 percent of references, and material classified as WEHEYB (Applied professional; Editor and editorial staff; Commercial) comprised 16 percent.33 While MWU students in the current study selected slightly more academic sources than journalistic sources, the two percentages remain nearly as close as they were in the pilot study.

Prevalence of Nontraditional Sources

Although the most prevalent sources were perhaps the most conventional sources expected in first-year composition papers, the research team considers it equally significant that these sources comprise only slightly more than half of the references in the sample. The other subfacet combinations in the top 75 percent of references span a very wide range of sources, from edited commercial writing by lay authors with no apparent experience or training in the field (WBHEYB), to books by academics published for commercial and higher education markets (WFHEYB; WFHEYF), to materials self-published by nonprofit and commercial authors without any evident editorial control (WCHAYC; WCHAYB).

Furthermore, a full 25 percent of references in bibliographies come from the long tail visible in table 2, which represents myriad types of sources. These range from a working professional writing for a traditionally edited K–12 publication (WEHEYD) to work produced by an academic writing outside their field for a nonprofit with unclear editorial processes (WDHDYC) to an anonymous and unidentifiable author self-publishing material for profit (WAHAYB). It is evident that, when asked to find “credible sources,” students from all three universities reached far beyond the academy for sources they consider authoritative. Even in an environment like that of PCU, where traditional, scholarly materials were required most explicitly, students regularly invoked nontraditional authorities.

Institutional Effects on Source Selection

Arguably, the most notable result from this study is the profound effect of institutions on referencing behavior. Despite the prevalence of traditional peer-reviewed journal and journalistic periodical articles across the entire sample, the three universities demonstrate highly variable use of the two most common resource types. PCU’s assignment directives, noted in the methodology section above, appear to play a role in guiding student constructions of authority at that institution. The university’s guidelines asked for between 67 and 80 percent of student sources to be “scholarly.” PCU instructors and students likely interpreted this directive as an endorsement of WFHFYF (Academic professional; Peer-reviewed; Higher education) sources. Hence, composition students from PCU rely more heavily on WFHFYF sources than students from either MWU or SEU do. While WFHFYF resources comprise 60.7 percent of PCU source material, this type of academic resource accounts for 24.8 percent of all student references at MWU and 21 percent at SEU. Meanwhile, students at MWU and SEU select journalistic sources (WEHEYB) at greater rates for their papers than do students at PCU (MWU, 22.4% of references; SEU, 20% of references; PCU, just 5.7% of references).

However, beyond this observation regarding assignment instructions, there are broader and more challenging implications to be drawn from the data. One of the reasons the research team selected this coursework for study is that the research writing composition class is nearly ubiquitous in American colleges and universities. Credit for this class transfers relatively easily among institutions, and passing this class is often a necessary prerequisite for upper-division coursework across most, if not all, undergraduate majors. Given this course’s centrality to the college experience, it is therefore surprising to observe how divergent the research bibliographies are between the sample student populations at the three universities in this study. If the standards for “credible sources” differ this widely among institutions for coursework that is otherwise considered interchangeable, it suggests that there is little agreement within the academy regarding what information literacy skills, if any, ought to be practiced in college-level research.

The authors do not presume that these data indicate that any of the three universities (PCU, MWU, or SEU) has an obviously better or worse approach to understanding source authority. The fundamental problem evident in the findings is not that students are selecting the “wrong” sources, but rather that the gap between institutions is so pronounced and, until now, so poorly documented. If expectations regarding source authority are profoundly influenced by institutional factors, it is concerning that studies of student source use have so rarely drawn on cross-institutional data that could illuminate these factors and help librarians understand them better. Perhaps one obstacle to this kind of analysis has been the absence of precise descriptive language for sources that would allow easy and direct comparisons. For that reason, the research team hopes that The WHY Method’s reliability and ease of use will facilitate more cross-institutional inquiries by librarians in the future.

Librarians need this level of insight if we plan to teach effectively, as institutional effect is one of the “contexts” anticipated by the Frame “Authority Is Constructed and Contextual.” If institutional context is sometimes invisible to librarians working in their own institutions, it is even less apparent to students who are taught only their university’s expectations. Therefore, as librarians, one of our responsibilities to students is to make institutional context visible by analyzing and describing it in language they can decipher. Additionally, this analysis may yield opportunities for librarians to collaborate with composition faculty in bringing institutional expectations into closer alignment with the Framework. In the long run, these endeavors will help students develop the knowledge practices and dispositions necessary to succeed not only within the narrow expectations of that university’s composition curriculum, but also in the broader and more diverse world we are preparing them to enter.

Limitations/Future Research

The data examined in this study allow for a good understanding of practices in student bibliographies, but neither this study nor any of the studies cited in the literature review examines what types of sources are cited most often within the text of the paper itself. It would add greatly to the picture of student perceptions of authority to know whether certain types of sources appear in in-text citations significantly more or less often than as references in the bibliography. Likewise, given that traditionally authoritative information is more freely accessible on the internet in some domains than in others, it would be interesting to examine whether the choice of paper topic influences student constructions of authority.

The research team defined its population as students enrolled only in face-to-face class sections. If a potentially successful recruitment effort for distance, online, or hybrid students can be designed, it might prove interesting to study the references and citations contained in these students’ papers compared to their face-to-face peers to see what impact, if any, distance, online, or hybrid teaching modes might have on source selection and use.

Recruitment efforts across the three institutions had varying levels of success, with SEU in particular securing a smaller set of papers from which to draw a sample. As a result, the research team has taken care not to overgeneralize its findings. Nevertheless, given the sampling method used and the statistical analytical methods that increase the power, and thus the accuracy, of the analyses, the findings of this study are sufficiently robust despite one university providing a smaller sample. The team would consider participant incentives or other recruitment methods to expand student participation in a future study.

The unanticipated findings surrounding the effects of age and first-generation status on source selection certainly merit further study, given the potential implications of those trends. The remarkable differences among institutions found in this study were also unexpected by the research team, and further cross-institutional data collection will therefore be important to clarify what factors combine to create this effect. The team is particularly interested in how large a role librarian instructional effort plays in creating these institutional differences.

The research team’s focus on authentic student work yields valuable insights into student practice, but it leaves unanswered the question of student motivation when they make the choices that are evident in this study. While the research team believes that student selections for their bibliographies are a good proxy for the kinds of sources they deem authoritative, other factors may influence their selections, such as time constraints and perceived instructor expectations. Dahlen et al. have developed a detailed and effective qualitative approach to understanding student search behavior.34 The research team believes that pairing this approach with authentic search tasks and the collection of student work could yield further insights into student constructions of authority.

Conclusion

This paper reports on the results of a cross-institutional follow-up study that examines the kinds of authorities English composition students select for their final research papers. In classifying source authority, the authors relied on The WHY Method, which was adapted from the Leeder, Markey, and Yakel taxonomy and which allows fine-grained analysis of various kinds of traditional and nontraditional resources.

While this study confirmed a number of findings from the pilot study, which relied solely on MWU data from 2014, it challenges certain preconceptions and previous research on how students who are less familiar with college life (that is, younger students and first-generation students) construct source authority. Across three institutions, these students consistently chose academic resources more frequently than their older and non–first-generation peers. Hence, librarians teaching lower-level undergraduates may wish to examine these demographics at their own institutions to construct a responsive information literacy curriculum.

The authors wish to highlight what they feel is the most important finding of this new study: the profound effect of institution on student source selection. The university a student attends is one of the contexts they encounter in the research process, and it is the characteristic most likely to predict student behavior. The authors recommend that librarians analyze a portion of their own students’ work using The WHY Method. Recent modifications to the instrument have only strengthened its high rate of interrater reliability, meaning that, with no previous instruction in the method, librarians can apply it with the expectation of intelligible, consistent results. With institutional data, librarians can tailor instruction to prepare students to engage with the sources they will encounter in the academy and beyond. When librarians attempt to instill the Frame that “Authority Is Constructed and Contextual,” they are encouraged to teach students that context is not merely discipline or topic-specific, but also institutionally situated.

Acknowledgment

The authors would like to thank Hillary Hayes for her assistance with this paper.

APPENDIX

WHY Attribute Codes for Coding Paper References (resources related to this study, including the full coding taxonomy that includes scope notes, may be found at the following online libguide: https://research.ewu.edu/thewhymethod)

Author (Who) Identity Attribute

| Code | Author Identity Category | Brief Description |
|---|---|---|
| WA | Unknown Authorship | No identification is possible. |
| WB | Layperson | A person without demonstrated expertise in the area being written about. |
| WC | Corporate Authorship | No single author identified on a work issued by an organization. |
| WD | Professional Amateur | A person with a degree in another field, but demonstrating interest, dedication, and experience in the area being written about. |
| WE | Applied Professional | A person with experience, training, or credentials relevant to the area being written about, or an experienced/credentialed journalist. |
| WF | Academic Professional | A person with a master’s or doctoral degree in the area being written about, held at the time the content was published. |
| WZ | Source Unknown | No information on the category could be found. |

Editorial (How) Process Attribute

| Code | Editorial Process Category | Brief Description |
|---|---|---|
| HA | Self-Published | Material made public directly by the author. |
| HB | Vanity Press | Material the author paid to publish, generally as self-promotion. |
| HC | Collaborative Editing | Material that is reviewed or edited by multiple, possibly anonymous, collaborators. |
| HD | Moderated Submissions | Contributed content that has been accepted or approved by someone other than the author. |
| HE | Editor and Editorial Staff | Professionally reviewed and approved by an editor/editorial staff with journalistic credentials/experience. |
| HF | Peer Reviewed | Evaluated by members of the scholarly community before acceptance and publication. |
| HZ | Source Unknown | No information on the category could be found. |

Publication (whY) Purpose Attribute

| Code | Publication Purpose Category | Brief Description |
|---|---|---|
| YA | Personal | Material is published without commercial aims. |
| YB | Commercial | Material is published for commercial gain. |
| YC | Nonprofit | Material is published by a nonprofit organization. |
| YD | K–12 Education | Material is published for educational purposes. |
| YE | Government | Material is published by the government. |
| YF | Higher Education | Material is published for an academic audience. |
| YZ | Source Unknown | No information on the category could be found. |
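Because each attribute combination is a fixed-width concatenation of one code from each of the three facets above, a combination such as WFHFYF can be translated back into its facets mechanically. A minimal sketch using the appendix category names:

```python
# Decode a WHY attribute combination (e.g. "WFHFYF") into its three
# facets, using the category names from the appendix taxonomy.
WHO = {"WA": "Unknown Authorship", "WB": "Layperson",
       "WC": "Corporate Authorship", "WD": "Professional Amateur",
       "WE": "Applied Professional", "WF": "Academic Professional",
       "WZ": "Source Unknown"}
HOW = {"HA": "Self-Published", "HB": "Vanity Press",
       "HC": "Collaborative Editing", "HD": "Moderated Submissions",
       "HE": "Editor and Editorial Staff", "HF": "Peer Reviewed",
       "HZ": "Source Unknown"}
WHY = {"YA": "Personal", "YB": "Commercial", "YC": "Nonprofit",
       "YD": "K-12 Education", "YE": "Government",
       "YF": "Higher Education", "YZ": "Source Unknown"}

def translate(code):
    """Split a six-character combination into its W/H/Y facets."""
    who, how, why = code[0:2], code[2:4], code[4:6]
    return f"{WHO[who]}; {HOW[how]}; {WHY[why]}"

print(translate("WFHFYF"))
# Academic Professional; Peer Reviewed; Higher Education
print(translate("WEHEYB"))
# Applied Professional; Editor and Editorial Staff; Commercial
```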

Notes

1. Association of College & Research Libraries (ACRL), Framework for Information Literacy for Higher Education, available online at www.ala.org/acrl/standards/ilframework [accessed June 1, 2020].

2. ACRL, Framework for Information Literacy for Higher Education.

3. ACRL, Framework for Information Literacy for Higher Education.

4. The full version of The WHY Method’s taxonomy, including all scope notes, is available at https://research.ewu.edu/thewhymethod.

5. Chris Leeder, Karen Markey, and Elizabeth Yakel, “A Faceted Taxonomy for Rating Student Bibliographies in an Online Information Literacy Game,” College & Research Libraries 73, no. 2 (2012): 115–33, https://doi.org/10.5860/crl-223.

6. Sarah P.C. Dahlen and Kathlene Hanson, “Preference vs. Authority: A Comparison of Student Searching in a Subject-Specific Indexing and Abstracting Database and a Customized Discovery Layer,” College & Research Libraries 78, no. 7 (2017): 878–97, https://doi.org/10.5860/crl.78.7.878; Sarah P.C. Dahlen et al., “Almost in the Wild: Student Search Behaviors When Librarians Aren’t Looking,” Journal of Academic Librarianship 46, no. 1 (2020): 1–13, https://doi.org/10.1016/j.acalib.2019.102096.

7. Leeder, Markey, and Yakel, “A Faceted Taxonomy for Rating Student Bibliographies in an Online Information Literacy Game,” 129.

8. ACRL, Framework for Information Literacy for Higher Education.

9. James Rosenzweig, Mary Thill, and Frank Lambert, “Student Constructions of Authority in the Framework Era: A Bibliometric Pilot Study Using a Faceted Taxonomy,” College & Research Libraries 80, no. 3 (2019): 401–20, https://doi.org/10.5860/crl.80.3.401.

10. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

11. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

12. ACRL, Framework for Information Literacy for Higher Education.

13. Tammy Ivins, “A Case Study of Periodical Use by Library and Information Science Students,” Journal of Education for Library and Information Science 54, no. 2 (2013): 126.

14. Maria E. Clarke and Charles Oppenheim, “Citation Behaviour of Information Science Students II: Postgraduate Students,” Education for Information 24 (2006): 1–30, https://doi.org/10.3233/EFI-2006-24101; Molly R. Flaspohler, Erika M. Rux, and John A. Flaspohler, “The Annotated Bibliography and Citation Behavior: Enhancing Student Scholarship in an Undergraduate Biology Course,” CBE—Life Sciences Education 6, no. 4 (2007): 350–60, https://doi.org/10.1187/cbe.07-04-0022; Ivins, “A Case Study of Periodical Use by Library and Information Science Students.”

15. Clarke and Oppenheim, “Citation Behaviour of Information Science Students II”; Flaspohler, Rux, and Flaspohler, “The Annotated Bibliography and Citation Behavior”; Ivins, “A Case Study of Periodical Use by Library and Information Science Students.”

16. Krista M. Soria, Shane Nackerud, and Kate Peterson, “Socioeconomic Indicators Associated with First-year College Students’ Use of Academic Libraries,” Journal of Academic Librarianship 41, no. 5 (2015): 636–43, https://doi.org/10.1016/j.acalib.2015.06.011.

17. Soria, Nackerud, and Peterson, “Socioeconomic Indicators Associated with First-year College Students’ Use of Academic Libraries.”

18. Soria, Nackerud, and Peterson, “Socioeconomic Indicators Associated with First-year College Students’ Use of Academic Libraries.”

19. Soria, Nackerud, and Peterson, “Socioeconomic Indicators Associated with First-year College Students’ Use of Academic Libraries.”

20. Allison Hosier, “Teaching Information Literacy through ‘Un-research’,” Communications in Information Literacy 9, no. 2 (2015): 126–36, https://doi.org/10.15760/comminfolit.2015.9.2.189; Paula R. Dempsey and Heather Jagman, “‘I Felt Like Such a Freshman’: First-year Students Crossing the Library Threshold,” portal: Libraries and the Academy 16, no. 1 (2016): 89–107, https://doi.org/10.1353/pla.2016.0011.

21. Rachel Cooke and Danielle Rosenthal, “Students Use More Books after Library Instruction: An Analysis of Undergraduate Paper Citations,” College & Research Libraries 72, no. 4 (2011): 332–44, https://doi.org/10.5860/crl-90; Alex P. Watson, “Still a Mixed Bag: A Study of First-year Composition Students’ Internet Citations at the University of Mississippi,” Reference Services Review 40, no. 1 (2011): 125–37, https://doi.org/10.1108/00907321211203685.

22. Alexandria Chisholm and Brett Spencer, “Through the Looking Glass: Viewing First-year Composition through the Lens of Information Literacy,” Communications in Information Literacy 13, no. 1 (2019): 43–60, https://doi.org/10.15760/comminfolit.2019.13.1.4.

23. Chisholm and Spencer, “Through the Looking Glass.”

24. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

25. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

26. Earl Babbie, The Practice of Social Research, 12th ed. (Belmont, CA: Wadsworth Cengage Learning, 2007), 211.

27. Andrew F. Hayes and Klaus Krippendorff, “Answering the Call for a Standard Reliability Measure for Coding Data,” Communication Methods and Measures 1, no. 1 (2007): 77–89, https://doi.org/10.1080/19312450709336664.

28. Hayes and Krippendorff, “Answering the Call for a Standard Reliability Measure for Coding Data.”

29. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

30. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

31. Soria, Nackerud, and Peterson, “Socioeconomic Indicators Associated with First-year College Students’ Use of Academic Libraries.”

32. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

33. Rosenzweig, Thill, and Lambert, “Student Constructions of Authority in the Framework Era.”

34. Dahlen et al., “Almost in the Wild.”

*Frank Lambert is Assistant Professor & Program Coordinator at Middle Tennessee State University, email: frank.lambert@mtsu.edu; Mary Thill is Reference Coordinator and Humanities Librarian at Northeastern Illinois University, email: m-thill@neiu.edu; James W. Rosenzweig is Education Librarian at Eastern Washington University, email: jrosenzweig@ewu.edu. ©2021 Frank Lambert, Mary Thill, and James W. Rosenzweig, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.

