
Judging Journals: How Impact Factor and Other Metrics Differ across Disciplines

Given academia’s frequent use of publication metrics and the inconsistencies in metrics across disciplines, this study examines how various disciplines are treated differently by metric systems. We seek to offer academic librarians, university rank and tenure committees, and other interested individuals guidelines for distinguishing general differences between journal bibliometrics in various disciplines. This study addresses the following questions: How well represented are different disciplines in the indexing of each metrics system (Eigenfactor, Scopus, Web of Science, Google Scholar)? How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems? For university libraries and academic librarians, this study may increase understanding of the comparative value of various metrics, which hopefully will facilitate more informed decisions regarding the purchase of journal subscriptions and the evaluation of journals and metrics systems. This study indicates that different metrics systems prioritize different disciplines, and metrics are not always easily compared across disciplines. Consequently, this study indicates that simple reliance on metrics in publishing or purchasing decisions is often flawed.

Introduction

Bibliometrics, statistics used to measure the significance of academic sources, have been in use since well before the existence of online publications. One of the most popular bibliometrics is the Journal Impact Factor (JIF). Since JIF’s creation in 1975, the academic world has become irrevocably saturated with bibliometric data. One recent study found that 87 percent of universities supported using Impact Factor in promotion and tenure evaluations with no reservations, 13 percent supported it with some reservations, and no universities opposed using the Impact Factor to evaluate scholarship quality.1 Impact Factor and other similar metrics are used by universities and other groups to make decisions about individual performance regarding funding, tenure, and research quality.2 Similarly, research librarians are increasingly responsible for providing bibliometric information to their academic communities.3 Journals, articles, and scholars can all seemingly be defined by a few simple numbers. However, the use of bibliometrics in the academic world creates complications because a simple number cannot sum up the entirety of a scholar’s impact, and interdisciplinary differences create strong distinctions in disciplines’ metrics values.4 Given these limitations, scholars are increasingly suspicious of using bibliometrics, and some have suggested that the academic community give up the journal metric system entirely.5 However, most scholars, including the authors of this study, agree that journal metrics should not be abandoned altogether but should be used with caution and in reference to each other.6 Two established facts therefore emerge from the literature: measuring research by metrics is somewhat flawed, but metric systems retain value and will continue to be used.

Since metrics will still be used, and the “simple-minded comparison” of two metrics “will give meaningless results unless the indices are properly corrected for the fact that different science fields have different citation habitudes,” our study examines the disciplinary differences between journal metrics in the databases of Scopus, Eigenfactor, Web of Science, and Google Scholar.7 Previous research has suggested that these databases have been growing consistently and with enough stability to allow for a cross-disciplinary study of them such as this one.8 Although not all of these metric systems claim to offer metrics for all disciplines, they are frequently used as if they do, thus indicating a need for research like this. Impact Factor, for example, has asserted that it should not be used for Humanities journals, yet, in our experience, Humanities professors and students still attempt to use Impact Factor to assess their work.9 While previous scholars have offered their own systems for attempting to normalize disciplinary differences in bibliometrics, these systems are often complex and are, in the end, essentially unused.10 Therefore, we seek to offer academic libraries, rank and tenure committees, and other interested persons some simple trends for distinguishing the general differences between journal metrics in various disciplines. Our research also suggests which disciplines are best represented by which metrics systems.

For librarians, metrics usage is critical. Subject librarians may use metrics when deciding what journals to purchase, or they may use them in connection with conducting their own research or offering research help to others. Librarians who work closely with faculty have to be able to inform faculty on how to evaluate publications and journals in terms of impact and usage; this is particularly true for newer faculty seeking tenure, who have to be able to make a case for the significance of their scholarship. In administration, librarians may be asked to make rank and tenure decisions based on bibliometric information. A library that uses metrics efficiently has a better, more expansive role in its academic community.11 In our experience, many faculty are unaware of the most relevant metrics systems in their field, and even academic librarians are often unsure of how to interpret metric data. This perceived gap was noted in 2016 by Malone and Burke, who found that academic librarians often needed to know about metrics systems but did not.12 If more academic librarians educate themselves in this area, they will be more valuable to their academic and professional communities. Thus, the results of this study should help librarians counsel both new and seasoned faculty in choosing, using, and publishing in the academic journals most relevant to their own field. We seek to offer a quick, concise guide to bibliometrics for current subject librarians.13

This study addresses the following two questions: How well represented is each discipline in the indexing of each metrics system (Eigenfactor, Scopus, Web of Science, and Google Scholar)? How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems? In order to cover a wide range of disciplines, the following areas were addressed: finance, management, chemical engineering, mechanical engineering, psychology, economics, communication, philosophy, English, law, teacher education, education leadership, biology, exercise science, chemistry, and statistics.

Overview of Metrics

It is advantageous to offer a simple definition of each of the metrics referenced in this study. First, Web of Science calculates Impact Factor for a journal to measure the frequency with which an average article in the journal has been cited in a year; it is calculated by dividing the number of citations received in a year by the number of citable articles published over the preceding two-year period.14 Five-year Impact Factor considers articles’ influence over a five-year period; it is calculated by dividing the number of citations a journal receives in a year by the total number of articles published in the preceding five years.15 Google Scholar’s h-index is the maximum value of h such that the journal has published h papers that have each been cited at least h times.16 For example, if a journal has published fifty papers that have each been cited at least fifty times, the h-index of the journal would be 50, regardless of how many other less-cited papers have been published therein. Source-Normalized Impact per Paper (SNIP) is the ratio of a source’s average citation count to the number of citations that the journal might expect to receive based on its field.17 SCImago Journal Rank (SJR) is the average number of citations received during the year per document published in that journal in the previous three years, weighted by journal prestige.18 The organization Scopus produces two different metrics for understanding journal prestige. Scopus CiteScore reflects the average yearly number of citations of recent articles published in a journal; it is calculated by taking the number of citations in one year to articles published in the last four years and dividing that value by the number of articles published in those four years.19 Scopus’s other metric, the Scopus CiteScore Percentile, indicates how a journal ranks relative to other journals in its field. Eigenfactor also produces two metrics. Eigenfactor Score is based on the number of citations received by a journal’s publications from the last five years relative to the total number of articles. Based on that value, Eigenfactor Article Influence Score is the average influence of any given article from that journal over the first five years after publication.20 Overall, although strong similarities exist between the metrics, each bibliometric accounts for article and journal prestige slightly differently, thereby allowing for comparative studies such as this one.
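To make these definitions concrete, the following minimal sketch shows how a two-year Impact Factor and a journal-level h-index could be computed; the journal, article counts, and citation counts are invented for illustration and are not drawn from this study’s data.

```python
# Hypothetical worked example of two of the metric calculations defined above.
# All numbers are invented for illustration.

def two_year_impact_factor(citations_received, citable_items):
    """Impact Factor for year Y: citations received in Y to items published in
    the previous two years, divided by the number of citable items published
    in those two years."""
    return citations_received / citable_items

def h_index(citation_counts):
    """Largest h such that at least h papers have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A journal with 80 citable items from the previous two years that received
# 240 citations to them this year has a two-year Impact Factor of 3.0.
print(two_year_impact_factor(240, 80))  # 3.0

# Five papers cited [10, 8, 5, 4, 1] times give an h-index of 4.
print(h_index([10, 8, 5, 4, 1]))  # 4
```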

The first popular metric system established in the academic world was Web of Science’s Journal Impact Factor (JIF), which paved the way for future metrics systems. Although it was the first major and remains the most common of the journal metrics, Impact Factor is easily skewed and therefore problematic.21 Impact Factor provides quick information about a journal, but it only considers citations within two years’ time, does nothing to distinguish specific article quality, and sometimes will unintentionally rate review articles better than original research.22 Furthermore, Impact Factor coverage is sparse for many subject areas, including the arts, humanities, natural sciences, and social sciences.23

Two later metrics systems, Scopus and Eigenfactor, are comparable to the Impact Factor.24 Eigenfactor, which was created to limit the impact of self-citation on metric score, ranks journals by looking at both the citations and their source, so an Eigenfactor score indicates a journal’s importance in the scientific community with reference to both quality and size.25 Similarly, Scopus attempts to account for both overall influence and citation.26 Scopus has been noted for covering a greater range and number of subject matters and journals than Web of Science.27 Based on Scopus, the SJR (SCImago Journal Rank) score is created using both citation count and overall influence.28 The SNIP (Source-Normalized Impact per Paper) score, which is also Scopus-based, measures a journal’s contextual citation impact, accounting for characteristics of the journal’s subject field.29 By accounting for field tendencies, SNIP is therefore meant to allow for easier comparisons across fields, although it does not account for self-citation or review articles.30 Scopus and Eigenfactor therefore challenge Impact Factor in a way that has allowed for greater comparison of metrics, particularly across disciplines.

The most recent metric to emerge is provided by Google Scholar. Google Scholar’s database is known for having the greatest number of citations indexed, although this can be complicated to interpret since Google Scholar has a higher tendency to include sources that are less academic.31 Google Scholar also fails to account for self-citation and duplicates.32 Google Scholar is, however, significantly better than Scopus and Web of Science at finding journals in foreign languages and in the fields of the humanities, social sciences, business, engineering, and economics.33 It is also known for being geographically neutral, compared to Web of Science’s American bias and Scopus’ British bias.34 Despite being newer to the metrics world and having some limitations, Google Scholar has therefore begun to gain popularity.

A final growing area of metrics is the field of altmetrics, ways of measuring scholarship’s popularity that are not based on typical academic avenues. This can take the form of news attention, number of views, sharing on social media, etc. Recent research has shown increasing interest in altmetrics. However, a study by Costas, Zahedi, and Wouters indicates that altmetrics are still not widely used in academic circles, and Thelwall’s research indicates that altmetrics can be just as problematic as traditional bibliometrics.35 Given these complications, our study focuses solely on comparisons between bibliometric systems. A 2014 study by Alhoori and Furuta introduced the “Journal Social Impact” score, a way of measuring an article’s popularity among sources like Facebook, Reddit, and Pinterest; the authors found a high correlation between their score and traditional bibliometric scores, suggesting some relationship between the two.36 On the other hand, this relationship has been somewhat complicated by García-Villar’s more recent study, which presents a more nuanced connection between biblio- and altmetrics.37 In some instances, altmetrics can even be used to predict future bibliometric success of journal articles.38 This suggests that future research may need to take altmetrics into account when comparing bibliometrics, particularly since Thelwall found that altmetrics also varied strongly by discipline, but analysis of altmetrics was beyond the scope of this study.39

Literature Review

Significant research has been done to understand the relationship between the different metrics ranking systems. A strong correlation exists between Impact Factor and both Scopus and Eigenfactor, which has led some scholars to conclude that either Scopus or Eigenfactor could be used comparably.40 One study suggests that SJR and Impact Factor have a high correlation, but the correlation between Eigenfactor and Impact Factor is not nearly as high.41 SJR tends to concentrate the highest scientific influence in fewer journals than the Impact Factor does.42 A study related to journal purchasing suggests that Scopus may provide more accurate metric information for the health sciences, while Web of Science may be more valuable for other disciplines.43 Another emerging metric is the Journal Citation Indicator, released in 2021 in an attempt to normalize variations between academic fields.44 However, this new metric has not been considered in this study, which rather seeks to understand the relationships between more long-standing and widely used metrics. Ultimately, different metric systems have different disciplinary preferences and different theoretical backgrounds, and they are calculated in different ways, so using a combination of metrics systems is best.45 Consequently, we decided that this study would compare different metrics systems across various disciplines to understand how the metrics systems and disciplines may be interpreted alongside each other. While some other studies have attempted to understand individual disciplines’ relationships to metrics or have attempted to compare one metric to another,46 our study is unique in its comparison of multiple metrics systems simultaneously and its aim of understanding how metrics compare to each other across many different disciplines.

Some comparison of journal metrics by discipline has previously been undertaken, both in the field of higher education generally and in specific fields. Although none have been as comprehensive as this study, which includes comparisons between disciplines across academic fields, valuable insights have been gained from this previous work. For example, in the field of communication, Repiso-Caballero and Delgado-López-Cózar found that Google Scholar accounted better for journals in non-English languages than Scopus and Web of Science; Google Scholar also covered twice as many journals as Scopus and three times as many as Web of Science.47 In the fields of communication and chemical engineering, Impact Factor, h-index, and Eigenfactor scores from different metric databases are all highly correlated with each other.48 On the other hand, in the nuclear medicine field, despite finding a strong similarity between Google Scholar, SCImago, and Web of Science, Zarifmahmoudi et al. found Google Scholar and Web of Science to be missing journals, particularly non-English journals.49 In the fields of anatomy and morphology, Web of Science, Eigenfactor, and SJR all ranked journals differently.50 The fields of occupational therapy, anatomy, and morphology reported similar changes in ranking by database.51 A variety of other subject-specific studies such as these have been conducted.52 One of the most valuable studies has been Meaningful Metrics, which discusses disciplines individually in terms of their use of metrics but does not compare metrics more specifically.53 Another valuable comparative study is that of Wouters et al., which provides a literature review of studies comparing Web of Science, Scopus, and Google Scholar.54 These previous studies are helpful in understanding the multitude of complexities that exist in comparing journal metrics, but their limited scope necessitates a study such as this one, which combines different subject areas to create a more holistic, comparative picture.

Many studies compare disciplines that are closely related to determine differences in metric rating systems. Lillquist and Green, for example, analyzed various science fields and found that physics, biology, and chemistry had the highest h-index values and significantly out-published mathematics faculty in both quantity and h-index ranking.55 Similarly, Batista et al. found that physics publications ranked highest in regard to metrics, followed by chemistry, then biology, then mathematics.56 Kamdem et al. found pharmacology to be the field with the most scientific productivity, followed by biochemistry, then physiology and biophysics.57 These studies indicate that different fields, including closely related fields, are ranked differently in their metric evaluations. Comparing fields therefore requires an understanding of the general trends in the different rankings. Based on this research, we expect to see strong differences between how each discipline is treated by each metrics system, because of both the differences in discipline publishing tendencies and the differences in the metrics systems themselves. Understanding these differences will be invaluable in understanding how metrics systems can be of better use to the academic community, and it will indicate how individual librarians and academic libraries can best use metrics.

Methodology

At the beginning of this study, we compiled a list of universities with ARL libraries, as this was determined to be a satisfactory indicator of stellar research quality. From this list, we created a sublist of universities of approximately the same size, student makeup, and research output as the authors’ university. This was done in the hopes of considering three universities with comparable research quality, output, and goals. From this shorter list, three universities (Brigham Young University, the University of Texas at Austin, and Virginia Tech) were selected, one being the researchers’ own university and the other two being selected randomly from the list of comparable ARL library universities. Starting with the first university, Brigham Young University, two departments were randomly selected from each college; colleges were used to allow for a wide range of academic areas of research. We attempted to use the same departments from each university, although this was not always possible. Using a random stratified sampling model, 10 full or associate professors were randomly selected from each previously determined department, as sketched in the example below. Faculty members were chosen only if their curriculum vitae (CV or vita) was available either on their university website or via Google Scholar.§ This gave us approximately 160 faculty members per school, yielding an approximate total of 480 faculty.
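As a rough illustration of this sampling step, the sketch below draws ten professors at random from each department; the department names and faculty lists are hypothetical placeholders rather than the actual data or code used in this study.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical strata: each department maps to its eligible faculty
# (full or associate professors with a CV or Google Scholar profile online).
faculty_by_department = {
    "Chemistry": [f"chem_prof_{i:02d}" for i in range(25)],
    "Philosophy": [f"phil_prof_{i:02d}" for i in range(14)],
    "Teacher Education": [f"ted_prof_{i:02d}" for i in range(18)],
}

SAMPLE_PER_DEPARTMENT = 10

def stratified_sample(strata, n_per_stratum):
    """Randomly draw n_per_stratum members from each stratum (department)."""
    return {
        dept: random.sample(members, min(n_per_stratum, len(members)))
        for dept, members in strata.items()
    }

sample = stratified_sample(faculty_by_department, SAMPLE_PER_DEPARTMENT)
for dept, professors in sample.items():
    print(dept, professors)
```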

Influenced by the study of engineering metrics by Lillquist and Green, which analyzed only tenured professors due to the potential differences in rank, we chose to look at only full and associate professors, since associate professors have expertise and publishing experience comparable to that of their full professor counterparts.58 Furthermore, including associate professors in our sample size allowed for a larger sample and let us include departments that had fewer full-rank professors. While some assistant professors are well published, we decided against using assistant faculty members, since many of them have not had sufficient time in their position to publish extensively. We aimed to choose professorial ranks with a significant number of publications so that more journals would be available for our list; our ultimate goal was a large, random selection of academic journals.

Once we compiled a list of teaching faculty with online vitas, student research assistants created a spreadsheet listing each article that was published by a faculty member in a peer-reviewed journal. This required the students to look up each individual journal online to make sure it was peer-reviewed. All of this was done in an effort to get a list of journals in which faculty of various disciplines published their work. For each journal in the list, student research assistants found the metrics from Eigenfactor (including Eigenfactor score and Eigenfactor Article Influence Score), Scopus (Scopus Cite Score, Scopus CiteScore Percentile, SJR, and SNIP), Web of Science (Impact Factor and 5-year Impact Factor), and Google Scholar (h5-index score). When no metric existed for the journal, the field was marked as blank. In the end, our study was able to examine 8,418 unique journals. Data for this study was collected from January to August of 2020.
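The following sketch illustrates, with invented records, the shape of such a journal spreadsheet: one row per journal, a metric stored as None when a system does not index the journal, and a helper that computes coverage percentages of the kind reported in table 1. The layout is an assumption for illustration, not our actual data file or code.

```python
# Invented journal records mimicking the study's spreadsheet layout.
# None means the metric system does not provide a value for that journal.
journals = [
    {"title": "Journal A", "discipline": "Humanities",
     "impact_factor": None, "scopus_citescore": 1.8, "gs_h5": 14},
    {"title": "Journal B", "discipline": "Engineering",
     "impact_factor": 4.2, "scopus_citescore": 7.1, "gs_h5": 55},
    {"title": "Journal C", "discipline": "Humanities",
     "impact_factor": 0.9, "scopus_citescore": None, "gs_h5": None},
]

def coverage_percent(records, metric):
    """Percentage of journals for which the given metric is present."""
    indexed = sum(1 for record in records if record[metric] is not None)
    return 100 * indexed / len(records)

for metric in ("impact_factor", "scopus_citescore", "gs_h5"):
    print(f"{metric}: {coverage_percent(journals, metric):.0f}% of journals indexed")
```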

In cases where the curriculum vitae of the faculty member was not accessible online but a verified Google Scholar profile was available, the list of publications was taken from the Google Scholar profile. This allowed us to include a larger pool of faculty and, ultimately, a greater number of journals. If a faculty member’s publication information was not available online in any form, a different faculty member from the department was randomly selected. This created some selection bias in favor of faculty members whose publication lists were online, but this bias was deemed an acceptable tradeoff for the research as a whole.

Our study was limited to universities inside the United States. Although this may be considered a limitation and further study should be performed to compare other countries’ metrics, we considered our geographical boundaries a strength. By choosing a single country to study, we controlled for cultural and country-based influences that could arise by studying universities of multiple countries, especially given that publishing requirements can vary so extensively by country. We performed statistical analyses to check whether the choice of university affected the bibliometric results and found no significant effect.

Our statistical analysis was conducted based on the colleges we picked, since each college essentially represented a different area of scholarship. There was obviously some range in values by department, as is evident in figure 1 below. This chart is provided to demonstrate the variance that existed across departments. Impact Factor was chosen as a metric of comparison, but any metric could have fulfilled this function. Scholars from each of the specific departments may find this chart of interest to see how their department compares to their college as a whole, as well as to the other departments outside of their college. This range demonstrates that each subject area should be conducting its own area-specific research regarding bibliometrics, since a broad study like this one cannot cover all the idiosyncrasies of each field. The discipline of statistics may be a particularly problematic area because statistics faculty tend to publish in a wide variety of fields. However, we believe that the range represented in the departments allows for a better representation of each college, so we performed our statistical analyses on colleges instead of departments.

Figure 1. Mean Impact Factor by Department
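Our by-college comparisons are reported here only by significance level; as one hedged illustration of what such a comparison might look like, the sketch below runs a one-way analysis of variance (using scipy.stats.f_oneway) on invented Impact Factor values. It is an assumption about the form of the test, not the analysis script used for this study.

```python
from scipy.stats import f_oneway

# Invented Impact Factor values for journals grouped by college; a real
# analysis would draw these from the compiled journal spreadsheet.
impact_factors_by_college = {
    "Engineering": [5.1, 6.2, 4.8, 5.9, 5.4],
    "Humanities": [2.4, 3.1, 2.9, 3.6, 2.7],
    "Life Sciences": [4.9, 4.2, 4.6, 5.0, 4.1],
}

# One-way ANOVA: does the mean Impact Factor differ across colleges?
f_stat, p_value = f_oneway(*impact_factors_by_college.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```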

Results & Discussion

Part 1: How well-represented is each discipline in the indexing of each metrics system?

The results in table 1 below indicate how well each discipline is indexed by each metrics system. Scholars from each discipline may benefit from this table, as it indicates which metric system has the best coverage for each disciplinary area. For example, scholars in the humanities will benefit from using Google Scholar (51%) or Scopus (48%) when looking for metrics, especially when compared to the more limited humanities coverage that Eigenfactor and Impact Factor provide. This table is also helpful in drawing general trends regarding the journals covered by metrics systems. Based solely on the overall percentage, Eigenfactor has the smallest percentage of journals indexed, followed by Impact Factor. Interestingly, the metrics system with the highest percentage of journals classified is Google Scholar, the newest metric of the group. In fact, Google Scholar has the highest percentage of journals classified in the areas of physical science, humanities, social science (tied with Scopus), education, engineering, and law, so scholars from all those areas may benefit most from referencing Google Scholar. Compared to other metrics systems, Google Scholar’s high coverage of law journals (65%) is particularly impressive. In comparison, the fields of business, life sciences, and fine arts may consider consulting SJR or SNIP instead (although for these disciplines the difference between those systems and Google Scholar is only 1–2% and is therefore perhaps negligible).

Table 1

Percentage of Journals Represented in Each Database, Categorized by Discipline Type

| Discipline | Eigenfactor | Eigenfactor Article Influence | Scopus Cite Score | Scopus CiteScore Percentile | SJR | SNIP | Impact Factor | 5-year Impact Factor | G Scholar h5-index | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| Physical Sciences | 71% | 71% | 78% | 78% | 79% | 79% | 74% | 72% | 80% | 76% |
| Humanities | 13% | 16% | 48% | 48% | 49% | 48% | 17% | 17% | 51% | 34% |
| Social Sciences | 68% | 70% | 78% | 78% | 79% | 79% | 72% | 71% | 79% | 75% |
| Education | 24% | 27% | 47% | 47% | 48% | 48% | 30% | 28% | 55% | 39% |
| Engineering | 71% | 71% | 78% | 78% | 79% | 78% | 77% | 75% | 81% | 76% |
| Business | 56% | 57% | 70% | 70% | 71% | 71% | 62% | 59% | 70% | 65% |
| Life Sciences | 76% | 76% | 84% | 84% | 85% | 84% | 82% | 79% | 84% | 82% |
| Fine Arts | 33% | 35% | 60% | 60% | 60% | 60% | 38% | 37% | 58% | 49% |
| Law | 34% | 34% | 40% | 40% | 40% | 40% | 38% | 34% | 65% | 41% |
| Overall | 58% | 60% | 72% | 72% | 73% | 72% | 64% | 61% | 74% | |

Overall, these results indicate the emerging dominance of Google Scholar in the world of academic metrics. Google Scholar and Scopus lead the metrics systems in every area, at least in regard to the percentage of journals classified. This raises the question of whether scholars are overrelying on metrics systems that are becoming increasingly irrelevant. If Eigenfactor and Web of Science cannot keep up with the percentage of journals indexed, are they worth consulting? This is particularly relevant given the cost of accessing these systems. One 2019 report states that the annual subscription price of Web of Science was over $212,000, compared to only $140,000 for Scopus.59 In comparison, Google Scholar’s statistics are free. Google Scholar’s thorough coverage across all disciplines is a major strength, and its universal availability makes it highly useful for scholars of all levels of education. At the same time, however, Google Scholar sometimes picks up on sources that are non-academic, and it is somewhat error-ridden, so the reliability of its journal classifications may need to be evaluated. Thus, Google Scholar’s overall worth may need to be reevaluated in future years, and other metrics systems may need to make some changes in order to better compete in the field of academic metrics.

This table also has implications for how well journals are being indexed overall. Across all metrics systems, the best indexed field was the life sciences at 82 percent, followed closely by the physical sciences and engineering at 76 percent and the social sciences at 75 percent. Business was also fairly well indexed, with more than half the journals indexed in every metric system. All the other disciplines (fine arts, law, education, and humanities) averaged less than half indexed. These numbers are somewhat startling, given academia’s generally high reliance on numbers and metrics in determining quality. The statistics indicate a strong preference for indexing science-based fields compared to other areas of study. Arguably, non-science-based fields and the soft sciences tend to rely less on bibliometrics to determine quality.60 However, this discrepancy highlights a significant problem with using metrics in academia: not all journals are indexed. With the highest rate of classification at 82 percent, the full range of academic journals in a discipline is not being wholly represented, so the system is inherently biased. It also strongly favors science-based disciplines, leaving other fields highly limited in their ability to judge the quality of their publications based on metrics. Thus, this underrepresentation of some disciplines should be fully considered when using the metrics systems.

Part 2: How does each metrics system treat disciplines differently, and how do these differences compare across metrics systems?

As is evident in table 2 below, discipline has a strong effect on bibliometric values.61 Engineering, business, and the sciences (social, physical, and life) all consistently had the highest metric values. Engineering was almost always the field with the highest metric value. Fine arts, law, and education were all consistently quite low, while the comparative humanities value varied strongly depending on the metric. While humanities had one of the higher Eigenfactor values, it also had the lowest Scopus Cite Score. Thus, it is impossible to make direct comparisons between disciplines using metrics, as a hard science field will naturally have a significantly higher metric value than a humanities, education, or law field. Even comparing metrics across one discipline will be problematic.

Table 2

Metric Averages for All the Journals from Each Discipline. Significance Was p < .001 for All Metrics except SNIP, Which Was Insignificant with a p Value of .803

| Discipline | Eigenfactor | Eigenfactor Article Influence | Scopus Cite Score | Scopus CiteScore Percentile | SJR | SNIP | Impact Factor | 5-year Impact Factor | G Scholar h5-index |
|---|---|---|---|---|---|---|---|---|---|
| Physical Sciences | 0.082 | 1.91 | 7.47 | 76.96 | 1.9 | 3.2 | 4.81 | 5.15 | 58.11 |
| Humanities | 0.062 | 1.6 | 2.03 | 70.07 | 0.75 | 1.16 | 3.01 | 3.54 | 16.36 |
| Social Sciences | 0.035 | 1.8 | 5.34 | 78.03 | 2.01 | 1.75 | 3.53 | 4.17 | 44.3 |
| Education | 0.013 | 0.81 | 2.81 | 72.76 | 1.06 | 1.51 | 1.98 | 2.59 | 24.88 |
| Engineering | 0.09 | 1.45 | 8.57 | 80.02 | 1.86 | 1.57 | 5.36 | 5.59 | 62.13 |
| Business | 0.019 | 2.08 | 5.28 | 79.85 | 2.82 | 2.17 | 3.17 | 4.36 | 42.15 |
| Life Sciences | 0.053 | 1.71 | 6.67 | 76.62 | 1.85 | 1.48 | 4.37 | 4.77 | 50.58 |
| Fine Arts | 0.006 | 0.74 | 2.92 | 76.79 | 0.92 | 1.32 | 1.98 | 2.54 | 25.33 |
| Law | 0.008 | 1.18 | 2.14 | 69.36 | 1.18 | 1.12 | 1.82 | 2.09 | 22.1 |
| All Departments | 0.058 | 1.66 | 6.15 | 77.06 | 1.8 | 1.88 | 4.19 | 4.67 | 47.3 |

It is important to note the strong differences between each metric system. Despite having some similar goals, each metric system is ultimately unique. Figure 2 below demonstrates the level of comparability between the different metric systems. Darker colors and larger dots correspond with a higher level of correlation, and no negative correlations were found. The highest correlation existed between Eigenfactor Article Influence Score and SJR, as well as between Eigenfactor Article Influence Score and the Impact Factors. The Impact Factors were also highly correlated with Scopus Cite Score. This is consistent with previous research, which found high correlation between Impact Factor, Eigenfactor, and Scopus, suggesting that these metrics might be used comparably.62 Interestingly, SNIP had almost no correlation with the other metrics. While correlation does not necessarily equate with reliability, the consistency between Impact Factor, Eigenfactor, and Scopus is somewhat encouraging in regard to the future of the metrics systems. At the same time, the lack of consistency between some metrics is alarming, since it shows that measuring quality using metrics can be highly problematic. Although the use of metrics and numbers may seem entirely objective, clearly this is not an entirely consistent system.

Figure 2. Correlation between Metric Systems
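A pairwise correlation matrix like the one visualized in figure 2 could be produced along the following lines; the column names stand in for the metrics, the values are invented, and missing entries (journals a system does not index) are handled pairwise. This is a sketch of the general approach, not the code used for this study.

```python
import pandas as pd

# Invented metric values for a handful of journals; None marks journals a
# given system does not index. A real run would use the full journal dataset.
metrics = pd.DataFrame({
    "eigenfactor_article_influence": [1.9, 0.8, 2.1, 1.1, None],
    "sjr":                           [2.0, 0.7, 2.4, 1.0, 0.9],
    "impact_factor":                 [4.8, 2.1, 5.3, None, 1.9],
    "snip":                          [3.2, 1.2, 1.6, 1.5, 1.1],
})

# Pairwise Pearson correlations, ignoring journals missing either metric in a pair.
correlation_matrix = metrics.corr(method="pearson")
print(correlation_matrix.round(2))
```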

One of the biggest trends in our data, which is consistent with what might be expected, is that fields that are more closely tied to the sciences nearly always have higher metric measurements than less scientific fields, regardless of the type of metric. Education, Fine Arts, Law, and Humanities all consistently had lower metrics, and they were less represented overall in metric systems. It is important that university rank and tenure committees note this disciplinary difference. Publishing is obviously an important part of rank advancement, and having good metrics can make a significant difference for a faculty member, so it is critical to understand what is considered a good metric for each discipline. Similarly, librarians purchasing access to journals across fields should note that the metrics values will vary based on discipline. Higher metric scores may be more consistent for the sciences, business, and engineering, so humanities, law, education, and fine arts journals should not be dismissed simply because their metric values seem lower in comparison; similarly, just because a journal is not included in a metric system does not necessarily mean that it is of lower quality than an indexed journal. Furthermore, with such vast discrepancies in the metrics, any comparisons must be undertaken with much caution.

Limitations

A primary limitation of our study is that our data was collected only from scholars at universities in the United States. Consequently, the majority of the journals included were written in English. Given that some databases have limited coverage of non-English journals and some disciplines rely more heavily on foreign language publications, it is likely that our study’s results are inaccurate in regard to non-English journals. Further studies should include international universities in their data in order to offset any national or linguistic bias that may occur. An additional limitation of our study is that we only looked at faculty members who had their list of publications available online. This may preclude examination of less tech-savvy faculty, who may publish more often in certain journals. While our study was intended to examine differences by discipline and therefore should not be overly affected by this limitation, there remains the possibility that limiting our sample to only faculty with their publications listed online caused a difference in our results.

We recognize that a major limitation of our study is its generality. After all, each discipline that we have compared contains a variety of subfields and specialties, each of which may be treated differently by journal metrics, and we only compared a few of many disciplines.63 Furthermore, many areas of research overlap with each other or may not be easily defined within one discipline.64 These difficulties warrant further research beyond the scope of this study, whose primary goal was to offer a general, comparative overview of the differences in journal metrics by discipline. We recommend that every discipline undertake a study of its subdisciplinary differences in journal metrics, as has been done in engineering and other fields.65

Conclusion

Our study identifies some interesting trends in publishing metrics across disciplines, trends that will be useful to academic librarians in advising their faculty on publications and in working on publications of their own. We found that the fields of business, engineering, and the sciences tend to have higher publishing metrics and a greater representation in bibliometric systems than fine arts, humanities, and education. Different metric systems’ treatments of the various disciplines produced distinct results. The biggest takeaway from our study is the huge discrepancy between disciplines, which prevents direct comparison of their bibliometrics. The metrics system is not consistent, and it is ultimately an imperfect way to measure research quality. More research and scholarship will be necessary to understand the flaws in this system more fully, but our study provides an initial picture of how interdisciplinary differences affect journal metrics. Our results demonstrate that librarians as well as scholars and administrators must be careful in their treatment of metrics. No metric system can be considered an ideal measurement of quality, and all metric systems should be used with caution and careful attention to how disciplines are treated differently.

Appendix

Figure 3. Eigenfactor Average by Discipline (p < .0001)

Figure 4. Eigenfactor Article Influence Score Average by Discipline (p < .0001)

Figure 5. Scopus Cite Score Average by Discipline (p < .0001)

Figure 6. Scopus Cite Score Percentile Average by Discipline (p < .0001)

Figure 7. SJR Average by Discipline (p < .0001)

Figure 8. SNIP Average by Discipline (not statistically significant)

Figure 9. Impact Factor Score Average by Discipline (p < .0001)

Figure 10. 5-year Impact Factor Score Average by Discipline (p < .0001)

Figure 11. Google Scholar h5-index Score Average by Discipline (p < .0001)

Notes

1. Erin C. McKiernan, Lesley A. Schimanski, Carol Muñoz Nieves, Lisa Matthias, Meredith T. Niles, and Juan P. Alperin, “Meta-research: Use of the Journal Impact Factor in Academic Review, Promotion, and Tenure Evaluations,” eLife 8 (2019): 1–12, https://doi.org/10.7554/eLife.47338.001.

2. Dimitris Bertsimas, Erik Brynjolfsson, Shachar Reichman, and John Silberholz, “Or Forum—Tenure Analytics: Models for Predicting Research Impact,” Operations Research 63, no. 6 (2015): 1246–61, https://doi.org/10.1287/opre.2015.1447; McKiernan, “Meta-research”; Tara Malone and Susan Burke, “Academic Librarians’ Knowledge of Bibliometrics and Altmetrics,” Evidence Based Library and Information Practice 11, no. 3 (2016): 34–49; Jacob B. Slyder, Beth R. Stein, Brent S. Sams, David M. Walker, B. Jacob Beale, Jeffrey J. Feldhaus, and Carolyn A. Copenheaver, “Citation Pattern and Lifespan: A Comparison of Discipline, Institution, and Individual,” Scientometrics 89, no. 3 (2011): 955–66, https://doi.org/10.1007/s11192-011-0467-x.

3. Tara Malone and Susan Burke, “Academic Librarians’ Knowledge of Bibliometrics and Altmetrics,” Evidence Based Library and Information Practice 11, no. 3 (2016): 36–37.

4. Mayur Amin and Michael A. Mabe, “Impact Factors: Use and Abuse,” Medicina (Buenos Aires) 63, no. 4 (2003): 347–54; Fredrick Ruban, “Journal Impact Factor: An Academic Inquest,” Journal of the Gujarat Research Society 21, no. 10 (2019): 146–54.

5. Björn Brembs, Katherine Button, and Marcus Munafò. “Deep Impact: Unintended Consequences of Journal Rank.” Frontiers in Human Neuroscience 7 (2013): 1-12, https://doi.org/10.3389/fnhum.2013.00291; Gualberto Buela-Casal and Izabela Zych, “What do the scientists think about the impact factor?” Scientometrics 92, no. 2 (2012): 281–92; Roger Burrows, “Living with the h-Index? Metric Assemblages in the Contemporary Academy,” The Sociological Review 60, no. 2 (2012): 355–72.

6. Dag W. Aksnes., Liv Langfeldt, and Paul Wouters, “Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories,” Sage Open 9, no. 1 (2019): 1–17; Jeffrey W. Alstete, Nicholas J. Beutell, and John P. Meyer, Evaluating Scholarship and Research Impact: History, Practices, and Policy Development, (Emerald Group Publishing, 2018); Mario Cantín, M. Muñoz, and Ignacio Roa, “Comparison between Impact Factor, Eigenfactor Score, and SCImago Journal Rank Indicator in Anatomy and Morphology Journals,” International Journal of Morphology 33, no. 3 (2015): 1183–3, https://dx.doi.org/10.4067/S0717-95022015000300060; Liv Langfeldt, Ingvild Reymert, and Dag W. Aksnes, “The Role of Metrics in Peer Assessments,” Research Evaluation 30, no. 1 (2021): 112–26; Alexander M. Petersen, Fengzhong Wang, and H. Eugene Stanley, “Methods for Measuring the Citations and Productivity of Scientists across Time and Discipline,” Physical Review E 81, no. 3 (2010): 1–9, https://doi.org/10.1103/PhysRevE.81.036114.

7. Juan Iglesias and Carlos Pecharromán, “Scaling the H-Index for Different Scientific ISI Fields,” Scientometrics 73, no. 3 (2007): 303-20, https://doi.org/10.1007/s11192-007-1805-x, 317.

8. Anne-Wil Harzing and Satu Alakangas, “Microsoft Academic: Is the Phoenix Getting Wings?,” Scientometrics 110, no. 1 (2017): 371–83, https://doi.org/10.1007/s11192-015-1798-9; Adam Prins, Rodrigo Costas, Thed van Leeuwen, and Paul F. Wouters, “Using Google Scholar in Research Evaluation of Humanities and Social Science Programs: A Comparison with Web of Science Data,” Research Evaluation 25, no. 3 (2016): 264–70, https://doi.org/10.1093/reseval/rvv049.

9. See “Journal Citation Reports: Reasons for Not Calculating Impact Factors for Journals Covered in Arts & Humanities Citation Index,” Clarivate, 27 June 2018, https://support.clarivate.com/ScientificandAcademicResearch/s/article/Journal-Citation-Reports-Reasons-for-not-calculating-Impact-Factors-for-journals-covered-in-Arts-Humanities-Citation-Index?language=en_US.

10. Shakil Ahmad, Mohammad Sohail, Abu Waris, Amir Elginaid, and Isam Mohammed, “SCImago, Eigenfactor Score, and H5 Index Journal Rank Indicator: A Study of Journals in the area of Construction and Building Technologies,” DESIDOC Journal of Library & Information Technology 38, no. 4 (2018): 278-85, https://doi.org/10.14429/djlit.38.4.11503; Mike Thelwall, “Three Practical Field Normalised Alternative Indicator Formulae for Research Evaluation,” Journal of Informetrics 11, no. 1 (2017): 128–51, https://doi.org/10.1016/j.joi.2016.12.002.

11. Fredrik Åström and Joacim Hansson, “How Implementation of Bibliometric Practice Affects the Role of Academic Libraries,” Journal of Librarianship and Information Science 45, no. 4 (2013): 316–22; Franci Demšar and Primož Južnič, “Transparency of Research Policy and the Role of Librarians,” Journal of Librarianship and Information Science 46, no. 2 (2014): 139–47.

12. Malone, “Academic Librarians’ Knowledge of Bibliometrics and Altmetrics.”

13. For a longer and broader study of librarians and metrics, see Robin Chin Roemer and Rachel Borchardt, Meaningful Metrics: A 21st-Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact (Chicago: Association of College & Research Libraries, 2015), particularly 208–31. While the resources covered in this book are extensive and invaluable, we hope our study will offer a quick, concise resource for librarians to access regularly. We also hope to contribute something new in our comparative approach, considering how metrics compare in terms of the journals used regularly by faculty.

14. “The Clarivate Analytics Impact Factor,” Clarivate, Web of Science Group, Accessed 8 January 2022, https://clarivate.com/webofsciencegroup/essays/impact-factor/.

15. “The Clarivate Analytics Impact Factor.”

16. “Google Scholar Metrics,” Google Scholar, Accessed 8 January 2022, https://scholar.google.com/intl/en/scholar/metrics.html#metrics.

17. “Measuring a Journal’s Impact,” Elsevier, Accessed 8 January 2022, https://www.elsevier.com/authors/tools-and-resources/measuring-a-journals-impact.

18. Borja González-Pereira, Vicente P. Guerrero-Bote, and Félix Moya-Anegón, “A New Approach to the Metric of Journals’ Scientific Prestige: The SJR Indicator,” Journal of Informetrics 4, no. 3 (2010): 379–91, https://doi.org/10.1016/j.joi.2010.03.002.

19. “Measuring a Journal’s Impact.”

20. “About the Eigenfactor Project,” Eigenfactor, Accessed 8 January 2022, http://www.eigenfactor.org/about.php.

21. James Ravenscroft, Maria Liakata, Amanda Clare, and Daniel Duma, “Measuring Scientific Impact beyond Academia: An Assessment of Existing Impact Metrics and Proposed Improvements,” PloS One 12, no. 3 (2017): 1–21, https://doi.org/10.1371/journal.pone.0173152.

22. Ruban, “Journal Impact Factor: An Academic Inquest.”

23. Prins, Costa, van Leeuwen, and Wouters, “Using Google Scholar in Research Evaluation of Humanities and Social Science Programs”; William H. Walters, “Information Sources and Indicators for the Assessment of Journal Reputation and Impact,” The Reference Librarian 57, no. 1 (2016): 13–22, https://doi.org/10.1080/02763877.2015.1088426.

24. Khalid Mahmood and Muhammad Ajmal Khan, “Comparison among Journal Impact Factor, Eigenfactor Score and SCImago Journal Rank Indicator of LIS Journals,” Pakistan Library & Information Science Journal 50, no. 1 (2019): 4–14, https://lib.byu.edu/remoteauth/?url=http://search.ebscohost.com/login.aspx?direct=true&db=lih&AN=137738287&site=eds-live&scope=site; Junwen Zhu and Weishu Liu, “A Tale of Two Databases: The Use of Web of Science and Scopus in Academic Papers,” Scientometrics (2020): 1-15, https://doi.org/10.1007/s11192-020-03387-8.

25. Carl T. Bergstrom, Jevin D. West, and Marc A. Wiseman, “The Eigenfactor™ Metrics,” Journal of Neuroscience 28, no. 45 (2008): 11433–4, https://doi.org/10.1523/JNEUROSCI.0003-08.2008; Mahmood and Khan, “Comparison among Journal Impact Factor, Eigenfactor Score and SCImago Journal Rank Indicator of LIS Journals.”

26. Mahmood and Khan, “Comparison among Journal Impact Factor, Eigenfactor Score and SCImago Journal Rank Indicator of LIS Journals.”

27. Karen Chapman and Alexander E. Ellinger, “An Evaluation of Web of Science, Scopus and Google Scholar Citations in Operations Management,” The International Journal of Logistics Management 30, no. 4 (2019): 1039–53, https://doi.org/10.1108/IJLM-04-2019-0110; Harzing and Alakangas, “Microsoft Academic.”

28. Mahmood and Khan, “Comparison among Journal Impact Factor, Eigenfactor Score and SCImago Journal Rank Indicator of LIS Journals.”

29. Henk F. Moed, “Measuring Contextual Citation Impact of Scientific Journals,” Journal of informetrics 4, no. 3 (2010): 265–77, https://doi.org/10.1016/j.joi.2010.01.002.

30. Moed, “Measuring Contextual Citation Impact of Scientific Journals.”

31. Michael Gusenbauer, “Google Scholar to Overshadow Them All? Comparing the Sizes of 12 Academic Search Engines and Bibliographic Databases,” Scientometrics 118, no. 1 (2019): 177–214, https://doi.org/10.1007/s11192-018-2958-5; Alberto Martín-Martín, Enrique Orduna-Malea, Mike Thelwall, and Emilio Delgado López-Cózar, “Google Scholar, Web of Science, and Scopus: A Systematic Comparison of Citations in 252 Subject Categories,” Journal of Informetrics 12, no. 4 (2018): 1160–77, https://doi.org/10.1016/j.joi.2018.09.002; Henk F. Moed, Judit Bar-Ilan, and Gali Halevi, “A New Methodology for Comparing Google Scholar and Scopus,” Journal of Informetrics 10, no. 2 (2016): 533–51, https://doi.org/10.1016/j.joi.2016.04.017.

32. Chapman and Ellinger, “An Evaluation of Web of Science, Scopus and Google Scholar Citations in Operations Management”; Prins, Costa, van Leeuwen, and Wouters, “Using Google Scholar in Research Evaluation of Humanities and Social Science Programs.”

33. Halevi, Moed, and Bar-Ilan, “A New Methodology for Comparing Google Scholar and Scopus”; Harzing and Alakangas, “Microsoft Academic”; Martín-Martín, Orduna-Malea, Thelwall, and López-Cózar, “Google Scholar, Web of Science, and Scopus”; Alberto Martín-Martín, Enrique Orduna-Malea, and Emilio Delgado López-Cózar, “Coverage of Highly-Cited Documents in Google Scholar, Web of Science, and Scopus: A Multidisciplinary Comparison,” Scientometrics 116, no. 3 (2018): 2175–88, https://doi.org/10.1007/s11192-018-2820-9.

34. Syed Rahmat Ullah Shah and Khalid Mahmood, “Review of Google Scholar, Web of Science, and Scopus Search Results: The Case of Inclusive Education Research,” Library Philosophy and Practice (2017).

35. For a study indicating the low usage of altmetrics in academic circles, see Rodrigo Costas, Zohreh Zahedi, and Paul Wouters. “Do ‘altmetrics’ correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective.” Journal of the Association for Information Science and Technology 66, no. 10 (2015): 2003–19. For an analysis on complications of altmetric usage, see Mike Thelwall, “Measuring societal impacts of research with altmetrics? Common problems and mistakes,” Journal of Economic Surveys 35, no. 5 (2021): 1302–14.

36. Hamed Alhoori and Richard Furuta, “Do Altmetrics Follow the Crowd or Does the Crowd Follow Altmetrics?” In IEEE/ACM Joint Conference on Digital Libraries, 375–78. IEEE, 2014.

37. Christina García-Villar, “A Critical Review on Altmetrics: Can We Measure the Social Impact Factor?” Insights into Imaging 12, no. 1 (2021): 1–10.

38. Akhil Pandey Akella, Hamed Alhoori, Pavan Ravikanth Kondamudi, Cole Freeman, and Haiming Zhou, “Early Indicators of Scientific Impact: Predicting Citations with Altmetrics,” Journal of Informetrics 15, no. 2 (2021): 1–35.

39. Thelwall, “Measuring Societal Impacts,” 1302–14.

40. Mahmood and Khan, “Comparison among Journal Impact Factor, Eigenfactor Score and SCImago Journal Rank Indicator of LIS Journals.”

41. Ahmad, Sohail, Waris, Elginaid, and Mohammed, “SCImago, Eigenfactor Score, and H5 Index Journal Rank Indicator.”

42. Borja González-Pereira, Vicente P. Guerrero-Bote, and Félix Moya-Anegón, “A New Approach to the Metric of Journals’ Scientific Prestige: The SJR Indicator.”

43. Katherine Chew, Mary Schoenborn, James Stemper, and Caroline Lilyard, “E-Journal Metrics for Collection Management: Exploring Disciplinary Usage Differences in Scopus and Web of Science,” Evidence Based Library and Information Practice 11, no. 2 (2016): 97–120, https://doi.org/10.18438/B85P87.

44. Clarivate, “Introducing the Journal Citation Indicator: A New Approach to Measure the Citation Impact of Journals in the Web of Science Core Collection,” Clarivate, 2021, https://clarivate.com/wp-content/uploads/dlm_uploads/2021/05/Journal-Citation-Indicator-discussion-paper-2.pdf, 1-3.

45. Cantín, Muñoz, & Roa, “Comparison between Impact Factor, Eigenfactor Score, and SCImago Journal Rank Indicator”; Leili Zarifmahmoudi, Jamshid Jamali, and Ramin Sadeghi, “Google Scholar Journal Metrics: Comparison with Impact Factor and SCImago Journal Rank Indicator for Nuclear Medicine Journals,” Iranian Journal of Nuclear Medicine 23, no. 1 (2015): 8–14.

46. Roemer and Borchardt, Meaningful Metrics, 180–98. Meaningful Metrics considers many individual fields and their relationships to metrics, and we feel that our study builds on that research by comparing metrics with each other across fields—a comparative approach similar to the interdisciplinary-type work expected nowadays for subject librarians.

47. Rafael Repiso-Caballero and Emilio Delgado-López-Cózar, “The Impact of Scientific Journals of Communication: Comparing Google Scholar Metrics, Web of Science and Scopus,” Comunicar 21, no. 41 (2013): 45–52, https://www.scipedia.com/public/Delgado_Repiso_2013a.

48. Repiso-Caballero and Delgado-López-Cózar, “The Impact of Scientific Journals of Communication”; Chun-Yang Yin, “Do Impact Factor, H-Index and Eigenfactor™ of Chemical Engineering Journals Correlate Well with Each Other and Indicate the Journals’ Influence and Prestige?,” Current Science 100, no. 5 (2011): 648–53, www.jstor.org/stable/24075802.

49. Zarifmahmoudi, Jamali, and Sadeghi, “Google Scholar Journal Metrics: Comparison with Impact Factor and SCImago Journal Rank Indicator for Nuclear Medicine Journals.”

50. Cantín, Muñoz, & Roa, “Comparison between Impact Factor, Eigenfactor Score, and SCImago Journal Rank Indicator.”

51. Ted Brown and Sharon A. Gutman, “Impact Factor, Eigenfactor, Article Influence, Scopus SNIP, and SCImage Journal Rank of Occupational Therapy Journals,” Scandinavian Journal of Occupational Therapy 26, no. 7 (2019): 475–83, https://doi.org/10.1080/11038128.2018.1473489; Cantín, Muñoz, & Roa, “Comparison between Impact Factor, Eigenfactor Score, and SCImago Journal Rank Indicator.”

52. Harzing and Alakangas, “Microsoft Academic.”

53. Roemer and Borchardt, Meaningful Metrics, 180–98.

54. Paul Wouters, Mike Thelwall, Kayvan Kousha, Ludo Waltman, Sarah de Rijcke, Alex Rushforth, and Thomas Franssen, The Metric Tide: Literature Review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management) (Leiden: HEFCE, 2015), 4–7.

55. Eva Lillquist and Sheldon Green, “The Discipline Dependence of Citation Statistics,” Scientometrics 84, no. 3 (2010): 749–62, https://doi.org/10.1007/s11192-010-0162-3.

56. Pablo D. Batista, Mônica G. Campiteli, and Osame Kinouchi, “Is It Possible to Compare Researchers with Different Scientific Interests?” Scientometrics 68, no. 1 (2006): 179–89, https://doi.org/10.1007/s11192-006-0090-4.

57. Jean Paul Kamdem, Daniel Henrique Roos, Adekunle Adeniran Sanmi, Luciana Calabró, Amos Olalekan Abolaji, Cláudia Sirlene de Oliveira, Luiz Marivando Barros et al., “Productivity of CNPq Researchers from Different Fields in Biomedical Sciences: The Need for Objective Bibliometric Parameters—A Report from Brazil,” Science and Engineering Ethics 25, no. 4 (2019): 1037–55, https://doi.org/10.1007/s11948-018-0025-5.

58. Lillquist and Green, “The Discipline Dependence of Citation Statistics”; Tamarinde L. Haven, Joeri K. Tijdink, Brian C. Martinson, and Lex M. Bouter, “Perceptions of Research Integrity Climate Differ between Academic Ranks and Disciplinary Fields: Results from a Survey among Academic Researchers in Amsterdam,” PloS one 14, no. 1 (2019).

59. “Web of Science versus Scopus: Journal Coverage Overlap Analysis,” Texas A&M University Libraries, https://docslib.org/doc/11344199/web-of-science-versus-scopus-journal-coverage-overlap-analysis1.

60. Lutz Bornmann, Andreas Thor, Werner Max, and Hermann Schrier, “The Application of Bibliometrics to Research Evaluation in the Humanities and Social Sciences: An Exploratory Study Using Normalized Google Scholar Data for the Publications of a Research Institute,” Journal of the Association for Information Science and Technology 67, no. 11 (2016): 2778–89.

61. See Appendix for charts illustrating how each metric changes based on discipline. This powerful visual comparison was deemed too lengthy for the main text of this paper but clarifies how each metric treats disciplines differently.

62. Mahmood and Khan, “Comparison among Journal Impact Factor, Eigenfactor Score and SCImago Journal Rank Indicator of LIS Journals.”

63. M. Ryan Haley, “A Ranking of Journals for the Aspiring Health Economist,” Applied Economics 48, no. 18 (2016): 1710–18, https://doi.org/10.1080/00036846.2015.1105927; Danny Kingsley, “Those Who Don’t Look Don’t Find: Disciplinary Considerations in Repository Advocacy,” OCLC Systems & Services 24, no. 4 (2008): 204–18, https://doi.org/10.1108/10650750810914210.

64. Kingsley, “Those Who Don’t Look Don’t Find.”

65. Batista, Campiteli, and Kinouchi, “Is It Possible to Compare Researchers with Different Scientific Interests?”; Lillquist and Green, “The Discipline Dependence of Citation Statistics.” Some valuable research in this area has already been conducted. For examples, see Roemer and Borchardt, Meaningful Metrics, 180–98.

* Quinn Galbraith, Brigham Young University, email: quinn_galbraith@byu.edu; Alexandra Carlile Butterfield, Emory University, email: carlilealexandra@gmail.com; and Chase Cardon, Brigham Young University, email: chasecardon@hotmail.com. ©2023 Quinn Galbraith, Alexandra Carlile Butterfield, and Chase Cardon Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.

Arguably, a number of other qualifications could have been used to determine a university’s research quality. The Carnegie Classification of Institutions of Higher Education, for example, may have also been a good choice. However, given our interest in how metrics can apply to academic librarians, we chose to use an indicator based on stellar academic libraries.

In some cases, departments could not be matched exactly. BYU’s Exercise Science department was paired with University of Texas’ Kinesiology and Health Education Department and Virginia Tech’s Human Nutrition, Food and Exercise Sciences Department. The biology departments of BYU and Virginia Tech were paired with the University of Texas’ Integrative Biology department. In the field of Education, the departments used for BYU were Teacher Education and Educational Leadership and Foundations, the departments for the University of Texas were Curriculum and Education and Educational Leadership and Policy, and no department distinctions were made for Virginia Tech (all faculty were chosen from their entire School of Education). Furthermore, Virginia Tech did not have a law school, so results for Law are calculated solely from the BYU and University of Texas data.

§ The number of faculty members without online CVs varied widely, based on department and university. Seemingly, some departments prioritized or required listing an online CV; this meant that in some cases, students could use the first ten randomly selected professors from the department. On average, students estimated that 15–20 percent of the first ten professors selected in a department would not have a CV available online or on Google Scholar.
