Rethinking the Subscription Paradigm for Journals: Using Interlibrary Loan in Collection Development for Serials

Many librarians evaluate local Interlibrary Loan (ILL) statistics as part of collection development decisions concerning new subscriptions. In this study, the authors examine whether the number of ILL article requests received in one academic year can predict the use of those same journal titles once they are added as library resources. There is little correlation between ILL requests for individual titles and their later use as subscribed titles. However, there is strong correlation between ILL requests within a subject category and later use of subscribed titles in that subject category. An additional study examining the sources from which patrons made ILL requests shows that database search results, not journal titles, dominate. These results call into question the need for libraries to subscribe to individual journal titles rather than providing access to a broad array of articles.

Introduction/Statement of Problem

Interlibrary loan (ILL) is not merely a valuable public service to library users. Examination of patron behavior around ILL can lead to insights about how the collection can best be shaped to meet users’ information needs.

In a traditional collection development model, collection development librarians try to identify journal titles most likely to be of use to patrons, leaving their colleagues in interlibrary loan to meet patrons’ information needs when the library lacks access to a particular title. Collection development and ILL are therefore closely linked in fulfilling the library’s mission to provide patrons the information they desire. What is more, interlibrary loan data are often used to shape collection development decisions. It stands to reason that, if a patron needed a title badly enough to request it via ILL, collection development librarians should consider adding it to the collection. Conversely, collection development decisions such as cancelling a subscription have the potential to affect the workload of ILL staff.

However, as more libraries are gravitating toward emphasis on access to information rather than “ownership” (including licensed rights to electronic content), ILL data can shed light on the relative importance of each model of librarianship.1 If patterns of ILL use indicate a need for specific serials titles, then the ownership model can be justified. However, where usage is diffused, librarians may do well to question whether providing content by acquiring individual titles is the most effective approach to meeting information needs.

This study examines user behavior around ILL to test a commonly accepted principle of collection development: that ILL requests should inform decisions to begin subscriptions to individual serials titles. Building on the findings of that test, this study also examines the ways in which patrons identify the material they later request through ILL. Putting the former findings in the context of the latter findings allows us to weigh in on the important question of ownership versus access.

Examining Assumptions about the Use of ILL Data for Collection Development Decisions

In serials librarianship, it has become standard practice for collection development librarians to treat ILL requests as one factor among several informing decisions about whether to start a new subscription. From 1958 through 1960, Eugene Graziano of Southern Illinois University tested the hypothesis that “titles most frequently requested for interlibrary loan in any given time interval…are the titles that should be considered for first purchase.”2 Based on his findings, Graziano ultimately concluded that this kind of scrutiny had little value for selecting journals; yet the idea has persisted. In 1974, Doris New and Retha Ott suggested that titles requested with enough frequency “should be considered for subscription”; in the decades since, Brian Williams and Joan Hubbard observe, that notion has become “widely recognized” in collection development.3 Elena Bernardini notes that it behooves publishers to allow ILL of their titles because “analysis of transaction data often has a positive effect on new titles acquisitions.”4

In broad strokes, Herbert White notes that “anything ‘borrowed’ often enough to endanger fair use criteria should be bought immediately.”5 Peggy Johnson writes in her standard guide to collection development that “repeated requests from users for articles from a particular journal suggest that journal should be added to the collection.”6 The rationale for using ILL data to inform collection development decisions is a recognition that librarians can never fully comprehend the information needs of their patrons. White expressed it in terms of anticipation of user needs:

When we select for purchase we make a conscious effort to anticipate what our clients might ask for and most librarians do it well. However, permanent acquisition through purchase cannot be a perfect process. We can’t always anticipate what one user might want, nor can we anticipate that two or more users might want the same thing at the same time. Limitations of budget and space constrain even the largest research library and the interest in certain material could peak and decline rapidly.7

Williams and Hubbard observe that “interlibrary borrowing requests represent demonstrated needs by the faculty and students.”8

While this practice of using ILL data to select journal titles for subscription seems to be sound at first glance, it rests on two assumptions. The first is that collection development should prioritize acquisition of resources that are expected to be used most often. The second assumption is that prior ILL usage predicts future usage of subscribed content. The first assumption involves a philosophy of collection development and cannot be tested. The second assumption, however, is subject to examination through a test of correlation.

There is solid bibliographical ground for assuming that prior ILL requests predict future usage. In some ways, materials requested through ILL are similar to other serials. For example, it is established that patterns of usage through ILL are similar to patterns of usage for materials owned by a library. Stephen Bensman and Stanley Wilder note that “interlibrary loan use of serials is not the random use of rare and unimportant titles. On the contrary, it manifests the same characteristics of serials use within a library and is dominated by the same titles.”9 It may be sensible to consult ILL statistics for collection development decisions in similar ways as usage statistics are examined.

However, if prior ILL usage is not, in fact, correlated to usage of the same title after it is subscribed, then reliance upon ILL data to inform serials collection decisions is probably ill-advised. This study offers a test of that assumption, using a single academic library’s ILL data and usage figures for subscribed titles.

Exploring How Library Patrons Identify Materials to Borrow

The practice of using ILL data to identify titles for which to begin subscriptions has embedded in it the notion of the journal title per se as a thing of value to information seekers. While it is inescapable that journal publishers still sell their content in packages called journal subscriptions, the question of whether the package is important to library patrons is open. In the last century, patrons and librarians alike had internalized lists of the most useful or esteemed journals in certain fields. This sense of some journals being more valuable than others persists, especially among academic faculty. In many disciplines, publishing in certain journals carries more weight toward tenure and promotion than does publishing in less prestigious journals. Criteria for a journal being ranked highly often include peer review, expert opinion, low acceptance rate, and high circulation.10

In the twentieth century, Wilder notes, the information landscape, particularly in the sciences, was marked by “a high degree of consensus on what is important research, which individuals and institutions produce it, and what journals publish it… [The consensus was] measurable, and highly stable over time.”11 This was in an era when information retrieval was time-consuming, requiring manual searches in paper indexes or, at best, CD-ROM databases, and then consultation of physical volumes to assess the usefulness of an article. In such an environment, the prestige (and accompanying familiarity) of a journal was an important marker of an article’s likely value to library patrons. The journal title was associated with the usefulness of its articles as part of the library’s collection, justifying the collection development approach discussed above.

However, with nearly seamless integration of online indexes and full-text journal articles, the library patron of the twenty-first century may give less weight to the journal title when considering which articles to seek out. Examination of ILL data to explore how patrons gather information about the articles they request may provide insight into the importance of collecting journal titles as opposed to providing more access to individual articles.

Literature Review

Use of ILL Data for Collection Development Decisions

Many studies have compared ILL requests with subscribed materials; the most common have examined ILL requests for titles that have been cancelled. Most have determined that cancelling subscriptions does not cause a significant increase in ILL requests; further, they often conclude that ILL is a cost-effective alternative to subscriptions as a method of providing access to content published in journals.12 Such calculations, however, rest on the assumption that reliance upon ILL provides patrons access to content with the same ease as does a library’s subscription. Elizabeth Roberts reported that a significant portion of library patrons simply does not bother to use ILL because of “delay or inconvenience.”13 This phenomenon was corroborated in a study by Steven Knowlton, Iulia Kristanciuk, and Matthew Jabaily, who found that, “after patrons identify desired articles that require ILL, they only submit ILL requests 31 percent of the time.”14

Studies of the opposite problem—testing whether ILL data is a useful predictor of titles that should be subscribed—are less common. Graziano’s 1962 study occurred in a time before usage statistics were available. Instead, he compared ILL requests to lists of most cited serials and lists of desiderata prepared by teaching faculty. He concluded that “interlibrary loan records are of limited value in pointing up specific serials titles for backfile purchase… [because] as many as 70 [percent] of all serials titles requested on interlibrary loan should be excluded from consideration for purchase, because they are likely to be obscure, not likely to be found on any ‘most cited’ lists and not likely to be on departmental desiderata lists.”15 His findings were confirmed by T.D. Wilson, who performed the same analysis for a special library, in contrast to Graziano’s academic library.16

It was not until 1999 that a study testing the link between ILL requests and later usage of the same titles was performed. Mary Wilson and Whitney Alexander were preparing to enter an agreement with Elsevier to subscribe to a package of electronic titles. They used ILL requests to identify 54 titles “they had confidence… students, faculty, and staff would use.” After three months of electronic access, 80 percent of the titles had been used at least once.17

In 2004, Paoshan Yue and Millie Syring explored a similar situation, as their library began a subscription to the full Elsevier package in 1999. They found that 97 percent of Elsevier journals that had been requested via ILL in the year prior to the package subscription were “downloaded more frequently in Fiscal Years 2000 to 2003 than they were requested in FY 1999.” However, the relative frequency of ILL requests was not a predictor of the frequency of later downloads: “a title that is requested most frequently in a given year has about a 25% chance to also be the most needed title in the following years.”18

The few studies published to date have tested the notion that ILL use should dictate beginning a subscription because ILL requests are a predictor of later usage, but only in broad terms. They have explored ILL requests within groups of titles. However, the prevailing notion is that ILL requests for individual titles will correlate to use of subscribed content within the same title after a subscription is begun. Although Wilson and Alexander tested such a correlation to assess the value of a single publisher’s package, we have found no study testing that correlation in a publisher-neutral study.

ILL Data and Information-Seeking Behavior

While there is abundant literature regarding information-seeking behavior among library patrons, few studies have incorporated ILL data.19 Lynne Porat examines the most relevant theories of decision making as it applies to library use. She notes that the “non-rational” library user (that is, virtually all users who engage in practices such as satisficing and reduction of effort) will choose “a second-rate article downloaded from the Internet instead of requesting a first-rate one via ILL.”20 Porat also summarizes numerous studies on how ILL users evaluate the perceived usefulness of an article before they request it. Top methods include reading abstracts and assessing the article title. In none of the studies did patrons select articles to request based upon the journals in which they were published.21

In 2008, Monica Vezzosi studied the information-seeking behavior of doctoral students in biology and environmental science, and discovered that students rely on a number of different techniques to identify sources that they later borrow via ILL. Among those techniques are Internet searches, library databases, and following the chain of citations from works already in hand. Two-thirds of the participants report that they browse the tables of contents from important journals, to be certain they do not “miss anything important.”22

On the other hand, Allison M. Sutton and JoAnn Jacoby studied library use among faculty, staff, and students in four social science disciplines. When compared to baseline studies from 1979 and 1989, the 2008 study showed a significant decrease in information-seeking behavior that centered around journal titles. Users were much less likely to have personal subscriptions to journals or to borrow journal issues from colleagues.23

Examination of ILL data to determine the sources from which ILL requests are derived will provide an additional data point to add to this discussion.

Two Studies of ILL Data to Inform Collection Development Decisions

Part 1: Comparison of ILL Requests to Later Usage of the Same Titles as Subscriptions

A. Methods

At the University of Memphis (UofM), a metropolitan research university with around 20,000 students, we have a natural experiment available to assess the assumption that ILL requests will correlate to later use of the same titles after subscriptions are begun. Following a thorough review of our serials portfolio in 2012, UofM initiated several hundred new journal subscriptions. The number of ILL requests was one of many data points that influenced our choices of new subscriptions.

For the years prior to the new subscriptions, we captured ILL requests for content from the titles in question. Following the onset of new subscriptions, we captured usage data. We tested the correlation between the number of ILL requests and the number of successful full-text article retrievals (SFTARs) for a large number of titles.

B. Data Set

In 2012, we agreed to begin two new “Big Deal” serials packages, with the publishing houses of Sage and Springer. Broadly, the terms of the agreement were that UofM agreed to maintain our subscriptions to titles published by Sage and Springer for several years. In exchange, Sage and Springer made many other titles available to our patrons. Between the two Big Deals, 1,162 new titles became available to UofM patrons in 2013. Of those new titles, 169 had been requested via ILL at least once in the years 2011 and 2012. This data set includes titles in many fields of inquiry and is not arranged to emphasize any particular discipline.

For each of those 169 titles, we compiled the number of article requests through ILLiad, using a query of ILLiad based on the journal title.24 We also compiled the number of SFTARs for the same titles during the years 2013 and 2014. SFTARs were gathered using the “click-through” report generated by Serials Solutions 360 Core, and by adding those figures to JR1 reports from the publishers’ administration modules.25 JR1 stands for “Journal Report 1” and relays the number of SFTARs by month and journal title.
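The summing of usage figures from the two sources can be sketched as a simple merge of per-title counts. This is a hypothetical illustration: the function name, journal titles, and counts below are invented, not the study's actual data or tooling.

```python
# Hypothetical sketch of combining per-title usage counts from two sources
# (e.g., a link-resolver "click-through" report and a publisher JR1 report).
from collections import Counter

def combine_usage(*reports):
    """Sum per-title counts across any number of {title: count} reports."""
    total = Counter()
    for report in reports:
        total.update(report)  # Counter.update adds counts rather than replacing
    return dict(total)

# Invented example titles and counts, for illustration only.
click_through = {"Journal of Examples": 12, "Sample Studies": 4}
jr1 = {"Journal of Examples": 30, "Applied Placeholders": 7}
print(combine_usage(click_through, jr1))
```

A title appearing in only one report simply keeps that report's count, so the merge tolerates titles missing from either source.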

In addition to comparing ILL requests and SFTARs for each title, we also assigned each title a subject category and compared ILL requests and SFTARs for each category. Subjects were assigned using our sense of the content in each journal and were confirmed or adjusted by comparing the initial subject assignments to the subjects assigned by Ulrich’s Periodicals Directory.26 Table 1 displays the data under review in each subject category.

Table 1

Subject Categories of Titles Included in the Study (columns: ILLiad Requests, 2011–2012; SFTARs, 2013–2014)

Subject categories: Psychology/Psychiatry/Child-Family Development; Sociology/Political Science/Social Work; Biology (including Genetics); Education/Evaluation/Qualitative Review; Engineering/Material Sciences; GPS/Built Environment; Management/Human Resources/Organizational Studies; Computer Sciences/Simulation/Informatics; Communication Sciences & Disorders.

C. Statistical Methods

This study involves tests of correlation. Correlation measures the association between two variables: a positive correlation means that when one variable increases, the other tends to increase as well. An everyday example is the relationship between the number of minutes spent exercising and the number of calories burned. Correlation does not necessarily mean that one phenomenon causes the other; tests of causation are distinct from tests of correlation and demand much stricter proofs.

There are a number of tests of correlation, and choosing the appropriate test depends upon whether the data set follows a normal (“bell-shaped”) distribution. Distribution is a way of describing how the events being studied are grouped. Consider a data set of the heights of all the students in the twelfth grade at a school. There will be a small number of students who are less than five feet tall, more students whose heights are between 5’0” and 5’6”, a lot of students between 5’6” and 6’0”, and progressively fewer students as heights approach seven feet. Plotted on a graph, this normal (or Gaussian) distribution creates the “bell curve” that many readers are familiar with. Many sets of data follow the normal distribution, and certain tests of correlation are appropriate for that kind of data set.

The data set for this study, however, does not follow the normal distribution. Instead, there were three categories with a large number of downloads, and the rest had only a few downloads. Instead of a bell curve, it would look like a three-humped camel’s back. Therefore, statistician George Relyea, our coauthor, advised on the proper tests for our study and performed the analysis. In this case, a test called the Kruskal-Wallis test was most appropriate. This test looks at a data set that does not have a normal distribution and assesses whether the data in the first category (number of ILL requests before the subscription began) are correlated with the data in the second category (SFTARs after the subscription began).27
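The rank-based logic of the Kruskal-Wallis test can be sketched in a few lines of pure Python. This is an illustrative implementation, not the study's actual analysis code; the function name and the sample usage groups are invented, and a production analysis would use a vetted statistical package and report a p-value alongside the H statistic.

```python
# Hypothetical sketch of the Kruskal-Wallis H statistic, a rank-based test
# suitable for non-normally distributed data such as usage counts.

def kruskal_wallis_h(groups):
    """Return the Kruskal-Wallis H statistic for a list of samples."""
    # Pool all observations, remembering which group each came from.
    pooled = sorted((value, gi) for gi, g in enumerate(groups) for value in g)
    n = len(pooled)
    # Assign 1-based ranks, averaging the ranks of tied values.
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    # Sum of ranks within each group.
    rank_sums = [0.0] * len(groups)
    for (_, gi), r in zip(pooled, ranks):
        rank_sums[gi] += r
    # H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Invented SFTAR counts for three hypothetical subject groups.
low_use = [2, 3, 5, 8]
mid_use = [10, 14, 15, 20]
high_use = [120, 140, 180, 200]
print(round(kruskal_wallis_h([low_use, mid_use, high_use]), 2))  # → 9.85
```

A large H indicates that at least one group's distribution of counts differs markedly from the others, which is the pattern the study's three-humped data would produce.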

D. Results

i. Individual Titles

Our test of correlation found that, for any individual serial title, the number of ILLiad requests is only slightly positively correlated to later SFTARs. Complete results are found in the endnotes.28

ii. Subject Categories

Our test of correlation found that, within a broad subject category, the number of ILLiad requests is strongly positively correlated to later SFTARs. Complete results are found in the endnotes.29

Part 2: Exploring the Nature of the Sources of ILL Requests

A. Methods

For the purposes of examining the extent to which desire for information published by individual journal titles drives ILL requests, we analyzed a report of article requests from patrons placed during the first six weeks of 2013. Using the query function of ILLiad, we generated a report showing all article requests. Fields generated included “Journal Title” and “Cited In.” The last field may be populated in two ways. For users in a database who follow the link resolver to ILLiad, the field is automatically populated with the name of the referring database. For users who enter their requests manually, it is a free entry field. Visual inspection of the data allowed us to remove cancelled requests, book chapters, articles from conference proceedings, and newspapers. We also normalized journal titles.

B. Data Set

The final report included 234 article requests placed between January 1 and February 14, 2013.

C. Results

The 234 articles requested came from 186 journals, a mean of 1.26 requests per journal. Two journals had five requests, while 153 journals had only one request. Table 2 contains complete data on the number of journal titles receiving each number of requests.

Table 2

Number of Journal Titles Receiving a Given Number of Requests

Number of Requests    Number of Journals Receiving that Number of Requests
1                     153 (77%)
2                     24 (12%)
3                     18 (9%)
4                     1 (1%)
5                     2 (1%)

Figure 1. Pie Chart of Percentage of Journal Titles Receiving a Given Number of Requests
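The tally behind Table 2 can be reproduced by counting requests per journal and then counting journals per request total. This is a hypothetical sketch with invented journal identifiers, not the study's actual data or code.

```python
# Hypothetical sketch: from a flat list of requested-journal records, tally
# how many journals received each number of requests (as in Table 2) and
# compute the mean number of requests per journal.
from collections import Counter

requests = ["J1", "J1", "J2", "J3", "J3", "J3", "J4"]  # invented titles

per_journal = Counter(requests)               # requests received by each journal
distribution = Counter(per_journal.values())  # journals receiving each request count
mean = len(requests) / len(per_journal)       # mean requests per journal

print(dict(distribution), round(mean, 2))
```

With the study's figures (234 requests across 186 journals), the same arithmetic yields the reported mean of 1.26 requests per journal.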

Of the 234 articles, 154 had data in the “Cited In” field. However, 10 of those were from a patron’s RefWorks account. Because the initial source of the RefWorks citations is unknown, for purposes of this analysis, those 10 articles are excluded. Therefore, of 224 articles, 144 (64%) had data in the “Cited In” field. A total of 28 different library databases were represented as sources. Seven articles were “Cited In” open web searches (Google, PubMed, and the University of Notre Dame domain). Three articles were “Cited In” a book or journal article. Table 3 contains complete data on the figures for the “Cited In” field.

Table 3

Number of Requests Originating in Each Source

“Cited in” Source                   Number of Requests
Not Specified                       80 (36%)
Library-hosted Database             135 (60%)
Open Website                        7 (3%)
Citations in Other Publications     3 (1%)

Figure 2. Pie Chart of Percentage of Requests Originating in Each Source


Discussion

The debate over access and ownership in serials librarianship revolves around whether libraries should acquire content sold as journal-title bundles of articles, usually with perpetual access, or instead spend collection money on options that provide wide access to many titles but lack long-term “ownership.” The ILL request data we have studied provide some evidence that patrons’ information-seeking behavior places higher value on the individual article than on the journal title.

The common wisdom that librarians should rely upon ILL data to select subscriptions that will be highly used is founded upon a notion that ILL requests will correlate positively to later usage. On an individual title level, that notion appears to be ill-founded. The very slight positive correlation seen in this study is too weak to make any claims about a relationship between ILL requests and later SFTARs. Subscriptions to individual titles on the basis of ILL requests are not any more likely to see high usage than subscriptions selected randomly.

However, within subject categories, the correlation between ILL requests and later SFTARs is strong. This implies that collection development librarians may be better advised to seek guidance from ILL requests about broad disciplines where the library’s serials portfolio needs to be strengthened. This idea is in accord with some of the more cautious librarians who have written on the topic. For example, Gary Byrd, D.A. Thomas, and Katherine Hughes emphasized the analysis of ILL data to show the areas in which a library’s collection is especially deficient.30 Mounir Khalil noted that ILL data showing high levels of borrowing in a subject area could provide “good justification for asking for funds to correct the weak areas in the library collection.”31

The notion of looking to strengthen access by subject rather than title is reinforced by the pattern of ILL requests analyzed by title and by source. UofM by no means had a robust collection of individual subscriptions in 2013. In fact, between 1994 and 2008, a series of cuts had reduced the number of direct subscriptions from more than 5,000 titles to fewer than 2,000 titles. If individual titles were important, we would expect to see many requests for the same title. However, we saw that the vast majority of titles requested were only requested once. Similarly, in the more limited data set showing how patrons determined a need to request an article, we see that almost all the requested articles were found through patron searches in databases, not from existing knowledge of the journal’s importance. This claim is limited, however, by the large number of requests for which the patron’s source of information is not known.

For librarians looking to use ILL data to inform serials collection development decisions, title-by-title analysis is unlikely to prove useful. In fact, although the 169 new journal subscriptions were precisely the titles that had been requested via ILL before the subscriptions began, those same titles constituted only 5 percent of ILL requests.

However, examination of the larger subject areas that see the most ILL requests will provide some insight into disciplines where more subscriptions are likely to see high usage. Librarians may be advised to then select a number of titles within that subject area, using other important markers such as bibliometric measurements (such as impact factor, Eigenfactor, or Source Normalized Impact per Paper [SNIP]) or citation counts. Alternatively, bundles of titles from either subject specialist publishers or larger publishing houses that allow “Medium Deals” of a smaller set of titles within a subject area may be desirable.32

Commercial alternatives to direct subscriptions also exist. Some publishers offer “token” or “pay-per-view” programs, which allow a library to purchase preapproved access to online articles.33 Also, intermediary services such as Get It Now and Reprints Desk offer the ability for a library to purchase articles from publishers who do not offer per-article purchasing directly by libraries.34 By offering a broad array of articles from a larger number of journal titles, libraries are more likely to meet the information needs of their patrons. While this may come at the expense of long-term access to the “most important” titles, evidence seems to indicate that the prestige of a journal is only one of many factors that influence a patron’s choice to access an article.

(Of course, none of this is to say that librarians should not use individual judgment and information provided by faculty members to inform decisions about subscriptions to highly important individual journal titles. However, those “core” titles are seldom in question in the scenario we describe; rather, librarians have been using ILL data to determine which “optional” titles should be subscribed to. Steven Knowlton, Adam Sales, and Kevin Merriman studied faculty valuation of journal titles as compared to bibliometric valuation of the same titles, indicating that core titles are easily identified by both faculty and librarians.)35


Conclusion

Much of serials collection development focuses on obtaining the journal subscriptions that are most suitable for the library’s patrons. However, examination of ILL data calls this approach into question. It is a widely accepted notion that the number of ILL requests for an individual title will correlate to usage if the title is later taken on subscription by a library. Our test of correlation between ILL requests and SFTARs after subscriptions began found no strong correlation at the title level. However, it did show a strong correlation between ILL requests and SFTARs within a larger subject category. Furthermore, examination of the distribution of requests by journal title, and of the sources from which users found the citations they requested, shows that library users rely mostly on databases to find material, and draw from widely scattered journal titles when making ILL requests. Collection development decisions based upon ILL requests are likely to deliver the most useful content when focused on building serials portfolios in a subject area, or even on providing broad access at the article level, rather than on adding individual titles with high numbers of ILL requests.


1. For discussion of the “ownership” versus “access” question, see (among others) Laura Kane, “Access vs. Ownership: Do We Have to Make a Choice?” College & Research Libraries 58, no. 1 (1997): 59–67; Robert Lawson and Patricia Lawson, “Libraries in a Bind: Ownership Versus Access,” Journal of Consumer Affairs 36, no. 2 (2002): 295–96; Jeffrey M. Mortimore, “Access-Informed Collection Development and the Academic Library: Using Holdings, Circulation, and ILL Data to Develop Prescient Collections,” Collection Management 30, no. 3 (2005): 21–37.

2. Eugene E. Graziano, “Interlibrary Loan Analysis: Diagnostic for Scientific Serials Backfile Acquisitions,” Special Libraries 53, no. 5 (1962): 251.

3. Doris E. New and Retha Zane Ott, “Interlibrary Loan Analysis as a Collection Development Tool,” Library Resources and Technical Services 18, no. 2 (1974): 282; Brian W. Williams and Joan G. Hubbard, “Interlibrary Loan and Collection Management Applications of an ILL Database Management System,” Journal of Interlibrary Loan, Document Delivery and Information Supply 1, no. 3 (1991): 68.

4. Elena Bernardini, “The Relationship between ILL/Document Supply and Journal Subscriptions,” Interlending and Document Supply 39, no. 1 (2011): 19.

5. Herbert S. White, “Interlibrary Loan: An Old Idea in a New Setting,” Library Journal (July 1987): 54.

6. Peggy Johnson, Fundamentals of Collection Development and Management (Chicago: American Library Association, 2004), 107.

7. White, “Interlibrary Loan,” 54.

8. Williams and Hubbard, “Interlibrary Loan and Collection Management Applications,” 68.

9. Stephen J. Bensman and Stanley J. Wilder, “Scientific and Technical Serials Holdings Optimization in an Inefficient Market: A LSU Serials Redesign Project Exercise,” Library Resources and Technical Services 42, no. 3 (1998): 189–90.

10. Judith Nixon, “Core Journals in Library and Information Science: Developing a Methodology for Ranking LIS Journals,” College & Research Libraries 75, no. 1 (2014): 66–90.

11. Stanley J. Wilder, “A Simple Method for Producing Core Scientific and Technical Journal Title Lists,” Library Resources and Technical Services 44, no. 2 (2000): 92.

12. See, for example, Thomas L. Kilpatrick and Barbara G. Preece, “Serial Cuts and Interlibrary Loan: Filling the Gaps,” Interlending & Document Supply 24, no. 1 (1996): 12–20; Janet Hughes, “Can Document Delivery Compensate for Reduced Serials Holdings? A Life Sciences Library Perspective,” College & Research Libraries (1997): 421–31; Michele J. Crump and Leilani Freund, “Serials Cancellations and Interlibrary Loan: The Link and What It Reveals,” Serials Review 21, no. 2 (1995): 29–36; Eleanor Gossen and Sue Kaczor, “Variation in Interlibrary Loan Use by University at Albany Science Departments,” Library Resources & Technical Services 41, no. 1 (1997): 17–28; Gale Etschmaier and Marifran Bustion, “Document Delivery and Collection Development: An Evolving Relationship,” Serials Librarian 31, no. 3 (1997): 13–27; Andrea L. Duda and Rosemary L. Meszaros, “Validating Journal Cancellation Decisions in the Sciences: A Report Card,” Issues in Science & Technology Librarianship (1998), available online at www.istl.org/98-summer/article4.html [accessed 2 March 2016]; Jonathon Nabe and David C. Fowler, “Leaving the ‘Big Deal’: Consequences and Next Steps,” Serials Librarian 62, no. 1/4 (2012): 59–72; Kristen Calvert, Rachel Fleming, and Katherine Hill, “Impact of Journal Cancellations on Interlibrary Loan Demand,” Serials Review 39, no. 3 (2013): 184–87; Wayne A. Pedersen, Janet Arcand, and Mark Forbis, “The Big Deal, Interlibrary Loan, and Building the User-Centered Journal Collection: A Case Study,” Serials Review 40 (2014): 242–50.

13. Elizabeth P. Roberts, “ILL/Document Delivery as an Alternative to Local Ownership of Seldom-Used Scientific Journals,” Journal of Academic Librarianship 18, no. 1 (1992): 32.

14. Steven A. Knowlton, Iulia Kristanciuk, and Matthew J. Jabaily, “Spilling Out of the Funnel: How Reliance upon Interlibrary Loan Affects Access to Information,” Library Resources and Technical Services 59, no. 1 (2015): 4.

15. Graziano, “Interlibrary Loan Analysis,” 256.

16. T.D. Wilson, “Follow-up on Interlibrary Loan Analysis,” Special Libraries 53 (1962): 493–94.

17. Mary Dabney Wilson and Whitney Alexander, “Automated Interlibrary Loan/Document Delivery Data Applications for Serials Collection Development,” Serials Review 25, no. 4 (1999): 17.

18. Paoshan W. Yue and Millie L. Syring, “Usage of Electronic Journals and Their Effect on Interlibrary Loan: A Case Study at the University of Nevada, Reno,” Library Collections, Acquisitions, and Technical Services 28, no. 4 (2004): 429.

19. The most current research is summarized in Information Seeking Behavior and Technology Adoption: Theories and Trends, eds. Mohammed Nasser Al-Suqri and Ali Saif Al-Aufi (Hershey, Pa.: Information Science Reference, 2015) and Donald O. Case and Lisa M. Given, Looking for Information: A Survey of Research on Information Seeking, Needs, and Behavior (Bingley, U.K.: Emerald, 2016).

20. Lynne Porat, “Interlibrary Loans and Academic Research: The Differences between Users and Non-Users and Factors Affecting Satisfaction with Outcomes” (PhD diss., Bar-Ilan University, 2008), 29.

21. Porat, “Interlibrary Loans and Academic Research,” 41–47.

22. Monica Vezzosi, “Doctoral Students’ Information Behaviour: An Exploratory Study at the University of Parma (Italy),” New Library World 110, no. 1/2 (2009): 71.

23. Allison M. Sutton and JoAnn Jacoby, “A Comparative Study of Book and Journal Use in Four Social Science Disciplines,” Behavioral and Social Sciences Librarian 27, no. 1 (2008): 1–33.

24. ILLiad is a resource sharing management system that aggregates requests from differing systems (such as OCLC and Docline) into a single interface for staff and users. ILLiad was developed at Virginia Tech and is marketed by OCLC (Dublin, Ohio).

25. Serials Solutions 360 Core is an OpenURL link resolver service available through ProQuest (Ann Arbor, Mich.); JR1 is a COUNTER-compliant report of SFTARs.

26. Ulrich’s Periodicals Directory, 53rd ed. (New Providence, N.J.: R.R. Bowker, 2015).

27. To assess the differences between ILLiad requests and SFTARs, we performed a one-way analysis of variance using the nonparametric Kruskal-Wallis test. All analysis was conducted using SAS version 9.4 (SAS Institute, Cary, N.C.). Because the data do not follow a normal distribution, we used Spearman’s rank correlation to test the strength of the correlation between the number of ILLiad requests and later downloads. This nonparametric test is appropriate for our data set because of the extreme variability in the number of downloads.
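
The two tests named in note 27 can be sketched as follows — an illustrative reimplementation in Python with SciPy rather than the SAS 9.4 procedures the authors actually used, applied to invented counts (the variable names and numbers are hypothetical, not the study’s data):

```python
# Illustrative sketch of the analysis in note 27, on hypothetical data.
from scipy.stats import kruskal, spearmanr

# Hypothetical counts of ILL requests and later full-text downloads
# (SFTARs) for eight journal titles.
ill_requests = [12, 3, 7, 25, 1, 9, 14, 2]
downloads = [40, 5, 22, 90, 2, 30, 55, 8]

# Spearman's rank correlation: uses ranks rather than raw values,
# so it tolerates the extreme variability in download counts.
rho, p_corr = spearmanr(ill_requests, downloads)
print(f"Spearman rho = {rho:.3f}, P = {p_corr:.4f}")

# Kruskal-Wallis: compares download distributions across groups
# (e.g., subject categories) without assuming normality.
group_a = [40, 5, 22]
group_b = [90, 2, 30]
group_c = [55, 8, 12]
h_stat, p_kw = kruskal(group_a, group_b, group_c)
print(f"Kruskal-Wallis H = {h_stat:.3f}, P = {p_kw:.4f}")
```

Both tests are nonparametric, which is the point of note 27: no assumption of normally distributed counts is required.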

28. For the comparison of ILLiad requests to SFTARs by individual journal title, ρ = 0.30, P < 0.0001, where ρ (rho) is the measure of correlation and P is the level of significance. A P-value closer to zero indicates a higher probability that the correlation observed would hold in similar situations; in this case, one would have to analyze more than 10,000 similar data sets before encountering one in which no correlation existed.

29. The Kruskal-Wallis test revealed significant differences between subject categories on the number of downloads, χ²(22) = 67.240, P < 0.001. The three categories of Health/Sports/Epidemiology/Nutrition, Psychology/Psychiatry/Child-Family Development, and Education/Evaluation/Qualitative Review were significantly higher than all of the other categories. These fields of study are among the most populous on the UofM campus. When assessing the association between ILLiad requests and total SFTARs by subject, we found a marginally positive association, Spearman ρ = 0.303, P < 0.001. When aggregating subject categories into broader categories (see table 1), we found a stronger, positive association, Spearman ρ = 0.867, P < 0.001.

30. Gary D. Byrd, D.A. Thomas, and Katherine E. Hughes, “Collection Development Using Interlibrary Loan Borrowing and Acquisitions Statistics,” Bulletin of the Medical Library Association 70 (1982): 1–9.

31. Mounir A. Khalil, “Applications of an Automated ILL Statistical Analysis as a Collection Development Tool,” Journal of Interlibrary Loan, Document Delivery & Information Supply 4, no. 1 (1993): 49.

32. See, for example, Nabe and Fowler, “Leaving the ‘Big Deal’.”

33. For discussion of token or pay-per-view programs, see Clint Chamberlain and Barbara MacAlpine, “Pay-Per-View Article Access: A Viable Replacement for Subscriptions?” Serials 21, no. 1 (2008): 30–34; Marty Coleman, “Toward Improved ROI: Outcomes of Researching Current Pay-Per-View Practices,” Against the Grain 28, no. 1 (2016): 54; and Matthew J. Jabaily, James R. Rodgers, and Steven A. Knowlton, “Leveraging Use-by-Publication-Age Data in Serials Collection Decisions,” in Where Do We Go from Here? Proceedings of the 2015 Charleston Conference, eds. Beth R. Bernhardt, Leah H. Hinds, and Katina P. Strauch (West Lafayette, Ind.: Purdue University Press, 2016), 292–302, doi: 10.5703/1288284316271.

34. For discussion of intermediary services, see Karl F. Suhr, “Get It Now: One Library’s Experience with Implementing and Using the Unmediated Version of the Copyright Clearance Center’s Document Delivery Service,” Journal of Electronic Resources Librarianship 25, no. 4 (2013): 321–25; Megan Jaskowiak and Todd Spires, “The Usage of ILLiad and Get It Now at a US Medium-Sized Academic Library over a Three-Year Period,” Interlending & Document Supply 44, no. 2 (2016): 81–87; “Reprints Desk Launches Academic Document Delivery Service,” Advanced Technology Libraries 42, no. 6 (2013): 10; and a forthcoming article by Gail Perkins Barton on Reprints Desk.

35. Steven A. Knowlton, Adam C. Sales, and Kevin W. Merriman, “A Comparison of Faculty and Bibliometric Valuation of Serials Subscriptions at an Academic Research Library,” Serials Review 40, no. 1 (2014): 28–39.

*Gail Perkins Barton is Assistant Professor and Interlibrary Loan Librarian, Collection Management Interim Head, at the University of Memphis; e-mail: gpbarton@memphis.edu. George E. Relyea is Research Assistant Professor, Division of Epidemiology, Biostatistics, and Environmental Health, at the University of Memphis; e-mail: grelyea@memphis.edu. Steven A. Knowlton is Librarian for History and African American Studies at the Princeton University Library (formerly Collection Development Librarian, University of Memphis Libraries); e-mail: steven.knowlton@princeton.edu. ©2018 Gail Perkins Barton, George E. Relyea, and Steven A. Knowlton, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.
