
Library Terms that Users (Don’t) Understand: A Review of the Literature from 2012-2021

This paper compares findings on website usability—specifically library users’ understanding of library terms—from fifty-one original research studies published between 2012 and 2021 with the findings of John Kupersmith’s 2011 white paper “Library Terms That Users Understand.” Studies reported approximately twice as many terms that users did not understand as terms they did understand, with some terms appearing in both categories. Analysis of the findings suggests that a majority of Kupersmith’s guidelines remain applicable to today’s online environment, with some adjustments related to technology advances. We propose an additional guideline that acknowledges the role non-library websites play in guiding how users interact with library terminology.

Introduction

Naismith and Stein observed in 1989, “as is true with many professions, librarianship employs many words and phrases that can be considered technical language.”1 To describe library operations, services, resources and workflows, library and information professionals have developed jargon, or specialized vocabulary, hereafter referred to as “library terms.” These terms include phrases such as “call number” that are used in many libraries, and specific names adopted within individual libraries and library systems, such as naming a library catalog or search. Users of libraries may not be familiar with these library terms, negatively impacting their use of library services and resources.

John Kupersmith iteratively revised and published a white paper summarizing best practices for using library terms, originally gleaned from findings of library usability research published between 1997 and 2008. He later included studies published from 2009 to 2011.2 His paper presented seven best practices:

  1. Test to see what users do and don’t understand and what terms they most strongly relate to.
  2. Avoid—or use with caution—terms that users often misunderstand.
  3. Use natural language equivalents on top-level pages.
  4. Enhance or explain potentially confusing terms.
  5. Provide intermediate choices when a top-level menu choice presents ambiguities that can’t be resolved in the space available.
  6. Provide alternative paths where users are likely to make predictable “wrong” choices.
  7. Be consistent to reduce cognitive dissonance and encourage learning through repetition.3

Acknowledging the enduring usefulness of Kupersmith’s white paper, as well as the significant changes in both user interfaces and user expectations since 2011, we wished to provide updated results for library practitioners and library web developers, including a review of whether Kupersmith’s summary findings still hold true. In this paper we present an analysis of original research studies conducted between 2012 and 2021 with findings related to library users’ understanding of library terms and compare these findings with Kupersmith’s work.

Research Questions

R1: Have there been changes over time in types of organizations represented, in research questions, or in methodology?

R2: What library terms do users understand?

R2.1: What terms do users not understand?

Literature Review

Usability Research in Libraries

The well-known user experience (UX) consulting firm, Nielsen Norman Group, defines “usability” as “a quality attribute that assesses how easy user interfaces are to use. The word ‘usability’ also refers to methods for improving ease-of-use during the design process.”4 Mentions of the importance of usability and usability testing began to regularly appear in the library literature in the late 1990s.5 By 2003, Vaughn and Callicott commented, “web site usability testing has rapidly become de rigueur in libraries across the country… Simple usability testing can be a fast, cheap, and effective means of Web site evaluation.”6

Libraries’ attention to usability appears to have been well-warranted. In a 2007 literature review, Blummer stated, “although many academic library web pages contain relevant resources and services, navigation [and usability] studies revealed users encountered difficulties obtaining materials and services because of the poor design of the sites.”7 Survey studies have continued to find issues with website accessibility, content, and design.

In the later 2000s concerns started to emerge about the prevalence of library usability case studies whose results were sometimes so localized as to not be broadly generalizable, with Emanuel offering recommendations in her 2013 literature review “to make [study] results applicable…beyond a single interface evaluation at one library and be generalizable across different interfaces and among different libraries.”8

In the mid-2010s, focus shifted from a sole consideration of usability to the broader lens of UX. For example, Bell asserted that, “academic librarians should commit to a total, organization-wide effort to design and implement a systemic user experience.”9 MacDonald’s interviews with UX librarians suggested that barriers to embracing this broader approach remained, including cultural resistance and resource limitations.10 In 2020, Young et al. found that UX maturity in libraries remained in the “low-to-middle” range.11

Despite, or perhaps within, the ongoing shift towards a more holistic approach to overall UX, the COVID-19 pandemic renewed library interest in web usability testing. A 2021 American Libraries feature article on user-friendly websites commented: “the increased importance of library websites during the COVID-19 era has highlighted common usability shortcomings—and opportunities.”12

Jargon and Its Impact on End-User Engagement and Understanding

Jargon is “the technical terminology or characteristic idiom of a special activity or group.”13 Though jargon can be a helpful shortcut for intra-organizational dialogue, many organizations have expressed concern about using it in communications meant for general audiences.

The Plain Language Action and Information Network, an “unfunded working group of federal employees,” directs writers to avoid jargon: “readers complain about jargon more than any other writing fault, because writers often fail to realize that terms they know well may be difficult or meaningless to their audience.”14

Likewise, the Nielsen Norman Group advocates for clear, straightforward writing that “communicates information succinctly and efficiently so that readers understand the message quickly, without having to decipher complicated sentences or vague jargon.”15 Usability consulting company UserTesting pointed to the damaging effect of jargon on customer experience, saying, “use of jargon can impair clarity, and can be isolating and/or condescending to the reader.”16

A large-scale study of U.S. readers’ comprehension of technical writing presented with and without jargon terms found that, “simply providing definitions or explainers alongside technical language will not reduce the negative effects of jargon use. Instead, practitioners should remove jargon—or other forms of technical language—where possible.”17

Library Terms (Jargon) and Usability

Naismith and Stein’s 1989 study, testing student comprehension of library terms, issued a strong warning:

Although each profession has its share of jargon, librarianship is such a heavily user-oriented field that any indication of a lack of communication should be given serious attention. The results reported here indicate clearly that there is a communications problem between librarians and patrons. Librarians cannot rely on the patrons to decipher a meaning from the context.18

Interest in various questions related to user comprehension of library jargon appeared as a distinct thread in the library literature related to usability in the 1990s and early 2000s.19

Kupersmith’s aforementioned meta-analysis, “Library terms that users understand,” analyzed and presented the results of original research studies published between 1997 and 2008 focusing on those “evaluating terminology on library websites, and suggest[ing] test methods and best practices for reducing cognitive barriers caused by terminology.”20

In 2012, Majors conducted a usability study of multiple discovery interfaces, and stated: “it is clear that in some areas the library could adopt different public-facing terms that might more clearly suggest to patrons what is meant.”21

In a 2017 content analysis of signage, websites and documents for four New Zealand public libraries, Fauchelle concluded, “while jargon might be useful when communicating within a discipline, it is crucial to use language that library clients easily understand.”22 Backowski et al. revised database descriptions to eliminate jargon and found that Plain Language descriptions improved participants’ ability to select databases. Referencing “equity and usability” as drivers, they stated, “Plain Language database descriptions offer an opportunity to practice user-centered librarianship.”23

Methods

Search Strategy

We identified the library and information science bibliographic databases LISTA24 and LISA25 as key sources of professional literature. To complement the professional literature—and to include gray literature such as presentations, white papers, and other non-peer reviewed sources—we also searched Google Scholar as recommended by Haddaway et al.26

In order to identify as many pertinent research articles for our literature review as possible, we started with the following search structure:

LISTA (via EBSCO) and LISA (via ProQuest)

Terms: website OR “web site” OR libguides OR site OR “online tutorial” AND librar* AND usability OR “user research” OR “user experience”

Limiter: 2012-2021

Google Scholar

Terms: (website OR “web site” OR libguides OR site OR “online tutorial”) AND (library OR libraries) AND (usability OR “user research” OR “user experience”)

Limiter: 2012-2021

We limited terms to studies done on online environments to reflect Kupersmith’s original goal of assisting library web developers. As Kupersmith’s paper included research up to 2011, we limited the date to papers published in 2012 and beyond.

We ran the searches on the same day in February 2022 and downloaded the results into a shared Zotero Group Library. We added all results returned by the search queries within LISA and LISTA. Following the recommendation of Haddaway et al., we downloaded the first two hundred relevance-sorted Google Scholar results.27

In sum, we retrieved 1260 results. After deduplication, 978 results remained (see Figure 1: Study Selection).
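A minimal sketch of one way such a deduplication pass might be scripted (our illustration only, assuming a hypothetical CSV export with “Title” and “DOI” columns, not a description of the exact workflow used):

```python
# Hypothetical deduplication sketch: keep one record per DOI, falling back to a
# normalized title when no DOI is present. Assumes a CSV export of search results.
import csv
import re

def dedup_key(row):
    doi = (row.get("DOI") or "").strip().lower()
    if doi:
        return ("doi", doi)
    # Normalize the title: lowercase, drop punctuation, collapse whitespace.
    title = re.sub(r"[^a-z0-9 ]", "", (row.get("Title") or "").lower())
    return ("title", " ".join(title.split()))

seen, unique = set(), []
with open("search_results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        key = dedup_key(row)
        if key not in seen:
            seen.add(key)
            unique.append(row)

print(f"{len(unique)} unique records kept after deduplication")
```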

Figure 1. Study Selection

Screening Strategy

Once we had gathered and deduplicated the documents, we removed those reporting on libraries outside North America or not published in English, as library terminology and services often have a regional element. We then closely reviewed the content of the remaining 115 papers and removed documents that did not report on an experiment involving non-librarian patrons using online library terminology, as our research questions center on directly observed user behavior rather than on indirect methods such as content analysis and heuristic evaluation. After screening, fifty-one articles met the criteria (Appendix A).

Data Analysis

To analyze the documents, we identified specific demographics and characteristics of interest (e.g., number of participants, methods, etc.) and recorded them on a shared spreadsheet (Appendix B). We also coded Kupersmith’s research in an abbreviated form so that we could compare study changes over time. We recorded article findings on a separate spreadsheet tab and assigned values indicating type of platform mentioned, library service task performed, Kupersmith guideline(s) followed, and notes.

To define “Library Service or Task,” we developed fifteen categories mapped to library services or tasks and categorized findings across all studies. This categorization was particularly helpful in analyzing findings based on library-specific themes. The most frequently covered topics were databases/journals/articles (twenty-eight), website navigation (twenty-one), instruction (sixteen), and borrowing (fifteen) (table 1). Although our search strategy did not focus on physical spaces, eight studies mentioned physical spaces in their findings. The full dataset is available to readers (https://doi.org/10.17605/OSF.IO/ZR4AB). As we took notes on the articles, we met to discuss classifications and refine meanings, recording outcomes in a data dictionary.

Table 1

Number of Studies Assigned to Each Library Service or Task

Library Service or Task Category        # Studies
Databases/journals/articles             28
Website navigation                      21
Instruction                             16
Borrowing                               15
Discovery search                        11
Research assistance                     10
Books                                    9
Physical facilities                      8
Library catalog                          7
Search box                               5
Item metadata                            4
Citation styles & management             3
Special collections & archives           3
Datasets                                 2
Images                                   1

Findings

R1: Have There Been Changes over Time in Types of Organizations Represented, in Research Questions, or in Methodology?

Kupersmith reports on forty-seven studies from thirty-one institutions in his table titled ‘Library Terms Evaluated in Usability Tests and Other Studies.’ He does not articulate his selection criteria. We examined fifty-one studies and have articulated our selection criteria in the previous section on methods.

Fifty-one institutions were represented, though there is not a one-to-one relationship between studies and institutions.

Types of Organizations Represented

To understand the types of organizations represented in Kupersmith’s collection of studies and in our own, we compared institutions by Carnegie Control and Carnegie Classification.28

Carnegie Control indicates whether an institution is public, private not-for-profit, or for-profit. Carnegie Classification categorizes institutions based on number and type of degrees offered.

The results were not different enough to be meaningful. In both sets of studies, just over 80 percent of institutions were public, with the remainder being private not-for-profit (in the current study, two percent of organizations were not institutions of higher education and thus had no Carnegie Control designation). In Kupersmith’s paper, a little more than two-thirds of references originated from Doctoral Universities; the remaining groups were Master’s Colleges and Universities (16.1 percent), Associate’s Colleges and Baccalaureate Colleges (approximately three percent each), and institutions without a Carnegie Classification. The current paper represents a slightly greater diversity of Carnegie Classification types. Doctoral Universities still account for approximately two-thirds of the references. In addition to Master’s Colleges and Universities, Baccalaureate Colleges, and Associate’s Colleges, references also originated from Special Focus Four-Year and Doctoral/Professional Universities. Approximately eight percent of our references could not be mapped to a Carnegie Classification because their institutions do not participate.

Neither Kupersmith’s paper nor this paper includes research originating from public libraries. One study reviewed by Kupersmith was undertaken by the Minitex/Minnesota State Library Standards Review Task Force; its results report on a survey of 7651 library users, including 5021 users of public libraries, 232 K-12, and 202 “other.”

Research Questions

Kupersmith did not specifically capture information related to research questions or study goals. As noted in our Methods section, we recorded various characteristics for each reviewed study, including the research question or research goal. Sixteen of the fifty-one studies reviewed in this paper—approximately 30 percent—articulated a formal research question. The remainder of the studies outlined a variety of goals. Seven high-level categories emerged across all fifty-one studies (see figure 2). Some studies identified multiple topics of interest.

Figure 2. Research Goals by Topic

The most common topics of research were associated with usability inquiries, the redesign of a website or web application (before or after it took place), and the assessment of a discovery tool. Less frequently mentioned were tutorials or course/research guides, navigation/information architecture, a specific investigation of terminology, or mobile web interfaces. Frequently, the goal-based studies framed their purpose broadly as assessing usability or user preferences, or in relation to a redesign.

Number of Participants and Types of Research Methods

Looking at study methodology, we reviewed the number of participants and the number and types of user research methods.

The median and mode for number of participants for studies reviewed by this paper and studies reviewed by Kupersmith were quite comparable, as described in table 2. Mean values were not a meaningful measure due to several large outliers.

Table 2

Number of Participants, Comparison of Kupersmith & this Study

              Total Studies    Median    Mode    Mean
Kupersmith         47            15        9    224.6
This study         51            19       10     27.2

To understand the types of user research methods used in the studies, we plotted each method on a graph using the framework proposed in Gordon and Rohrer’s 2022 article, “A Guide to Using User-Experience Research Methods.”29 This framework classifies studies along the following dimensions:

  • Attitudinal versus Behavioral: This dimension “contrast[s] what people say versus what they do.”30
  • Qualitative versus Quantitative: Qualitative methods involve direct observation and data gathering, whereas quantitative methods make use of indirect observation and data gathering. Qualitative “methods are better suited to answering questions about why or how to fix a problem,” whereas quantitative methods “answer how many and how much types of questions.”31
  • Context of product use: This dimension refers to whether study participants engage in a natural use of a product, a scripted/lab-based use of a product, a limited use of a product, or are not using a product and thus providing decontextualized feedback.32

The forty-six papers reviewed by Kupersmith reported a total of ten user research methods. Twenty-eight mentioned user observation (i.e., usability testing). Other methods were mentioned far less frequently: six mentions of card sorting (“a technique that involves asking users to sort information into logical groups”33), five each for surveys and questionnaires, three for focus groups, two “link choice” studies, and one mention of prototyping. Although heuristic evaluation and design walkthrough were each also mentioned once, they are expert evaluation methods rather than user research methods and therefore do not appear in figure 3. Only three studies used two methods and just two studies reported use of four methods.

Figure 3. User Research Methods Reported in Kupersmith and in this Paper, Presented in Gordon & Rohrer’s Three-Dimensional Framework

The fifty-one studies reviewed in this paper reported a total of eleven user research methods. The overwhelming favorite was usability testing, with fifty mentions. Surveys were implemented by ten studies and card sorting by eight. Other methods mentioned were interviews, prototyping, focus groups, advanced scribbling (categorized as participatory design in figure 3), analytics (e.g., Google Analytics), eye tracking, transaction log analysis, and tree mapping. In seventeen studies, more than one method was used; four studies employed three methods; and three studies employed four.

As figure 3 shows, original library research reviewed in this paper and Kupersmith’s paper tended heavily to the behavioral/qualitative and attitudinal/qualitative quadrants, revealing a focus on “why” and “how to fix” questions. Use of attitudinal/quantitative methods centered on surveys and questionnaires. The behavioral/quantitative quadrant (what people do) was least represented, with only one study reporting use of log analysis.

In summary, there is not a meaningful change in types of organizations represented, research questions, number of participants, or types of research methods between Kupersmith’s analysis in 2011 and this one.

R2: What Library Terms Do Users Understand?

Though we were looking for terms users understand, we found far more studies mentioning terms that users didn’t understand (table 3). Overall, we identified forty-one unique understood terms in the literature, compared with 106 unique misunderstood terms. Eleven unique terms fell within both categories. As the main goal of many studies was to improve a library website or sites, we think the results tended to highlight pain points such as misunderstood terms. Also, we did not mark a term as understood unless the study indicated that it had been tested, so many improved terms were not included in the understood term list.

Table 3

Terms Identified in Studies, Understood, Misunderstood, or Both

Understood Terms: [vendor name]; About; Advanced search; All Databases; Book; Check holdings; COM 100 (or other class name) Guide; Contact Us; Find; Find books and media; Guest; Help; Hours; Include results outside of library databases; Materials; Medical; PDF Full Text; Print Books Only; Quick Links; Quick Search; Resources; Send To; Tools; Services; Sign in; Textbooks; Videos; We don’t have a physical copy at CSUDH, but you can still get it. Sign-in to request it from another library; We don’t have a physical copy in the library, but you can still get it

Both: Acronyms; Articles; Ask a Librarian; Chat Icon; Clinical information; Full-text; Icon; Library catalog; List of Journals and Magazines/Journal Title List; Peer-reviewed; Research Guides

Misunderstood Terms: Full book PDF download available; [database name]; About Us; Adobe Digital Editions; Articles & Databases; Articles & more; CAARP test; Call number; Catalog; Chinook classic; Circulation policies; Citation; Classic Catalog; Clinical Specialties; Collections; ConnectNT; Course reserves; Creation date; Database; Digital commons; Digital library; Digital scholarship; DocRetriever; Document Delivery; E-journals; Evaluating what you find; Expand My Results; EZ Borrow; FAQs; Find it at Pratt; Get It; Google Preview; Guided search; Guides; HELIX; Hold; How Do I; How to distinguish between types of periodical; Identifying and narrowing a topic; Identifying search terms; ILL; Indexes; Information Literacy; Instructional support; Interlibrary loan; Journal; Journal articles; Journal titles; Journals A-Z; Learn About; Libguide; LibGuides; Library Help; Library Information; Library instruction; Library Location; Library locations; Library Service; Media services; Member node; Mobile Databases; Newspapers; No full-text; Novanet catalogue; OER; Off-campus access; OhioLINK; Originator; Peer-reviewed Journals; Periodical; Placing an item on hold; Privileges; PubMed; Recall; Reference Resources; Reference Sources; Renew Materials; Request Delivery; Research Tools; Reserves; ROBCAT; Scholarly/peer reviewed; See Online Tutorials; SO Journal Title/Source; Subject Guides; Subject Librarian; Title; To request to have this resource delivered to you (ILLiad) please sign in; Top Resources; Topic Guides; Tutorials; Use our Spaces; User Groups; Using the Library

We found understood terms generally aligned with two of Kupersmith’s findings:

  1. Use natural language equivalents on top-level pages. Understood terms generally used natural language and target words like “Find books.”
  2. Enhance or explain potentially confusing terms. When hard-to-understand terms were given extra text or mouseovers for context, users were successful.

These two findings were explicitly referenced in all but four of our fifteen library service categories (search box, citation styles & management, datasets, images). We cover these terms and findings in more depth in the discussion section of this paper.

In addition, we identified a third theme that aligns with Nielsen Norman Group’s Heuristic #4, Consistency and Standards, which advises following industry-wide conventions so that users may apply learned behaviors from one situation to another.34 Many of the studies we reviewed found that participants had no problem with terms that appear across many other sites, like “Hours,” “Services,” “Help,” “About,” “Tools,” and “Advanced Search.”35 Terms commonly used in libraries, such as “ask a librarian” and “library catalog,” were also understood, though to a more limited extent.36 Other recognized terms included frequently used academic terms such as “peer-review” and widely recognized database names like “PubMed” and “Google Scholar.”37 Users clearly brought mental models from other experiences to the library website.

R2.1: What Terms Do Users Not Understand?

As previously noted, the list of terms users didn’t understand was larger by far than the list of understood terms (see table 3).

Articles, Databases and Journals

The misunderstood terms appearing with the greatest frequency were “journals” (twelve), “articles” (ten), and “databases” (ten). This generally agrees with Kupersmith’s findings. Lemieux and Powelson described participants searching for articles by navigating to the e-Journal list instead of the main discovery service search.38 Becker and Yannotta discovered a similar situation, where students didn’t know to first go to a tab labeled databases in order to use the databases to search for articles.39 Even participants using a discovery search had problems; when users were asked to find a journal article, they selected the journal format filter, which then displayed results only at the journal title level.40 As the terms “database” and “journal” often confuse users trying to find articles, designers should take care when using these terms and provide easy ways to recover if users accidentally use the wrong search.

Circulation and Library Catalogs

The second most misunderstood group of terms related to circulation and library catalogs. These included “library catalog,” nicknames of library catalogs, such as “ROBCAT,” terms related to functions and services of the catalog (e.g., “reserves,” “circulation policies,” “placing an item on hold,” “interlibrary loan”), and specific library location labels.41 This agrees with Kupersmith’s finding that users misunderstand “Library Catalog” and “Interlibrary Loan.”

One reason for the difficulty of understanding the term “library catalog” may relate to a growing confusion between the library catalog, which searches just books, and a library discovery service, which searches books and articles. Two studies observed users assuming that the main search box, or discovery search, contained all library resources and not understanding why they would need to search other places.42 Conversely, in another study where participants were asked to find physical books, they did not use the “catalog only” filter, which would have increased their success in finding books within a discovery service, because they did not equate the catalog with books.43

Local nicknames for library catalogs, such as “Chinook classic,” “classic catalog,” “HELIX,” and “ROBCAT,” were also found to be ineffectual in conveying the contents or purpose of a library catalog.44 Recognizing that not all users know that the library catalog is the main repository for books, some researchers dealt with this issue by renaming the library catalog “books,” or “books and media.”45

Interlibrary Loan

Interlibrary loan was especially challenging for users to understand, as librarians and users had different perceptions of the service. Sundt and Eastman’s card sorting study showed most participants sorted interlibrary loan under a resource-related category instead of a service-related category, which is where librarians tended to locate it.46 Swanson et al. noted that, when asked to get books from various systems, “[participants] did not recognize the difference between interlibrary loan, placing an item on hold at the main campus and requesting delivery of items to our satellite campuses. Internally, these three services involve different staff members and processes.”47 The authors ultimately recommended placing all of these services on one “order items” page to conform to users’ expectations. Valenti observed that users tasked with finding a book outside the library went to the menu labeled “find” or navigated directly to the other library’s website.48 Studies repeatedly showed that, for many users, interlibrary loan services were associated with the task mindset of requesting something that the library doesn’t have, which was at odds with librarians’ separation of interlibrary loan and circulation services.

This difficulty interpreting interlibrary loan options persisted in discovery services. Participants in Comeaux’s study on the Primo discovery system, upon viewing a message “no full-text,” were expected to click a link labeled “services” and then navigate a list of options; however, “part of the difficulty was the students’ tendency to view ‘No full-text’ as a dead end.”49 Those who continued demonstrated confusion on several points related to terminology and process: they did not equate services with interlibrary loan; did not understand what the term interlibrary loan meant; and they had to choose between multiple interlibrary loan service options (depending on the user’s affiliation). A later study looking at a multi-campus Primo implementation found that success requesting items from interlibrary loan was predicated on several factors: whether the library showed interlibrary loanable items in the default search; whether the library showed interlibrary loan options before the user logged in; and whether the library used sufficiently clear language for users unfamiliar with interlibrary loan or ILLiad.50 Clear terminology, previous familiarity, and placement of this term were critical for task success.

Research Assistance

The final large category of misunderstood terms relates to librarian research assistance services, both in person and virtual.

Multiple studies mentioned “subject librarian,” or “find a public specialist” as not effectively conveying to participants that librarian experts offered assistance with in-depth research questions.51 In one particularly dispiriting example, Chase et al. reported that, when asked to find information about research assistance, participants navigated away to a non-library site (specifically, the institution’s research foundation) or were otherwise not able to find the information.52

Many studies found that participants struggled to conceptualize the idea that librarians would create research guides to support their research process, what these guides would contain, and how such guides might help them.53 Denton et al. observed that users did not have a mental model for this type of help, noting the continual poor performance of their help guides on their library website despite making changes to what terms they employed.54 In a study by Conrad and Alvarez, “students expected the ‘research guides’ link on the homepage to direct them to a list of book and article results for the specific subject or discipline referenced.”55 Other studies also found that users expected “research guides” to themselves be databases, or a list of links to online resources, more akin to a bibliography, rather than process-oriented narratives.56 Actions taken to rectify this problem included:

  • Using course names in the titles of guides.57
  • Grouping guides together under a common heading, like “tutorials.”58
  • Including the word “help” in the title (e.g., “help finding books”) to signify the guide’s purpose to users.59

Other studies suggested making research/course guides more “googlable” to align with users’ tendency to search the internet for help when running into difficulties.60

Discussion

Reviewing fifty-one articles from 2012-2021, we find ample evidence that library jargon continues to present challenges to users. While many of the findings repeat those of Kupersmith’s paper, the studies examined offered further nuance and examples that might help librarians understand the complexities at play when choosing terms to use on the library website.

Conflicting Findings

We found many examples of conflicting evidence in the articles. In particular, eleven terms were noted as both understood and misunderstood (see table 3).

Industry and Branded Terms

Terms used throughout the higher education and library industries, such as “peer reviewed,” “resource,” “full text,” and “article,” discussed earlier in this paper, could be either understood or misunderstood depending on a user’s previous exposure to the term. Likewise, specific course names, generic names for library services, and popular branded databases held meaning for users, but only with previous exposure. Otherwise, names that were library-specific, whether database or catalog nicknames, tended to cause confusion.

Icons

There were conflicting findings about the use of icons instead of terms to convey meaning. Users preferred icons to words when they were easy to understand but were frustrated when they couldn’t tell an icon’s meaning. Galbreath et al. and Jacobs et al. both mention that commonly used icons (e.g., pin icon, email icon, and chat icon) were easy to use in a discovery system.61 However, some icons hindered task completion: those that caused users to guess the incorrect format;62 those where inconsistent mouseover language was used;63 and those that exhibited unexpected behavior.64 As icon use becomes more ubiquitous, designers should choose icons with the same care that they give to terms, avoiding inconsistent, unclear, or poor-quality icons, and following best practices related to consistency, labeling, legibility, contrast, and clickability.65

Kupersmith’s Summary Findings and Guidelines

As mentioned in the introduction of this paper, Kupersmith presented seven best practices, all of which we found to hold true. In this section we discuss evidence supporting the continued relevance of all seven.

Test to See What Users Do and Don’t Understand and What Terms They Most Strongly Relate To

The number of articles identified through our search strategy suggests that libraries continue to vigorously test, acknowledging that only a proportion of testing undertaken in libraries is subsequently submitted for publication. Beyond libraries, regular user testing is generally accepted as a standard and best practice.

Avoid—or Use with Caution—Terms that Users Often Misunderstand

In our discussion of R2 earlier in this paper, we delved into understood and misunderstood terms, finding that misunderstood terms (total: 106) occurred approximately twice as often as understood terms (total: forty-one). Our findings supported Kupersmith’s further comment related to this recommendation: “if you must use terms frequently cited as problematic in usability studies … expect that [a] significant number of users will not interpret them correctly.”66

Study participants were very clear that local ‘nicknames’ and many library acronyms are not meaningful or understood.67

Previous interactions with library employees (e.g., instruction, assistance at a desk) appear to be influential in term comprehension and/or future task behavior. For example, Gillis noted, “one participant even noted that she had always used the link under the Favourites menu since she had been instructed to do so earlier on by a librarian.”68

For users who are less familiar with library processes, some frequently used terms (e.g., “resources,” “information”) resulted in ambiguities that impacted their ability to interact with library websites. In Mitchell and West’s study, which focused on distance students, participants struggled to understand or interpret labels or terms.69 Sundt and Eastman’s card sorting study supports our earlier finding that associations to broad terms such as “resources” and “services” are mapped differently by librarians and users.70

Use Natural Language Equivalents on Top-Level Pages

The Nielsen Norman Group found as early as 1997 that, when looking at information on the web, people are task-focused and scan instead of read; they re-validated these results in 2020.71 Using natural language and target words aligns with these findings.

Participants in Sundt and Eastman’s card sorting study preferred the word “find” in high-level navigation.72 In the Paladino et al. study, participants preferred “find books and media” instead of “library catalog.”73 Other examples of target words or action terms useful to users were “contact us,” “check holdings,” “include results outside of library databases,” “send to,” and “sign in.”74 Understanding what tasks users wanted to complete was an essential part of constructing these terms.

Enhance or Explain Potentially Confusing Terms

Kupersmith recommended expanding text or labels, or adding enhancements to text or links (e.g., mouseovers), to clarify meaning. This recommendation points to the tension in web writing between being concise and being so concise that readers cannot understand.

Echoing the previous recommendation, studies found that clear and concise natural language labels were preferred, though slightly longer explanatory text was acceptable when unavoidable. Lierman et al. found that, “several [users] noted specific instances in which they would not have understood the nature or purpose of a database without the description that was provided.”75

As discussed previously, participants’ and librarians’ ideas of clear language differed. Conerton and Goldenstein noted, “one interviewee commented that a tab labeled ‘articles’ should be labeled ‘search databases’ because the page did not offer a list of articles.”76 Dease found that “some users were unable to determine which [homepage] shortcut to [specific library resources to] click on by looking at the icon and label alone. One user in particular could not determine the difference between books and databases and referred to them as ‘librarian words.’”77

Numerous studies cited examples where language was clarified to better indicate link meaning, of which a few selections appear in table 4.78

Table 4

Examples of Original and Expanded Terms

Original term: Journal Title List
Expanded term: List of Journals and Magazines
Reference: Becker and Yannotta, 2013

Original term: Expand My Results
Expanded term: Include results outside of library databases
Reference: Jacobs et al., 2020

Original term: To request to have this resource delivered to you (ILLiad) please sign in
Expanded term: We don’t have a physical copy at CSUDH, but you can still get it. Sign-in to request it from another library.
Reference: Jacobs et al., 2020

Interestingly, mouseovers were rarely mentioned by study participants or employed by study authors as a solution. This might be due to newer accessibility and usability issues with mouseovers, which are especially problematic on touch devices.79

Provide Intermediate Pages

Specifically, Kupersmith recommends that, “when a top-level menu choice presents ambiguities that can’t be resolved in the space available … have your Find Books link lead to a page offering the local catalog, system or consortium catalog, e-books, WorldCat, etc.”80

Current practice in top-level navigation suggests Kupersmith’s advice is frequently followed, with categories such as “find” leading to intermediate pages with additional information and links. Task-based groupings are preferred over ‘user type’ groupings.81 Some findings suggest that the same confusion and ambiguity regarding categorization can recur with groupings on intermediate pages, so careful consideration and testing are advised.82

In response to “confusion in selecting from the different delivery services that our library offered,” Swanson et al. created a single page titled “order items” to provide access to three delivery and request options.83 Dease described revisions to information architecture to address “duplicate content and … critical information that was difficult to find.”84 Brown and Yunkin found that ‘pop-up’ menus with multiple choices were not clear: “many users did not recognize that popup lists functioned as menus … users did not seem to understand the difference between library ‘Information’ (label in the first popup) and a library ‘Service’ (label in the second popup), indicating that navigation was not intuitive.”85

Provide Alternative Paths

Kupersmith’s suggested action on the part of libraries—creating cross-references “where users are likely to make predictable ‘wrong’ choices”—was not explicitly addressed in many of the studies. However, numerous findings reporting continuing user confusion about library collections and services suggest that careful attention should be given to this type of contextual linking.

Users frequently struggle to distinguish between the purpose, destination, and scope of search boxes or tabs and tend to assume more rather than less comprehensive coverage.86 In 2017, Azadbakht et al. reported, “participants from all groups, especially undergraduate students, assumed that any search box on the Libraries’ website was designed to search for and within resources like article databases and the online catalog, regardless of how the search box was labeled.”87

Results suggest that users expect information about specific library policies related to materials (i.e., loan periods) to be available from links referencing the materials themselves, such as “books and media.”88 This conflicts with libraries’ frequent practice of making use of the categories “resources” (for databases, collections/materials) and “services” (for physical facilities, borrowing/request functions).

Similarly, placing links to specific help at points of need to better integrate into the user’s help-seeking process might address misunderstandings and lack of information about subject-specific research supports noted earlier in this paper. Conrad and Alvarez discovered that users did not gain awareness of available library services from navigation text, in particular services related to physical spaces.89

As discussed in previous sections, it should not be assumed that users understand the purpose or content of different library resources, nor that they distinguish between various levels of content types (article, journal, database).90 Librarians should understand which content types get confused and offer clear pathways from one to the other.

Be Consistent

This finding is also widely accepted as a best practice in information architecture, website design, and learning theory; for example, Nielsen’s heuristic on this topic, Consistency and Standards, was cited earlier in this paper.

Participants in multiple studies noted inconsistencies in tab names, in icon types and their application, and in functionality (e.g., a visualization for relevancy mirrored the style commonly used for ranking on commercial sites).91

As previously found, participants’ familiarity with terms strongly impacted task success, emphasizing the importance of consistent use of known terms. Lierman et al. found that, “in general, users latched on quickly to terms in subpage and box titles that seemed relevant to their tasks, and some expressed feelings of increased confidence and reassurance when seeing a familiar term featured prominently on an otherwise unfamiliar resource.”92

The relationship between electronic resources/services and physical resources/services was not always clear, increasing the importance of consistency and alignment of terms used online and in physical spaces. Becker and Yannotta made changes to terminology in order to increase consistency across modalities, using the phrase “checkout policies” online to mirror language at their physical circulation desk.93

In addition to Kupersmith’s original seven guidelines, we propose an eighth guideline.

Follow Industry-wide Conventions

When encountering terms common on other websites, like “about,” users were generally able to apply their past comprehension to the new context, as long as the term aligned with their expectations.

This eighth guideline reflects a new theme identified in the literature, where libraries were able to adapt terms used on other websites in such a way that the user could use their previous mental models to successfully complete the tasks. It also reflects a common finding in many of the studies we reviewed: users brought their past experiences to the library website and these experiences informed what they did.

Gaps in the Research

The library literature primarily represents studies done in academic libraries, with few special libraries and, in this literature review, no public libraries. This is reflective of the library science field as a whole, as public libraries are underrepresented in the literature.94 However, evidence such as case studies, online reports, and project descriptions show the importance these other library organizations place on user studies.95 Dedicated efforts to publish studies including public libraries and special libraries, and a greater attention to the gray literature in future literature reviews, would allow us to contrast and compare organizational findings.

We found only a few studies that used behavioral/quantitative methodologies like analytics or text mining. Using these methods alongside qualitative interviews or usability tests would enlarge the sample size and provide complementary evidence. For example, going through chat transcripts and focusing on words used around a particular service or policy might help libraries identify term alternatives to test in a usability study or identify task-based questions.96 Comparing the clicks on different term options for the same menu item or service using A/B testing could provide a larger sample from which to make a final decision.97 More research identifying ways analytics can best inform usability testing, and vice versa, could offer time-strapped organizations clearer ways to continually evaluate the user experience.
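A minimal sketch of what such behavioral/quantitative complements might look like (our illustration only, using hypothetical chat transcripts and hypothetical A/B click counts rather than data from any reviewed study):

```python
# Hypothetical sketch of two behavioral/quantitative complements to usability testing.
import math
import re
from collections import Counter

# 1. Term frequency in chat transcripts: how do users actually phrase requests?
transcripts = [
    "Hi, how do I renew a book I checked out?",
    "Can you help me find an article for my psych class?",
    "Where do I request a book your library doesn't have?",
]
candidate_terms = ["renew", "interlibrary loan", "request", "article", "database"]
counts = Counter()
for text in transcripts:
    for term in candidate_terms:
        counts[term] += len(re.findall(re.escape(term), text.lower()))
print(counts.most_common())

# 2. A/B comparison of two menu labels using a two-proportion z-test on click rates.
clicks_a, n_a = 42, 1100   # label A: "Interlibrary Loan" (hypothetical counts)
clicks_b, n_b = 77, 1080   # label B: "Request from another library"
p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
print(f"click rate A={p_a:.3f}, B={p_b:.3f}, z={z:.2f}, p={p_value:.4f}")
```

A significant difference in click rates would only suggest a label worth testing further in moderated sessions, not a final answer on its own.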

A majority of studies focused on one organization with one website or platform configuration. An interesting exception is the multi-campus comparison of local discovery service configurations done by five campuses in the California State University Libraries system. This study, organized by a cross-campus team focused on discovery and usability, compared differences in populations and terminology configurations in a way that would be challenging with a single platform.98 Multi-campus or consortial involvement in usability studies can provide a larger and more representative sample of users, spaces, and terminology choices. It could also provide an alternative to A/B testing, as seen in a more recent article by the same consortium.99

Like Kupersmith’s review, our scope included studies discussing library terms online and excluded studies done on physical spaces. But terminology is not found only in the online world. Research on signage and wayfinding contains valuable additions to how library users perceive and use library terminology.100 Additionally, the service design approach, which uses design thinking methods to create or reimagine a service, could identify terminology that makes sense to users across many platforms and channels, such as emails, signs, language at the service desk, and websites.101 This omnichannel model, becoming more accepted in business studies, provides a way to examine how physical and online environments interact.102 In the future, we see literature reviews on library terminology including studies that represent library terms users understand in both physical and online environments.

Conclusion

This review finds that Kupersmith’s guidelines are still relevant to today’s academic library websites, with a few minor exceptions due to technological advancements. We emphasize academic, as the majority of studies featured academic libraries and no studies featured public libraries. We propose an additional guideline, “Follow industry-wide conventions,” to highlight how context guides users’ understanding of library terminology. Librarians should look at evidence that they currently collect, such as chat, emails, and reference conversations, to better understand what services users expect to encounter and how they would describe those services. Future reviews should consider library terminology use beyond the website and examine how the different modes of communication clarify Kupersmith’s guidelines.

Notes

1. Rachel Naismith and Joan Stein, “Library Jargon: Student Comprehension of Technical Language Used by Librarians,” College & Research Libraries 50, no. 5 (September 1989): 543, https://doi.org/10.5860/crl_50_05_543.

2. John Kupersmith, “Library Terms That Users Understand,” February 29, 2012, https://escholarship.org/uc/item/3qq499w7.

3. Kupersmith, “Library Terms.”

4. Nielsen Norman Group, “Usability 101: Introduction to Usability,” January 3, 2012, https://www.nngroup.com/articles/usability-101-introduction-to-usability/.

5. Nicole Campbell et al., “Discovering the User: A Practical Glance at Usability Testing,” Electronic Library 17, no. 5 (October 1999): 307–11; Janet Chisman and Karen Diller, “Usability Testing: A Case Study,” College & Research Libraries 60, no. 6 (November 1999): 552, https://doi.org/10.5860/crl.60.6.552; Jessica L. Milstead and Lois F. Lunin, “Needs for Research in Indexing,” Journal of the American Society for Information Science 45, no. 8 (September 1994): 577–82, https://doi.org/10.1002/(SICI)1097-4571(199409)45:8<577::AID-ASI12>3.0.CO;2-P; Gabriel K. Rousseau et al., “Assessing the Usability of On-Line Library Systems,” Behaviour & Information Technology 17, no. 5 (September 1998): 274–81, https://doi.org/10.1080/014492998119346; Jerilyn R. Veldof, Michael J. Prusse, and Victoria A. Mills, “Chauffeured by the User,” Journal of Library Administration 26, no. 3–4 (January 25, 1999): 115–40, https://doi.org/10.1300/J111v26n03_07; Zimin Wu, Anne Ramsden, Dianguo Zhao, “The User Perspective of the ELINOR Electronic Library,” Aslib Proceedings 47, no. 1 (January 1995): 13–22.

6. Debbie Vaughn and Burton Callicott, “Broccoli Librarianship and Google-Bred Patrons, or What’s Wrong with Usability Testing?,” College & Undergraduate Libraries 10, no. 2 (December 2003): 2, https://doi.org/10.1300/J106v10n02_01.

7. Barbara A. Blummer, “A Literature Review of Academic Library Web Page Studies,” Journal of Web Librarianship 1, no. 1 (June 21, 2007): 56, https://doi.org/10.1300/J502v01n01_04.

8. Jennifer Emanuel, “Usability Testing in Libraries: Methods, Limitations, and Implications,” OCLC Systems & Services: International Digital Library Perspectives 29, no. 4 (January 1, 2013): 205, https://doi.org/10.1108/OCLC-02-2013-0009.

9. Steven J. Bell, “Staying True to the Core: Designing the Future Academic Library Experience,” Portal: Libraries and the Academy 14, no. 3 (2014): 370, https://doi.org/10.1353/pla.2014.0021.

10. Craig M. MacDonald, “‘It Takes a Village’: On UX Librarianship and Building UX Capacity in Libraries,” Journal of Library Administration 57, no. 2 (February 17, 2017): 194–214, https://doi.org/10.1080/01930826.2016.1232942.

11. Scott W. H. Young, Zoe Chao, and Adam Chandler, “User Experience Methods and Maturity in Academic Libraries,” Information Technology and Libraries 39, no. 1 (March 16, 2020): 5, https://doi.org/10.6017/ital.v39i1.11787.

12. Greg Landgraf, “How User-Friendly Is Your Website? Usability Lessons for Libraries in a Remote World,” American Libraries 52, no. 3/4 (2021): 30.

13. Merriam Webster, Incorporated, “Definition of JARGON,” Merriam-Webster.Com Dictionary, 2022, https://www.merriam-webster.com/dictionary/jargon.

14. Plain Language Action and Information Network, “Avoid Jargon,” Plainlanguage.Gov, n.d., https://www.plainlanguage.gov/guidelines/words/avoid-jargon/.

15. Hoa Loranger, “Plain Language Is for Everyone, Even Experts,” Nielsen Norman Group, October 8, 2017, https://www.nngroup.com/articles/plain-language-experts/.

16. Steven Carr, “Why Jargon Is the Silent Killer of Customer Experience,” UserTesting, December 11, 2019, https://www.usertesting.com/blog/jargon-customer-experience.

17. Hillary C. Shulman et al., “The Effects of Jargon on Processing Fluency, Self-Perceptions, and Scientific Engagement,” Journal of Language and Social Psychology 39, no. 5–6 (October 1, 2020), https://doi.org/10.1177/0261927X20902177.

18. Naismith and Stein, “Library Jargon,” 551.

19. Abdus Sattar Chaudhry and Meng Choo, “Understanding of Library Jargon in the Information Seeking Process,” Journal of Information Science 27, no. 5 (October 1, 2001): 343–349, https://doi.org/10.1177/016555150102700505; Norman B. Hutcherson, “Library Jargon: Student Recognition of Terms and Concepts Commonly Used by Librarians in the Classroom,” College & Research Libraries 65, no. 4 (July 2004): 349–354, https://doi.org/10.5860/crl.65.4.349; Diane Nahl, “Creating User‐centered Instructions for Novice End‐users,” Reference Services Review 27, no. 3 (January 1, 1999): 280–286, https://doi.org/10.1108/00907329910283467; Mark Aaron Polger, “Student Preferences in Library Website Vocabulary,” Library Philosophy and Practice, June 2011, 1–16; Nancy Lee Shires and Lydia P. Olszak, “What Our Screens Should Look Like: An Introduction to Effective OPAC Screens,” RQ 31, no. 3 (1992): 357–369; Mark A. Spivey, “The Vocabulary of Library Home Pages: An Influence on Diverse and Remote End-Users,” Information Technology and Libraries 19, no. 3 (September 2000): 151–156; Suzanne M. Shultz, “Medical Jargon,” Medical Reference Services Quarterly 15, no. 3 (October 14, 1996): 41–47, https://doi.org/10.1300/J115V15N03_04; Tiffini Anne Travis and Elaina Norlin, “Testing the Competition: Usability of Commercial Information Sites Compared with Academic Library Web Sites,” College & Research Libraries 63, no. 5 (September 2002): 433–448, https://doi.org/10.5860/crl.63.5.433.

20. Kupersmith, “Library Terms.”

21. Rice Majors, “Comparative User Experiences of Next-Generation Catalogue Interfaces,” Library Trends 61, no. 1 (2012): 190.

22. Michael Alexander Fauchelle, “Libraries of Babel: Exploring Library Language and Its Suitability for the Community,” Library Review 66, no. 8/9 (2017): 614, https://doi.org/10.1108/LR-04-2017-0034.

23. Roxanne Backowski, Kate Hinnant, and Liliana LaValle, “Writing Library Database Descriptions in Plain Language,” College & Undergraduate Libraries 29, no. 3–4 (October 2, 2022), https://doi.org/10.1080/10691316.2022.2149439.

24. EBSCO Publishing, “Library, Information Science & Technology Abstracts with Full Text,” accessed November 17, 2022, https://www.ebsco.com/products/research-databases/library-information-science-technology-abstracts-full-text.

25. ProQuest, “Library and Information Science Abstracts (LISA),” accessed November 17, 2022, https://proquest.libguides.com/lisa/home.

26. Neal Robert Haddaway et al., “The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching,” PLOS ONE 10, no. 9 (September 17, 2015): e0138237, https://doi.org/10.1371/journal.pone.0138237.

27. Haddaway et al., “The Role of Google Scholar.”

28. American Council on Education, “Carnegie Classification of Institutions of Higher Education,” 2022, https://carnegieclassifications.acenet.edu/index.php.

29. Kelley Gordon and Christian Rohrer, “A Guide to Using User-Experience Research Methods,” Nielsen Norman Group, August 21, 2022, https://www.nngroup.com/articles/guide-ux-research-methods/.

30. Gordon and Rohrer, “A Guide to Using.”

31. Gordon and Rohrer, “A Guide to Using.”

32. Gordon and Rohrer, “A Guide to Using.”

33. Experience UX, “What Is Card Sorting?,” Experience UX, accessed March 3, 2023, https://www.experienceux.co.uk/faqs/what-is-card-sorting/.

34. Jakob Nielsen, “10 Usability Heuristics for User Interface Design,” Nielsen Norman Group, April 24, 1994, https://www.nngroup.com/articles/ten-usability-heuristics/.

35. Andrea H. Denton, David A. Moody, and Jason C. Bennett, “Usability Testing as a Method to Refine a Health Sciences Library Website,” Medical Reference Services Quarterly 35, no. 1 (January 2016): 1–15; Alex Sundt and Teagan Eastman, “Informing Website Navigation Design with Team-Based Card Sorting,” Journal of Web Librarianship 13, no. 1 (January 2019): 37–60; W. Jacobs, Mike DeMars, and J. M. Kimmitt, “A Multi-Campus Usability Testing Study of the New Primo Interface,” College & Undergraduate Libraries 27, no. 1 (January 2020): 1–16; Nicholas Dease, Elena Villaespesa, and Craig M. MacDonald, “Working Together: Using Student-Driven UX Projects to Improve Library Websites,” College & Undergraduate Libraries 27, no. 2–4 (April 2020): 397–419; Junior Tidal, “Creating a User-Centered Library Homepage: A Case Study,” OCLC Systems & Services 28, no. 2 (May 2012): 100.

36. Denton, Moody, and Bennett, “Usability Testing;” Majors, “Comparative User Experiences,” 186–207; Emily B. Paladino, Jacqueline C. Klentzin, and Chloe P. Mills, “Card Sorting in an Online Environment: Key to Involving Online-Only Student Population in Usability Testing of an Academic Library Web Site?,” Journal of Library & Information Services in Distance Learning 11, no. 1/2 (January 2017): 37–49.

37. Blake Lee Galbreath, Corey Johnson, and Erin Hvizdak, “Primo New User Interface: Usability Testing and Local Customizations Implemented in Response,” Information Technology and Libraries 37, no. 2 (2018): 10–33; Michelle Lemieux and Susan Powelson, “Results of a Usability Study to Test the Redesign of the Health Sciences Library Web Page,” Journal of the Canadian Health Libraries Association (JCHLA) 35 (August 2014): 49–54.

38. Lemieux and Powelson, “Results of a Usability Study.”

39. Danielle A. Becker and Lauren Yannotta, “Modeling a Library Website Redesign Process: Developing a User-Centered Website Through Usability Testing,” Information Technology & Libraries 32, no. 1 (March 2013): 6–22.

40. Jeanne M. Brown and Michael Yunkin, “Tracking Changes: One Library’s Homepage Over Time—Findings from Usability Testing and Reflections on Staffing,” Journal of Web Librarianship 8, no. 1 (January 2014): 23–47; Erin Dorris Cassidy et al., “Student Searching with EBSCO Discovery: A Usability Study,” Journal of Electronic Resources Librarianship 26, no. 1 (2014): 17–35.

41. Suzanna Conrad and Julie Shen, “Designing a User-Centric Web Site for Handheld Devices: Incorporating Data-Driven Decision-Making Techniques with Surveys and Usability Testing,” Journal of Web Librarianship 8, no. 4 (October 2014): 349–383; Gricel Dominguez, Sarah J. Hammill, and Ava Iuliano Brillat, “Toward a Usable Academic Library Web Site: A Case Study of Tried and Tested Usability Practices,” Journal of Web Librarianship 9, no. 2/3 (April 2015): 99–120; Majors, “Comparative User;” Dease, Villaespesa, and MacDonald, “Working Together;” Jin Wu and Janis F. Brown, “Website Redesign: A Case Study,” Medical Reference Services Quarterly 35, no. 2 (April 2016): 158–174; Sundt and Eastman, “Informing Website Navigation.”

42. Dease, Villaespesa, and MacDonald, “Working Together;” Conrad and Shen, “Designing a User-Centric Web Site for Handheld Devices.”

43. Cassidy et al., “Student Searching.”

44. Majors, “Comparative User;” Dease, Villaespesa, and MacDonald, “Working Together;” Wu and Brown, “Website Redesign;” Sundt and Eastman, “Informing Website Navigation.”

45. Paladino, Klentzin, and Mills, “Card Sorting;” Dominguez, Hammill, and Brillat, “Toward a Usable.”

46. Sundt and Eastman, “Informing Website Navigation.”

47. Troy A. Swanson et al., “Guiding Choices: Implementing a Library Website Usability Study,” Reference Services Review 45, no. 3 (July 2017): 365.

48. Alyssa M. Valenti, “Usability Testing for a Community College Library Website,” Library Hi Tech News 36, no. 1 (January 2019): 1–8.

49. David J. Comeaux, “Usability Testing of a Web-Scale Discovery System at an Academic Library,” College & Undergraduate Libraries 19, no. 2–4 (2012): 202.

50. Jacobs, DeMars, and Kimmitt, “A Multi-Campus Usability.”

51. Suzanna Conrad and Nathasha Alvarez, “Conversations with Web Site Users: Using Focus Groups to Open Discussion and Improve User Experience,” Journal of Web Librarianship 10, no. 2 (April 2016): 53–82; Conrad and Shen, “Designing a User-Centric;” Dominguez, Hammill, and Brillat, “Toward a Usable.”

52. Darren Chase, Elizabeth Trapasso, and Robert Tolliver, “The Perfect Storm: Examining User Experience and Conducting a Usability Test to Investigate a Disruptive Academic Library Web Site Redevelopment,” Journal of Web Librarianship 10, no. 1 (January 2016): 28–44.

53. Rob O’Connell, “Beyond the Bento: A New Discovery Experience at Smith College,” Computers in Libraries 39, no. 2 (March 2019): 18–21; Valerie Linsinbigler et al., “User-Centered Design of a Library Tutorials Page: A Solution to Digital Hoarding,” Journal of Library & Information Services in Distance Learning 15, no. 2 (June 2021): 142–156.

54. Denton, Moody, and Bennett, “Usability Testing.”

55. Conrad and Alvarez, “Conversations with Web Site Users,” 78.

56. Kate Conerton and Cheryl Goldenstein, “Making LibGuides Work: Student Interviews and Usability Tests,” Internet Reference Services Quarterly 22, no. 1 (January 2017): 43–54; Ashley Lierman et al., “Testing for Transition: Evaluating the Usability of Research Guides Around a Platform Migration,” Information Technology & Libraries 38, no. 4 (December 2019): 76–97.

57. Suzanna Conrad and Christy Stevens, “‘Am I on the Library Website?’: A LibGuides Usability Study,” Information Technology & Libraries 38, no. 3 (September 2019): 49–81.

58. Jeffrey W. Gallant and Laura B. Wright, “Planning for Iteration-Focused User Experience Testing in an Academic Library,” Internet Reference Services Quarterly 19, no. 1 (January 2014): 49–64.

59. Paladino, Klentzin, and Mills, “Card Sorting.”

60. Denton, Moody, and Bennett, “Usability Testing.”

61. Galbreath, Johnson, and Hvizdak, “Primo New User;” Jacobs, DeMars, and Kimmitt, “A Multi-Campus Usability.”

62. Cassidy et al., “Student Searching.”

63. Jacobs, DeMars, and Kimmitt, “A Multi-Campus Usability.”

64. Galbreath, Johnson, and Hvizdak, “Primo New User.”

65. Jeremy Elliott, “How to Use Icons in Design: UX and UI Best Practices,” The Noun Project Blog, September 7, 2022, https://blog.thenounproject.com/how-to-use-icons-in-ui-and-ux-design-best-practices/; Kara Pernice, “Bad Icons: How to Identify and Improve Them,” Nielsen Norman Group, November 19, 2017, https://www.nngroup.com/articles/bad-icons/.

66. Kupersmith, “Library Terms.”

67. Wu and Brown, “Website Redesign;” Paladino, Klentzin, and Mills, “Card Sorting.”

68. Roger Gillis, “Watch Your Language: Word Choice in Library Website Usability,” Partnership: The Canadian Journal of Library & Information Practice & Research 12, no. 1 (January 2017): 12.

69. Emily Mitchell and Brandon West, “Collecting and Applying Usability Data from Distance Learners,” Journal of Library & Information Services in Distance Learning 11, no. 1/2 (January 2017): 9.

70. Sundt and Eastman, “Informing Website Navigation,” 49.

71. Jakob Nielsen, “How Users Read on the Web,” Nielsen Norman Group, September 30, 1997, https://www.nngroup.com/articles/how-users-read-on-the-web/; Kate Moran, “How People Read Online: New and Old Findings,” Nielsen Norman Group, April 5, 2020, https://www.nngroup.com/articles/how-people-read-online/.

72. Sundt and Eastman, “Informing Website Navigation.”

73. Paladino, Klentzin, and Mills, “Card Sorting.”

74. Denton, Moody, and Bennett, “Usability Testing;” Galbreath, Johnson, and Hvizdak, “Primo New User;” Jacobs, DeMars, and Kimmitt, “A Multi-Campus Usability.”

75. Lierman et al., “Testing for Transition,” 82.

76. Conerton and Goldenstein, “Making LibGuides Work,” 47.

77. Dease, Villaespesa, and MacDonald, “Working Together,” 413.

78. Becker and Yannotta, “Modeling a Library Website;” Jacobs, DeMars, and Kimmitt, “A Multi-Campus Usability.”

79. Mozilla Developer Network Contributors, “ARIA: Tooltip Role - Accessibility,” MDN Web Docs, accessed November 15, 2022, https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/tooltip_role.

80. Kupersmith, “Library Terms.”

81. Lierman et al., “Testing for Transition.”

82. Sundt and Eastman, “Informing Website Navigation;” Sarah Guay, Lola Rudin, and Sue Reynolds, “Testing, Testing: A Usability Case Study at University of Toronto Scarborough Library,” Library Management 40, no. 1 (January 2019): 88–97.

83. Swanson et al., “Guiding Choices,” 365.

84. Dease, Villaespesa, and MacDonald, “Working Together,” 401.

85. Brown and Yunkin, “Tracking Changes,” 30.

86. Megan Johnson, “Usability Test Results for Encore in an Academic Library,” Information Technology & Libraries 32, no. 3 (September 2013): 59–85.

87. Elena Azadbakht, John Blair, and Lisa Jones, “Everyone’s Invited: A Website Usability Study Involving Multiple Library Stakeholders,” Information Technology & Libraries 36, no. 4 (December 2017): 42–43.

88. Valenti, “Usability Testing.”

89. Conrad and Alvarez, “Conversations with Web Site Users.”

90. Xi Niu, Tao Zhang, and Hsin-liang Chen, “Study of User Search Activities with Two Discovery Tools at an Academic Library,” International Journal of Human-Computer Interaction 30, no. 5 (2014): 422–433.

91. Alec Sonsteby and Jennifer DeJonghe, “Usability Testing, User-Centered Design, and LibGuides Subject Guides: A Case Study,” Journal of Web Librarianship 7, no. 1 (January 2013): 83–94; Cassidy et al., “Student Searching;” Rachel Volentine et al., “Usability Testing to Improve Research Data Services,” Qualitative and Quantitative Methods in Libraries 4, no. 1 (2017): 59–68.

92. Lierman et al., “Testing for Transition,” 82–83.

93. Becker and Yannotta, “Modeling a Library Website,” 14.

94. Michelle Penta and Pamela J. McKenzie, “The Big Gap Remains: Public Librarians as Authors in LIS Journals, 1999-2003,” Public Library Quarterly 24, no. 1 (January 1, 2005): 33–46, https://doi.org/10.1300/J118v24n01_04.

95. Michael Lascarides, “Infomaki: An Open Source, Lightweight Usability Testing Tool,” The Code4Lib Journal, no. 8 (November 23, 2009), https://journal.code4lib.org/articles/2099; Tara Wood, “Usability Studies,” SWAN Library Services, September 21, 2022, https://support.swanlibraries.net/documentation/64810; Simplemente UX, “IA Redesign - Maplewood Public Library,” Simplemente UX, accessed December 8, 2022, https://www.simplementeux.com/work/the-ruse-4hrj3.

96. Jaci Wilkinson et al., “Constructing Citations: Reviewing Chat Transcripts to Improve Citation Assistance as a Service,” Reference Services Review 49, no. 2 (January 1, 2021): 194–210, https://doi.org/10.1108/RSR-03-2021-0007; J. Michael DeMars, Gabriel Gardner, and Joanna Messer Kimmitt, “If You Build It, Will They Come?: Natural Experiments with Springshare’s Proactive Chat Reference” (ELUNA 2019 Annual Meeting, Atlanta, GA, 2019), http://documents.el-una.org/1856/; Susan Gardner Archambault, Jennifer Masunaga, and Kathryn Ryan, “Lingua Franca: How We Used Analytics to Describe Databases In Student Speak,” Computers in Libraries 39, no. 8 (October 2019): 25–28.

97. Scott W. H. Young, “Improving Library User Experience with A/B Testing: Principles and Process,” Weave: Journal of Library User Experience 1, no. 1 (2014), https://doi.org/10.3998/weave.12535642.0001.101.

98. Jacobs, DeMars, and Kimmitt, “A Multi-Campus Usability.”

99. Heather L. Cribbs and Gabriel J. Gardner, “To Pre-Filter, or Not to Pre-Filter, That Is the Query: A Multi-Campus Big Data Study,” Journal of Librarianship and Information Science, September 28, 2022, 09610006221124609, https://doi.org/10.1177/09610006221124609.

100. Wencheng Su et al., “Let Eyes Tell: Experimental Research on University Library Signage System and Users’ Wayfinding Behavior,” Library Hi Tech 40, no. 1 (January 1, 2021): 198–221, https://doi.org/10.1108/LHT-01-2020-0007; Mark Aaron Polger, Library Signage and Wayfinding Design: Communicating Effectively with Your Users (Chicago: ALA Editions, 2022).

101. Lisa Chow and Sandra Sajonas, “From UX Study to UX Service: Using People-Centered Research Methods to Improve the Public Library Experience,” Public Library Quarterly 39, no. 6 (November 1, 2020): 493–509, https://doi.org/10.1080/01616846.2019.1682884; Kate Haines and Emily Puckett Rodgers, “Creating a Contactless Pickup Service at the University of Michigan Library: Iterative Service Design and Interaction Safety during the Pandemic,” Journal of Access Services 18, no. 4 (October 2, 2021): 225–252, https://doi.org/10.1080/15367967.2021.1967162.

102. Ruchi Mishra, Rajesh Kumar Singh, and Bernadett Koles, “Consumer Decision‐making in Omnichannel Retailing: Literature Review and Future Research Agenda,” International Journal of Consumer Studies 45, no. 2 (March 2021): 147–174, https://doi.org/10.1111/ijcs.12617.

* Courtney McDonald is Associate Professor and User Experience Librarian, and Nicole Trujillo is Assistant Professor, Access and Discovery Librarian, and Lead of the Metadata Optimization and Discovery section, both at the University of Colorado Boulder Libraries; email: crmcdonald@colorado.edu, nicole.trujillo@colorado.edu. The authors wish to thank Jennifer Knievel for generously giving her time and comments on this paper. ©2024 Courtney McDonald and Nicole Trujillo, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.

