Dissatisfaction in Chat Reference Users: A Transcript Analysis Study

This study aims to identify factors and behaviors associated with user dissatisfaction with a chat reference interaction to provide chat operators with suggestions of behaviors to avoid. The researchers examined 473 transcripts from an academic chat reference consortium from June to December 2016. Transcripts were coded for 13 behaviors that were then statistically analyzed with exit survey ratings. When present in the chat, three behaviors explained user dissatisfaction: clarification, transfers, and referrals. The absence of three more behaviors also explained dissatisfaction: ending the chat mutually; maintaining a professional tone; and displaying interest or empathy.


Any library staff member who has ever answered online reference questions over chat knows that, sadly, not all interactions are positive. Sometimes a user arrives at the chat after spending several frustrating hours trying to find articles for their paper. Or they might resent “wasting” time getting help from the library when the link resolver should work seamlessly. If we are honest, there are times when we are not at our best either. Perhaps we are working with multiple users and lose track of one of the chats. Or maybe we are rushing because the end of our shift is approaching and we have a meeting scheduled in another part of the library directly afterward. Whatever the reason, and despite the best intentions of everyone involved, sometimes the chat just does not go well. While it is tempting to forget these less-than-ideal interactions as soon as possible, they can be mined for valuable insights into why some interactions go badly and what we can do to prevent that from happening.

Chat reference researchers have done excellent work identifying behaviors that positively affect user satisfaction with a chat interaction, allowing us to create and validate best practices. We now know that, in text-based, synchronous online reference (hereafter referred to as “chat”), asking follow-up questions, maintaining word contact, and providing instruction all contribute to a user’s positive assessment of the chat interaction.1 Since the “dos” of chat are so well covered, it might now be time for us to turn our attention to the “don’ts.” It is important to study dissatisfaction separately from satisfaction to avoid overlooking variables that negatively affect a user’s experience of chat reference interactions. While a library staff member’s choice to do X during a chat may increase user satisfaction, the absence of X in a chat does not necessarily lead to dissatisfaction. Studying dissatisfaction allows us to explore the effects of the absence of certain behaviors, as well as to identify other behaviors that might have negative satisfaction consequences. For the purposes of this paper, the authors will refer to the library staff member as the “operator” and the individual chatting with them as the “user.”

The present study examined a corpus of chat interactions that occurred between June and December 2016 on a consortial reference service in Ontario, Canada. All eligible interactions included a prechat survey, a transcript of the interaction, an exit survey, and metadata about the chat. Researchers coded transcripts for operator behaviors and compared them to the user’s self-reported satisfaction as collected in the exit survey. We hoped to discover operator behaviors that should be avoided. The following research question guided this project:

  • What operator behaviors are associated with dissatisfaction…
    • At the beginning of the chat?
    • Anytime during the chat?
    • At the end of the chat?

Literature Review

User satisfaction is a popular metric in chat transcript studies, as it offers practitioners actionable findings for improving users’ perception of the service received. To study factors that influence satisfaction, researchers often look to operators’ behavior. The RUSA guidelines are a convenient set of behaviors to study as they are taught in library and information programs and are widely accepted as best practice.2

Operator Behavior and User Satisfaction

The reference interview comes at the beginning of the chat and is regarded by practitioners as the basis of a successful interaction. Researchers at Carnegie Mellon University observed that clarifying questions were phrased as open-ended in 17 percent of interactions and as closed in 46 percent, and users responded positively to these questions in almost all cases.3 These actions are represented in the RUSA guidelines as 3.1.7 “Uses open-ended questions to encourage the patron to expand on the request or present additional information” and 3.1.8 “Uses closed and/or clarifying questions to refine the search query.”4 RUSA 3.1.5 “Rephrases the question or request and asks for confirmation to ensure accurate understanding” has also been included in some work.5 A case study at Texas A&M revealed that this behavior was used by only 10 percent of operators in the chats studied even though 82 percent of transcripts included evidence of user satisfaction.6 In the early stages of a chat, the operator’s intention to help the user also comes through. Keyes and Dworak found that, in 5 percent of transcripts studied, the operator referred the question without attempting to assist the user.7 Ninety percent of the users in that study who responded to an exit survey reported that their overall experience was good or great.8

An operator’s manner has also been shown to relate to satisfaction. Operator courteousness corresponds to RUSA 3.1.1 “Communicates in a receptive, cordial, and supportive manner.”9 Kwon and Gregory’s seminal work found that “listening to questions in a cordial and receptive manner” was significantly associated with user satisfaction as expressed in exit surveys.10 A peer assessment of chat transcripts at the University of Kansas showed that the majority had exemplary courtesy (73.1% in 2015 and 65.9% in 2016) and concluded that this behavior was important to an interaction’s success.11 A study at a large-scale consortial academic virtual reference service in Ontario, Canada found that tone significantly correlated with satisfaction.12 Pomerantz, Luo, and McClure also examined courtesy and found that the majority of interactions fell in the second-highest category at 47.4 percent, with direct or indirect evidence of user satisfaction in 61 percent of transcripts.13 They also observed that operators were mostly either neutral (43.1%) or second-highest rated (32.8%) in enthusiasm, which relates to RUSA’s 2.0 section on interest.14 Though not a RUSA behavior, Prieto argues that using emotional intelligence, which includes empathy, can help chat operators seem more welcoming and supportive.15 He argues that using a more informal communication style can help “create a more relaxed and authentic environment.”16 There is mixed evidence to corroborate this statement at present, however. 
Waugh interviewed students and asked them to compare a formal-style chat to an informal-style one, with conflicting results.17 Three interviewees would only return to the more formal operator, citing professionalism and trustworthiness, while two preferred the informal operator’s approachability.18 A linguistic analysis found that students tended to use more informal communication features than librarians did, but that librarians who mirrored the students’ language were more likely to be rated as very helpful, especially regarding the use of contractions, ellipses, capitalization, and punctuation.19

What Radford and Radford call the “closing ritual” is also often included in chat satisfaction studies.20 Usually, researchers represent this phase of the chat with two RUSA behaviors, 5.1.1 “Asks the patron if his/her questions have been completely answered” and 5.1.2 “Encourages the patron to return if he/she has further questions,” which are commonly called a “satisfaction check” and an “invitation to return.”21 Both of these were found to be significantly associated with satisfaction in Kwon and Gregory’s study.22 Some projects also touch on the way the chat ends with RUSA 5.1.7 “Takes care not to end the reference interview prematurely.”23 Lux and Rich found that all three closing behaviors were present in only 31 percent of chats with librarian operators and 25 percent of chats with student operators, which may have contributed to the positive comments and thanks received by 70 percent of librarian operators and 81 percent of student operators.24

Sometimes an operator’s behavior can be influenced by their limitations. Referring a user to another service point or library staff member is a common way for operators to direct users to someone with the necessary expertise. RUSA 4.1.9 defines this behavior as “Recognizes when to refer patrons for more help. This might mean a referral to a subject librarian, specialized library, or community resource.”25 Kwon found that users of a public library chat reference service whose questions ended in a referral fell into the middle level of satisfaction, along with those who only received partial answers.26 More recent work by Ward and Jacoby has found that the more complicated a question was, the more likely a referral would be needed, though no satisfaction component was included in that study.27 Similarly, in collaborative chat reference settings, operators may be matched with users who are not from their institutions. In a statewide chat consortium, Bishop found that nonlocal operators performed comparably to local operators, though questions related to employment, library cards, and log-ins had low rates of correct responses.28 He also observed that many nonlocal operators “put forth extra effort to answer virtual questions as if they were a local librarian”29 but did not count how often operators revealed that they were not local to the user nor how satisfied users were with local and nonlocal operators.

Operators are also sometimes limited by time. On many chat services, operators are scheduled for a specific shift and are relieved by other operators when their shift is over. The chat may need to be transferred to a new operator if the first operator cannot continue chatting. The authors were unable to find sources discussing the influence of transfers on user satisfaction. Finally, sometimes an operator cannot complete a user’s information request because their institution does not have the resource they are looking for, because they asked for something that contravened library policy, or because of technical problems, among many other explanations. To the authors’ knowledge, having to tell the user that the operator cannot do something they requested has not been studied.


Actively dissatisfied users represent a small proportion of chat reference users in most populations. Strong evidence of dissatisfaction with an answer was found in only 2.8 percent of interactions studied by Pomerantz, Luo, and McClure.30 Similarly, only 2 percent of users surveyed at Southern Illinois University said they would not use the chat service again.31 Marsteller and Mizzy found so few unfavorable patron responses in their study (five of 270 transcripts) that they were unable to perform planned cross tabulations.32 Only 0.8 percent of respondents who participated in a chat reference pilot indicated that they would not use the service again, a measure Durrance asserts is a strong indicator of a reference transaction’s success.33 Illinois State University observed much higher rates of dissatisfaction, with 14.3 percent of survey respondents indicating that they were dissatisfied or very dissatisfied.34 This dissatisfaction seemed to stem from the quality of the answers provided and the knowledge of the librarian, as these were identified as dissatisfactory in 7.3 percent and 5.4 percent of responses, respectively.35 Similarly, Kwon observed that 12.6 percent of users were not satisfied with the answer they received.36

Phase III of the Library Visit Study, a long-term study of reference service provision and user satisfaction at Western University in Ontario, Canada, includes some of the only work looking specifically at dissatisfaction in chat reference interactions.37 In it, MLIS students posed real questions to public and academic library service points, both in person and virtually, then reported on their experiences in a reflection of the interaction and a questionnaire. Nilsen identified three operator behaviors that were associated with user dissatisfaction:38

  1. Bypassing the reference interview (for instance, not asking a single question to clarify the user’s information need);
  2. Unmonitored referrals (such as referring the user to an information source without checking to make sure that it contained the desired information); and
  3. Failure to ask follow-up questions (for example, not checking whether the user’s question had been answered satisfactorily or not inviting the user to return later for further assistance).

The reference interview was missing in 80 percent of virtual reference interactions, while 70 percent were missing follow-up questions and 38 percent contained unmonitored referrals.39 The operator’s helpfulness, friendliness, and understanding of the information need were not correlated with the user’s willingness to return to the service.40


Background and Setting

Scholars Portal is the service arm of the Ontario Council of University Libraries, a consortium representing the 21 university libraries in Ontario, Canada. Scholars Portal’s technical infrastructure preserves and provides access to information resources collected and shared by member libraries. Scholars Portal also develops and manages a wide range of digital services, including Ask a Librarian: a collaborative, bilingual chat reference service.

Ask a Librarian accepts library- and research-related questions from students, faculty, staff, and alumni at participating universities 67 hours per week during the academic year. The service reaches approximately 375,000 full-time equivalent students and receives more than 25,000 chats per year. The service is staffed primarily by librarians and paraprofessional library staff during daytime hours. Graduate student library assistants (GSLAs) from library or information studies programs also staff the service during evenings and weekends. At the time of the study, Ask a Librarian used LivePerson’s LiveEngage chat software.

The researchers received approval for this study from the University of Toronto’s Research Ethics office and through the consortium’s Data Working Group before beginning. Users were informed that their interactions could be used for research in the privacy policy that was included in the prechat survey. Operators were informed during training.

Data Collection and Sampling

A total of 9,424 chat interactions occurred between June 1, 2016, and December 1, 2016. All interactions included a transcript of the conversation between the user and the chat operator, metadata about the interaction, and a prechat survey. An optional exit survey was presented to users when the operator terminated the chat or when the user clicked an end chat button, but not when the user closed the browser window without ending the chat first. These data were routinely archived by Scholars Portal staff.

Of the 9,424 chat interactions, 1,395 (14.8%) included a completed exit survey. Four of the eight exit survey questions were designed to gauge a user’s satisfaction with the interaction:

  • The service provided by the librarian was
    • Excellent
    • Good
    • Satisfactory
    • Poor
    • Very poor
  • The librarian provided me with
    • Just the right amount of assistance
    • Too little assistance
    • Too much assistance
  • This chat service is
    • My preferred way of getting library help
    • A good way of getting library help
    • A satisfactory way of getting library help
    • A poor way of getting library help
    • A last resort for getting library help
  • Would you use this service again?
    • Yes
    • No

Responses in bold were identified as dissatisfied while those in italics were identified as neutral. Those with no text effects were deemed satisfied. The researchers noted in an Excel spreadsheet which interactions contained only satisfied responses and which included neutral or dissatisfied responses.

Two samples were selected for the present study:

  1. 256 interactions with satisfied exit survey responses were randomly selected using Excel. This represents 18 percent of all eligible interactions with completed exit surveys in the period (n = 1,395). The margin of error is ±5.52 percent at a 95 percent confidence level.
  2. All interactions with dissatisfied or neutral exit survey responses (n = 217) were purposively selected. Homogeneous purposive sampling was deemed appropriate: because only 16 percent of interactions with completed exit surveys displayed neutral or dissatisfied sentiments, all available interactions would provide valuable data.
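The reported margin of error can be approximated with the standard finite-population formula (a rough check, not the authors' exact calculation; small differences arise from the particular calculator used):

```python
import math

def margin_of_error(n, N, z=1.96, p=0.5):
    """Margin of error (in percentage points) for a simple random sample of
    size n drawn from a finite population of size N, at the confidence level
    implied by z, using the maximum-variance assumption p = 0.5."""
    standard_error = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return 100 * z * standard_error * fpc

# 256 interactions sampled from 1,395 eligible interactions
print(round(margin_of_error(256, 1395), 2))  # ~5.5, close to the reported 5.52
```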

Data Preparation

Once the sample interactions were identified, we anonymized the spreadsheet data and interaction transcripts using a checklist provided by the consortium’s Data Working Group. We removed any information that would identify the user, the operator, or institutional affiliation of either party.



For the transcripts, we surveyed the literature for relevant variables and created additional variables we hypothesized would be worth investigating. The result was a codebook with 30 variables. Only those variables included in this study’s analysis are described here:

  1. Opening Behaviors: Behaviors that usually occur near the beginning of the chat.

    1.1 Clarification: Did the operator ask at least one open- or closed-ended question about the user’s information need?

    1.2 Confirmation: Did the operator confirm a mutual understanding of the user’s information need?

    1.3 Attempt to resolve: Did the operator try to help the user with their information need?

  2. Closing Behaviors: Behaviors that usually occur near the end of the chat.

    2.1 Satisfaction check: Did the operator make sure that the user was satisfied with the answer they received?

    2.2 Invitation to return: Did the operator invite the user to come back if they had more questions?

    2.3 Chat ended mutually: Is there evidence that both the user and operator knew and agreed that the chat was ending?

  3. Anytime Behaviors: Behaviors that can occur at any time during the chat.

    3.1 Institution match reveal: Did the operator reveal that they did not work at the same institution as the user?

    3.2 Transfer: Was the chat transferred from one operator to another?

    3.3 Tone: Was the operator professional and courteous?

    3.4 Referral: Did the operator recommend that the user contact another service point or individual?

    3.5 Interest and empathy: Did the operator make it clear that they cared about the user and/or the user’s question?

    3.6 Informality: Did the operator use an informal writing style (such as sentence fragments, emoji, contractions)?

    3.7 “No”: Did the operator make the user aware that their information need could not be met?

A more detailed explanation of each variable is available as an appendix.

Following best practices established in the field, all four members of the research team coded a test set of 15 transcripts using a draft codebook and coding form, which fed into a spreadsheet created using Google Forms.41 We then met to discuss discrepancies in the choices, refined the codebook and coding form, and coded a further 10 transcripts. We analyzed the intercoder reliability for all variables, having predetermined a threshold of 80 percent average pairwise percent agreement. A few variables fell below this threshold, so we repeated the process a third time with an additional 15 transcripts. After the third round, two variables included in this study were still below 80 percent agreement: informality (64%) and interest/empathy (67%).
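Average pairwise percent agreement is computed by comparing every pair of coders on the same set of transcripts. A minimal sketch (the coder names and codes below are hypothetical, not data from the study):

```python
from itertools import combinations

def avg_pairwise_agreement(codings):
    """codings maps each coder to their list of codes for the same ordered
    set of transcripts; returns the average percent agreement over all pairs."""
    totals = []
    for a, b in combinations(codings, 2):
        matches = sum(x == y for x, y in zip(codings[a], codings[b]))
        totals.append(100 * matches / len(codings[a]))
    return sum(totals) / len(totals)

# Hypothetical yes/no codes for one variable across 10 test transcripts
codings = {
    "coder1": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "coder2": [1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
    "coder3": [1, 0, 0, 1, 0, 1, 1, 0, 1, 1],
    "coder4": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
}
print(avg_pairwise_agreement(codings))  # 85.0, above the 80 percent threshold
```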

Having established a strong level of intercoder reliability for most of the variables, the researchers moved on to the transcript coding stage. All variables were coded by a single researcher, with the exception of the two variables that did not have acceptable pairwise percent agreement. These were coded over three rounds by at least two researchers to address concerns with intercoder reliability:

  • Round 1: Each researcher independently coded their assigned transcripts for all variables.
  • Round 2: Each researcher was assigned a set of transcripts that they had not yet seen and independently coded for only the two variables that did not have acceptable pairwise percent agreement.
  • Round 3: Each researcher was assigned a set of transcripts they had not seen in either of the previous rounds and resolved any conflicts between the previous coders for the two variables with poor intercoder reliability.

This procedure was informed by best practices in qualitative research. Barbour advises that multiple coding can address concerns with intercoder reliability and increase thoroughness.42

Exit Survey Free Text

The researchers examined all free text responses included in the sample’s exit surveys. We classified comments related to operator behavior using the codes we employed in the transcript analysis portion of the study.

Data Compilation

Once each transcript had been coded, we combined the coded data spreadsheet with the prechat and exit surveys, metadata, and exit survey free text themes in a single spreadsheet and prepared it for SPSS input. A research design consultant based at the Education Commons at the University of Toronto’s Ontario Institute for Studies in Education advised us which statistical tests to run.

Findings
Demographic Characteristics

Since our study used exit surveys that were self-selected, we investigated the demographic characteristics of the eligible and ineligible segments of the population to determine if they were skewed. A comparison is presented in table 1, which shows that the percent share of each demographic group is very similar for both the eligible and the ineligible groups.


Table 1. Comparison of Demographic Characteristics of Eligible and Ineligible Chat Interactions during the Study Period

[Table values not preserved in this copy. Columns: User status; Ineligible (no completed exit survey); Eligible (completed exit survey). Rows listed user status categories, including Faculty Member and Graduate Student.]
A chi-square test of independence revealed that there was no significant association between user status and the presence of a completed exit survey (χ2 = 10.062, p = 0.074). A Pearson chi-square test of independence determines whether there is a statistically significant relationship between categorical variables (that is, nominal variables that have names but no inherent order).

There were too few French-language chats to obtain a Pearson chi square, so we used Fisher’s exact test, which is appropriate for small samples; it was also not significant (p = 0.783).
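Both tests can be sketched in pure Python (an illustration of the mechanics only; the study's tests were run in SPSS, and the contingency tables below are hypothetical):

```python
import math

def chi2_statistic(table):
    """Pearson chi-square statistic for a contingency table given as a list of rows."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test p-value for a 2x2 table [[a, b], [c, d]]:
    the sum of all hypergeometric outcomes no more likely than the observed one."""
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    prob = lambda x: math.comb(r1, x) * math.comb(r2, c1 - x) / math.comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical 2x2 table: user status (rows) vs. completed exit survey (columns)
table = [[30, 170], [20, 180]]
print(round(chi2_statistic(table), 3))        # 2.286, below the df=1 critical value 3.841
print(round(fisher_exact_two_sided([[3, 1], [1, 3]]), 4))  # 0.4857
```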

Coded Variables

The researchers used chi-square tests of independence to determine whether there was a relationship between user dissatisfaction and the observed operator behaviors. As shown in table 2, 11 variables had a significant relationship with dissatisfaction at the p < 0.05 level. Only confirmation and invitation to return were not significantly associated with dissatisfaction.


Table 2. Summary of Chi-square Tests of Independence by Variable

[Table values not preserved in this copy. Rows were grouped as Opening Behaviors (clarification, confirmation, attempt to resolve), Closing Behaviors (satisfaction check, invitation to return, chat ended mutually), and Anytime Behaviors (institution match reveal, transfer, tone, referral, interest and empathy, informality, “no”).]
We entered the 11 significantly associated variables into a binary logistic regression model to determine the strength of each variable’s effect and whether its association was positive or negative. These 11 variables were clarification, attempt to resolve, satisfaction check, mutual chat ending, institution match reveal, tone, transfer, referral, interest and empathy, informality, and “no.”

The overall model was statistically significant, χ2 (11) = 99.045, p < 0.001, meaning that it reliably distinguished between satisfied and dissatisfied patrons. The Nagelkerke R2, which gauges how useful the variables are in predicting dissatisfaction, was 0.252, indicating that the model has sufficient explanatory power but not strong predictive power. It correctly predicted the outcome 68.5 percent of the time. These results are summarized in table 3.
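The two pseudo-R2 values can be reproduced from the figures reported here, assuming an intercept-only null model over the 473 coded chats (217 of them neutral or dissatisfied); a sketch of that arithmetic:

```python
import math

n = 473                 # coded transcripts in the sample
chi2_model = 99.045     # model chi-square = 2 * (LL_model - LL_null)
p0 = 217 / 473          # base rate of the neutral/dissatisfied outcome

# Cox & Snell R^2 follows directly from the model chi-square
cox_snell = 1 - math.exp(-chi2_model / n)

# Nagelkerke rescales Cox & Snell by its maximum attainable value,
# which depends on the null (intercept-only) log-likelihood
ll_null = n * (p0 * math.log(p0) + (1 - p0) * math.log(1 - p0))
max_cox_snell = 1 - math.exp(2 * ll_null / n)
nagelkerke = cox_snell / max_cox_snell

print(round(cox_snell, 3), round(nagelkerke, 3))  # 0.189 0.252
```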


Table 3. Summary of Binary Logistic Regression Model

[Table values not preserved in this copy. Columns included the coefficient (β), Std. Error, and significance for the 11 entered variables, grouped as Opening, Closing, and Anytime Behaviors; the significant coefficients are reported in the text.]

R2 = 0.189 (Cox & Snell), 0.252 (Nagelkerke)

Model χ2 (11) = 99.045, p < 0.001

Opening Behaviors

Of the three opening behaviors, only two were significantly associated with dissatisfaction in the chi-square tests: clarification (p < 0.05) and attempt to resolve (p < 0.001). The regression model showed that attempting to resolve the question was not a significant explanatory variable (β = –0.52, p = 0.115). Clarification was a positive, statistically significant variable in the model (β = 0.679, p = 0.002). When a coefficient (β value) is positive in the regression model, it indicates that the presence of the variable in the chat explained increases in user dissatisfaction. Conversely, a negative coefficient explains decreases in dissatisfaction.

In the exit survey, users expressed frustration when they perceived that the operator did not understand their information need, something we interpreted as relating to clarification:

I might use this service again (hopefully the person I talk to is more helpful next time).… even though I gave the person enough info, he wasn’t super helpful and he didn’t really understand what I wanted.

We found that users mentioned that the operator had attempted to resolve their issue more frequently than they criticized a lack of operator effort:

Present: Chat didn’t solve my issue. The librarian did try though so I was satisfied that she made the effort.
Absent: My issue remains unsolved and they were not able to help because Ask a Librarian was closing in 8 minutes. A little upset that my answer was that through trial and error I’ll find online articles.

Closing Behaviors

Invitation to return was not significantly related to dissatisfaction in the chi-square tests, but satisfaction check (p < 0.05) and mutual chat ending (p < 0.001) both were. However, in the regression model only mutual chat ending was statistically significant (β = –0.92, p < 0.001); its negative coefficient indicates that the variable explained decreases in dissatisfaction.

The exit surveys confirmed that the way the chat ended was important for users:

Librarian left too quickly. I was not able to ask any additional questions and the librarian immediately left.
I couldn’t read the librarian’s reply before the chat ended.

Anytime Behaviors

The chi-square tests of independence suggested that all behaviors in this category were related to dissatisfaction. However, revealing an institutional mismatch, using an informal communication style, and saying “no” were not significant predictors of dissatisfaction in our regression model. Transfers (β = 1.031, p = 0.012) and referrals (β = 0.528, p = 0.033) were both positive, significant variables, meaning that their presence explained increases in dissatisfaction. Professional tone (β = –1.287, p = 0.002) and interest or empathy (β = –0.689, p = 0.001) were both strongly negatively associated with dissatisfaction, suggesting that their presence explains decreases in dissatisfaction.
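Exponentiating these coefficients converts them to odds ratios, which can make the effect sizes easier to communicate (a back-of-envelope conversion using only the β values quoted in the text):

```python
import math

# Coefficients for the six significant variables reported in the text
betas = {
    "clarification": 0.679,
    "chat ended mutually": -0.920,
    "transfer": 1.031,
    "referral": 0.528,
    "professional tone": -1.287,
    "interest/empathy": -0.689,
}

# exp(beta) > 1: presence multiplies the odds of dissatisfaction;
# exp(beta) < 1: presence shrinks them
odds_ratios = {name: math.exp(b) for name, b in betas.items()}
for name, ratio in odds_ratios.items():
    print(f"{name}: {ratio:.2f}")
```

For example, a transfer multiplied the odds of a dissatisfied rating by roughly 2.8, while a mutual ending cut them by more than half.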

Exit survey comments corroborated that an operator’s manner was important to users. The absence of a professional tone and a lack of warmth or empathy were both cited as reasons they were dissatisfied:

This was my first time using this chat but I’ve used other live chats before and even though they couldn’t always help me they were a lot warmer with their reply. ‘I doubt it’ isn’t the best to use when trying to help someone during a chat service.
This was extremely unhelpful. I have a quiz and I had difficulty finding an article. I thought the response was extremely rude and unhelpful.

Delays in having their question answered—whether because users were referred to another service point or transferred to another operator—were also common themes:

This was my second time using this option and both times I am told the shift is ending and request to transfer me.
It seemed to take longer than it should (about 30 minutes with 2 librarians) to find out that Library didn’t have an online subscription to a journal.

Several comments were made about the operator not being from the user’s institution:

The operators are not [University] librarians, and hence are not aware of the resources at [University], which was the subject of my question.
I thought I was connecting with someone at my institution. I think this service is good for general question[s] but not really for institution specific questions. So it was a bit of a let down.

Discussion of Findings

The purpose of this study was to identify operator behaviors that contribute to user dissatisfaction. A series of chi-square tests of independence on 473 chat transcripts with completed exit surveys (of which 217 contained dissatisfied or neutral responses) found 11 behaviors that were significantly associated with dissatisfaction. Further investigation with a binary logistic regression revealed that only six of these had strong explanatory power. Three of these behaviors had positive associations, meaning that their presence in the chat explained increases in user dissatisfaction. Those behaviors were (1) clarification; (2) the operator transferring the chat to another operator; and (3) the operator referring the user to another service point. A further three behaviors were negative predictors, meaning that their presence explained decreases in user dissatisfaction. Put another way, the absence of these behaviors explained increases in dissatisfaction: (4) ending the chat mutually; (5) maintaining a professional tone; and (6) showing interest in the question or empathy with the user.

These results add an interesting layer to Nilsen’s Library Visit Study.43 Phase III of that study concluded that bypassing the reference interview, providing unmonitored referrals, and failing to ask follow-up questions were associated with user dissatisfaction. Of the two variables included in our study that relate to the reference interview, clarification and confirmation, only clarification was significantly associated with dissatisfaction, but it was a positive association. In the exit survey comments, dissatisfied users said they were frustrated when the operator did not understand their information need, similar to comments collected by Nilsen.44 This suggests that users want to be clearly understood but do not want to spend time explaining themselves in depth. We hypothesize that this is a function of chat as a reference medium. It can be difficult to express complex concepts textually, so perhaps users are frustrated at being unable to make their needs clear in a fast, easy way.

Our study also corroborates Nilsen’s designation of referrals as a dissatisfying behavior. We found that the presence of a referral in a chat was a strong positive predictor of user dissatisfaction. Nilsen’s third behavior, asking follow-up questions, was represented in our study by two variables, satisfaction check and invitation to return. Though satisfaction check was not independent of dissatisfaction in the chi-square tests, it was ultimately not a statistically significant predictor in the regression model. Invitation to return had no association with dissatisfaction. It should be noted, though, that Nilsen did not count automated messages as true invitations to return.45 Our study allowed these “canned” messages because operators on the service were trained to use them as a time-saving measure. Since so many operators use these automated messages as directed, the researchers felt that discounting them would result in a less usable dataset.

The way a chat ends explains the user’s dissatisfaction, according to our findings. Specifically, chat endings that were not mutual explained user dissatisfaction. Previous stages of the Library Visit Study have focused on chat termination, drawing on Nolan’s theories.46 Nolan offers time as a policy-institutional factor that influences an operator’s decision to terminate a chat. As with transfers, chat operators may wish to rush a closing or simply leave a chat because their shift is ending. The medium of chat might also cause a user to be slow in responding to an operator if they are multitasking with multiple browser windows open, causing an operator to believe they no longer wish to continue the interaction. Though the present research project cannot explain why these unsatisfactory terminations occurred, it suggests that mutual chat endings will help avoid dissatisfaction in users.

The difference in findings between our study and Nilsen’s could be accounted for by differences in methodology. The Library Visit Study was unobtrusive; individuals recruited by the researchers to act as users initiated and then reported on reference interactions with operators who did not know they were being observed.47 The “users” in that study were always aware that their purpose was to evaluate the operator, even though they were directed to ask a question that mattered to them personally. They may have reflected on the interaction differently if it had occurred organically. Our study, by contrast, used obtrusive means to measure user dissatisfaction: the interactions collected were in no way influenced by the researchers, though the users knew they were evaluating the operator in their exit surveys. We also suspect that the user’s investment in reference as a professional practice matters. In the Library Visit Study, the participants were recruited from an MLIS program. Library science students may have different expectations about the operator’s behavior than “civilians” would. They may have knowledge of the reference interview, giving them a rubric with which to judge the interaction that users in our study might not have.

Logically, behaviors associated with delaying completion of the user’s information need could result in dissatisfaction. Connaway, Dickey, and Radford found that convenience and immediacy were the most valued factors of chat reference.48 Transferring the user to another operator or referring the user to another service point delays them from getting a definite answer, and our research suggests that both behaviors explain dissatisfaction. Unfortunately, the operator often has good reasons for these behaviors. At Ask a Librarian, transfers usually happen at shift change, when the operator needs to leave but the user wants to continue chatting. Similarly, referrals are common in consortia, where the operator may lack the subject expertise or local knowledge needed to complete the question. The operator may provide a referral to save the user’s time, something the user may appreciate later even if it is frustrating in the present. Both of these scenarios may feel to the user like unnecessary delays, despite the operator’s best intentions.

We suspect that telling the user “no” was not an explanatory variable because it does not delay the user. It provides a definite answer that the user can act upon immediately. This is good news for operators, since saying “no” often cannot be avoided. A common example from our practice is when the user’s institution does not have access to a particular article or book. Having confirmation that this is the case, the user can choose to place an interlibrary loan request or find another source, both decisions that can be made immediately.

Our results showed that the operator’s manner had a strong influence on user dissatisfaction. Interactions where the operator was rude or abrupt, and/or failed to show empathy to the user or interest in their question, were more likely to receive dissatisfied scores. Maintaining a professional demeanor is a basic expectation of customer service, and instances where it was not maintained in the sampled transcripts were dismaying to the researchers. We found several chats where the operator’s first response to the user’s query was “I doubt it” or “we can try,” which came off as very abrupt ways to begin a chat. Failing to show interest and empathy, too, was much more common than we had anticipated: it was absent in 103 (40%) of satisfied chats and 135 (62%) of dissatisfied chats. We hypothesize that these instances may be born of the operator’s desire to “get down to business,” something that might be more common if the operator is chatting with multiple users or if their shift is about to end. It can be easy to forget the user’s relational needs when they are not physically in front of you and you are stressed from managing multiple chats at once. Happily, there seemed to be more leeway in the operator’s communication style. Informality was not a predictor of dissatisfaction in our study, though it is possible that some types of users prefer one style more than others. The present study did not distinguish types of users, but this might be a fruitful avenue for future research.
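The counts reported above form a 2×2 contingency table (satisfied vs. dissatisfied × interest/empathy absent vs. present), and a chi-square test of independence on it illustrates the kind of screening test the study describes. This is a worked check with scipy on the published counts, not a reproduction of the authors' SPSS output.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table from the counts reported above: interest/empathy was absent
# in 103 of 256 satisfied chats (473 - 217) and 135 of 217 dissatisfied chats.
table = np.array([[103, 256 - 103],   # satisfied: empathy absent, present
                  [135, 217 - 135]])  # dissatisfied: empathy absent, present

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.6f}")
```

The resulting p-value is far below conventional significance thresholds, consistent with the study's finding that absence of interest and empathy is associated with dissatisfaction.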


Our methodology carries a few limitations that should be noted when considering the generalizability of our findings. We used exit surveys to gauge the user’s satisfaction or dissatisfaction with a chat interaction. This approach can be problematic because it only measures the user’s feelings in the moment. The exit survey response rate is also a consideration. Our chat software only presented the exit survey to users who completed the chat without prematurely closing the browser window; users who left by closing the window without clicking an “end chat” button would not have been invited to participate. Next, user satisfaction is only one measure of a chat’s success. Other transcript analysis studies have assessed the quality of the answer provided by the operator.49 Finally, our quantitative analysis does not include potential confounding variables in the regression model.


There are many factors that can cause a user to leave an interaction less satisfied than operators might like. Though it is impossible to control for all of them, our research suggests that there are some things operators can do to decrease the likelihood that a user will leave dissatisfied:

  • Avoid being abrupt or rude. The user has no visual or tonal cues, so ensuring that your words are polite and welcoming is even more important than in face-to-face reference.
  • Avoid being “all business” during the chat. Users appreciate your interest and empathy.
  • Avoid transferring the user to another operator. Though not always possible, staying on with the user as long as you can reduces delay for them.
  • Avoid referring the user to another service point or staff member. You might be able to contact other service points on their behalf instead.
  • Avoid terminating the chat before the user is ready. Wait until they acknowledge your closing messages before you leave.

Our research also indicates that there are some unavoidable behaviors associated with dissatisfaction. Asking clarifying questions is necessary for understanding the user’s information need and thus must be employed.

Finally, our research suggests that operators should worry less about revealing that they are from a different institution than the user, how hard they attempt to resolve the question, telling the user “no,” how formal or informal their communication style is, inviting the user to return, and performing a satisfaction check. While these behaviors might make a difference to satisfaction, our study found that they made no significant difference to dissatisfaction.


Thank you to Olesya Falenchuk at the OISE Education Commons for her help with planning the statistical analysis and interpreting SPSS’s outputs. Thank you to Amy Greenberg for her contributions to the research team, especially coding the chat transcripts.

APPENDIX. Variables Coded in Transcripts




Codebook Description


Opening Behaviors


Clarification

RUSA 3.1.8.50

The operator asked an open- or closed-ended question about the user’s information need.

Operator: Could you tell me a little more about your topic and what you have found so far, [Patron]?


Confirmation

RUSA 3.1.5.51

The operator confirmed that their understanding of the user’s information need was correct, usually by paraphrase or closed-ended question.

Operator: Okay, you are asking about citing online archival material, specifically whether you should be indicating that your sources are online ones. Is that correct?

Attempt to Resolve

Keyes and Dworak found that, in 5 percent of interactions in their study, the operator failed to make sufficient effort.52

The operator provided a bare minimum of support to the user. The operator’s effort should have matched the complexity of the question. Common examples of non-attempts include:

  • Providing a link with no context as an answer
  • Trying something obvious then giving up
  • Not looking for local instructions

Operator: Sorry, I can’t access that information online. Are you able to visit the info desk at [Branch]? That would be the best way to find out.

Operator: What do you mean a booking?
User: Like when I am booking a study room.
Operator: I’m not from [University] and I don’t see any operators from [University]. I looked this up and could not find information. You should call the library and ask. Sorry.

Closing Behaviors

Satisfaction Check

RUSA 5.1.1.53

The operator checked to see if they answered the user’s question or if they were satisfied with some element of the service.

Operator: Is that what you’re looking for?

Invitation to Return

RUSA 5.1.2.54

The operator invited the user to return either with a “canned message” or in the operator’s own words.

Operator: Thank you for using Ask a Librarian chat. Remember to come back if you have more questions.

Chat Ended Mutually

Duinkerken, Stephens, and MacDonald included premature endings in their study. The categories were drawn from our professional practice.55

Both the operator and the user acknowledged and agreed that the chat was ending.

Operator: Is there anything else I can do for you today?
User: No, thank you. Have a good day :)
Operator: You as well!
<User closed chat>

Anytime Behaviors

Institution Match Reveal

Bishop, Kwon, and other researchers in consortial chat settings have examined nonlocal operators.56 Our professional practice made us wonder whether outcomes differ when a user realizes they are chatting with a nonlocal operator.

The operator revealed to the user that they are not from the user’s institution or, within the same institution, not from the user’s campus. The operator must have explicitly stated this; coder inferences are not sufficient.

Operator: Are you looking for [an] article then? Is CCT Computer and Tech, sorry I’m not from [University].


Professional Tone

RUSA 3.1.2.57

The operator maintained a professional and courteous tone. They were never rude, abrupt, inappropriate, or unprofessional.

Operator: how you doing, User?
User: just great. you?
Operator: :0

Operator: You’ll need to sift through the results. i’ll give you the lib guide in a sec…
Operator: So offtopic but i live in [Town] too!!
User: I think I found an article that could be good
Operator: Great!
User: whoa: where does it say I’m from [Town]?
User: but cool!
Operator: It tells me the country, city, [state/province], internet provider and that you use [Telecom]. :)
Operator: And that you’re an undergrad.
Operator: And on chrome. :)


Transfer

Our professional practice includes transferring users to other operators at the end of chat shifts. We were curious to see if this affected dissatisfaction.

The user is transferred from at least one operator to another during the course of the chat. No warning messages are required to qualify.

Operator1: Great; thanks! I’ll take a look, but just to let you know my shift is ending in a few minutes. I’d be happy to transfer you to another librarian, though, who can help you further
User: That’s awesome; thank you!
Operator1: Ok, I’ll transfer you over to [Operator2]. Please give me a few moments…
User: Okay. Thanks :)
System: Please wait while I transfer the chat to ‘[Operator2].’
System: You are now chatting with ‘[Operator2].’


Referral

Ward and Jacoby, among many others, studied referrals in chat reference.58

The operator shared contact information or advised the user to contact another staff member or a different service point to complete the question.

Operator: Also, if still no luck by tomorrow. We do have a librarian who may know about spss. Unfortunately, she isn’t working tonight. But you can find her contact info here. <a href=“URL”>[Librarian]</a>

Interest and Empathy

RUSA 2.0.59

The operator made it clear that they cared about the user and/or the user’s question. Behaviors like “small talk” (such as how are you, talking about the weather), exhibiting kindness (examples: sympathizing with problems, acknowledging difficulties), showing support (encouraging user), and offering the user congratulations are examples of interest and empathy.

Operator: That looks like an interesting question! Which course or department is this for? I want to get you the best resources. :)

Operator: Sorry. That’s frustrating; they should give PDF copies.


Informality

Waugh had subjects compare a chat with a formal operator to one with an informal operator and collected their impressions.60

The operator tended to use more informal language during the chat including:

  • Sentence fragments
  • Emojis
  • Contractions
  • Abbreviations
  • Lack of punctuation
  • Lack of capitalization
  • “Prosodic features” like ellipsis for passage of time
  • Reactions (like “lol”)
  • Multiple punctuation for emphasis (such as more than one question mark at the end of a question or exclamation point at the end of a statement)

Operator: Okay. No worries. Technology troubles again :)

Operator: Aaaaand it won’t because we only have the digital content for this journal from 1965–1984
Patron: :’(
Operator: Yup, that sucks. But that’s how you can get around the technical problem that’s happening right now. Proquest still wouldn’t have been able to find this article. If you really really need it, you can request it via interlibrary loan.


Telling the User “No”

Our professional practice made us curious as to whether users were dissatisfied only because they could not complete their information needs.

At some point in the chat, the user found out they could not do something they wanted due to technical, policy, library collection, or any other reason. The operator did not actually have to say the word “no.”

Operator: Okay. I’m afraid I don’t think there is anything I can do to help with this. I don’t have access to the back end to be able to see why it is missing.

Operator: The bad news is we don’t have this article online—even though we do have more recent online volumes of the journal. The good news, though, is you can get the article in print at the [Branch Science Library].


1. Nahyun Kwon and Vicki L. Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services,” Reference & User Services Quarterly 47, no. 2 (2007): 137–48; Klara Maidenberg and Dana Thomas, “Do Patrons Appreciate the Reference Interview? Virtual Reference, RUSA Guidelines and User Satisfaction” (2014 Library Assessment Conference, Seattle, WA, 2014), 697–705; Steven Baumgart, Erin Carrillo, and Laura Schmidli, “Iterative Chat Transcript Analysis: Making Meaning from Existing Data,” Evidence Based Library and Information Practice 11, no. 2 (June 20, 2016): 39–55, https://doi.org/10.18438/B8X63B.

2. Reference & User Services Association (RUSA), “Guidelines for Behavioral Performance of Reference and Information Service Providers,” Reference & User Services Association (RUSA), (Sept. 29, 2008), available online at www.ala.org/rusa/resources/guidelines/guidelinesbehavioral [accessed 30 November 2018].

3. Matthew R. Marsteller and Danianne Mizzy, “Exploring the Synchronous Digital Reference Interaction for Query Types, Question Negotiation, and Patron Response,” Internet Reference Services Quarterly 8, no. 1/2 (2003): 149–65, https://doi.org/10.1300/J136v08n01_13.

4. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

5. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

6. Wyoma van Duinkerken, Jane Stephens, and Karen I. MacDonald, “The Chat Reference Interview: Seeking Evidence Based on RUSA’s Guidelines,” New Library World 110, no. 3/4 (2009): 107–21, https://doi.org/10.1108/03074800910941310.

7. Kelsey Keyes and Ellie Dworak, “Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library: A Standards-Based Assessment,” Journal of Academic Librarianship 43, no. 6 (2017): 469–78.

8. Keyes and Dworak, “Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library.”

9. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

10. Kwon and Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services,” 145.

11. Greta Valentine and Brian D. Moss, “Assessing Reference Service Quality: A Chat Transcript Analysis,” in At the Helm: Leading Transformation (Baltimore, MD: ACRL, 2017), 67–75.

12. Maidenberg and Thomas, “Do Patrons Appreciate the Reference Interview?”

13. Jeffrey Pomerantz, Lili Luo, and Charles R. McClure, “Peer Review of Chat Reference Transcripts: Approaches and Strategies,” Library & Information Science Research 28, no. 1 (2006): 24–48.

14. Pomerantz, Luo, and McClure, “Peer Review of Chat Reference Transcripts”; RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

15. Adolfo G. Prieto, “Humanistic Perspectives in Virtual Reference,” Library Review 66, no. 8/9 (2017): 695–710, https://doi.org/10.1108/LR-01-2017-0005.

16. Prieto, “Humanistic Perspectives in Virtual Reference,” 701.

17. Jennifer Waugh, “Formality in Chat Reference: Perceptions of 17- to 25-Year-Old University Students,” Evidence Based Library and Information Practice 8, no. 1 (2013): 19–34, https://doi.org/10.18438/B8WS48.

18. Waugh, “Formality in Chat Reference.”

19. Jack M. Maness, “A Linguistic Analysis of Chat Reference Conversations with 18–24 Year-Old College Students,” Journal of Academic Librarianship 34, no. 1 (2008): 31–38, https://doi.org/10.1016/j.acalib.2007.11.008.

20. Marie L. Radford and Gary P. Radford, Library Conversations: Reclaiming Interpersonal Communication Theory for Understanding Professional Encounters (Chicago, IL: Neal-Schuman, an imprint of the American Library Association, 2017).

21. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

22. Kwon and Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services.”

23. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

24. Vera J. Lux and Linda Rich, “Can Student Assistants Effectively Provide Chat Reference Services? Student Transcripts vs. Librarian Transcripts,” Internet Reference Services Quarterly 21, no. 3/4 (2016): 115–39, https://doi.org/10.1080/10875301.2016.1248585.

25. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

26. Nahyun Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service,” Information Research 11, no. 2 (2006): 70–91.

27. David Ward and JoAnn Jacoby, “A Rubric and Methodology for Benchmarking Referral Goals,” Reference Services Review 46, no. 1 (2018): 110–27, https://doi.org/10.1108/RSR-04-2017-0011.

28. Bradley Wade Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?” portal: Libraries & the Academy 12, no. 4 (2012): 355–70.

29. Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?” 367.

30. Pomerantz, Luo, and McClure, “Peer Review of Chat Reference Transcripts.”

31. Stephanie J. Graves and Christina M. Desai, “Instruction via Chat Reference: Does Co-Browse Help?” Reference Services Review 34, no. 3 (2006): 340–57.

32. Marsteller and Mizzy, “Exploring the Synchronous Digital Reference Interaction for Query Types, Question Negotiation, and Patron Response.”

33. Jo Kibbee, David Ward, and Wei Ma, “Virtual Service, Real Data: Results of a Pilot Study,” Reference Services Review 30, no. 1 (2002): 25–36, https://doi.org/10.1108/00907320210416519; Joan C. Durrance, “Reference Success: Does the 55 Percent Rule Tell the Whole Story?” Library Journal 114, no. 7 (Apr. 15, 1989): 31–36.

34. Cassidy R. Sugimoto, “Evaluating Reference Transactions in Academic Music Libraries,” Music Reference Services Quarterly 11, no. 1 (2008): 1–32, https://doi.org/10.1080/10588160802157124.

35. Sugimoto, “Evaluating Reference Transactions in Academic Music Libraries.”

36. Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service.”

37. Kirsti Nilsen, “The Library Visit Study: User Experiences at the Virtual Reference Desk,” Information Research 9, no. 2 (2004), available online at www.informationr.net/ir/9-2/paper171.html [accessed 25 September 2018]; Kirsti Nilsen, “Comparing Users’ Perspectives of in-Person and Virtual Reference,” New Library World 107, no. 3/4 (2006): 91–104, https://doi.org/10.1108/03074800610654871.

38. Nilsen, “The Library Visit Study”; Nilsen, “Comparing Users’ Perspectives of in-Person and Virtual Reference.”

39. Nilsen, “The Library Visit Study.”

40. Nilsen, “Comparing Users’ Perspectives of in-Person and Virtual Reference,” 96.

41. Baumgart, Carrillo, and Schmidli, “Iterative Chat Transcript Analysis.”

42. Rosaline S. Barbour, “Checklists for Improving Rigour in Qualitative Research: A Case of the Tail Wagging the Dog?” BMJ 322, no. 7294 (May 5, 2001): 1115–17, https://doi.org/10.1136/bmj.322.7294.1115.

43. Nilsen, “The Library Visit Study”; Nilsen, “Comparing Users’ Perspectives of in-Person and Virtual Reference.”

44. Nilsen, “The Library Visit Study.”

45. Nilsen, “The Library Visit Study.”

46. Catherine Sheldrick Ross and Patricia Dewdney, “Negative Closure: Strategies and Counter-Strategies in the Reference Transaction,” Reference & User Services Quarterly 38, no. 2 (1998): 151–63; Christopher W. Nolan, “Closing the Reference Interview: Implications for Policy and Practice,” RQ (1992).

47. Nilsen, “The Library Visit Study.”

48. Lynn Sillipigni Connaway, Timothy J. Dickey, and Marie L. Radford, “‘If It Is Too Inconvenient I’m Not Going after It’: Convenience as a Critical Factor in Information-Seeking Behaviors,” Library & Information Science Research 33, no. 3 (2011): 179–90.

49. Deborah L. Meert and Lisa M. Given, “Measuring Quality in Chat Reference Consortia: A Comparative Analysis of Responses to Users’ Queries,” College & Research Libraries 70, no. 1 (Jan. 1, 2009): 71–84, https://doi.org/10.5860/crl.70.1.71; Kate Fuller and Nancy H. Dryden, “Chat Reference Analysis to Determine Accuracy and Staffing Needs at One Academic Library,” Internet Reference Services Quarterly 20, no. 3/4 (2015): 163–81; Marie L. Radford and Lynn Silipigni Connaway, “Not Dead Yet! A Longitudinal Study of Query Type and Ready Reference Accuracy in Live Chat and IM Reference,” Library & Information Science Research 35, no. 1 (2013): 2–13.

50. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

51. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

52. Keyes and Dworak, “Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library.”

53. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

54. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

55. Duinkerken, Stephens, and MacDonald, “The Chat Reference Interview.”

56. Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?”; Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service.”

57. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

58. Ward and Jacoby, “A Rubric and Methodology for Benchmarking Referral Goals.”

59. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”

60. Waugh, “Formality in Chat Reference.”

*Judith Logan is User Services Librarian at the University of Toronto, email: judith.logan@utoronto.ca; Kathryn Barrett is Social Sciences Liaison Librarian at the University of Toronto Scarborough Library, email: kathryn.barrett@utoronto.ca; Sabina Pagotto is Client Services & Assessment Librarian at Scholars Portal, email: sabina@scholarsportal.info. ©2019 Judith Logan, Kathryn Barrett, and Sabina Pagotto, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.

