
Is the Library One-Shot Effective? A Meta-Analytic Study

The one-shot instruction session is a dominant mode of teaching in academic libraries. While many conference presentations and articles about one-shot methods have been shared, there is little consensus about whether a single library session promotes student learning on information literacy topics. This meta-analysis gathers studies that employ quantitative measures of student learning in an attempt to determine whether the one-shot is an effective modality for learning. Results indicate the need for a more critical look at the grouping of the “one-shot” as a single methodology, and for further robust research on the acquisition of student learning outcomes in the one-shot context.

In academic libraries, the most pervasive method of providing information literacy instruction is the so-called “one-shot,” where students in a given course attend a single session facilitated by a librarian during their academic term to learn about conducting library research (Nicholson, 2016). Since the one-shot is so broadly used in the academic library context, there is a clear need for a meta-analytic review of the literature, so that library practitioners can make informed choices about how to spend their time, resources, and efforts in supporting undergraduate student learning. The aim of this meta-analysis is to attempt to answer the research question: Does the academic library one-shot result in improved information literacy knowledge and/or skills of undergraduate students?

So-called “bibliographic instruction” emerged in the 1980s and, by the 1990s, was firmly entrenched as a key component of academic librarians’ work (Martin & Jacobson, 1995). Almost as quickly, the one-shot began to receive criticism of its effectiveness as a teaching tool (Gavin, 1994). As Bowles-Terry and Donovan put it, “one-shot instruction sessions were born out of necessity,” yet they have continued to be the dominant mode of work for instruction librarians (Bowles-Terry & Donovan, 2016, 137; for more recent statistics, please see Hsieh, Dawson & Yang, 2021, 3). While many librarians may prefer other ways of engaging with students, the constraints on library instructional engagement remain. Because one-shot sessions are so prevalent (and encompass a wide variety of pedagogical approaches), there is a substantial and divided literature about the overall goals of the one-shot model of information literacy instruction, as well as the method’s efficacy. Can meaningful learning occur in a single librarian-mediated session? Some practitioners claim that this method is “preposterous for many librarians” (Walker & Pearce, 2014), while others assert that the one-shot is a valuable tool for student learning (for one example, see Wang, 2016). In her recent editorial, Pagowsky (2021) claims that the one-shot is “transactional and keeps us in cycles of ineffectiveness.” The literature ranges from self-reports of student attitudes and behaviors toward the library, to pre- and post-testing of specific knowledge, to authentic assessments[1] of information literacy in student work at the end of a term. Although the profession was called upon to robustly assess learning outcomes in instruction as early as Davinson (1984) and Barclay (1993), and the literature has grown substantially since then, there continues to be little agreement on which outcomes are meaningful measures, on best practices for experimental design, and on assessment instruments.

The category of the “one-shot” includes a diverse range of pedagogical approaches, goals, and outcome measures, all bound together by the common quality of being a single, standalone, time-limited session (often 45 or 60 minutes, though sometimes as short as 15). Within the academic library context, the one-shot is meant to deliver information literacy knowledge, which can be measured and understood in a variety of ways. Sobel & Sugimoto (2012) found that instruction librarians use a wide variety of assessment tools and outcome measures, with a focus on access and resource selection. Between 2000 and 2016, many studies used the ACRL Information Literacy Competency Standards (2000) to determine outcome variables (for instance: Hsieh & Holden, 2010; Rosenblatt, 2010), and other studies continue to use specialized subject-matter information literacy standards from ACRL (such as Tran et al., 2018). After the adoption of the ACRL Framework for Information Literacy in 2016, the literature moved away from Standards-based outcomes assessment toward concept-driven outcomes (examples: Hurley & Potter, 2017; Tomaszewski, 2021). Still other studies employ entirely different frameworks for information literacy assessment, such as the Research Readiness-Focused Assessment (Wang, 2016), or focus on the diversity of source types used (Howard et al., 2014). Despite the wide variety of outcome measures and assessment types, these studies are often framed as measuring the efficacy of the one-shot in teaching information literacy–related outcomes.

Within the literature examining one-shot library instruction, study quality also varies widely. Early assessment literature from the 2000s has been criticized for its reliance on convenience sampling and its lack of emphasis on long-term retention of skills (Spievak & Hayes-Bohanan, 2013). More recent literature tends to be better designed and executed, though longitudinal studies remain scarce. Additionally, most of the literature focuses on early-career undergraduate populations, such as introductory writing courses or college-preparedness courses, with very few studies examining graduate populations.

Methods

Meta-analysis was selected as the most relevant method for examining the aggregate impact (or lack thereof) of the one-shot. A meta-analysis is a formal review that collects relevant studies examining the same topic (though possibly with different methods or specific outcomes) and synthesizes their results using statistical methods.

Relevant studies were identified by searching LISTA (Library, Information Science, and Technology Abstracts), Library Literature and Information Science Full Text (H.W. Wilson), ERIC, and Education Source in October 2021. The search strategy combined the terms (“one shot” OR “one-shot” OR “single-session” OR “bibliographic instruction”) AND (assess* OR evaluat* OR measur* OR test*) AND (undergraduate OR college OR university) AND (librar* OR information literacy), and was not restricted by date range. The bibliographies of studies that met the search criteria were mined to identify additional studies.

Case studies, quasi-experimental studies, and experimental studies were all considered for this meta-analysis. To be included, studies had to test the educational effectiveness of a one-shot library session, meaning that researchers examined a dependent variable that measured knowledge acquisition gained from the one-shot experience. Studies that examined GPA or course grade as outcome variables were excluded, as these are controversial metrics with many potential confounding variables (examples: Fisher, 2018; Robertshaw & Asher, 2019). Studies that measured affect, confidence, or anxiety were excluded, as were studies that investigated the effectiveness of online modules, multisession instruction, or term-long courses. Only peer-reviewed journal publications were considered for inclusion; prior to making this decision, recent conference proceedings from LOEX and ACRL were examined. Eligible participant populations were undergraduate students at community colleges, four-year colleges, and universities. Studies were not limited by publication date. Due to the researcher’s language limitations, only studies in English were included. The researcher reached out to a small group of authors to ask for additional information about their studies to determine whether they could be included. While there is an emerging best practice in the social sciences to include two independent reviewers in conducting a meta-analytic review (Siddaway, Wood & Hedges, 2019), the present meta-analysis was conducted by a single reviewer.

As shown in figure 1, 3,058 potentially relevant citations were retrieved from the structured literature search. Titles and abstracts were reviewed to determine the relevance of each article and to preliminarily identify whether it reported a research study. Of those, 66 articles were subject to in-depth review, including examination of the methods and of the data reported in the results and discussion. To be included in the meta-analysis, the study design needed to be experimental or quasi-experimental and include a measure that indicated a difference due to the one-shot intervention. Nine of those articles included relevant data, such as a t-statistic, chi-squared statistic, or means and standard deviations, from which the researcher could calculate an effect size estimate. Articles that did not report the sample size, n, were excluded. Two additional articles were identified by citation mining.

FIGURE 1

Flow Diagram of the Meta-analytic Review Process


After identifying the 11 articles for the present meta-analysis, the researcher entered data extraction elements into an Excel spreadsheet. Some articles reported more than one usable study or variable (such as M.E. Cohen et al., 2016). Elements included the independent variable, dependent variable, study size n, degrees of freedom, one-tailed t-value, p-value, and effect size r. For articles that reported t-statistics, the effect size r was calculated using Cohen’s equation:

$$ r = \sqrt{\frac{t^2}{t^2 + df}} $$

(Cohen, 1965). Only two articles in the meta-analysis reported chi-squared statistics; for these, the effect size was calculated using:

$$ r = \sqrt{\frac{\chi^2}{n}} $$

(Rosenthal, 1991). The articles were also coded for population characteristics (in other words, major or general education, and class standing), institution type, length of the one-shot session, and experimental design (that is, authentic assessment versus pre- and post-testing) in order to conduct moderator analyses. Please see the appendix for the codebook. The total n across these studies is 1,572. From the 11 identified studies, 16 effect sizes could be calculated. Fisher’s transformation (zr) was also calculated by the researcher. A summary table is provided in figure 2.
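These conversions are straightforward to verify. Below is a minimal sketch in R (the language underlying the jamovi analysis described next); the function names are illustrative, but the formulas follow Cohen (1965) and Rosenthal (1991) exactly as given above, and the worked example reproduces the first M.E. Cohen et al. (2016) row of figure 2.

```r
# Effect size r from a t-statistic (Cohen, 1965) and from a
# chi-squared statistic (Rosenthal, 1991), plus Fisher's transformation.
r_from_t     <- function(t, df) sqrt(t^2 / (t^2 + df))  # r from t and df
r_from_chisq <- function(chisq, n) sqrt(chisq / n)      # r from chi-squared and n
fisher_z     <- function(r) atanh(r)  # z_r = 0.5 * log((1 + r) / (1 - r))

# Worked example: first M.E. Cohen et al. (2016) row of figure 2.
r_from_t(t = -6.94, df = 67)            # ~0.6467
fisher_z(r_from_t(t = -6.94, df = 67))  # ~0.7696
```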

All data were analyzed using jamovi 1.6 and the MAJOR meta-analysis module for jamovi.
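For readers who prefer a scriptable equivalent, MAJOR is a jamovi front end to the R package metafor, so the analysis can be approximated as below. This is a minimal sketch, not the author’s actual extraction spreadsheet: the data frame and column names (zr, n, vi) are illustrative, the effect sizes and sample sizes are taken from figure 2, and results should only approximately match the pooled estimates reported in the results section.

```r
# Sketch of the random effects meta-analysis using metafor,
# the R package that the MAJOR jamovi module wraps.
library(metafor)

# Fisher-transformed effect sizes (zr) and sample sizes (n) from figure 2.
dat <- data.frame(
  zr = c(0.7696, 0.3099, 0.0656, 0.0854, 0.2407, 0.3868, 1.3003, 0.8509,
         0.0900, 0.3301, 0.0090, 0.1519, 0.1621, 0.6882, 0.7475, 0.6229),
  n  = c(64, 24, 207, 199, 207, 184, 31, 26, 200, 32, 38, 119, 90, 69, 69, 13)
)
dat$vi <- 1 / (dat$n - 3)  # sampling variance of Fisher's z

res <- rma(yi = zr, vi = vi, data = dat, method = "REML")  # random effects model
summary(res)     # pooled z_r, 95% CI, and heterogeneity statistics
tanh(coef(res))  # back-convert the pooled Fisher's z to r
forest(res)      # forest plot, as in figure 3
```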

FIGURE 2

Summary Table of Articles Included in Meta-analysis

| Article | Dependent Variable | n | df | t | Effect Size r | zr |
|---|---|---|---|---|---|---|
| M.E. Cohen et al. 2016 | Score on information literacy (IL) content quiz | 64 | 67 | –6.94 | 0.646698 | 0.7696 |
| M.E. Cohen et al. 2016 | Score on IL content quiz | 24 | 23 | –1.51 | 0.30032 | 0.3099 |
| Howard et al. 2014 | Number of sources used | 207 | 205 | 0.94 | 0.06551 | 0.0656 |
| Howard et al. 2014 | Simpson Diversity Index | 199 | 197 | 1.2 | 0.08519 | 0.0854 |
| Howard et al. 2014 | Usage of library sources | 207 | 205 | 3.48 | 0.2362 | 0.2407 |
| Hurst and Leonard 2007 | Number of source types used | 184 | 182 | –5.35 | 0.3686 | 0.3868 |
| Lantzy 2016 | Score on IL content quiz | 31 | 31 | 9.46 | 0.8618 | 1.3003 |
| Lantzy 2016 | Score on IL content quiz | 26 | 24 | 4.69 | 0.6915 | 0.8509 |
| Martin 2008 | Type of information resource used | 200 | 1 | N/A (χ² = 1.612) | 0.0898 | 0.0900 |
| Mery, Newby, and Peng 2012 | Score on IL content quiz | 32 | 62 | 2.6465 | 0.3186 | 0.3301 |
| Portmann and Roush 2004 | Score on IL content quiz about source usage | 38 | 37 | –0.055 | 0.0090 | 0.0090 |
| Spievak and Hayes-Bohanan 2013 | Selection of a reliable source | 119 | 1 | N/A (χ² = 3.85) | 0.1508 | 0.1519 |
| Tewell 2014 | Score on IL content quiz | 90 | 89 | 1.536 | 0.1607 | 0.1621 |
| Walker and Pearce 2014 | Score on IL content quiz | 69 | 68 | –6.134 | 0.5968 | 0.6882 |
| Walker and Pearce 2014 | Score on IL content quiz | 69 | 68 | –6.754 | 0.6336 | 0.7475 |
| Wilhite 2004 | Score on IL content quiz | 13 | 12 | –2.3 | 0.5531 | 0.6229 |

Results

Of the 11 papers under examination in this meta-analysis, five conducted paired-samples t-tests, four conducted independent-samples t-tests, and two conducted chi-squared tests for their statistical analyses. When researchers did not report the type of t-test conducted, the present researcher assumed an independent-samples t-test unless it was explicitly stated that individual scores were paired.

Only one study was truly experimental, taking a sample from a population and dividing it into treatment and control groups (namely, Howard et al., 2014). The majority of the studies in the analysis (n = 8) were quasi-experimental and used a pre-test/post-test design. Most study populations included convenience samples to some degree, which may affect the outcomes of the studies. For example, instructors who opt their courses into a study on educational effectiveness may care more about teaching, and thus be better teachers; students who agree to take an optional class and assessment may care more about learning.

All 11 of the studies in this meta-analysis yielded p-values in the expected, positive direction, indicating that, at the least, one-shot library instruction does not appear to damage student learning. Heterogeneity testing indicates that the studies are significantly heterogeneous (τ = 0.326, p < .001); that is, there is significant variation in the results across studies, which is expected given the wide variety of methods and outcome measures. In a fixed effects model, the overall effect size r is 0.268. In a random effects model (K = 16), z = 4.53, p ≤ .001, 95% CI: (.229, .577).[2] Converting Fisher’s z back to r for ease of interpretation, r = .383. As can be seen in the forest plot in figure 3, half of the studies’ 95% confidence intervals include 0. When a confidence interval includes zero, there is a chance that there is no treatment effect: in this case, that the one-shot makes no difference. Examining the moderator variables, there does not appear to be a specific characteristic that unifies the studies whose confidence intervals exclude zero; they all share characteristics with studies whose intervals include it.
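For reference, the back-conversion reported above uses the standard inverse of Fisher’s transformation:

$$ z_r = \operatorname{artanh}(r) = \tfrac{1}{2}\ln\frac{1+r}{1-r}, \qquad r = \tanh(z_r) = \frac{e^{2z_r}-1}{e^{2z_r}+1} $$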

FIGURE 3

Forest Plot for the Random Effects Model


One study (the first Lantzy, 2016 effect size) may be overly influential, as can be seen in the funnel plot in figure 4; that single estimate creates asymmetry in the plot. A symmetrical funnel plot would indicate that the precision of the studies increases as sample sizes grow. This funnel plot suggests that the literature may be missing smaller studies. It is likely that the asymmetry is due to the strong heterogeneity of the studies under review.

FIGURE 4

Funnel Plot

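Continuing the illustrative metafor sketch from the methods section (with the fitted model `res`), the funnel plot and standard influence diagnostics offer one way to probe whether a single estimate drives the asymmetry:

```r
# Funnel plot and influence diagnostics for the fitted random effects model.
funnel(res)            # funnel plot, as in figure 4
inf <- influence(res)  # case-wise influence diagnostics
plot(inf)
leave1out(res)         # re-fit the model leaving out one estimate at a time
```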

Moderator variables of interest were also considered. Institution type does appear to have an effect (p = .007), with midsize public institutions demonstrating the largest effect sizes. The researcher also investigated whether the type of effect measured is a meaningful moderator variable: does a specific outcome (such as the number of web sources cited) show a different effect size than a more diffuse variable (for instance, change in information literacy score)? Comparing Howard, Nicholas, Hayes & Appelt (2014) with Walker & Pearce (2014) and Portmann & Roush (2004) indicates that the effect sizes of these two groups differ significantly at the p < .05 level (t(1) = 17.0, p = .04). The analysis showed that scores on a standardized information literacy test were more apt to change than the number of web sources cited. For the practitioner, this may indicate that measures of learning that require students to apply their knowledge in new contexts are more difficult to move than answers to test questions aimed specifically at the information literacy concepts taught in the class.

Finally, a publication bias analysis was performed. Publication bias analysis examines whether studies with significant results are more likely to be published than those with null results. As librarianship is a heavily practitioner-focused field, several factors may be at play: practitioners may be more likely to contribute to the gray literature (though a review of recent proceedings of the LOEX conference yielded no additional candidates for inclusion); practitioners may not be incentivized to publish when their methods do not produce a statistically significant result; or there may be little incentive for practitioners to conduct robust assessment of one-shot instruction sessions at all. The Begg and Mazumdar rank correlation was rejected as an approach due to the limited number of studies available (Begg & Mazumdar, 1994). However, the Rosenthal method for fail-safe N finds that approximately 874 null-result studies would be required to nullify the significant overall effect (Rosenthal, 1979), meaning that a relatively large number of unpublished studies would have to exist to drive the overall effect size to zero.
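Assuming the same illustrative data frame as in the methods sketch, Rosenthal’s fail-safe N can be computed with metafor’s fsn() function:

```r
# Rosenthal's fail-safe N: how many unseen null-result studies would be
# needed to raise the combined p-value above alpha (default .05).
fsn(dat$zr, dat$vi, type = "Rosenthal")
```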

Discussion

While it is not possible to draw causal inferences from the studies included in this meta-analysis, as most were quasi-experimental in design, a large body of evidence could begin to make a case for causality. Since only 11 studies were considered here, the researcher hesitates to draw any kind of causal conclusion; the wide 95% confidence interval could also be narrowed by the inclusion of more studies. The heterogeneity of methods is a benefit of the studies meta-analyzed in this article, as it allows us to feel more confident in generalizing the findings (Rosenthal, 1991, 129).

The overall effect size of the one-shot intervention on measures of learning in these 11 studies is approximately .383 (this varies depending on whether a fixed or random effects model is used). According to Cohen (1977), this is a medium effect size, meaning that it is likely to be noticeable. The 95% confidence interval extends as low as .226, meaning that the true effect may be much smaller. Empirically, this meta-analysis indicates that there is some positive effect of the one-shot instruction session. That being said, it is important to note that in every study but one, outcomes were measured directly after the library instruction session. The literature reviewed for this meta-analysis, then, cannot comment on how efficacious one-shot instruction might be over time. Additional investigation and more robust studies are needed.

It should also be noted that effect sizes were consistently smaller in studies that employed an authentic assessment (r < .2 for four of the six such estimates in figure 2). This is an important feature, as it may indicate that one-shot instruction has limited effectiveness in actual skill-building, as opposed to enabling students to answer factual questions on a quiz. Out of interest, the researcher performed meta-analytic procedures on two smaller data sets: one consisting of the articles that employed information literacy–related tests, and the other of the articles that used an authentic assessment. For the studies that employed tests, a random effects model yielded (K = 10), z = 4.71, p ≤ .001, 95% CI: (.335, .814). For studies that employed authentic assessments, such as analysis of source selection, a random effects model yielded (K = 6), z = 3.29, p ≤ .001, 95% CI: (.069, .271). This indicates that one-shots targeted at specific skills measured on a test are more likely to show an effect than those that ask students to perform authentic tasks. As most research requires students to perform novel searches, evaluation, and synthesis, whether the one-shot adequately builds those skills should be considered.
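These subgroup results can be approximated from the assumed data frame in the methods sketch by coding each figure 2 row by dependent-variable type (quiz-score outcomes as “test,” source-use outcomes as “authentic,” which matches the K = 10 and K = 6 splits reported above):

```r
# Subgroup random effects models by dependent-variable type, coded from figure 2.
dat$dv_type <- c("test", "test", "authentic", "authentic", "authentic",
                 "authentic", "test", "test", "authentic", "test", "test",
                 "authentic", "test", "test", "test", "test")

rma(yi = zr, vi = vi, data = dat, subset = (dv_type == "test"))       # K = 10
rma(yi = zr, vi = vi, data = dat, subset = (dv_type == "authentic"))  # K = 6

# Or as a single mixed-effects model with the type as a categorical moderator:
rma(yi = zr, vi = vi, mods = ~ dv_type, data = dat)
```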

This meta-analysis does not review the literature on other instructional models, such as embedding librarians in courses, developing comprehensive online learning modules, or scaffolding information literacy instruction over an entire course in collaboration with faculty instructors. Meta-analyses of those literatures would also be warranted, so that effect sizes could be compared across models.

In the present meta-analysis, two moderator variables were investigated: the type of dependent variable measured and the type of institution where the study was conducted. Both were significant. The significant difference between specific variables and more general outcomes indicates that the field may benefit from clearly defining desired learning outcomes for one-shot library instruction and measuring them across multiple studies. One potential moderator that would be interesting to explore, though not possible with the present literature, is the teaching experience of the librarian.

Perhaps the most important lesson learned from conducting this meta-analysis, though, is not about the effect of one-shot instruction on student learning, but rather about the state of the library instruction literature (or perhaps the preparation that librarians receive in research methods). Of the 3,058 articles initially identified in the structured literature search, only 66 (or ~2%) were studies of educational effectiveness. Of those 66, only 9 (or 13.6%, or ~0.3% of the original set) reported enough data and conducted sufficient statistical analyses to be included in the present meta-analysis. Even within the 11 articles ultimately included in the meta-analysis, there were small problems with the data reported: one article misreported the degrees of freedom in a chi-squared test, causing the researcher to independently redo the analysis. Another article transposed greater-than and less-than signs, so that a finding reported as not statistically significant, with a small t-value, was accompanied by the claim that p < .05. These transcription errors raise some concern about the validity of the data presented in these articles; but, because the literature is so sparse, the researcher opted to include them in the meta-analysis. Two studies were rejected when the reported percentages of students resulted in fractions of humans, indicating that missing values were not adequately reported. The field would benefit greatly from additional studies that clearly and accurately report statistical tests, including precise p-values, effect sizes, and confidence intervals. The vast majority of studies in this field report descriptive statistics (primarily mean values and frequency tables), and a smaller number report percentages that fall into specific categories (“relative risk”), but more meaningful and informative analyses could and should be performed. In addition, well-designed experimental studies of the effect of the one-shot intervention would enrich the field and introduce the possibility of drawing causal inferences. The preponderance of case studies makes it challenging to fully trust the results of this meta-analysis.

Given the continued prevalence of the one-shot in library instruction, the field should consider an increased focus on robust assessment of its efficacy. This may require additional statistical or methodological training and support for library practitioners. Further, the results of the present meta-analysis should encourage the field to reconsider the classification of the one-shot as a single “method.” The one-shot is not a monolith: it encompasses a wide variety of outcome variables, pedagogical strategies, timings, and populations, and being able to compare studies that are more alike would yield more meaningful and actionable results for library practitioners. This study indicates that there is likely more to unpack about what differentiates an effective one-shot from an ineffective one. Replication studies at different institutions may also be warranted. In conclusion, the researcher hopes that this meta-analysis spurs additional investigation in the field and that a more directional meta-analysis may be possible in the future.

APPENDIX

Codebook

| Variable | Possible Values | Definition |
|---|---|---|
| Student Category | Community college students; undergraduate students | The population studied, as identified by the original study authors. |
| Discipline | English; education; business; sociology; unknown/general | The type of course that the one-shot was associated with, as described by the original study authors. |
| Institution Type | Community college; small liberal arts college; mid-sized public; large public | The type of college or university where the study took place, as identified by the original study authors. |
| Experimental Design | Pre- and post-test; quasi-experimental; survey; portfolio assessment | The type of study that was conducted, as reported by the original study authors. Studies that used more than one design were coded as such. |
| Length of Session | 45 minutes; 50 minutes; 60 minutes; 75 minutes; 90 minutes; not described | The length of the one-shot session in which outcomes were measured. Studies that did not report a length were coded “not described.” |
| Analysis Type | Chi-squared; independent-samples t-test; paired-samples t-test | The type of analysis conducted on the collected data to report on the outcome variable. |
| Dependent Variable Type | Test; authentic assessment | The type of measure used to describe change in the dependent variable, as selected and coded by the present researcher. |

Notes

1. According to Wiggins (1989), the creator of this phrase, authentic assessment is a “true test” of student learning, where students are asked to demonstrate application of knowledge in an exemplary and nonstandardized task. Examples include essays, portfolios, and research projects.

2. Fixed effects models assume there is a single central tendency and can only be generalized to other populations of the same ilk. Random effects models generally have a smaller effect size but are more generalizable and amenable to heterogeneity across studies, which is the case in this analysis.

References

American Library Association. (2000). ACRL standards: Information literacy competency standards for higher education. College & Research Libraries News, 61(3). https://doi.org/10.5860/crln.61.3.207

Association of College & Research Libraries. (2015). Framework for Information Literacy for Higher Education. https://www.ala.org/acrl/standards/ilframework

Barclay, D. (1993). Evaluating library instruction: Doing the best with what you have. Reference Quarterly, 33(2), 195–202.

Begg, C.B., & Mazumdar, M. (1994). Operating characteristics of a rank correlation test for publication bias. Biometrics, 50(4), 1088–1101. https://doi.org/10.2307/2533446

Bowles-Terry, M., & Donovan, C. (2016). Serving notice on the one-shot: Changing roles for instruction librarians. International Information & Library Review, 48(2), 137–142. https://doi.org/10.1080/10572317.2016.1176457

Cohen, J. (1965). Some statistical issues in psychological research. In Wolman, B.B. (Ed.), Handbook of clinical psychology. McGraw-Hill.

Cohen, J. (1977). Statistical power analysis for the behavioral sciences. Academic Press.

*Cohen, M.E., Poggiali, J., Lehner-Quam, A., Wright, R., & West, R.K. (2016). Flipping the classroom in business and education one-shot sessions: A research study. Journal of Information Literacy, 10(2), 40. https://doi.org/10.11645/10.2.2127

Davinson, D. (1984). Never mind the quality, feel the width. Reference Librarian 3(10), 29–37. https://doi.org/10.1300/J120v03n10_04

Fisher, Z. (2018). Who succeeds in higher education? Questioning the connection between academic libraries and student success. In Proceedings of the 2018 CARL Conference. Redwood City, CA. http://conf2018.carl-acrl.org/conference-proceedings/

Gavin, C. (1994). Guiding students along the information highway: Librarians collaborating with composition instructors. Journal of Teaching Writing, 13(1 & 2), 225–236.

*Howard, K., Nicholas, T., Hayes, T., & Appelt, C.W. (2014). Evaluating one-shot library sessions: Impact on the quality and diversity of student source use. Community & Junior College Libraries, 20(1–2), 27–38. https://doi.org/10.1080/02763915.2014.1009749

Hsieh, M.L., Dawson, P.H., & Yang, S.Q. (2021). The ACRL Framework successes and challenges since 2016: A survey. Journal of Academic Librarianship, 47(2).

Hsieh, M.L., & Holden, H.A. (2010). The effectiveness of a university’s single-session information literacy instruction. Reference Services Review, 38(3), 458–473. https://doi.org/10.1108/00907321011070937

Hurley, D.A., & Potter, R. (2017). Teaching with the Framework: A Cephalonian approach. Reference Services Review, 45(1), 117–130. https://doi.org/10.1108/RSR-07-2016-0044

*Hurst, S., & Leonard, J. (2007). Garbage in, garbage out: The effect of library instruction on the quality of students’ term papers. Electronic Journal of Academic and Special Librarianship, 8(1). http://southernlibrarianship.icaap.org/content/v08n01/hurst_s01.htm

The jamovi project (2021). jamovi. (Version 1.6) [Computer Software]. Retrieved from https://www.jamovi.org

*Lantzy, T. (2016). Health literacy education: The impact of synchronous instruction. Reference Services Review, 44(2), 100–121. https://doi.org/10.1108/RSR-02-2016-0007

*Martin, J. (2008). The information seeking behavior of undergraduate education majors: Does library instruction play a role? Evidence Based Library and Information Practice, 3(4). https://doi.org/10.18438/B8HK7X

Martin, L.M., & Jacobson, T.E. (1995). Reflections on maturity: Introduction to library instruction revisited: Bibliographic instruction comes of age. Reference Librarian, 24(51–52), 5–13. https://doi.org/10.1300/J120v24n51_02

*Mery, Y., Newby, J., & Peng, K. (2012). Why one-shot information literacy sessions are not the future of instruction: A case for online credit courses. College & Research Libraries, 73(4), 366–377. https://doi.org/10.5860/crl-271

Nicholson, K.P. (2016). “Taking back” information literacy: Time and the one-shot in the neoliberal university. In N. Pagowsky & K. McElroy (Eds.), Critical library pedagogy handbook (Vol. 1, pp. 25–39). ACRL Press.

Pagowsky, N. (2021). The contested one-shot: Deconstructing power structures to imagine new futures. College & Research Libraries, 82(3). https://doi.org/10.5860/crl.82.3.300

*Portmann, C.A., & Roush, A.J. (2004). Assessing the effects of library instruction. Journal of Academic Librarianship, 30(6), 461–465.

R Core Team. (2020). R: A language and environment for statistical computing. (Version 4.0) [Computer software]. Retrieved from https://cran.r-project.org (R packages retrieved from MRAN snapshot 2020-08-24).

Robertshaw, M.B., & Asher, A. (2019). Unethical numbers? A meta-analysis of library learning analytics studies. Library Trends, 68(1), 76–101.

Rosenblatt, S. (2010). They can find it but they don’t know what to do with it: Describing the use of scholarly literature by undergraduate students. Journal of Information Literacy, 4(2), 50–61. https://doi.org/10.11645/4.2.1486

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641.

Rosenthal, R. (1991). Meta-analytic procedures for social research. SAGE Publications, Inc.

Siddaway, A.P., Wood, A.M., & Hedges, L.V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70(1), 747–770. https://doi.org/10.1146/annurev-psych-010418-102803

Sobel, K., & Sugimoto, C.R. (2012). Assessment of learning during library instruction: Practices, prevalence, and preparation. Journal of Academic Librarianship, 38(4), 191–204. https://doi.org/10.1016/j.acalib.2012.04.004

*Spievak, E.R., & Hayes-Bohanan, P. (2013). Just enough of a good thing: Indications of long-term efficacy in one-shot library instruction. Journal of Academic Librarianship, 39(6), 488–499. https://doi.org/10.1016/j.acalib.2013.08.013

*Tewell, E.C. (2014). Tying television comedies to information literacy: A mixed-methods investigation. Journal of Academic Librarianship, 40(2), 134–141. https://doi.org/10.1016/j.acalib.2014.02.004

Tomaszewski, R. (2021). A STEM e-class in action: A case study for asynchronous one-shot library instruction. Journal of Academic Librarianship, 47(5), 102414. https://doi.org/10.1016/j.acalib.2021.102414

Tran, C.Y., Miller, C.-A., & Aveni, D. (2018). Baseline assessment: Understanding WISE freshman students’ information literacy skills in a one-shot library session. Science & Technology Libraries, 37(3), 302–321. https://doi.org/10.1080/0194262X.2018.1460651

*Walker, K.W., & Pearce, M. (2014). Student engagement in one-shot library instruction. Journal of Academic Librarianship, 40(3), 281–290. https://doi.org/10.1016/j.acalib.2014.04.004

Wang, R. (2016). Assessment for one-shot library instruction: A conceptual approach. portal: Libraries and the Academy, 16(3), 619–648. https://doi.org/10.1353/pla.2016.0042

Wiggins, G. (1989). A true test: Toward more authentic and equitable assessment. Phi Delta Kappan, 70(9), 703–713.

*Wilhite, J.M. (2004). Internet versus live: Assessment of government documents bibliographic instruction. Journal of Government Information, 30(5–6), 561–574. https://doi.org/10.1016/j.jgi.2004.10.002

* = included in the meta-analysis

* Dani Brecher Cook is Associate University Librarian, Learning and User Experience at the UC San Diego Library, email: danicook@ucsd.edu. © 2022 Dani Cook, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.
