06_FlierlMaybeeBonem

Developing the Informed Learning Scale: Measuring Information Literacy in Higher Education

Academic libraries continue to face challenges communicating their value. One dimension of this challenge is in demonstrating how information literacy relates to important measures of student learning, like course grades and motivation. This study documents the development and exploratory pilot testing of the Informed Learning scale—which is intended to produce data for institutional reporting purposes at scale in alignment with contemporary IL theory, specifically Informed Learning. Preliminary findings include small correlations between the Informed Learning scale and course grades and moderate correlations between the scale and student perceptions of their learning climate and self-determined motivation.

Introduction

Documenting and communicating the value of academic libraries is an essential challenge of the profession.1 Part of this challenge lies in how Information Literacy (IL) supports the teaching and learning missions of colleges and universities. Using self-perception data from scales measuring IL has been a common and useful approach to demonstrating the impact of IL educational efforts. Self-perception scales provide a solution to the formidable challenge of scalability—how to measure IL across thousands of students. However, many extant IL scales are based in the now-rescinded ACRL IL Standards.2

Theoretical and practical problems with the Standards are well documented.3 The Standards have been described as treating research as a linear, reductive process rather than an iterative and dynamic one,4 lacking transferability and ignoring disciplinary aspects of IL,5 and promoting a deficit model of instruction.6 There is a clear need to measure IL in accordance with contemporary theoretical developments in IL, such as sociocultural,7 critical,8 or Informed Learning9 approaches, which accentuate the contextual nature of IL; yet many IL assessment instruments currently in use rely on a standards-based conception of IL. Results from a 2017 systematic review indicate that the two most popular IL instruments measuring students’ IL self-efficacy, ILSES and IL-HUMASS, are both based in a standards-based conception of IL.10 Informed Learning, the theory that underpins the scale that is the focus of this research, emphasizes the role of engaging with information within the learning process.11 Seeking data on how students use information or the library’s resources is useful and important for librarians and administrators, as this provides evidence for how academic libraries support desired student outcomes like improved general education performance and retention.12 However, there are additional opportunities to learn about the role IL plays in student learning by examining the relationship between IL, course grades, and other variables related to student learning.

The authors present a new IL scale with two purposes in mind. The first purpose is to create an instrument in which IL is viewed as relational and contextually dependent (as opposed to a skills-based conception that does not take into account disciplinary and other factors), in alignment with an Informed Learning theoretical approach. In this new instrument, IL is related to learning, and the knowledge or practices to be learned are drawn from a disciplinary context, such as engineering or history. The second purpose is to pilot the collection of IL data in a way that will correlate with other important institutional measures of student learning, including course grade, student motivation, and learning climate. This paper describes how and why the Informed Learning scale was developed and shares preliminary findings to demonstrate the value of such a scale.

Problem Statement

Existing scales for measuring IL are primarily grounded in skills-based conceptualizations of IL.13 Such scales were primarily created during a time when the ACRL (2000) Information Literacy Competency Standards for Higher Education guided IL work in academic libraries. Rescinded in 2016, the five standards describe IL as: 1) determining needed information; 2) accessing information efficiently; 3) evaluating information; 4) using information effectively; and 5) using information ethically and legally. The IL scales developed during this time tend to measure discrete skills that align with the Standards, such as the ability to frame an information need or evaluate an information source. These kinds of measures align with the data-reporting practices that emerged in the 1980s, which marked the beginning of what Drabinski refers to as the “time of compliance” in which educational institutions provided data to prove they were meeting outcomes to accrediting agencies and funding bodies.14

While there is value in collecting data related to learners’ abilities to perform basic information skills, it is difficult to find efforts in library and information science literature to examine data related to more complex theoretical understandings of IL. For example, Informed Learning, the theoretical perspective on IL developed by Christine Bruce (used in this research), views IL as part of the process of learning.15 An Informed Learning perspective views IL within a larger theoretical context that cannot be examined using scales designed only to measure specific information skills, but rather requires examining how information is used in relationship to disciplinary learning. Skills-based IL scales are not sufficient to measure IL within the learning process.

Audiences external to academic libraries are often concerned with evidence of student learning and success for large numbers of students. Developing instruments that reveal how IL data relates to student outcomes and success metrics may more closely align with data considered meaningful by administrators, accrediting agencies, and funding bodies.16 To address these needs, the current exploratory research aims to develop a scale that produces data for institutional reporting purposes at scale, but also provides data more in alignment with contemporary theoretical developments in IL—providing insight concerning how students understand IL in the context of learning within undergraduate courses.

Literature Review

After Zurkowski introduced the concept in 1974, the academic library community adopted information literacy as necessary for developing the U.S. workforce, framing it as a set of skills needed by college students for academic and professional success.17 While early views of IL were behaviorist, a number of process models were created that were grounded in cognitivist theory, especially constructivism. One of the most influential models in higher education is the Information Search Process (ISP) developed by Kuhlthau (1993), which takes a constructivist approach that considers students’ affective states as they engage in a research process. Views of IL as a set of skills or a process underpinned the development of the ACRL Standards.18 In turn, the Standards framed the development of several instruments used to measure IL in higher education. However, new approaches have emerged over the last several decades that ground IL using specific theoretical lenses, such as critical, sociocultural, and Informed Learning.19 Nevertheless, extant instruments for measuring IL remain largely associated with the now-rescinded ACRL Standards.20

Instruments used to measure learners’ abilities to use information in various contexts tend to rely on self-reported data.21 A recent systematic review of IL scales found 45 studies reporting on 22 scales.22 Such instruments include: IL-HUMASS, Project SAILS, the IL Self-Efficacy Scale (ILSES), B-TILED, and the IL Test (ILT). While some of these instruments may draw from additional theoretical foundations, such as IL-HUMASS drawing from the Society of College, National and University Libraries (SCONUL)—and ILSES from the Big 6 skills, Seven Pillars, and ANCIL standards—all of these measures are primarily based on the ACRL Standards. Additional instruments based on the ACRL Standards include the IL Test for Higher Education,23 Locally Developed IL Test,24 the Virtual Orientation Seminars IL Assessment (VOILA),25 and the Information Skills Survey.26

It should be noted that some measures use performance and behavioral data or evidence of cognitive development to assess the mastery of IL skills to bypass issues typically associated with self-reported data, such as students overreporting their abilities.27 Multiple-choice tests,28 information search tasks,29 and rubrics analyzing student work30 are some examples of these types of measures. Concept maps, portfolios, and many other forms of measuring student work have been used to measure students’ IL abilities as well.31

A number of different conceptualizations of IL are being adopted in higher education that offer a different definition of IL than outlined in the Standards.32 Based on work by Mackey and Jacobson,33 the ACRL Framework for Information Literacy for Higher Education argues that IL is a metaliteracy, a composite of “behavioral, affective, cognitive, and metacognitive” engagements within an “information ecosystem.”34 Metacognition enables learners to analyze their own cognitive processes, such as how they understand information skills and processes. The Framework conceptualizes IL more as “knowledge-based learning and discovery” than a demonstrable set of skills.35 New instruments are being developed that aim to measure the concepts outlined in the Framework. For instance, the Threshold Achievement Test contains disposition items to indicate “students’ willingness to consistently apply the skills they have learned in one setting to novel problems in new settings.”36

Other emerging approaches link IL to the context in which information is being used. Lloyd outlines a sociocultural approach to IL in which information practices are considered part of a larger “practice” within a discipline or professional setting—such as enthusiast car restorers and renal nurses.37 In contrast to focusing on the context of a practice, Informed Learning emphasizes the learning environment as the context of IL.38 Informed Learning defines learning as experiencing changes in awareness related to both using information and disciplinary content. Adopting an Informed Learning approach in higher education emphasizes students learning to use information within a discipline—which in turn better prepares students to use information to learn in their personal and professional lives outside of educational settings. Three principles guide the development of Informed Learning pedagogy:

  1. Build on learners’ previous Informed Learning experiences;
  2. Promote simultaneous learning about disciplinary content and the information-using process;
  3. Enable learners to experience using information and disciplinary content in new ways.39

While Informed Learning is being implemented as an approach to IL education in higher education, questions remain about how to measure the results of applying it.40 The situated and context-dependent nature of Informed Learning makes answering fundamental questions about how students use information difficult. As with other theoretically grounded approaches, measuring IL from an Informed Learning perspective requires a different approach from measuring discernable skills, practices, and dispositions, as instruments based in the Standards or Framework attempt to do. An Informed Learning approach to IL is less concerned with specific skills that could be measured in an assignment than with how students describe their experiences using information in a disciplinary context. Therefore, to investigate IL from an Informed Learning lens, we need to evaluate student perceptions of information use in a disciplinary context. Measuring IL from a context-dependent theoretical lens at scale, across disciplines, presents further challenges. Accordingly, the Informed Learning scale attempts to discern undergraduate students’ self-perceptions of their ability to use information to learn in various disciplinary contexts.

Methods

Initial Question Design

The purpose of this Informed Learning scale is to measure student perceptions of using information as it relates to learning in a course. Therefore, 12 of the initial set of 16 statements deployed in the scale (shown in table 1) were created by drawing key concepts from the three core principles of the Informed Learning model.41 Previous IL research conducted by the research team informed the other four statements used in the initial scale.42

TABLE 1

Factor Analysis Constructs

Factor loadings are shown as (Factor 1, Factor 2) following each item.

Factor 1: Informed Learning (α = .962)

16. I feel confident in my ability to use information to learn subject content in this course. (.85, –.14)
9. I build upon my previous experiences of using information to learn subject content in this course. (.84, .07)
10. I think my previous experiences of using information support my learning subject content in this course. (.84, .06)
2. I understand how my previous experiences of using information support my learning subject content in this course. (.83, .14)
12. My instructor encourages me to use information for specific purposes. (.82, .11)
14. I feel confident in my ability to use information in this course. (.80, –.002)
11. I will be able to use information to learn in my future course work. (.80, .02)
5. I believe I can learn in this course by using information. (.80, .15)
15. My instructor encourages me to use information in new ways to complete assignments. (.78, .08)
1. My instructor encourages me to learn subject content by using information. (.77, .14)
13. When I consider my life after college, I feel confident in my ability to learn when engaging with information sources. (.77, –.49)
3. I believe it is important for me to carefully evaluate the information I use in this course. (.77, .15)
8. I believe it is important for me to learn to use information. (.77, –.08)
7. For this course my instructor encourages me to use my prior experience of using information. (.76, .17)
6. I feel confident in my ability to synthesize information from different sources. (.76, –.48)
4. I think that learning subject content and using information are the same thing. (.37, .17)

Note: The short scale is composed of the bolded items.

The 12 statements in the initial scale that related to Informed Learning were organized into three sets of four statements, each evaluated on a 7-point Likert scale ranging from 1 (“Strongly Disagree”) to 7 (“Strongly Agree”). Each of the three sets aligned with one of the core principles of Informed Learning.43 For example, the statement “I build upon my previous experiences of using information to learn subject content in this course” relates to the first core principle that focuses on building on learners’ previous Informed Learning experiences. The statement “I believe I can learn in this course by using information” relates to the second principle, which emphasizes simultaneous learning about disciplinary content and the information-using process. The statement “My instructor encourages me to use information in new ways to complete assignments” relates to the third principle of enabling learners to experience using information and disciplinary content in new ways. Originally, these three sets of four statements were intended to create subscales to measure different aspects of student perceptions about their ability to use information to learn disciplinary content.

To allow for comparison with the part of the initial scale related to Informed Learning, four statements were also included that were grounded in a local description of IL derived from the AAC&U VALUE Rubric for IL, which emphasizes five skills: 1) determining a need, 2) access, 3) evaluating, 4) effectively using, and 5) ethical use of information.44 The AAC&U VALUE Rubric was adapted to define IL for Purdue University’s Core Curriculum, so these four questions are particularly important in the authors’ local context.45

Drawn from previous research, the four statements that were included in the scale were intended to measure student perceptions of using information generally; that is to say, not in relationship to learning. For example, one statement focused on the importance of evaluating information within the course, while another emphasized feeling confident in one’s “ability to synthesize information from different sources.” The inclusion of these statements was intended to provide data to show how students perceived using information generally and how that compared to their perceptions of using information to learn.

Figure 1. Informed Learning Scale Development

Expert Validation

To determine content validity for the scale, a team of five IL experts provided feedback on the initial 16 questions. The five experts were selected for the quality and quantity of their IL-related scholarship, as well as for representing diverse theoretical perspectives on IL. The experts were asked to rate each item on a 3-point scale—either “This item is valid,” “This item has some validity,” or “This item is not valid.” The survey also included an open-ended text box so that qualitative feedback and suggestions could be collected for each question. A final prompt asked the five experts to share questions and comments as an opportunity to provide holistic feedback about the scale overall. Wording was modified to address the experts’ qualitative feedback provided on individual questions as well as the final open-ended prompt. All questions were rated as being valid, or having some validity, by at least four of the five experts.

Data Collection and Participants

Data were collected at a large, public, research-intensive university in the United States across two semesters (fall 2018 and spring 2019) from undergraduate students and the registrar. Student data were collected through end-of-semester student-perception surveys sent to all students enrolled in a course that had completed a large-scale course redesign program, Instruction Matters: Purdue Academic Course Transformation (IMPACT),46 in which faculty and staff from the Purdue University Libraries, the Center for Instructional Excellence, and other units at Purdue participate. Levesque-Bristol et al. (2019) describe how IMPACT data are collected and compared with various information related to student performance, including course grades (converting typical letter grades to a 4-point scale) and course failure or drop (known as DFW) rates.47 The authors currently serve or have served in leadership positions in IMPACT and accordingly wanted to investigate how students who completed the program were using information to learn in their courses. Drawing from this sample met an institutional need to show the libraries’ value for participating in IMPACT and also provided access to a larger and more diverse dataset than would have been possible otherwise. Informed Learning is a part of IMPACT’s curricula, so this instrument was created, in part, to measure changes in student perceptions about how information is used to learn in their course. There were a total of 18,927 student-perception surveys sent out to students enrolled in 151 courses across the two semesters, and 7,992 surveys were completed by 6,791 unique students (42% response rate).

Measures

The student-perception surveys included the Learning Climate Questionnaire,48 the Situational Motivation Scale,49 students’ Basic Psychological Need Satisfaction and Frustration Scale,50 and the newly created Informed Learning scale. During the fall 2018 semester, the full 16-item Informed Learning scale was used. Based on initial analyses from fall 2018, a short version of the Informed Learning scale, consisting of 8 items, was used in spring 2019. University records were accessed to provide student demographics and numerical course grade data for the students who participated in the survey in either semester.

Factor Analysis

It is common to use factor analysis to analyze data for exploratory scales in library and information science research.51 Factor analysis is useful with survey data to model the interrelationships between variables with the aim of reducing the number of variables by identifying underlying factors. Factor analysis can also provide some support for the construct validity of the scale by providing evidence that the scale is measuring the correct underlying constructs. To determine whether a factor analysis was appropriate for the Informed Learning scale, several well-recognized criteria were assessed using the fall 2018 data with the Kaiser-Meyer-Olkin (KMO) and Bartlett’s tests. Both tests provide insight as to whether data are appropriate for a factor analysis—specifically, whether multiple variables can be meaningfully reduced into a smaller number of factors. The KMO measure of sampling adequacy was .96, well above the recommended value of .6. The KMO test examines the proportion of variance among variables that may be shared variance to determine whether there may be distinct underlying factors. Statements that have a lot of shared variance (such as I like dogs and I like puppies) are likely measuring the same construct (enjoying dogs) whereas two statements that have low shared variance (such as I like dogs and I like kittens) are likely measuring separate constructs. Bartlett’s test of sphericity was also significant (χ2 (120) = 46929.69, p < .01), indicating that the variables are not completely unrelated. Finally, correlations were above .30 for all of the Informed Learning scale questions except one (“I think that learning subject content and using information are the same thing”); that item had correlations ranging from .21 to .40 but was kept in the factor analysis due to theoretical considerations. However, it is plausible that this question is more opinion-based than other questions, which tended to focus on students’ opportunities or abilities related to using information to learn, and the item was eventually excluded from the final scale.
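The two suitability checks described above are standard and straightforward to compute from item-level responses. The sketch below, in Python with numpy/scipy on simulated one-factor survey data, is an illustration of the statistics only; the study does not name its analysis software, and this is not its pipeline.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: tests whether the correlation matrix
    is an identity matrix (i.e., whether the variables are completely
    unrelated). Returns (chi-square statistic, degrees of freedom, p-value)."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, df, chi2.sf(stat, df)

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy: squared
    correlations relative to squared correlations plus squared partial
    correlations. Values above .6 are conventionally considered adequate."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    d = np.sqrt(np.diag(inv_R))
    partial = -inv_R / np.outer(d, d)   # partial correlations (anti-image)
    np.fill_diagonal(partial, 0.0)
    np.fill_diagonal(R, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Simulated responses: 500 respondents, 8 items driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
items = latent + 0.5 * rng.normal(size=(500, 8))

stat, df, p = bartlett_sphericity(items)
print(f"Bartlett: chi2({df:.0f}) = {stat:.1f}, p = {p:.3g}; KMO = {kmo(items):.2f}")
```

Because the simulated items share a single latent trait, Bartlett’s test is significant and the KMO value is high, mirroring the pattern reported for the fall 2018 data.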

An exploratory factor analysis using a principal-axis factor extraction was then conducted to investigate the number of constructs measured in the scale in an effort to find the fewest number of factors that can account for the variance expressed in the data. Two factors were extracted with eigenvalues over 1.00—indicating that these factors explain more than a single variable; however, since all of the items that loaded onto the second factor had higher loadings on the first factor, only one factor will be discussed. This provides evidence that the Informed Learning Scale is predominantly measuring one phenomenon or experience as expressed in the survey. One item was removed (question #4 on the Informed Learning long scale) as it did not load onto any factors. The single factor explained 60.39 percent of the variance. Loadings of variables on factors are shown in table 1.
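The screening logic behind the extraction step can be sketched as follows: the eigenvalues of the item correlation matrix determine how many factors pass the eigenvalue-over-1.00 (Kaiser) criterion and how much variance each explains. Note this is only the screening step; principal-axis factoring, as used in the study, additionally replaces the diagonal of the correlation matrix with communality estimates and iterates, so its loadings differ slightly.

```python
import numpy as np

def kaiser_screen(data):
    """Eigenvalues of the correlation matrix (descending), the count of
    factors with eigenvalues over 1.00 (Kaiser criterion), and the percent
    of variance each eigenvalue explains."""
    R = np.corrcoef(data, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
    retained = int((eigenvalues > 1.0).sum())
    pct_variance = 100 * eigenvalues / eigenvalues.sum()
    return eigenvalues, retained, pct_variance

# With one dominant latent trait, exactly one factor should pass the criterion
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
items = latent + 0.5 * rng.normal(size=(500, 8))
eigenvalues, retained, pct = kaiser_screen(items)
print(retained, round(pct[0], 1))
```

The eigenvalues of a correlation matrix always sum to the number of items, which is why an eigenvalue over 1.00 means a factor "explains more than a single variable."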

Reliability Analysis

Cronbach’s alpha reliability coefficient was calculated to determine the internal consistency of the Informed Learning scale. High reliability indicates that participants answer the scale items similarly, which suggests a common underlying factor for the scale. With the 15 items identified by the factor analysis, the reliability was high (α = .962). The 8-item shortened version of the scale also demonstrated a high level of reliability (α = .928).
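Cronbach’s alpha is simple to compute directly from item-level responses; a minimal sketch, assuming responses are stored as a respondents × items array:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score), where k is the number of items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Two perfectly parallel items yield the maximum value
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))   # → 1.0
```

When items covary strongly, the variance of the total score exceeds the sum of the item variances, pushing alpha toward 1; uncorrelated items push it toward 0.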

Findings

Correlation Matrix

To determine how the Informed Learning scale related to other variables associated with student learning, we examined correlations between the Informed Learning scale and the other measures in the student perceptions survey using the spring 2019 data (see table 2). The short version of the Informed Learning scale showed moderate correlations with student perceptions of the learning climate (r = .620, p <.001) and their self-determined motivation (r = .615, p <.001). There was also a small correlation between the Informed Learning scale and student achievement, as measured by overall course grades (r = .191, p <.001). These findings suggest that the way students perceive how they use information to learn disciplinary content may be an important link to how they perceive their classroom, how motivated they are to learn, and, ultimately, how they perform on graded assessments.

TABLE 2

Correlation Coefficients

This table shows the correlation coefficients between the Informed Learning scale and other student outcomes including learning climate, student motivation, and academic performance as measured by final grades.

                                 IL Scale   Learning Climate   Self-Determined Motivation   Academic Performance (Final Grade)
Informed Learning Scale          1.00
Learning Climate                 .620**     1.00
Self-Determined Motivation       .615**     .526**             1.00
Academic Performance             .191**     .165**             .165**                       1.00

**p < .01
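For readers wanting to run this kind of analysis on their own survey data, the pairwise coefficients and their significance tests can be computed with scipy. The scores below are simulated purely for illustration (the study’s dataset is not public); the variable names and effect sizes are assumptions that loosely echo the reported magnitudes.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated scale scores for illustration only (not the study's data)
rng = np.random.default_rng(1)
n = 400
informed_learning = rng.normal(size=n)
learning_climate = 0.6 * informed_learning + 0.8 * rng.normal(size=n)   # moderate link
final_grade = 0.2 * informed_learning + 1.0 * rng.normal(size=n)        # small link

for name, scores in [("learning climate", learning_climate),
                     ("final grade", final_grade)]:
    r, p = pearsonr(informed_learning, scores)
    print(f"r with {name}: {r:.3f} (p = {p:.3g})")
```

With a few hundred respondents, even a small correlation like the grade link reaches significance, which is why the study reports p < .01 for coefficients near .19.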

Discussion

The Informed Learning approach to IL views learning as changes in awareness of aspects related to both using information and disciplinary content.52 The Informed Learning scale is a tool that aims to bridge the divide between an educational theory—Informed Learning pedagogic theory—and educational practice in the form of academic libraries’ assessment activities. The scale sheds light on student perceptions of IL as it relates to learning across undergraduate courses.

IL theories that emphasize the situated nature of IL, such as Informed Learning, are not commonly associated with standardized assessment techniques like surveys. Instead, they typically require the analysis of student products or reflections. We argue that this stance is problematic. Measurements of student work products are only practical on a small scale, and so are unlikely to provide adequate evidence to demonstrate the value of IL to student learning at a curricular level, across thousands of students from different majors. Given quantitative assessment demands (often tied to funding and other resources in higher education), there is a real need to investigate whether academic librarians, through their work in IL instruction, further disciplinary learning goals. The Informed Learning scale addresses this need by providing quantifiable assessment data on student perceptions of their ability to use information to learn. While other scales use more questions, we suggest that a short instrument is beneficial because it allows the Informed Learning scale to be used in tandem with other scales. Collecting data about student learning at the same time and in a similar way as one collects data about IL better enables exploring how using information relates to student motivation, learning climate, and academic performance.

Given that the Standards focused on developing a set of IL skills, many measures aimed to assess student behavior and performance regarding IL-related work. This is a valid and useful way to measure IL from this theoretical perspective. However, using a definition of IL based on Informed Learning theory, we argue that student perceptions of their ability to use information in their disciplinary context are equally important—as Informed Learning is concerned with whether students experience using information in ways that are conducive to learning subject content. Additionally, the results of our exploratory study indicate that the Informed Learning scale is correlated with course grades, suggesting that the scale might act as an acceptable proxy in situations in which no other measure of student performance related to IL is feasible. A liaison to an engineering department would likely find it easier to ask a civil engineering administrator to add eight questions to an existing student survey than to work with various instructors on how information enables students to learn disciplinary content across multiple courses. Informed Learning scale results could be shared with an academic unit—pointing to areas where library resources should be used. As the Informed Learning scale is a short, context-independent measure, it could be administered broadly to target specific courses or contexts that might benefit from closer engagement with IL.

The Informed Learning scale provides academic libraries a platform to share data that is of interest to a broad audience by allowing a comparison of student perceptions of using information to learn with student learning outcomes as defined by grades or other learning measures like learning climate and motivation. Instructors could use these data to refine learning outcomes, assessments, or classroom activities. Administrators both internal and external to an academic library could better determine the value of academic libraries in furthering student learning across a curriculum, department, or college. Overall GPA, however, reflects too many variables to serve this purpose well—Kuh et al. note that high school academic preparation, full- and part-time status, student engagement, and first-year experience in the classroom, among others, are all predictors of desired educational success.53 Black and Murphy also note that GPA “may not always be the most appropriate measure for communicating a library’s impact.”54 While individual assignment grades may not be of interest to a broader institutional audience, course-level grade data across different majors may be ideal for assessment efforts aiming to provide actionable data for instructors, administrators, and funding agencies.

In linking IL to measurements of student learning like course grades, the Informed Learning scale may be used in a variety of ways to address specific assessment needs. In the initial pilot research, the scale was completed by 6,791 unique students (42% response rate) enrolled in 151 courses, but it could also be implemented in one course or a sequence of courses. If implemented in a pre-/post- fashion in which students complete the scale before and after undergoing instruction, the data collected could shed light on changes of students’ perceptions of using information to learn as a result of instruction. The scale could also be used longitudinally to track large cohorts of students’ perceptions of using information to learn as they advance through a curriculum.

Data gathered through the implementation of the scale may be linked to demographic data to measure certain student populations and their perceptions of using information to learn. Successful instructors, courses, or programs who are able to foster more sophisticated understandings of using information to learn could be identified and their successful methods and approaches replicated. Results from the Informed Learning scale could also provide targets for embedding IL into curricula by indicating courses where students do not feel confident in their ability to use information to learn disciplinary content. Linking data from the Informed Learning scale with data from other aspects of student learning, like motivation, could provide academic librarians with evidence for how embedding IL into their course could improve student motivation. This could shift librarian-instructor conversations toward motivation and learning climate—and the role IL may play in such aspects of student learning. If IL can be discussed in relation to other important measures of student success that faculty use for promotion and tenure, this may provide useful data in support of the importance of embedding IL into curricula.

This research is exploratory in nature, and the preliminary findings are not generalizable to other contexts. Further analysis—for instance, seeking discriminant and convergent validity—could be useful. Additionally, Informed Learning is one IL theory among many that argue for a situated and context-dependent approach to IL. Because our theoretical lens focuses on this exact phenomenon, the findings are limited to students’ perceptions of their ability to use information to learn. Nonetheless, other self-reported scales could capture data relating to different aspects of IL in the disciplinary classroom, such as sociocultural elements or power dynamics. Such instruments would further enrich scholarly inquiry about how IL relates to student learning. Future research using similar data could be analyzed in terms of student demographics—providing insight into possible substantial discrepancies in outcomes for certain student populations.

The Informed Learning scale attempts to address two major issues: 1) demonstrating how IL data can be related to other important measures of student learning; and 2) measuring IL in alignment with recent theoretical advancements that treat IL as context dependent. Such an approach may yield valuable insights into the relationship between IL and student learning, as suggested by our initial findings of correlations between students' self-perceptions of their ability to use information to learn and measures of student learning like course grade, motivation, and learning climate across thousands of students. Though this research describes an exploratory pilot of the Informed Learning scale, preliminary results suggest that the scale can advance academic library assessment efforts by drawing on contemporary theoretical developments in IL while linking IL data with important measures of student learning.

Conclusion

This paper describes the development and initial implementation of the Informed Learning scale. Initial findings suggest that it may be a useful tool for measuring, at scale, how students perceive their ability to use information to learn in various disciplinary contexts in higher education. It also serves as a proof of concept for how an IL scale can be developed from a theoretical perspective that views IL as more than a set of skills. While preliminary, the findings suggest that such a scale may provide a demonstrable link between how students use information to learn disciplinary content and important metrics of student learning like motivation, learning climate, and course grade.

In practice, this may provide justification for IL instruction efforts to focus more on helping students achieve disciplinary learning goals. Using the Informed Learning scale may also help foster close partnerships between academic librarians and instructors by drawing on metrics that instructors care about, namely gains in student learning and motivation. Perhaps most importantly, librarians who use a scale based in a relational approach to IL may be better equipped to articulate their role as educators by showing how their instructional efforts impact measures of student learning and success in concrete ways.

Notes

1. Association of College & Research Libraries (ACRL), Value of Academic Libraries: A Comprehensive Research Review and Report, researched by Megan Oakleaf (Chicago, IL: Association of College & Research Libraries, 2010), www.acrl.ala.org/value/?page_id=21 [accessed 2 February 2020]; Lynn Silipigni Connaway et al., Academic Library Impact: Improving Practice and Essential Areas to Research (Chicago, IL: ACRL, 2017), www.ala.org/acrl/sites/ala.org.acrl/files/content/publications/whitepapers/academiclib.pdf [accessed 2 February 2020].

2. ACRL, Information Literacy Competency Standards for Higher Education (2000), www.ala.org/Template.cfm?Section=Home&template=/ContentManagement/ContentDisplay.cfm&ContentID=33553 [accessed 2 February 2020].

3. A. Sample, “Historical Development of Definitions of Information Literacy: A Literature Review of Selected Resources,” Journal of Academic Librarianship 46, no. 2 (2020): 102–16, https://doi.org/10.1016/j.acalib.2020.102116.

4. Brett B. Bodemer, “The Importance of Search as Intertextual Practice for Undergraduate Research,” College & Research Libraries 73, no. 4 (2012): 336–48, https://doi.org/10.5860/crl-245.

5. Alison Hicks, “Cultural Shifts: Putting Critical Information Literacy into Practice,” Communications in Information Literacy 7, no. 1 (2013): 49–65, https://doi.org/10.15760/comminfolit.2013.7.1.134.

6. Nancy M. Foasberg, “From Standards to Frameworks for IL: How the ACRL Framework Addresses Critiques of the Standards,” portal: Libraries and the Academy 15, no. 4 (2015): 699–717, https://doi.org/10.1353/pla.2015.0045.

7. Annemaree Lloyd, Information Literacy Landscapes: Information Literacy in Education, Workplace and Everyday Contexts (Oxford, UK: Chandos Publishing, 2010).

8. James Elmborg, “Critical Information Literacy: Definitions and Challenges,” in Transforming Information Literacy Programs: Intersecting Frontiers of Self, Library Culture, and Campus Community, eds. Carroll Wetzel Wilkinson and Courtney Bruch (Chicago, IL: ACRL, 2012), 75–95.

9. Christine Susan Bruce, Informed Learning (Chicago, IL: ACRL, 2008).

10. Khalid Mahmood, “Reliability and Validity of Self-Efficacy Scales Assessing Students’ Information Literacy Skills,” Electronic Library 35, no. 5 (2017): 1035–51, https://doi.org/10.1108/el-03-2016-0056; Maria Pinto, “Design of the IL-HUMASS Survey on Information Literacy in Higher Education: A Self-assessment Approach,” Journal of Information Science 36, no. 1 (2010): 86–103, https://doi.org/10.1177/0165551509351198.

11. Bruce, Informed Learning.

12. Krista M. Soria, Jan Fransen, and Shane Nackerud, “Library Use and Undergraduate Student Outcomes: New Evidence for Students’ Retention and Academic Success,” portal: Libraries and the Academy 13, no. 2 (2013): 147–64, https://doi.org/10.1353/pla.2013.0010; ACRL, Academic Library Impact on Student Learning and Success: Findings from Assessment in Action Team Projects, prepared by Karen Brown with contributions by Kara J. Malenfant (Chicago, IL: ACRL, 2017).

13. Amy J. Catalano, Streamlining LIS Research: A Compendium of Tried and True Tests, Measurements, and Other Instruments (Santa Barbara, CA: Pearson Education, 2016).

14. Emily Drabinski, “A Kairos of the Critical: Teaching Critically in a Time of Compliance,” Communications in Information Literacy 11, no. 1 (2017): 76–94, https://doi.org/10.15760/comminfolit.2017.11.1.35.

15. Bruce, Informed Learning.

16. Eric Ackermann, “Program Assessment in Academic Libraries: An Introduction for Assessment Practitioners,” Research & Practice in Assessment 2 (2007): 18–23, http://www.rpajournal.com/dev/wp-content/uploads/2012/05/A23.pdf.

17. Paul G. Zurkowski, “The Information Service Environment Relationships and Priorities, Related Paper No. 5,” ERIC (National Commission on Librarians and Information Science, October 31, 1974), https://eric.ed.gov/?id=ED100391.

18. ACRL, Standards.

19. Louise Limberg, Olof Sundin, and Sanna Talja, “Three Theoretical Perspectives on Information Literacy,” Human IT, no. 2 (2012): 93, https://humanit.hb.se/article/view/69.

20. ACRL, Standards.

21. See Catalano, Streamlining LIS Research.

22. Mahmood, “Reliability and Validity of Self-Efficacy Scales Assessing Students’ Information Literacy Skills,” 1035–51.

23. Bojana Boh Podgornik et al., “Development, Testing, and Validation of an Information Literacy Test (ILT) for Higher Education,” Journal of the Association for Information Science and Technology 67, no. 10 (2016): 2420–36, https://doi.org/10.1002/asi.23586.

24. Yvonne Mery, Jill Newby, and Ke Peng, “Assessing the Reliability and Validity of Locally Developed Information Literacy Test Items,” Reference Services Review 39, no. 1 (2011): 98–122, https://doi.org/10.1108/00907321111108141.

25. Anita Ondrusek et al., “A Longitudinal Study of the Development and Evaluation of an Information Literacy Test,” Reference Services Review 33, no. 4 (2005): 388–417, https://doi.org/10.1108/00907320510631544.

26. Ralph Catts, Information Skills Survey for Assessment of Information Literacy in Higher Education: Administration Manual (Canberra: Council of Australian University Librarians, 2003).

27. Khalid Mahmood, “Do People Overestimate Their Information Literacy Skills? A Systematic Review of Empirical Evidence on the Dunning-Kruger Effect,” Communications in Information Literacy 10, no. 2 (2016): 199, https://doi.org/10.15760/comminfolit.2016.10.2.24; Marc A. Brackett and John D. Mayer, “Convergent, Discriminant, and Incremental Validity of Competing Measures of Emotional Intelligence,” Personality and Social Psychology Bulletin 29, no. 9 (2003): 1147–58, https://doi.org/10.1177/0146167203254596.

28. Matthew Swain, Donna L. Sundre, and Kathy Clarke, The ILT Test Manual (Beaumont, TX: Mometrix, 2014), https://www.madisonassessment.com/uploads/ILT%20Test%20Manual%20May%202014%20pdf_3.pdf [accessed 21 February 2020]; Nikolas Leichner et al., “Assessing Information Literacy among German Psychology Students,” Reference Services Review 41, no. 4 (2013): 660–74, https://doi.org/10.1108/rsr-11-2012-0076.

29. Lana Ivanitskaya, Irene O’Boyle, and Anne Marie Casey, “Health Information Literacy and Competencies of Information Age Students: Results from the Interactive Online Research Readiness Self-Assessment (RRSA),” Journal of Medical Internet Research 8, no. 2 (2006), https://doi.org/10.2196/jmir.8.2.e6; Leichner et al., “Assessing Information Literacy among German Psychology Students”; Tom Rosman, Anne-Kathrin Mayer, and Günter Krampen, “Measuring Psychology Students’ Information-Seeking Skills in a Situational Judgment Test Format: Construction and Validation of the PIKE-P Test,” European Journal of Psychological Assessment 32, no. 3 (2016): 220–29, https://doi.org/10.1027/1015-5759/a000239.

30. Megan Oakleaf, “Using Rubrics to Assess Information Literacy: An Examination of Methodology and Interrater Reliability,” Journal of the American Society for Information Science and Technology 60, no. 5 (2009): 969–83, https://doi.org/10.1002/asi.21030.

31. Megan Oakleaf and Neal Kaske, “Guiding Questions for Assessing Information Literacy in Higher Education,” portal: Libraries and the Academy 9, no. 2 (2009): 273–86, https://doi.org/10.1353/pla.0.0046.

32. ACRL, Standards.

33. T.P. Mackey and T. Jacobson, Metaliteracy: Reinventing Information Literacy to Empower Learners (Chicago, IL: ALA Neal-Schuman, 2014).

34. ACRL, Framework for Information Literacy for Higher Education (2016), www.ala.org/acrl/standards/ilframework [accessed 2 February 2020].

35. Maureen Knapp and Stewart Brower, “The ACRL Framework for Information Literacy in Higher Education: Implications for Health Sciences Librarianship,” Medical Reference Services Quarterly 33, no. 4 (February 2014): 460–68, https://doi.org/10.1080/02763869.2014.957098.

36. Threshold Achievement Test of Information Literacy, https://thresholdachievement.com/ [accessed 21 February 2020].

37. Lloyd, Information Literacy Landscapes; Annemaree Lloyd and Michael Olsson, “Untangling the Knot: The Information Practices of Enthusiast Car Restorers,” Journal of the Association for Information Science and Technology 70, no. 12 (2019): 1311–23, https://doi.org/10.1002/asi.24284; Ann Bonner and Annemaree Lloyd, “What Information Counts at the Moment of Practice? Information Practices of Renal Nurses,” Journal of Advanced Nursing 67, no. 6 (June 2011): 1213–21, https://doi.org/10.1111/j.1365-2648.2011.05613.x.

38. Lloyd and Olsson, “Untangling the Knot.”

39. Hilary Hughes and Christine Bruce, “Snapshots of Informed Learning: LIS and Beyond,” Education for Information 29, no. 3/4 (2012): 253–69, https://doi.org/10.3233/efi-130940.

40. Clarence Maybee, IMPACT Learning: Librarians at the Forefront of Change in Higher Education (Oxford, UK: Chandos, 2018); Kim L. Ranger, Informed Learning Applications: Insights from Research and Practice (London, UK: Emerald Publishing, 2019).

41. Hughes and Bruce, “Snapshots of Informed Learning.”

42. M. Flierl et al., “Information Literacy Supporting Student Motivation and Performance: Course-Level Analyses,” Library & Information Science Research 40, no. 1 (2018): 30–37, https://doi.org/10.1016/j.lisr.2018.03.001.

43. Hughes and Bruce, “Snapshots of Informed Learning.”

44. Association of American Colleges & Universities, Information Literacy VALUE Rubric (2018), https://www.aacu.org/value/rubrics/information-literacy.

45. Purdue University and Senate Educational Policy Committee, University Senate document 11-7 (2012), https://www.purdue.edu/provost/students/s-initiatives/curriculum/documents/Senate_Document_11-7_Final_Appendices_Revised_3.pdf.

46. Purdue University, Impact. Instruction Matters: Purdue Academic Course Transformation (n.d.), https://www.purdue.edu/impact/.

47. C. Levesque-Bristol et al., “Creating Student-centered Learning Environments and Changing Teaching Culture: Purdue University’s IMPACT Program” (Occasional Paper 38) (Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment [NILOA], 2019), https://www.learningoutcomesassessment.org/wp-content/uploads/2019/05/OccasionalPaper38.pdf.

48. Geoffrey C. Williams and Edward L. Deci, “Internalization of Biopsychosocial Values by Medical Students: A Test of Self-Determination Theory,” Journal of Personality and Social Psychology 70, no. 4 (1996): 767–79, https://doi.org/10.1037/0022-3514.70.4.767.

49. Frédéric Guay, Robert J. Vallerand, and Céline Blanchard, “On the Assessment of Situational Intrinsic and Extrinsic Motivation: The Situational Motivation Scale (SIMS),” Motivation and Emotion 24, no. 3 (2000): 175–213, https://doi.org/10.1023/A:1005614228250.

50. Beiwen Chen et al., “Basic Psychological Need Satisfaction, Need Frustration, and Need Strength across Four Cultures,” Motivation and Emotion 39, no. 2 (December 2014): 216–36, https://doi.org/10.1007/s11031-014-9450-1.

51. P. Montiel-Overall, “Teachers’ Perceptions of Teacher and Librarian Collaboration: Instrumentation Development and Validation,” Library and Information Science Research 31, no. 3 (2009): 182–91, https://doi.org/10.1016/J.LISR.2009.04.001; M. Shahbazi et al., “Development of a Scale for Data Quality Assessment in Automated Library Systems,” Library and Information Science Research 41, no. 1 (2019): 78–84, https://doi.org/10.1016/J.LISR.2019.02.005; J. Siegel, M. Morris, and G.A. Stevens, “Perceptions of Academic Librarians toward LGBTQ Information Needs: An Exploratory Study,” College & Research Libraries 81, no. 1 (2020): 122–48, https://doi.org/10.5860/crl.81.1.122.

52. Bruce, Informed Learning.

53. George D. Kuh et al., “Piecing Together the Student Success Puzzle: Research, Propositions, and Recommendations: ASHE Higher Education Report,” ASHE Higher Education Report 32, no. 5 (2007): 1–182, https://doi.org/10.1002/aehe.3205.

54. Elizabeth L. Black and Sarah Anne Murphy, “The Out Loud Assignment: Articulating Library Contributions to First-Year Student Success,” Journal of Academic Librarianship 43, no. 5 (2017): 409–16, https://doi.org/10.1016/j.acalib.2017.06.008.

* Michael Flierl is a Visiting Assistant Professor and Information Literacy & Research Engagement Librarian at The Ohio State University; email: flierl.1@osu.edu. Clarence Maybee is a Professor and the W. Wayne Booker Chair in Information Literacy at Purdue University Libraries and School of Information Studies; email: cmaybee@purdue.edu. Emily Bonem is Assistant Director for Scholarship of Teaching & Learning at Purdue University’s Center for Instructional Excellence; email: ebonem@purdue.edu. ©2021 Michael Flierl, Clarence Maybee, and Emily Bonem, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.

