Teaching Expert Information Literacy Behaviors through Decision-Based Learning
Standards for information literacy challenge institutions to build expert depth of knowledge in students. One potential way to do this is through an instructional method called Decision-Based Learning, which explicitly builds conceptual, procedural, and conditional knowledge. This paper details the results of a multisemester study involving groups of engineering and technology students taught using this method. Students tended to engage more fully with a pre-class learning module based on the new method than comparable groups of students engaged with pre-class instructional videos. Students taught with the new method also showed significantly greater improvement in post-test performance.
Introduction
University curricula often include information literacy (IL) instruction in order to equip students with skills necessary to engage wisely and ethically with information and to facilitate the creation of new knowledge.1 At Brigham Young University, a large private university in the western United States, upper-division writing courses provide one common framework for teaching IL skills. These core-required courses challenge undergraduate students to perform library research on a topic of their choosing and present their findings in a literature review or persuasive paper.
Academic librarians at this institution provide IL training in support of this literature review assignment during a single fifty-minute session (a “one-shot”) held in the library. This training provides an opportunity for students to get individualized help relating to the selection, scoping, and searching of research topics. It may also include a discussion of information search strategies, search language, and evaluation and management of sources. In these latter areas, the Association of College and Research Libraries’ Framework for Information Literacy for Higher Education (or simply “the Framework”)2 provides guidelines for delivery of content. In any given course, an IL instructor may determine a few aspects from the Framework that are appropriate for focus within the given context. Of particular interest to the instructional sessions for advanced writing are the frames “Authority Is Constructed and Contextual,” which speaks to principles of source evaluation, and “Searching as Strategic Exploration,” which informs the teaching of search strategy.
Each of the frames in the Framework defines desirable IL competencies in terms of expert behavior. Experts are distinguished from novices by the manner in which they think and reason. Defining characteristics of an expert include deep and organized content knowledge and conditionalized knowledge that informs when to apply facts and methods.3 As noted by Seeber,4 application of conditional considerations is central to IL behaviors, and the recognition of the influence of conditional knowledge on decision-making in this domain is one of the key contributions of the Framework. Indeed, the wording found within the Framework supports this notion of conditional knowledge as an essential characteristic of IL experts: e.g., “Experts select from various search strategies, depending on the sources, scope and context of the information need.”5
The lofty goal of building expertise in the IL domain is not easy to achieve, nor do educational institutions presume that students will exit their doors having fully developed it. Certain levels of expertise take deliberate practice over time.6 Also, a central challenge is finding space within curricula to provide adequate focus on IL principles while not overloading students during limited class time. Certainly, the format of a one-shot provides limited opportunities to build expert-level depth of knowledge during the short instructional period. However, the language of the Framework challenges institutions to do better in this respect.
One possible way to improve the chances of building expertise is to expand the scope of the IL instruction by increasing the level of integration of IL concepts within the hosting course (in this case, advanced writing). However, the process of course integration can be quite difficult and may achieve varying degrees of success, often due to differences in priorities of individual writing instructors or curricula. Thus, some researchers recommend a flexible collaborative approach tailored to each individual instructor and institutional culture.7 In the case represented in the present study, class members in a single library session may represent a variety of host classes, making deeper integration of IL training into these several classes quite complex. Because of this, other alternatives for improving student expertise levels have become of great interest.
Another approach to improving the depth of learning that has captured the attention of several IL instructors is the use of a flipped classroom model. Some researchers note that the flipped classroom could “extend…interactions with students,” overcoming some of the time constraints of a one-shot.8 Arnold-Garza adds that benefits specific to library instruction include the ability to “focus on efficient use of class time which accommodates different learners.”9 Indeed, the ability to learn at one’s own pace before class, while offering instructors greater flexibility to improve in-class teaching,10 may be an effective way to combat disengagement by students during research-focused classes or feelings of incompetence with research resources, such as “library anxiety.”11
However, simply flipping a class alone does not appear to lead to deeper learning and expertise. Quantitative and qualitative testing of IL instruction employing a flipped classroom model has yielded mixed results. Some researchers note that students have preferred aspects of the flipped classroom model,12 and some instructors have found that the model yields higher quality student work.13 Yet others have found no measurable difference in student performance, or even inferior results compared to traditional methodologies.14 At least part of the reason for these mixed findings could be “multiple conceptions” of the flipped classroom approach15 and differences in approach, execution, or affective influences such as teacher enthusiasm.16 Another challenge noted in some of the studies was that of accountability for pre-class work in flipped classroom models.17 Some IL programs were able to integrate with host courses to provide motivation through graded assignments, which was valued as being a key success factor.18 Others rely on more internally focused motivations for completing out-of-class work. Lacking or uncertain student engagement in pre-class work adds ambiguity to what the aforementioned study results really indicate.
Thus, while flipped classroom approaches may provide IL instructors with a promising framework for deeper learning in a one-shot environment—opportunities to extend instructional time with students, provide self-paced learning, and employ more active in-class learning techniques—whether it hits the mark depends on what happens within that framework. In other words, capturing the promise relies on success factors including student motivation for pre-class work, the methods chosen for pre-class and in-class learning, and their execution. The following sections seek to investigate these concepts further.
Decision-Based Learning
The instructional techniques exemplified in the literature above are highly varied. Most focus on increasing student engagement, but in addition it is instructive to return to the Framework and ask what methods might best create the expert knowledge described, including both organized content knowledge and conditionalized knowledge.
Biggs suggests that while much focus in the academic environment is placed on teaching conceptual and procedural knowledge (the “what,” “why,” and “how”), inadequate focus is placed on explicitly teaching conditional knowledge (the “when” or “under what conditions”).19 For example, even though a class of students may effectively learn a number of useful analytical methods for solving a variety of different types of problems over the course of a semester, these students often have difficulty choosing which method is appropriate to use in a “real world” scenario.20 One contributing factor to this is that the “real world” usually lacks the context that is naturally present during university instruction: methods to apply to a particular problem are often obvious based on the context of the most recent instruction given. Lacking explicit focus on making a “functional connection” between conceptual or procedural knowledge and the conditions for applying such, students may not build this type of expert behavior during their university experience.21
Swan, Plummer, and Lush assert that, if proper attention is given to building conditional knowledge and schematizing this way of thinking, at least some level of expertise can be achieved prior to graduation.22 One teaching methodology that focuses on schematizing conditional knowledge as a primary learning activity is Decision-Based Learning (DBL).23 This method exposes students to an expert’s thought process (e.g., figure 1); the students then learn this process by making a series of connected decisions that the expert would make.
Sansom, Suh, and Plummer report on the use of a DBL model to teach a short unit on heat and enthalpy as part of a full-semester general chemistry course.24 In this study, researchers found that student performance on a midterm exam improved significantly with limited use of an expert decision model (two class periods). Moreover, they found that the best results were obtained when students engaged with DBL models at an optimal level. Specifically, students who were introduced to the DBL model in class and then worked five to ten problems outside of class performed better on their exams than students who either worked no out-of-class problems or worked twenty or more problems. While these results provide a level of optimism that the DBL method can improve learning even when used briefly during the semester, they also expose the reality that the environment of teaching and learning is complex; researchers are still seeking to understand how best to apply the methodology.
Plummer, Taeger, and Burton studied the use of DBL during a semester-long class in the religious studies domain.25 In this study, the students used the expert’s process model more extensively throughout the semester. Student perceptions were generally positive in this qualitative study, with a strongly recurrent theme that the DBL method helped students organize scriptural information and add a sense of detail and realism to their readings.
In the IL domain, Katz has performed the only known work using DBL.26 This researcher tested a DBL instructional module in connection with library sessions that are part of a college-level writing course for first-year students. Katz found that students who were taught using the DBL method adopted higher-level source evaluation strategies than other students taught using an existing method.
The present work seeks to expand on the current state of knowledge relating to the DBL method by investigating its use in a classroom experience typically limited to a single in-person interaction with students. This study complements previous work in the IL domain by focusing on a broader set of competencies suitable for more experienced students, including source evaluation and search strategies. It also seeks to quantify how this teaching method influences student engagement in pre-class material in a flipped classroom setting.
Methods
In this study, the author used a quasi-experimental design (see discussion of participant selection below). Data gathering instruments included student self-assessments relating to the level of engagement in pre-class tutorials, as well as pre-instruction and post-instruction tests, all delivered in an online survey format. Students provided qualitative insights by answering a few open-ended survey questions regarding their perceptions and use of the materials.
Selection and Grouping of Participants
The target population for this study comprised students enrolled in an advanced writing course who signed up for a library session with the author. These students were organized into multiple library sessions, each limited to ten or fewer students. Across the semesters in the study (three full semesters and two terms), approximately half of the sessions (twenty-nine in total) received DBL content, while the other half (thirty in total) received a lecture-based treatment. All students within a given library session received the same treatment.
The sampling of students to attend specific sessions was both purposeful, in that students were arranged by major, and voluntary, in that students self-selected possible sessions based on their availability. Session assignment was done through a custom scheduling tool used at the author’s institution. Using this tool, students first indicate all potential time slots for which they are available during the teaching window. Next, the instructor sets the maximum class size and selects a specific discipline or group of disciplines to be displayed; based on this information, the tool displays the number of students available in each time slot. When the instructor selects a time slot, the tool assigns a random sample of students within that time slot to the session and removes those students from any other locations they may occupy on the calendar. Though the study population was already somewhat academically homogeneous (comprising a subset of engineering and technology majors that the author serves), the author made further attempts wherever practical to form sessions consisting of a single engineering discipline. This preserved an instructional objective of greater in-session focus on the search tools and examples most relevant to the students’ specific areas of study. In some cases, sparse representation of some disciplines in the class or tight schedule availability for some students required creating sessions that comprised a mix of disciplines, resulting in less optimal focus on discipline-specific in-class tools. Notwithstanding these individual differences in class constitution, the basic competencies and principles taught (and assessments given) were independent of the specific focus on discipline-specific tools and examples.
To remove potential performance bias that may follow from students in different majors being disproportionately assigned to a given treatment, the author identified pairs of sessions with similar majors (or mixes of majors) and scheduled at similar times of the day. Then, he assigned one session of each pair to the DBL treatment using a random number generator (the other was assigned a lecture treatment).27
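To illustrate this pairing-and-randomization step, the following is a minimal sketch in Python. The study itself used a simple random number generator; the session identifiers and data structures here are invented for illustration.

```python
import random

# Hypothetical sketch of the study's assignment step: sessions matched
# by major mix and time of day, with one session of each pair randomly
# assigned to the DBL treatment and the other to the lecture treatment.
session_pairs = [
    ("ME_morning_A", "ME_morning_B"),
    ("EE_afternoon_A", "EE_afternoon_B"),
    ("mixed_evening_A", "mixed_evening_B"),
]

assignments = {}
for first, second in session_pairs:
    dbl = random.choice((first, second))  # coin flip within each pair
    lec = second if dbl == first else first
    assignments[dbl] = "DBL"
    assignments[lec] = "Lecture"

for session, treatment in sorted(assignments.items()):
    print(f"{session}: {treatment}")
```

Pairing before randomizing in this way keeps the major mix and time of day balanced across the two treatments while leaving the treatment assignment itself to chance.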
Participation
To ensure ethical treatment of research subjects, all interactions with students were accomplished through methods and instruments (email, surveys, written and oral statements) approved by the institution’s Institutional Review Board (IRB).
As previously mentioned, all students received one or another of the tested educational treatments, and pre- and post-testing was part of the instruction for all students; however, participation in the study itself, by allowing test data to be used and by providing student experience feedback, was voluntary. Students were not required to make their test data and survey comments available to the study in order to satisfy their course requirements. Helping to minimize the potential for perceived coercion to participate is the fact that the author was involved only in providing training during the one-shot library class and was not responsible for grading any student work; student attendance at the library session was recorded by a teaching assistant and transmitted to the students’ advanced writing instructors. Each student who chose to participate opted into the study by signing an informed consent form as approved by the IRB, which they left in the instruction lab at the end of the session.
The author excluded from the study all students who elected not to provide consent. Some students provided consent but elected not to participate in one aspect or another of the study. In these cases, the author evaluated the available student data where appropriate. For example, where either pre- or post-instruction test scores were not available for a student, this student was not included in the analysis of pre- and post-test scores but was included in analysis of participation levels where that data was available.
Students received no incentives for participation, other than the potential benefits reaped from the instructional modules and engagement with the pretest. In lieu of incentives, the author tried to remove as many barriers to participation as possible, including ensuring confidentiality of participation and minimizing the time commitment to complete surveys related only to the study, which comprised student perception questions that were given following the instruction period and at the end of the semester (see Assessment of Treatments below).
In total, 260 of a possible 318 students attending the class (82%) consented to participate in the study. Slightly more students in the lecture group consented to participate (132 of 160 assigned to the group, or 83%) than in the DBL group (128 of 158 assigned, or 81%). Two hundred twenty-five students provided full pre-instruction and post-instruction test data (71% of the total possible); 113 of these (50.2%) came from the DBL group and 112 (49.8%) from the lecture-based instructional treatment (see Instructional Content section below). A total of twenty-five students who originally signed up for a session did not attend any session; of those, ten were from sessions assigned to the DBL group and fifteen from sessions assigned to the lecture group. Given the small numbers involved, it can be concluded only that the attrition rates of the two groups were similar in magnitude.
Interactions with Participants
Notifications
The author (instructor) sent notification of assigned sessions to all students via email approximately a week before the beginning of their session. This email provided links to the pre-session materials with instructions for completing them, and contained a statement that the session was part of a study of teaching methodologies. The email also directed students to complete all tasks prior to class, first completing the pre-quiz without assistance, which they self-certified. The email informed students that the quiz results would not have any impact on their course grade. Students received a reminder email approximately twenty-four hours prior to their scheduled class.
Follow-up
Following completion of the semester, the instructor sent a final survey to all students who elected to be part of the study, via an email message approved by the IRB, in order to assess their experience using the materials given pre-session and to gather other user-offered feedback. This final survey was sent within five days of the last day of classes and was left open for thirty days.
Instructional Content
The instructor designed learning experiences in this study such that they would present a similar scope of content to both DBL and lecture-based groups. The overall content was divided between pre-session and in-session delivery mechanisms in proportions appropriate for the teaching method.
Pre-session Content
Pre-session assignments included a pretest and a pre-class activity. For their pre-class activity, the lecture-based group received links to four online tutorials that instructors have previously used as preparation for their library sessions.28 These tutorials cover concepts including use of keywords, constructing searches with Boolean operators, assessing authority and reliability of sources, and following a citation trail; all four take less than five minutes to view. In lieu of these four tutorials, the DBL group received a link to a web-based interactive learning exercise using the DBL method (approximately twenty minutes to accomplish).
In-session Treatment
At the beginning of the library sessions, the instructor fielded questions regarding concepts encountered in the pre-class material. The students in the lecture-based group then received instruction on essential material that their pre-class assignment did not cover. The instructor then provided both groups with a live tutorial on how to use a library database appropriate for their discipline, using a class member’s research topic as an example.
Following this demonstration, the instructor gave students in both groups a short post-test on the material and then spent any time remaining in the class period providing individualized attention to student projects.
Similarity of Content
Because the DBL and lecture delivery methods have fundamental differences, the instructor took care to ensure critical content was essentially the same. The study aside, it was the instructor’s desire and responsibility to provide the best possible learning experience for each group, regardless of the assigned treatment. Nevertheless, it was not practical to make the pre-session training experiences identical in content. For example, some portions of the videos used by the lecture-based group were incorporated into the DBL modules, but some of the concepts in the videos were outside of the learning objectives of these particular library sessions. Likewise, DBL modules contained more extensive information in some areas than was possible to cover in the lecture-based treatment. In these cases, pre- and post-test questions ignored any outlying aspects; assessments were focused only on principles that were treated equivalently in the two groups.
Table 1 maps each concept to when it was taught for each method. As shown in the table, the DBL method delivered more detailed pre-session information on some topics. In contrast, the lecture method delivered more detailed content in session, although the short online videos viewed prior to the session did introduce several focus topics. This provided some equity in the expectations placed on students in both classes: both groups were expected to complete pre-class assignments that would inform their classroom experience. Note from the previous section that the two groups did have different time expectations (approximately twenty minutes for the DBL module versus five for the videos), although actual time spent on each assignment was not collected. The shift of content to pre-class work generally recovered more time for individualized help in DBL classes, whose instructional portion typically finished approximately ten minutes earlier than that of lecture-based sessions.
Table 1. Partitioning of Content for Treatment Groups

| Concept | Pre-session (DBL) | Pre-session (LEC) | In-session (DBL) | In-session (LEC) | In-session accommodation for LEC group | Assessed in pre-/post-test |
|---|---|---|---|---|---|---|
| Database selection | ✓✓ | ✓ | ✓ | ✓✓ | Different databases described in detail | Y |
| Keyword vs. subject search | ✓✓ | | | ✓✓ | Search types introduced & compared | Y |
| Choosing keywords | ✓ | ✓ | ✓ | ✓ | | Y |
| Formulating search strings | | | ✓ | ✓ | | Y |
| Managing search results | ✓✓ | ✓ | ✓ | ✓✓ | Rules of thumb for search provided | N |
| Broadening/narrowing techniques | ✓ | ✓ | | | | N |
| Using database filters | ✓ | ✓ | | | | N |
| Following a citation trail | ✓ | ✓ | | | | Y |
| Using citation indexes (practical) | ✓ | ✓ | | | | N |
| Levels of peer review | ✓✓ | ✓ | ✓ | ✓✓ | Levels of review for conference vs. journal papers discussed | Y |
| Assessing level of peer review (practical) | ✓ | ✓ | ✓ | | (None) | N |
| Author credibility, source bias | ✓ | ✓ | ✓ | ✓ | | Y |
| Currency of information | ✓ | ✓ | ✓ | ✓ | | Y |

Key: ✓ = basic content; ✓✓ = detailed content
Decision-Based Learning Content Development
A DBL instructional module comprises three main parts: an “expert decision model” (EDM), a problem bank, and a set of short topical training modules.29 The University’s Center for Teaching and Learning provided instructional design guidance and a custom software package that facilitated creation and presentation of the DBL instructional module. An alternative mode of implementation for the EDM is a hyperlinked slideshow format.30
The EDM reflects the knowledge of the instructor in the chosen instructional domain. Figure 1 shows a top-level view of the EDM used in the study. As shown, the expert model includes a series of connected decision points that successively lead the student to an endpoint, where the model suggests a course of action based on the decisions made.
Figure 1. Top Level View of EDM; (inset) Detail of Decision Paths
For the present study, the author aligned the scope of the EDM with the instructional objectives for this session, which broadly include the IL competencies of topic development, search strategy, and source evaluation. The three main branches in the model represent each of these areas. The relatively coarse granularity of the decisions the model presents to students reflects both the more advanced level of the students (typically juniors and seniors) and the short allotted instructional time.
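To make this structure concrete, the sketch below models a hypothetical EDM fragment as a small tree of decision points whose leaves are suggested courses of action. The questions and labels are invented for illustration and are not taken from the study’s actual model.

```python
from dataclasses import dataclass, field


@dataclass
class DecisionPoint:
    """One node in a hypothetical expert decision model (EDM).

    A node poses a question; each answer leads either to another
    decision point or to a recommended course of action (a leaf).
    """
    question: str
    options: dict = field(default_factory=dict)  # answer -> DecisionPoint | str


# Illustrative fragment only: the study's EDM branches into topic
# development, search strategy, and source evaluation; the labels
# below are invented for this sketch.
edm = DecisionPoint(
    "What do you need to do next?",
    {
        "develop a topic": "Consult background sources to scope the topic.",
        "find sources": DecisionPoint(
            "Do you know the vocabulary of the field?",
            {
                "yes": "Run a subject search in a discipline database.",
                "no": "Start with a keyword search, then refine.",
            },
        ),
        "evaluate a source": DecisionPoint(
            "Is the source peer reviewed?",
            {
                "yes": "Check currency and author credibility.",
                "no": "Weigh authority and bias before using it.",
            },
        ),
    },
)


def walk(node, answers):
    """Follow a sequence of answers to an endpoint action."""
    for answer in answers:
        node = node.options[answer]
    return node


print(walk(edm, ["find sources", "no"]))
```

Each complete path through such a tree corresponds to one decision sequence a student can practice, which is what the problem bank described next exercises.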
The second part of the DBL instructional module, the problem bank, provides practice problems that exercise the students’ decision-making abilities within the framework of the model. This helps students build their own schemas, which will inform future decision-making. The problem bank scenarios expose different paths in the EDM; in the present case, students practiced two paths in pre-class work, including:
- A researcher looking for new sources in the engineering realm using a subject search. The researcher finds a conference paper that is relevant, current, and has credible authors.
- A researcher looking for new sources in the engineering realm using a keyword search. The researcher finds a peer-reviewed journal article that is relevant, current, and has credible authors.
The instructor selected these scenarios to provide exposure to two different types of searches and two different types of sources. Note that the paths also contain similarities in order to provide some repetition while still offering some breadth, which is important in schema formation.31 Both scenarios were carefully chosen to provide clear-cut answers at each decision point. In contrast, the in-class scenario was a “live” example from a student in attendance who offered a topic for discussion, giving students experience with a less controlled, “real-world” application.
To assist the learner in making correct decisions through the scenarios in the problem bank, topical training modules were available at each key decision point. A key method used in connection with these modules is “just in time, just enough” training,32 where subject matter related to this decision making is segmented into small, digestible pieces and presented to the student at the time of need. The method chosen for presenting this information in this study is a simple slideshow with one to four pages of content. An example page from an instructional module can be seen in figure 2.
Figure 2. Example Page from a Topical Training Module
Assessment of Treatments
To assess specific outcomes of this particular IL training, the author employed a pre-/post-testing strategy using course-specific test questions. In order to minimize potential barriers to engagement in the study, pre- and post-tests focused on a few essential competencies (see last column of table 1). Each test required approximately five minutes to complete.
The test design process took care to ensure equivalence between pre- and post-tests in order to establish a valid basis for comparison. In this process, two options for testing were considered:
- use of the same questions for both tests; and
- use of different but similar question sets (“A” and “B”), applying set A for one of the tests and set B for the other.
Each of these options offers trade-offs. Option 1 ensures equivalence of the questions but may introduce test bias due to question familiarity.33 This bias is a real concern in this study, since scheduling constraints made it impractical to ensure a substantial time buffer between pre- and post-test; using this option would therefore call into question the internal validity of the testing. Option 2, on the other hand, minimizes the effects of test bias but does not ensure equivalence of questions, constituting an “instrumentation” threat to the validity of the testing.34
While both of these threats may obscure measurement of true change in individual students’ abilities, they do not prevent comparison of two treatment groups, if such groups are equivalent at the outset. Further, option 2 does allow a measurement approaching true change in ability of the overall group, if the test questions used for pre- and post-tests are swapped for various subgroups of students. This helps to separate changes in measured student performance due to differences in test question difficulty from those due to the treatment, and leaves a reasonable (averaged) measure for overall improvement in performance. Because of these affordances, the author selected option 2.
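The logic behind this choice can be made explicit with a short worked example. Suppose, hypothetically, that instruction produces a true mean gain Δ and that test B is uniformly easier than test A by an offset δ. If one subgroup takes A as the pre-test and B as the post-test while the other subgroup does the reverse, then:

```latex
% Hypothetical counterbalancing arithmetic (delta = difficulty offset)
\begin{align*}
  g_1 &= \mathrm{post}_B - \mathrm{pre}_A = \Delta + \delta \\
  g_2 &= \mathrm{post}_A - \mathrm{pre}_B = \Delta - \delta \\
  \tfrac{1}{2}\,(g_1 + g_2) &= \Delta \quad \text{(the difficulty offset cancels)}
\end{align*}
```

Averaging the measured gains across the two subgroups thus recovers the true improvement, even though no individual student’s change score is free of the difficulty offset.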
Testing
Preliminary Module Testing
The author provided new course materials, including the EDM, problem scenarios, and supporting topical modules, to faculty peers and student library employees for initial testing. Two student assistants provided helpful feedback during early rapid prototyping of the model and scenarios, shedding light on the time burden and the clarity of the materials. This feedback helped improve clarity and relatability of the content. Then, two instructional librarians and two trusted and experienced teaching colleagues provided further critique of the complete prototype of the module.
Pilot Study
Following initial module testing and refinement, the author conducted a pilot study in a live classroom setting, including eleven fifty-minute instructional sessions comprising seventy students in total. Following the pilot study, the qualitative feedback received from students and colleagues and quantitative feedback from quizzes helped inform adjustments to course content and delivery for the main study.
Formal Testing
Formal testing extended over the course of one full year and an additional semester, encompassing winter semester, spring and summer terms, and the fall semester of 2019, and concluding with winter semester ending in April 2020. During this phase of the project, the author taught fifty-nine instructional sessions. Twenty-nine sessions (49%) received the DBL training and thirty (51%) received standard lecture treatment.
Midway through the study, the author evaluated pre- and post-tests for their effectiveness: questions that were less discriminative of student performance (e.g., those with high scores on both pre- and post-tests) were replaced, and an effort was made to rebalance the difficulty of tests A and B (the revised versions are hereafter distinguished as tests C and D).
Results
Pre- and post-tests each comprised five multiple choice and true/false questions. Some questions had multiple parts; others had multiple correct answers. In these cases, each part was treated as a separate response for scoring. All responses received equal value, and no weighting factors were applied to distinguish questions based on difficulty. An analyst at the library conducted t-tests and analysis of variance on the collected data, using a general linear model (GLM) procedure in the SAS® statistical package.
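As an illustration of this scoring scheme, here is a brief sketch in Python; the equal-weight, per-part scoring rule is from the text above, while the answer keys and responses are invented.

```python
# Each multi-part question contributes one equally weighted response
# per part; a student's score is the fraction of responses correct.
# The answer key and student responses below are hypothetical.
answer_key = {"q1": ["a"], "q2": ["true"], "q3": ["b", "d"]}  # q3 has 2 parts
student = {"q1": ["a"], "q2": ["false"], "q3": ["b", "c"]}

total = sum(len(parts) for parts in answer_key.values())
correct = sum(
    1
    for q, parts in answer_key.items()
    for expected, given in zip(parts, student.get(q, []))
    if expected == given
)
print(f"score = {correct}/{total} = {correct / total:.3f}")  # 2/4 = 0.500
```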
Influence of Instruction on Overall Student Performance
Using a paired samples t-test comparing pre-test and post-test scores, the research team found a mean increase in student test performance of 9.6 percentage points for the whole group following the instructional treatments (see table 2). A p-value of less than 0.001 indicates that this increase is statistically significant (for the purposes of this study, a p-value < 0.05 is considered statistically significant), as would be hoped in an instructional setting. The magnitude of the change (approaching ten percentage points) indicates a practical difference as well. Here, a “practical” difference is defined as one that is not only statistically significant but large enough to be meaningful in terms of desired student outcomes for the course. This judgment must also factor in the recognition that developing a new teaching method requires additional effort; an instructor must therefore determine whether the magnitude of the gains in student performance justifies the additional time spent preparing and teaching the new models. In an educational environment, a difference in performance becomes more practical to a student as it improves the student’s letter grade. While no letter grade was awarded in this particular study, grade impact serves as a useful guideline for determining practicality in this setting. By that standard, an improvement of ten percentage points can be considered practical, as it would generally move a student to a higher letter grade.
Table 2. Change in Mean Test Scores, Pre- vs. Post-

| Test Group | Pre-test Mean Score | Post-test Mean Score | Difference | p |
|---|---|---|---|---|
| DBL | 0.6664 | 0.7760 | 0.1096 | <.0001 |
| Lecture | 0.6518 | 0.7342 | 0.0824 | <.0001 |
| All | 0.659 | 0.755 | 0.096 | <.001 |
Therefore, we can conclude that students’ understanding of the IL principles captured in the tests improves after this instruction. This is not a surprising finding in light of the goals of the instruction and the fact that the pre-/post-test instrument was designed to reflect those specific goals.
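For readers who wish to reproduce this style of analysis, the following is a minimal sketch of the paired comparison in Python. The study’s analysis was run in SAS; the scores below are placeholders rather than the study’s data.

```python
import numpy as np
from scipy import stats

# Placeholder data: each position is one student's score (fraction
# correct) on the pre-test and the post-test, in matching order.
pre = np.array([0.60, 0.70, 0.55, 0.80, 0.65, 0.72])
post = np.array([0.75, 0.78, 0.60, 0.90, 0.70, 0.85])

# Paired samples t-test: the same students measured before and after.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.3f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```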
Equivalence of Test Groups
Mean test scores for students taking the pretest were analyzed using an independent samples t-test comparing the populations selected to receive the DBL and the lecture treatments. As shown in table 3, the mean pre-test score of the DBL group was 1.5 percentage points higher than that of the lecture group; however, this difference is not statistically significant (two-tailed p = 0.467). This supports the premise that the two groups were equivalent at the outset, so that differences in post-test results can be attributed to the differing treatments.
Influence of Teaching Method on Student Performance
An independent samples t-test of post-test scores shows that the DBL group performed better than the lecture group on the post-tests, with a mean difference of 4.2 percentage points (table 3). The two-tailed p-value of 0.038 indicates that this is a statistically significant difference, and the magnitude of the difference suggests a borderline practical difference as well.
Table 3. Difference between Mean Test Scores of Study Groups

| Test Means | DBL | Lecture | Difference | p |
|---|---|---|---|---|
| Pre-test | 0.6664 | 0.6518 | 0.0146 | 0.467 |
| Post-test, unadjusted | 0.7760 | 0.7342 | 0.0418 | 0.0377 |
| Post-test, adjusted for test version | 0.7886 | 0.7191 | 0.0695 | 0.0028 |
Other Variables/Covariates
Comparing the means for the four test versions shows that students scored significantly higher when taking test “B” than when taking other versions, especially when it was taken as a pre-test (figure 3). This observation for tests A and B prompted the aforementioned rebalancing of tests and led to the use of tests C and D thereafter. It also calls for an analysis of variance controlling for post-test version (with the associated variations in sample size) in order to understand the effect of the different tests on the results of the study. Referring back to table 3, when accounting for these variations, the difference between the means and the associated significance increase in favor of the DBL method. This indicates an even stronger practical difference between teaching methods when controlling for test version.
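A sketch of one way to run such an analysis of variance in Python appears below. The study used the GLM procedure in SAS, so the statsmodels formulation, data frame, and column names here are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder data: post-test scores with treatment group and the
# version of the post-test each student received.
df = pd.DataFrame({
    "post_score":   [0.78, 0.74, 0.81, 0.70, 0.76, 0.69, 0.83, 0.72],
    "treatment":    ["DBL", "LEC", "DBL", "LEC", "DBL", "LEC", "DBL", "LEC"],
    "test_version": ["C", "C", "D", "D", "C", "D", "D", "C"],
})

# Linear model with the treatment effect adjusted for test version;
# type II sums of squares accommodate unequal cell sizes.
model = ols("post_score ~ C(treatment) + C(test_version)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```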
Figure 3. Comparison of Test Versions Used in Study
Engagement in Pre-class Work
Students were asked to self-report their use of the assigned pre-class modules on a scale of 1-4, with 1 representing the phrase “I did not use the tutorial” and 4 representing the phrase “I completed all sections of the tutorial.” Responses of 2 or 3 represented gradations of use between these two extremes. Participants were also asked to rate how appealing the tutorials were on a scale of 1-4, with 1 representing the phrase “I did not use the tutorial” and 4 representing the phrase “I found the tutorial both interesting and applicable to my needs.” Figure 4 shows these student responses.
Figure 4. User-Reported Use & Appeal for Pre-class Modules
As shown, self-reported usage of the DBL module at either a “full” (4 on the scale) or “substantial” (3 on the scale) level approaches 94 percent of the respondents (118 of the 126 in that group), approximately ten percentage points greater than the video module usage (100 of 120 respondents).35 The percentage of students judging the appeal of the respective modules to be both interesting and applicable (4 on the scale) are comparable (82 of 126, or 65%, in the DBL group vs. 76 of 120, or 63% in the video/lecture group); when combining those judging the appeal of the modules to be 3 or 4 on the scale, student perceptions slightly favor the videos over the DBL modules (92 of 120, or 77% of the video/lecture group, vs. 93 of 126, or 74% of the DBL group). This suggests some room for improvement, perhaps in the level of polish of delivery and focus of the DBL module.
It should be noted that, in these and the post-semester questions, students may have been motivated to inflate their reported level of participation in the tutorials, or the tutorials’ appeal, in order to please the instructor. Several factors reduce this potential effect: the instructor’s lack of grading authority in the class, the one-shot nature of the class (which tends to limit the depth of the teacher-student relationship), and the fact that the origin of the tutorials was not identified (indeed, some elements of the tutorials were created previously by an instructional design team; others were created by the instructor). More significantly, if students felt any influence to alter their assessments, both DBL and control groups would have been equally motivated to do so. Thus, the difference between the experiences of the two study groups, which is the primary quantity of interest, should still reflect a valid comparison.
Other Findings
Factors Influencing Lack of Participation
Ninety-two students (35% of those providing consent), including forty-seven from the DBL group (37% participation rate) and forty-five from the lecture group (34% participation rate), responded to post-semester survey questions relating to their overall perceptions of the various learning resources offered, including the pre-class assignment, the in-class instruction, and after-class discussions with librarians. Eight respondents (9%) indicated that they did not make use of the pre-class assignment; all but two indicated that time was a key factor in their lack of participation. The other two cited forgetting the assignment and not understanding it, respectively, as their primary reasons. The latter response came from a student assigned the DBL module, indicating a possible need for better explanation of the DBL method.
Use of Resources after Class
Post-semester surveys asked whether students used the online learning resources (DBL or videos, as appropriate) after the in-class session had concluded. Sixty-two of the respondents (67%) indicated they did not use the resources after class, most citing either that time was a constraint, or that they had no need for visiting the material further. Several stated in various ways that they had learned what they needed from their initial encounter with the material.
The remainder of the respondents had some further interaction with the learning resources they were given. Most indicated minimal use, perhaps to refresh their memory on how to find sources, although a few (six, comprising 7% of respondents) classified their use of the material as “substantial.” Of those six respondents, five were from the lecture group, who had been given the short videos as their learning resource, suggesting that the short format (one-minute videos) may be more useful for later reference than working back through the current DBL module.
Perceptions of Helpfulness
When asked to rate the helpfulness of the pre-class assignments vs. in-class work and after-class consultations, responding students from both study groups ranked the in-class session most helpful, as viewed from the end of the semester. More than 90 percent of students indicated the class was very helpful or modestly helpful (figure 5). Interestingly, those given the DBL pre-class assignment tended to find the in-class work very helpful somewhat more often (96% of respondents, or 27/28 responses) than those given the video pre-class assignment did (92% of respondents, or 22/24 responses).36 On the other hand, those given the video pre-class assignment tended to rank the assignments themselves as modestly helpful or very helpful slightly more often (79% of respondents, or 36/46 responses) than those given the DBL assignment did (72% of respondents, or 34/47 responses).
Figure 5. Student Perceptions of Helpfulness for Different Learning Resources
Discussion
Based on the post-test scores given above, the group of students receiving the DBL treatment exhibits a statistically significant improvement in post-test performance over those receiving the lecture treatment. This improvement represents a practical difference, which is important when considering that there is a cost to development and delivery of out-of-class study materials such as those employed in this study. Indeed, employing the DBL method comes with its own learning curve, as do other innovations in instructional technique, including a flipped classroom approach.
Supportive of this performance difference is evidence of greater student usage of the DBL pre-class assignment, as opposed to the usage of the videos associated with the lecture method (figure 4). As mentioned previously, student engagement in out-of-class work is an important factor affecting the efficacy of flipped classroom teaching. It is interesting to note the differences in time commitment for these two alternatives: the DBL module, which was billed as a twenty-minute activity, received greater attention than the short videos, which were billed as a five-minute activity. Possibly, mention in the introductory email of the short length of the videos may have biased perceptions of the students regarding the potential benefit of viewing them; or, perhaps the commonplace nature of the video format is less motivating to students. Alternatively, perhaps the active learning aspect of the DBL module, which includes student decision-making inside scenarios encountered in a student’s life, holds a student’s interest better than the more passive watching of videos. Whatever the reasons, this higher engagement is significant in the context of a flipped classroom, and more specifically, information literacy instruction. Often in IL instruction, intrinsic forms of student motivation are beneficial, if not essential, to maximizing the benefit of out-of-class work. Specifically in this study, no grades were attached to completion of out-of-class work associated with the instruction, increasing reliance on intrinsic motivation.
Interestingly, student perception of how interesting and useful the various assignments were, including use for reference after the session, favors the standard videos (figure 4, figure 5). While this helpfully points to a number of possible factors for consideration in the improvement of instructional materials—including length, ease of access, and degree of polish—it also indicates somewhat of a disconnect between the actual effectiveness of instructional materials and student preference, at least in this case.
One interesting finding relating to student perceptions of in-class instruction is that students assigned the DBL pre-class module were more likely than the control group to find the in-class instruction “very helpful.” This could mean that the depth of learning with the DBL pre-class assignment helped students to be better prepared to learn in class, as is the hope with a flipped classroom. Anecdotally, the author observed that the pre-class assignments for all groups (including pre-test and modules/videos) were successful in that they did prompt student questions at the beginning of class, increasing student engagement in the session. The DBL method used in this study certainly placed more depth of knowledge in front of the student prior to the in-class portion of instruction, which opened the door for deeper and more individually paced learning. Though how much and how deeply students used the supplementary “just-in-time, just enough” information was not measured, the reported high level of engagement in the module (figure 4) indicates that a high percentage of students engaged with the material enough to complete the module successfully. This could have helped students better prepare for in-class instruction and perform better on the post-test. Further work is needed to understand this possible connection.
While each instructional method had a pre-class aspect and an in-class aspect, there were key dissimilarities in their approaches. For example, the instructional videos do not explicitly present their material in context with a working problem, which may explain some of the differences in student performance. The DBL module contained this context and also required students to make decisions based on the conditions surrounding the information need. In so doing, it required active engagement from the student in order to progress through the module. The video assignment, by contrast, guaranteed no such engagement.
Certainly, one advantage of the DBL method, as with any new method, is its novelty; the new learning approach could therefore have encouraged the greater engagement shown in figure 4. However, simple curiosity seems unlikely to drive nearly 80 percent of students to full completion of the lengthier DBL assignment, compared with just over 70 percent of students completing the very short video instruction, unless deeper motivational factors are at play.
In closing, one must note that each of the tested teaching methods is a composite of techniques. In the case of DBL, some techniques are intrinsic to the method, while others are simply good instructional practices that are not unique to it (e.g., “just in time” content). Teasing out the particular contribution of each of these aspects requires more work. That said, the study clearly shows benefit to using the set of techniques associated with the DBL method. Aside from the student-centric performance gains mentioned above, other potential benefits may attach to the DBL method. For example, the instructor may benefit from the process of creating an expert decision model: creating the model is a form of mind mapping or documentation that, in this researcher’s personal experience, clarifies and organizes the domain expertise of the instructor.
Validity
Measures taken to minimize threats to validity due to pre- and post-test questions have already been discussed. Other threats to the validity of the study are addressed in part by the nature of the instructional sessions. Each library session comprises students from several different “home” sections of advanced writing; thus, students from a particular section are typically spread across several sessions and instructors, and those in a given library session are typically unaware of who will be in attendance. Because they are not taught in a single cohort, they have little opportunity to collaborate. Furthermore, those who do connect with others in different sessions are more likely to compare notes about the specific projects they are researching than about the methods used to teach search skills. Indeed, as important as learning information literacy principles and methods is, the central focus of the library session is to help students with a research project. There is little if any motivation for students to share test questions, or to share or compete on the acquisition of the technical information, since their advanced writing grades come from individual projects. These factors reduce the likelihood of cross-contamination and competition among groups.

As to instructor effects, it might be argued that a potential source of bias that could strengthen the observed difference in student performance is the instructor’s vested interest in the success of course materials that took time to develop. The author was aware of this possible bias from the outset of the study and took steps to promote unbiased delivery of content. As discussed above, care was taken in the instructional design process to ensure similarity of content such that neither group was disadvantaged (see table 1). During delivery of the content, every effort was made to provide each student with the best possible resource for achieving the learning objectives, regardless of the teaching method assigned to the session. This deliberate approach minimized the likelihood of unconscious bias.

Notwithstanding these measures, the author recognizes that future studies could put more distance between those instructing and those carrying out the study, perhaps by employing an observer in the instructional sessions to note possible instructor bias. In this study, it was not practical to dissociate the development of the instructional content completely from its delivery, since the content reflects the unique offering of the instructor. Other practical issues, including cost and staffing, favored the author’s assumption of multiple roles; this remains a limitation of the study.
During the conduct of the study other sources of potential instructor bias were reduced or eliminated. Specifically, tests were designed with multiple choice and true/false answers to avoid the need for judgment-based test scoring. Also, the data analysis task was outsourced to an impartial third party—the institution’s library assessment team.
Limitations and Future Work
Beyond the limitations of the study just highlighted, another limitation is that the study’s scope was limited to students within the instructional reach of the author (engineering and technology students at the author’s institution). The organization of the DBL methodology, including its process-based thinking, may be better aligned with the learning styles of students in these disciplines, as opposed to those in other disciplines. Further work with advanced writing students in other disciplines is needed to understand this possibility.
Furthermore, the study does not take a longitudinal view of learning, namely retention. Towards the end of the current study, the author launched a pilot study to assess this aspect, and results suggested this might be an area for fruitful effort in the future.
Finally, as has already been noted, the author chose a rather high-level EDM for this testing. One strength of this type of model is that it exposes the student to the bigger-picture process; thus, it models and contextualizes decision making within the overall process. However, a limitation of this decision is that model paths became lengthy, thereby making it more difficult to provide much repetition of decision paths in student exercises, particularly when factoring in participation cost for students. Likewise, the opportunity to provide a broad range of problems that would help the student transfer knowledge to different scenarios is limited. This makes the process of expert schema-building less ideal.37 The author is presently restructuring a DBL model to shorten decision paths, providing for further breadth and repetitions of decision-making.
Acknowledgement
The author expresses deep appreciation to Dr. Kenneth Plummer from the University’s Center for Teaching and Learning, who provided training relating to the DBL method and substantial insight to guide content development. Other key contributors include Alysia Larsen, a student assistant who provided much feedback and testing support during the development stages of the DBL module, and Mr. Brian Roberts and Dr. Holt Zaugg, both from the Library Assessment Office, who offered critique and input to the data collection methods used, and assisted with the analysis.
Notes
1. “Framework for Information Literacy for Higher Education,” Association of College and Research Libraries (ACRL), January 11, 2016, accessed August 12, 2021, http://www.ala.org/acrl/standards/ilframework.
2. Ibid.
3. “How Experts Differ from Novices,” in How People Learn: Brain, Mind, Experience, and School, expanded Ed., eds. John D. Bransford, Ann L. Brown, and Rodney R. Cocking (Washington, DC: National Academy Press, 2000), 31–50, https://doi.org/10.17226/6160.
4. Kevin Seeber, “This Is Really Happening: Criticality and Discussions of Context in ACRL’s Framework for Information Literacy,” Communications in Information Literacy 9, no. 2 (2015): 157–63, https://doi.org/10.15760/comminfolit.2015.9.2.192.
5. “Framework for Information Literacy,” ACRL.
6. Fernand Gobet, “Chunking Models of Expertise: Implications for Education,” Applied Cognitive Psychology 19, no. 2 (2005): 183–204, https://doi.org/10.1002/acp.1110.
7. Barbara M. Junisbai, Sara Lowe, and Natalie Tagge, “A Pragmatic and Flexible Approach to Information Literacy: Findings from a Three-Year Study of Faculty-Librarian Collaboration,” The Journal of Academic Librarianship 42, no. 5 (2016): 604–11, https://doi.org/10.1016/j.acalib.2016.07.001.
8. Becky Ramsey Leporati, Pam Bach, and Lisa Hong, “Learning to Evaluate Sources: Comparing Teaching Modalities and Student Outcomes,” portal: Libraries and the Academy 19, no. 2 (2019): 233–52, https://doi.org/10.1353/pla.2019.0014.
9. Sara Arnold-Garza, “The Flipped Classroom Teaching Model and Its Use for Information Literacy Instruction,” Communications in Information Literacy 8, no. 1 (2014): 7–22, https://doi.org/10.15760/comminfolit.2014.8.1.161.
10. Olivia Castello and Alex Pfundt, “Engaging Learners through Self-Guided Tutorials: Implementing and Assessing a Flipped Classroom Model for Information Literacy Instruction,” The Journal of Creative Library Practice (May 4, 2017), https://creativelibrarypractice.org/2017/05/04/engaging-learners-through-self-guided-tutorials/.
11. Natalia Tingle, “Taking Care of Business (before Class): Information Literacy in a Flipped Classroom,” Journal of Business & Finance Librarianship 23, no. 2 (Nov 30, 2018): 183–98, https://doi.org/10.1080/08963568.2018.1510254.
12. Andrea Brooks, “Information Literacy and the Flipped Classroom: Examining the Impact of a One-Shot Flipped Class on Student Learning and Perceptions,” Communications in Information Literacy 8, no. 2 (2014): 225–35, https://doi.org/10.15760/comminfolit.2014.8.2.168; Michael C. Goates, Gregory M. Nelson, and Megan Frost, “Search Strategy Development in a Flipped Library Classroom: A Student-Focused Assessment,” College & Research Libraries 78, no. 3 (2017): 382–95, https://doi.org/10.5860/crl.78.3.382.
13. Rebecca Hill Renirie and Shelley Harper, “Flipped Library Instruction and Scholarly Resources: A Citation Analysis,” Journal of Library & Information Services in Distance Learning 13, no. 4 (2019): 339–52, https://doi.org/10.1080/1533290X.2020.1713277.
14. Eduardo Rivera, “Flipping the Classroom in Freshman English Library Instruction: A Comparison Study of a Flipped Class versus a Traditional Lecture Method,” New Review of Academic Librarianship 23, no. 1 (2017): 18–27, https://doi.org/10.1080/13614533.2016.1244770; Goates, Nelson, and Frost, “Search Strategy Development,” 382–95; Brooks, “Information Literacy and the Flipped Classroom,” 225–35.
15. Gerardo Gómez-García, Francisco-Javier Hinojo-Lucena, María-Pilar Cáceres-Reche, and Magdalena Ramos Navas-Parejo, “The Contribution of the Flipped Classroom Method to the Development of Information Literacy: A Systematic Review,” Sustainability 12, no. 18 (2020): 7273, https://doi.org/10.3390/su12187273.
16. Leporati, Bach, and Hong, “Learning to Evaluate Sources,” 233–52.
17. Aurelia Minuti, Karen Sorensen, Rachel Schwartz, Winifred S. King, Nancy R. Glassman, and Racheline G. Habousha, “Librarians Flip for Students: Teaching Searching Skills to Medical Students Using a Flipped Classroom Approach,” Medical Reference Services Quarterly 37, no. 2 (2018): 119–31, https://doi.org/10.1080/02763869.2018.1439184; Arnold-Garza, “The Flipped Classroom Teaching Model,” 7–22; Goates, Nelson, and Frost, “Search Strategy Development,” 382–95.
18. Tingle, “Taking Care of Business (before Class),” 183–98; Rivera, “Flipping the Classroom in Freshman English,” 18–27; Leporati, Bach, and Hong, “Learning to Evaluate Sources,” 233–52.
19. John B. Biggs, Teaching for Quality Learning at University, 2nd ed. (Philadelphia, PA: Society for Research into Higher Education & Open University Press, 2003), 42–43.
20. Rebecca L. Sansom, Erica Suh, and Kenneth J. Plummer, “Decision-Based Learning: ‘If I Just Knew Which Equation to Use, I Know I Could Solve This Problem!’” Journal of Chemical Education 96, no. 3 (Mar 12, 2019): 445–54, https://doi.org/10.1021/acs.jchemed.8b00754.
21. Gobet, “Chunking Models of Expertise,” 183–204.
22. Richard H. Swan, Kenneth J. Plummer, and Richard E. West, “Toward Functional Expertise through Formal Education: Identifying an Opportunity for Higher Education,” Educational Technology Research and Development 68, no. 5 (Oct. 2020): 2551–68, https://doi.org/10.1007/s11423-020-09778-1.
23. Kenneth Plummer, Richard H. Swan, and N. Lush, “Introduction to Decision-Based Learning,” (address, 11th International Technology, Education and Development Conference, Valencia Spain, March 2017).
24. Sansom, Suh, and Plummer, “Decision-Based Learning,” 445–54.
25. Kenneth Plummer, Stephan Taeger, and Melissa Burton, “Decision‐Based Learning in Religious Education,” Teaching Theology & Religion 23, no. 2 (June 2020): 110–25, https://doi.org/10.1111/teth.12538.
26. Anna Katz, “Fact or Fiction: Comparing BYU Library’s Decision Based Learning and Ysearch Source Evaluation Modules,” (master’s thesis, Brigham Young University, Provo, Utah, 2019).
27. The author notes that the scheduling system used in the study gave students some mobility to exchange their session time for another open time. This opened the potential for students to change from a DBL session to a lecture-based session after they had received pre-session assignments. When this occurred more than 24 hours prior to the session, the author sent pre-session material for their new session type to the affected student and noted the change in the dataset. If students changed into a different type of session less than 24 hours prior to the session, the author excluded them from the study. While a certain number of students every semester did indeed swap sessions, the number actually swapping session types was minimal (8 of the 225 fully participating students). By minimizing the number of open seats in each session, the author was able to keep class constitution reasonably static.
28. “Advanced Writing: Before My Library Session,” Harold B. Lee Library, June 17, 2021, https://guides.lib.byu.edu/c.php?g=216565&p=1429520.
29. David S. Pixton, “Information Literacy and Decision-Based Learning,” in Decision-Based Learning: An Innovative Pedagogy That Unpacks Expert Knowledge for the Novice Learner, eds. Nancy Wentworth, Kenneth Plummer, and Richard Swan (Bingley, UK: Emerald Group, 2021), 136–42.
30. Plummer, Taeger, and Burton, “Decision-Based Learning in Religious Education,” 110–25.
31. Gobet, “Chunking Models of Expertise,” 183–204.
32. Richard H. Swan, “Why Decision-Based Learning Is Different.” In Decision-Based Learning: An Innovative Pedagogy That Unpacks Expert Knowledge for the Novice Learner, edited by Nancy Wentworth, Kenneth Plummer, and Richard Swan (Bingley, UK: Emerald Group, 2021), 6.
33. Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago, IL: Rand McNally College Publishing Co, 1963), 7–9.
34. Ibid.; also see Phillip Rumrill and James Bellini, Research in Rehabilitation Counseling: A Guide to Design, Methodology, and Utilization, 3rd ed. (Springfield, IL: Charles C Thomas, 2018).
35. The reader will note that the number of students in each treatment group that provided feedback on their engagement with the pre-class materials (246) is greater than the number that completed both pre- and post-tests (225). This difference largely represents those who did not complete a pre-test, but who attended the class and offered feedback.
36. The question relating to the student’s perception of the lecture was not originally included in the post-semester questions but was added midway through the study during the rebalancing of tests mentioned above. Therefore, student feedback in this area is more sparse.
37. Gobet, “Chunking Models of Expertise,” 183–204.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.