Pedersen, S. (2004). Designing and researching enhancements for online learning: A commentary on Veal, Brantley, and Zulli. Contemporary Issues in Technology and Teacher Education [Online serial], 4(2). https://citejournal.org/volume-4/issue-2-04/science/designing-and-researching-enhancements-for-online-learning-a-commentary-on-veal-brantley-and-zulli

Designing and Researching Enhancements for Online Learning: A Commentary on Veal, Brantley, and Zulli

by Susan Pedersen, Texas A&M University

This commentary on the recent article, “Developing an Online Geology Course for Preservice and Inservice Teachers: Enhancements for Online Learning” by Veal, Brantley, and Zulli (2004), examines two issues related to the design of online courses. First, the concept of affordances is used to compare face-to-face and online classes for insights into which instructional strategies are likely to be useful in both settings and which are not. Second, design-based research, the methodology employed by Veal et al. in their study, is considered for its potential to guide the investigation of innovations in online classes.

 

As a faculty member teaching graduate classes online, I read Veal, Brantley, and Zulli’s (2004) recent article, “Developing an Online Geology Course for Preservice and Inservice Teachers: Enhancements for Online Learning,” with great interest. In the rush to capitalize on the emerging potentials of new technologies and meet the needs of a new audience – the distance student – instructors often must design their courses in the absence of knowledge about effective techniques for distance education. Information about the challenges faced by other educators and the strategies they used to enhance learning successfully within this new medium is, therefore, needed. Veal et al. described a two-phase study in which they used the literature on successful practices in face-to-face classes to identify and then implement 10 enhancements to an online course. They examined the impact of this online course on teachers’ content knowledge, then investigated teachers’ perceptions of the effectiveness of these enhancements to their own learning and their potential usefulness for their future teaching. There is a strong need for such research. Important differences exist between online and face-to-face classes in terms of the types of instructional strategies and interactions possible, so assuming that strategies effective in one setting would successfully generalize to the other is unwise.

As an instructional designer, I found this article interesting because it highlights a shortcoming in my own field. Instructional design models have been employed for over half a century to create effective instructional materials reliably. These models provide a combination of procedures and heuristics that, if used properly, result in instruction that helps learners reach established learning goals. For most of these models, however, the choice of delivery medium is made late in the design process or given little attention. For example, the best-known instructional design model, the Dick and Carey model (Dick, Carey, & Carey, 2001), leaves the selection of delivery medium until the stage in which the instructional strategy is designed, after objectives have been established and evaluation instruments developed. Morrison, Ross, and Kemp (2004) posited in their model that instructional design proceeds in a flexible, nonlinear fashion, with decisions about any given element (e.g., learner analysis, task analysis, objectives, sequencing) affecting decisions about the others in a circular, iterative manner. Even in this model, however, where delivery medium may be considered early in the design plan, the authors give it only a superficial treatment as part of an analysis of the instructional context. Even rapid prototyping (Nixon & Lee, 2001; Tripp & Bichelmeyer, 1990), with its emphasis on development from the early stages of a project, gives little consideration to the relative merits or impact of different delivery media on design decisions.

In real-world design, the delivery medium is often stipulated at the very beginning of the project, perhaps as early as the topic itself. For example, a company that decides it needs to offer training on new procedures may specify that it be delivered by an onsite instructor at its training center. Or, as is happening at universities across the United States, a decision is made to offer a course online without any consideration of how this will impact course objectives or content. Yet, as was illustrated in Veal et al.’s article by teachers’ complaint that a course in geology should include hands-on activities, the delivery medium can affect what objectives are set for the course, what instructional strategies are used, and how assessment is conducted. Because so little attention has been paid to the impact of the delivery medium on the choices an instructor or an instructional designer must make, we are ill-equipped to make good use of what we already know about best practices in the design of online courses.

Finally, as a researcher of instructional innovations, I believe that Veal et al.’s choice of a design-based research methodology bears consideration. Especially because of the speed at which distance education is becoming widespread and the lack of an existing body of adequate research on this topic (Moore, Winograd, & Lange, 2001), a research methodology that examines innovation as it happens is needed.

In the remainder of this commentary, I offer my own views on two topics relevant to Veal et al.’s article. First, I draw on the concept of affordances to consider which instructional strategies shown to be effective in face-to-face classes are likely to be effective in online courses and where we might look for differences. Second, I examine the research methodology they used in more depth, commenting on what I see as its profound potential to provide meaningful insights into how to enhance learning in online courses.

Drawing on Best Practices in Traditional Classes

As with any innovation, we must ask how much of what we already know is applicable to the new conditions it creates. For distance education, how much of what we know about good instruction in traditional face-to-face settings is relevant to online settings? Is “good” instruction good in any setting? Veal et al. argued that simply “importing existing classroom-based models of instruction to an online format is not appropriate,” yet they rightly draw heavily on these existing models to find methods of enhancing their online course.

To hypothesize about which practices shown to be effective in face-to-face settings are likely to also be effective in online courses, a comparison of the two would seem to be in order. It is necessary, however, to start with a disclaimer. Such a comparison can seem to reduce instruction to its constituent parts, giving the false impression that we educators can help ourselves, smorgasbord-style, to whatever instructional moves seem most expedient. This is not the case. The use of a variety of instructional strategies together creates a synergy, resulting in a greater impact than can be accounted for by any one strategy used independently. Therefore, it is necessary to use such a comparison between face-to-face and online settings only as a starting point for investigation, submitting the hypotheses it offers to rigorous examination and building, along the way, new understandings of how a particular strategy interacts with the instructional approach as a whole.

With that limitation in mind, let us compare face-to-face and online instruction. Rather than viewing them only as alternate methods of conveying information, it is necessary, as Jonassen, Campbell, and Davidson (1994) argued, to examine each medium for the ways in which its affordances can be exploited to facilitate learners’ construction of knowledge. Affordances are the possibilities for action that a given object or environment offers to those who use it (Gibson, 1977). For example, water affords floating to objects less dense than it is. A chair affords sitting and reaching objects higher than one’s grasp. For the purposes of this comparison, I examine online courses similar to the one described by Veal et al. Such classes make use of a web site with links to a variety of resources on other sites and some type of conferencing software (e.g., WebCT, FirstClass, discussion boards) that allows interaction among the members of the class and the instructor. This comparison could, of course, be extended to classes that utilize other technologies, such as videoconferencing.

Online classes afford many of the same instructional strategies as face-to-face classes. For example, both make use of well-structured text to present information. Therefore, we might expect several of the enhancements that Veal et al. used, such as advance organizers and review sections, to be effective in both settings, which their study suggested to be the case. In contrast, face-to-face classes afford the easy distribution and sharing of equipment and other materials needed in hands-on activities. They also afford observation of students’ actions by an instructor, with flexible questioning and feedback between the two that can correct or extend students’ performance. Online courses do not readily offer these affordances, a limitation that affects the types of hands-on activities that can be used and perhaps whether they can be used at all. Thus, our understanding of how to support active learning, based on research conducted in face-to-face classes, may not hold for online classes. Recognizing this difference can help us anticipate the types of decisions an instructor will make and the kind of complaint Veal et al. reported about a lack of hands-on activities.

So what else do these two settings have in common that would lead us to expect the instructional strategies gleaned from research on face-to-face classes to be effective in online classes? The similarities are actually far greater than the differences. Both afford the use of multimedia: video, audio recordings, and static images coordinated with narration can be used to gain attention, activate prior knowledge, and present information effectively in both settings. The use of multimedia in online classes requires a greater front-end investment of time for development, but once created, it can be reused, whereas a lecture with slides must be repeated each time a course is offered. Demonstrations are also afforded by both, though they are likely to be live in one setting and videotaped in the other. Instructor modeling, a key strategy in the cognitive apprenticeship approach (Collins, Brown, & Newman, 1989), can easily be accomplished through video. And a whole host of strategies that are effective in helping learners process verbal information (whether presented orally or in text) should work equally well in both settings. These include mnemonics, paraphrasing, outlining, categorizing, self-questioning, and the use of diagrams.

Still, there are affordances offered by a face-to-face delivery method that are unavailable or awkward for use in online classes. The rapid exchange of ideas possible in a face-to-face class is only available if the online instructor schedules “live” sessions, an option which is still awkward if only text-based conferencing is used. Though still possible, a Socratic dialog may suffer in the slow exchanges characteristic of asynchronous discussions. In the example of modeling given above, it is far more difficult to support student questioning about the modeling in online classes than in face-to-face ones (though repetition of the modeling is much easier, as video places this under student control). Coaching, another instructional strategy advocated in the literature on cognitive apprenticeship, in which the instructor watches the learners’ early performances and provides feedback and suggestions, is more likely to be used in face-to-face classes than in online ones, because face-to-face courses readily afford observation of student performances.

It is easy to assume during this early period of distance education that face-to-face classes offer more affordances to support instruction and learning. Yet in my own teaching, I have found that online courses offer a number of affordances not available in face-to-face classes, ones I would be loath to part with. I’ve come to view face-to-face classes as too constrained by time to permit adequate reflection by students. Also, face-to-face classes are, in my opinion, marred by power structures in which some students dominate discussion while others remain silent. When the instructor requires participation in an online class, all students are afforded “air time,” and as students come to realize which of their peers post the most interesting messages, the “voices” heard best belong to those who have earned the greater share of attention through their thoughtfulness rather than their tenacity. Other affordances offered by an online format that I have come to find indispensable include the following:

  • Peer feedback. In face-to-face classes, students may receive limited peer feedback during class, but peer feedback is far more manageable in online courses. Because students can view each other’s work for as long as they please and because the feedback they provide is written, students benefit both from the reflection necessary to give feedback and from the new insights they gain from the feedback they receive. Also, as with discussions, all students are involved in peer feedback in online classes, not just those who are most likely to speak up in class.
  • An archive of participation. Because students’ contributions to discussion in an online class are written, there is an archive of their work. I have found this useful for assessing the thinking and contributions of individual students, as well as for identifying emerging understandings and misconceptions in the class as a whole.
  • Interactivity. Multimedia is most effective when it offers individual learners control over its pace, sequence, and content displayed. In a face-to-face class, the instructor controls these elements, meaning that learners are passive recipients of this information rather than active decision makers about what they will view, when, and for how long.
  • Repeated viewings of demonstrations. Live demonstrations have the advantage that the learners can ask questions and the instructor can adjust the demonstration to manage difficulties. Videotaped demonstrations have the advantage of allowing students to view the demonstrations as many times as they need to in order to learn the skill. I’ve found the latter particularly useful in that most student questions are resolved simply by seeing the demonstration multiple times and referring back to it as they attempt to apply a skill themselves.

Evidence of the value of these affordances and their effect on learning has yet to be examined sufficiently, so it would be premature to offer prescriptions for the design of online courses based on them. Yet the similarities between face-to-face and online classes suggest that many of the instructional strategies effective in one setting are likely to be effective in the other, while the differences between them suggest that online courses may afford new strategies that can benefit learning. It is still necessary, as Veal et al. pointed out, to conduct research to test the benefits of existing strategies, as well as to investigate new ones. Simply relying on research conducted in one setting to inform the other ignores the holistic nature of teaching and learning and the impact that changing any aspect of instruction has on its quality as a whole. Also, such a research agenda can be used not only to inform the design of online courses but also to better define the conditions under which a given instructional strategy is effective.

Investigating Innovations: Design-Based Research

Veal et al. used a “design study approach” to investigate their course enhancements across two phases of their study, with the results of Phase 1 informing the design of Phase 2. This approach, also called “design-based research” (The Design-Based Research Collective, 2003) and “design experiments” (Brown, 1992), is an emerging methodology for the examination of interventions in which development and research take place within several cycles of design, use, analysis, and redesign. The purpose of this iterative process is not only to enhance the particular intervention being investigated but also to develop theories to account for the impact of the intervention and models to inform the design of other innovations. This approach has garnered great interest recently, with a theme issue of Educational Researcher devoted to it (Kelly, 2003).

This approach holds a number of potential advantages for the investigation of online classes. First, because design-based research investigates innovations as they evolve within real-world settings, it reflects the way in which online courses evolve as instructors design, analyze, and redesign their courses. This process of evolution in online courses will be especially pronounced in the foreseeable future, both as new research impacts practice and, more dramatically, as new technologies make new practices possible. This necessary evolution in course design can best be examined through a methodology aimed at deriving insights across multiple iterations of an innovation.

Second, a major thrust of design-based research is the development of theories for how learning occurs in specific situations (Cobb, Confrey, diSessa, Lehrer, & Schauble, 2003). As researchers collect data about the conditions created by an innovation and the impact of that innovation on learning, motivation, or other constructs, patterns emerge that suggest how the innovation functions within the environment and why. These patterns can lead to theories about how learning occurs in online settings, theories capable of informing designers of these classes about specific conditions conducive to learning or specific supports instructors can use to advance student thinking. Such theories, as Cobb et al. (2003) pointed out, are likely to be quite humble, targeting only limited settings, but this is exactly the type of understanding that can help us recognize how online settings differ from face-to-face ones and how those differences affect decisions about instructional strategies.

To illustrate how design-based research might work in online classes, I offer an anecdote from my own experiences teaching online that could best be investigated using this methodology. Online discussion is a key instructional strategy in one of my classes, and in order to get students to look at this approach from multiple perspectives, each student serves as a facilitator for one of the units of the course. As part of that role, students evaluate their peers’ contributions to the discussion. The goal of facilitation is for students to reflect meaningfully on what constitutes high quality participation, both so they can become better participants and so they can support quality interaction in future settings where they collaborate online.

For a variety of reasons, the quality of discussions in my class varied widely by group. To address that, I worked with facilitators to develop the rubric shown in Table 1 to assess student participation. This rubric included five categories for evaluation, with four possible levels of performance described within each category. One of these levels was “Exceeds Expectations.” Students did not receive extra credit for exceeding expectations, but this level was included in the rubric so that facilitators could acknowledge excellence. The rubric was revised with successive groups of facilitators. The interesting result was that the quality of the online discussions rose dramatically, with most students rightfully being evaluated as exceeding expectations in most categories. Why?

My informal theory is that the rubric (a) helped clarify expectations and define excellence in online discussion and (b) caused the facilitators to reflect on the nature of effective interactions, which made them better participants in subsequent units. This is indeed a modest theory, but if supported, it could in turn inform theories about the role that reflection and peer evaluation play in enhancing performance or in becoming a member of a community of practice. A study employing a design-based research approach could examine the impact of this rubric as it evolved across the semester, collecting data on changes in the content of students’ postings and in their conceptions of what constitutes a quality discussion. Such rich data would provide a much better basis for a theory about the impact of the rubric and perhaps about how best to encourage substantive participation in online discussion.

My anecdote also illustrates a potential problem with design-based research: It is easy to slip into anecdotal evidence and make causal claims precipitously. Shavelson, Phillips, Towne, and Feuer (2003) emphasized the paramount importance of ruling out competing hypotheses before making knowledge claims and argued that design-based research is particularly vulnerable to problems in this area because it is used to investigate innovations in complex environments. In real-world educational environments, literally dozens of variables are in play in any given situation. This is certainly the case for online courses, where the demands of the content are compounded by issues related to distance technologies and students’ lack of experience with the format.

For example, in my anecdote, could experience with the online format and peer modeling of quality contributions to online discussion account for improvements in the discussions across the semester? Did students simply find more to say about readings in later units than in earlier ones? A rigorous investigation of the impact of the rubric on participation would need to examine these competing hypotheses. For this reason, design-based research is best conducted across several studies, with different studies approaching the issue in different ways and with the results of each study informing the next. For example, after the rubric is refined in early studies, setting up an experiment in which a group using the rubric is compared to a control group not using it could establish a basis for claims of causality. Examining its impact in other classes that employ extensive online discussions would support claims of generalizability. Such a line of research could contribute to an understanding of best practices in online classes and theories of how online learning is best supported.

Veal et al. collected data suggesting that teachers found the enhancements to the online course effective and useful for their own teaching, data that also offered some insights into which enhancements were most effective. Additional data from interviews and observations over the course of the class might have yielded a more robust understanding of how these enhancements affected the ways in which teachers worked in the course. It might also have made it possible to begin developing hypotheses about how teachers develop content knowledge of geology in online classes and how course design can best support that development. Developing such hypotheses is an important goal of design-based research and could guide future investigations that could lead to strong knowledge claims.

In conclusion, the rapid growth of online education has created a gap between research and practice, with our need for understanding of effective practices in online settings outstripping our knowledge. As Veal et al. pointed out, existing knowledge based on research conducted in traditional settings is a good starting point for finding ways to enhance online learning, but it is not sufficient. Likewise, a comparison of the different affordances offered by different settings can help us identify instructional strategies that are likely to be as successful in one setting as another. It remains necessary to examine those strategies in the new setting in order to develop understandings about how they impact learning and how to implement them effectively. Design-based research is emerging as an effective methodology for studying these types of innovations in action, but for this methodology to yield credible knowledge claims and theories, a rigorous line of research, with each study laying the groundwork for the next, is necessary.

 

References

Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141-178.

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.

Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning and instruction: Essays in honor of Robert Glaser. Hillsdale, NJ: Lawrence Erlbaum Associates.

The Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5-8.

Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th ed.). New York: Longman.

Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. D. Bransford (Eds.), Perceiving, acting, and knowing (pp. 67-82). Hillsdale, NJ: Lawrence Erlbaum Associates.

Jonassen, D. H., Campbell, J. P., & Davidson, M. E. (1994). Learning with media: Restructuring the debate. Educational Technology Research and Development, 42(2), 31-39.

Kelly, A. E. (Ed.). (2003). The role of design in educational research [Special issue]. Educational Researcher, 32(1).

Moore, G. S., Winograd, K., & Lange, D. (2001). You can teach online. New York: McGraw Hill.

Morrison, G. R., Ross, S. M., & Kemp, J. E. (2004). Designing effective instruction. Hoboken, NJ: John Wiley & Sons.

Nixon, E. K., & Lee, D. (2001). Rapid prototyping in the instructional design process. Performance Improvement Quarterly, 14(3), 95-116.

Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher, 32(1), 25-28.

Tripp, S. D., & Bichelmeyer, B. (1990). Rapid prototyping: An alternative instructional design strategy. Educational Technology Research and Development, 38(1), 31-44.

Veal, W., Brantley, J., & Zulli, R. (2004). Developing an online geology course for preservice and inservice teachers: Enhancements for online learning. Contemporary Issues in Technology and Teacher Education [Online serial], 3(4). Retrieved July 23, 2004, from https://citejournal.org/vol3/iss4/science/article1.cfm

 

Susan Pedersen
Texas A&M University
email: [email protected]

 

Table 1
Rubric Used to Assess Student Participation in an Online Discussion

 

Each category of participation was evaluated at one of four levels: Exceeds Expectations, Meets Expectations, Falls Short of Expectations, or Does Not Meet Expectations.

Quality of initial postings
  • Exceeds Expectations: Student’s initial responses to the two stimulus questions were original and insightful and provided rich material for group discussion.
  • Meets Expectations: Student provided original and thoughtful responses to the two stimulus questions.
  • Falls Short of Expectations: Student provided a response to the two stimulus questions, but the responses did not show depth of understanding of the readings or merely echoed ideas already expressed by other group members; or student provided an initial response to only one of the stimulus questions.
  • Does Not Meet Expectations: Student did not post initial responses to the stimulus questions or posted brief, superficial responses.

Rate of participation
  • Exceeds Expectations: Student participated on five or more days, making substantive contributions on most of those days.
  • Meets Expectations: Student participated on at least three different days spread throughout the discussion.
  • Falls Short of Expectations: Student participated in the discussion, but postings were either not on three different days or were bunched at the beginning or end of the unit.
  • Does Not Meet Expectations: Student participated on only one day or not at all.

Integration of the readings
  • Exceeds Expectations: Student demonstrated a profound understanding of the readings by integrating numerous concepts from them across most of the messages he or she posted.
  • Meets Expectations: Student integrated concepts from the readings into the discussion in a meaningful way in at least 4 different messages.
  • Falls Short of Expectations: Student made fewer than 4 meaningful references to the readings, or the references made were superficial.
  • Does Not Meet Expectations: Student referred to the readings fewer than two times and only in a superficial manner.

Interaction with groupmates
  • Exceeds Expectations: After the initial postings, student posted 4 or more substantive messages that elaborated on or provided a contrasting view to ideas contributed by groupmates.
  • Meets Expectations: Student responded to others’ contributions in at least 2 substantive messages and 2 shorter messages.
  • Falls Short of Expectations: Student responded to others’ contributions by asking questions or agreeing with points made, but fewer than two of these responses were substantive.
  • Does Not Meet Expectations: Student did not respond to others’ contributions.
