Lebec, M., & Luft, J. (2007). A mixed methods analysis of learning in online teacher professional development: A case report. Contemporary Issues in Technology and Teacher Education [Online serial], 7(1). Retrieved from http://www.citejournal.org/volume-7/issue-1-07/general/a-mixed-methods-analysis-of-learning-in-online-teacher-professional-development-a-case-report

A Mixed Methods Analysis of Learning in Online Teacher Professional Development: A Case Report

by Michael Lebec, Northern Arizona University; & Julie Luft, Arizona State University

Abstract

Web-based learning has been proposed as a convenient way to provide professional development experiences. Despite quantitative evidence that online instruction is equivalent to traditional methods (Russell, 2001), the efficacy of this approach has not been extensively studied among teachers. This case report describes learning in an online biology course designed to help teachers prepare for science certification exams. A mixed methodology approach was utilized to analyze the manner in which course participants learned and how the online environment influenced this process. Concept maps scored by two different methods and objective pre- and postcourse examinations were contrasted as representations of assimilated knowledge, while semistructured interviews reflected participants’ perceptions of their experiences. Findings indicate that participants experienced gains in declarative knowledge but little improvement with respect to more complex levels of understanding. Qualitative examination of concept maps demonstrated gaps in participants’ understandings of key course ideas. Engagement with online resources varied according to participants’ attitudes toward online learning. Participants also reported a lack of motivation to engage fully in the course due to busy schedules, lack of extrinsic rewards, and the absence of personal accountability.

The use of the Internet as a medium for providing educational experiences is now a widespread phenomenon with a number of forces driving its proliferation. Distance educators hail Web-based instruction as a way to reach underserved populations (Baer, 1998). Administrators, on the other hand, often favor the use of Web-based learning as a means of conserving resources (Eamon, 1999). For students, the primary motivation for  choosing online courses seems to be compatibility with a busy lifestyle (Rose, Frisby, Hamlin, & Jones, 2000), while others praise the pedagogical potential associated with this learning environment (Jonassen, 1993).

One use for which Web-based instruction has become popular is in providing continuing education to working professionals (Baer, 1998). Online learning opportunities are seen as a feasible and convenient alternative for individuals who are forced to bypass traditional opportunities for self-enrichment due to time constraints (Barkley & Bianco, 2001). This trend has been explored considerably in a variety of fields, including medicine and industry (Sargeant et al., 2000).

Motivations for teachers to seek such opportunities are numerous. Dilemmas such as heavy instructional demands with minimal preparation time (Darling-Hammond & Cobb, 1996), accessibility to professional development in rural settings, and lack of institutional funds to send instructors to high quality courses or to cover their time away (Barkley & Bianco, 2001) often limit opportunities for teachers seeking additional training. Further complicating the matter are recent changes in educational policy, such as the No Child Left Behind Act. This plan demands nationwide increases in student achievement and accountability from presently deficient institutions, creating a greater need for high quality instructors in content areas (United States Department of Education, 2002). This impetus, coupled with existing regional shortages of certified instructors in domains such as the physical sciences (Choy, 1993), makes the easily accessible online environment attractive as an expedient means of gaining discipline-specific training (Bowman, Boyle, Greenstone, Herndon, & Valente, 2000; Herbert, 1999).

Despite the popularity of Web-based learning, debate exists concerning its appropriate use. Although quantitative data suggesting no significant differences between learning in traditional and online settings are plentiful, the bulk of the conclusions from such studies are based on statistical comparisons of objective examinations (Russell, 2001). Fewer studies attempt to address meaningful learning, examine outcomes associated with deeper levels of understanding, or triangulate quantitative findings with qualitative sources of data (Windschitl, 1998).

This case report describes learning that occurred in an online course designed to enhance teachers’ content knowledge of biology and utilizes mixed methods to answer the following research questions:

  1. What is the nature of the knowledge learned by participants enrolled in this online biology course?
  2. How did the Web-based environment influence learning by participants?

Related Literature

Learning in Web-Based or Online Environments

The literature contains multiple comparison studies pitting student outcomes in Web-based courses against similar measures in a traditional setting. Such investigations typically indicate that empirically based student outcomes derived from course exams or final averages are not significantly different when comparing traditional and Web-based courses (Grundman, Wigton, & Nickol, 2000; Hoey, Pettitt, & Brawner, 1998; Leasure & Thievon, 2000; Ostiguy & Haffer, 2001; Rose et al., 2000; Russell, 2001; Urven, Yin, Eshelman, & Bak, 2001). The degree to which such measures of classroom achievement represent the construct of meaningful learning is often debated (Duke, 1999; Kennedy, 1996). Shepard (2000) argued that because most exams involve preparation by rote memorization, learning for students is focused on facts and not conceptual understanding. Madaus (1988) proposed that conclusions about learning garnered from traditional test scores are limited due to the potential for a “testing effect” (Cook & Campbell, 1979), in which students may achieve success based on repeated experiences with course exams rather than learning of concepts. Furthermore, most tests used as a basis for comparison are multiple choice exams – a mode of assessment often described as limited in its ability to assess deeper levels of understanding (Jones, 1994; Madaus, 1988; White, 1992).

Studies investigating perceived learning in the Web-based environment commonly suggest that students are satisfied with their level of learning and that the process was effective and efficient (Carter, 2001; Grundman et al., 2000; Morss, 1999; Niederhauser, Bigley, Hale, & Harper, 1999; Sargeant et al., 2000). Alternatively, there are investigations that report mixed findings (Bostock, 1998) and indicate that the students felt they would have learned more in a traditional setting (Yucha & Princen, 2000). Studies comparing traditional instruction and Web-based learning generally declare no difference in student satisfaction or perceived learning (Edwards, Hugo, Cragg, & Petersen, 1999; Leasure & Thievon, 2000; Rose et al., 2000).

One area that does appear to be impacted by the online environment pertains to learning through reflection and communication (Akanabi, 2000; Bowman et al., 2000; Leach, 1997). Mathison and Pohan (1999) reported that student teachers had positive experiences based on Web communications that provided additional opportunities for reflection and critical thinking. According to the student teachers, the ability to contemplate a lesson when they had time was a significant advantage to the Web-based program. Another study (Shotsberger, 1999) had similar conclusions with experienced teachers. It reported that the online professional development program produced consistent opportunities for reflection and sharing, which occurred outside of the formal program. Barkley and Bianco (2001) concluded that a mixture of face-to-face and online professional development was successful in programs in rural areas of Ohio. Both parts of these programs contributed to the learning of the teachers by allowing the teachers to participate in different ways at different times.

The dilemma concerning online learning for teachers is well described by Colgan, Higginson, and Sinclair (1999): “Most of the research that deals with the topic of online professional development is limited to statements of vision, opinion, curriculum integration ideas, and descriptions of putative benefits ascribed to the web and other networks” (p. 315). Studies providing evidence that teachers gain useful classroom skill or conceptual knowledge are rare and often incomplete. For example, although Herbert (1999) reported that 95% of participants in their online development program thought it helped them “bridge the gap between theory and practice” (p. 41), investigations examining the impact of the program on helping teachers solve classroom problems are cited as “in progress.” Hewson and Hughes (1999), on the other hand, concluded that university faculty receiving training in an online information technology course gained the technical skills taught in the course, as assessed by their ability to complete tasks for which the skills were necessary.

Factors Influencing Learning in the Online Setting

Although learning online is influenced by the instructional method, learning is also impacted by the learner’s characteristics and the context of the experience (Cronbach, 1975). Some authors have attempted to define this relationship by investigating the potential for success in online courses based on a learner’s personality (Dewar & Whittington, 2000; Harsham, 1994; Livengood, 1995; Palloff & Pratt, 1999). In this type of investigation, learners with orientations toward introversion tend to value online learning because it provides space and privacy. Extroverts tend to be less comfortable in such an environment but can also value learning in this setting when it allows them to connect with large numbers of other learners. Other studies describing the role of learner traits in Web-based learning indicate that previous experience with technology has a positive effect on performance in these settings (Volery, 2001) and that using a screening process to educate prospective students regarding expectations of this environment may be beneficial (Osborne, 2001; Warasila & Lomaga, 2001). Joo, Bong, and Choi (2000) examined self-efficacy and performance in the Web-based setting, measured by scores on objective postcourse tests and by search tests examining students’ ability to utilize the Internet to find information. They found general academic self-efficacy to be predictive of posttest scores, while Internet self-efficacy was related to search test performance.

Learning in a Web-based setting is often considered an isolating experience for the student (Nasseh, 1998), and as a result some argue that motivation to put effort into online courses is often of greater importance than in the traditional setting (Noah, 2001). For this reason, various theoretical models have been proposed that attempt to explain how motivation might be affected in Web-based instruction and are worthy of consideration. The Technology Acceptance Model (Davis, Bagozzi, & Warshaw, 1989) suggested that the perceived ease of use and perceived usefulness of a technology will influence one’s motivation to employ it. Bandura’s (1997) theory of self-efficacy has also been discussed with regard to online courses. In this environment, the theory relates to one’s intention to engage in a task based on confidence in one’s associated abilities (Kinzie, Delcourt, & Powers, 1994). Motivational theory proposes that both intrinsic motivation – inherent satisfaction – and extrinsic motivation – impetus to perform a task to reach a goal – have been found to influence computer use for various purposes (Igbaria, 1993). One author combines these ideas into a model that has implications for Web-based learning and motivation (Liaw, 2001). According to the model, computer and Web experience lead to an increase in Web-based confidence, perceived usefulness, and enjoyment. These, in turn, all increase a user’s intention to be active in the Web-based learning environment.

Theories of Knowledge and the Nature of Learning

When investigating learning as a result of online education, it is important to acknowledge the various types and degrees of learning possible. Smith and Ragan (1993) outlined three such categories of knowledge – declarative, conditional, and procedural. They described declarative knowledge as knowing something to be true and useful in the recognition of facts, names, and lists. This type of knowledge is often compared to the recall and understanding levels of Bloom’s Taxonomy (Yildirim, Ozden, & Asku, 2001). Conditional knowledge involves understanding information in context (Bransford, Brown, & Cocking, 2000), the relationship between concepts (Yildirim et al., 2001), and predicting what may happen if the variables associated with the relationship are changed in some way (Smith & Ragan, 1993). Procedural knowledge involves “knowing” on yet another cognitive level in that it involves the use of both declarative and conditional knowledge and may be used to solve problems (Yildirim et al., 2001). Smith and Ragan (1993) stated that while declarative knowledge involves “knowing that” something is the case, procedural knowledge concerns “knowing how.” These ideas are relevant in this study, as they provide a frame of reference for describing the type of knowledge the participants were able to construct as a result of their course experiences.

Methods

Research Frame of the Present Study

Based on the nature of the research questions, the investigators found it necessary to assess learning using a mixed-methodology approach – a research paradigm that utilizes and assigns an equivalent status to both qualitative and quantitative methods (Tashakkori & Teddlie, 1998). The quantitative component of this study revealed trends concerning learning in the online course based on examination and concept map scores. The qualitative approaches were situated within the paradigm of constructivist inquiry (Guba & Lincoln, 1994). This research orientation aligns with an ontological position that adopts a relativist stance toward the situation to be understood and an epistemological perspective that acknowledges subjectivity and an interaction between the researcher and the environment (Guba & Lincoln, 1994). Specifically, this approach was applied to the analysis of semistructured interviews and concept maps. By combining both forms of research, the investigators allowed a theory to emerge from the objective data and then expanded and fortified it with the salient findings of the participants’ course experiences. Ultimately, the findings from this study have breadth and scope as a result of the design (Greene, Caracelli, & Graham, 1989).

Description of the Course and the Enrolled Participants

The 3-week course described in this investigation was part of a grant-funded project implemented at a midsized university in the Southwest, designed in response to the ongoing need for qualified science instructors throughout the state. Its focus consisted of introductory biology concepts, including the evolution of living organisms, the organization and hierarchy of life, and a summary of the historical and contemporary contexts of biology. The WebCT course management tool was utilized by instructors at the sponsoring institution as the mode of delivery. Course activities included online quizzes, flashcards, animated sequences, self-directed exercises, and text-based readings, along with the posting of asynchronous discussion comments reflecting on these assignments.

Participants were recruited via a listserv specific to teachers in the state. The experience was advertised as a means for prospective applicants to increase content knowledge in biology and to prepare for the teaching certification exam. Five experienced teachers and two preservice teachers ranging in age from 24 to 46 formally enrolled in the course. The two preservice teachers had previously received bachelor’s degrees in biology disciplines and were enrolled in science teaching programs. At the time of the course, the certified teachers were all involved in teaching secondary biology or other science courses and had been doing so for anywhere from 3 to 11 years. None of these individuals had attained a certification specific to biology teaching, nor had they taken the state certification exam.

Data Collection

Upon formal enrollment, students were mailed handouts detailing the format of the course, instructions for Web site navigation, an overview of concept maps, and concept mapping software and instructions outlining its use. At the initial login, subjects were required to complete a 31-item multiple-choice exam based on course concepts for the purpose of comparing results with a similar postcourse exam and fulfilling grant requirements of demonstrated learning outcomes. The items for this multiple-choice test were generated by the instructor from resources accompanying the text. The exam was further reviewed by a content expert to ensure its accuracy and validity.

Precourse knowledge was also assessed through the creation of concept maps. Subjects were trained in this method 3 weeks before the start of the course using a process refined during a pilot study, in which students were given written instructions, directed to online tutorials, and provided with multiple examples. The concept mapping software tool Inspiration, which had been mailed to the participants at an earlier date, was used to maximize the efficiency of this process. Map content was constructed based on principal themes associated with the course. This method of assessment was selected for its potential to represent existing knowledge and meaningful learning (Canas et al., 2001; Dorough & Rye, 1997; Novak, 1981, 1988; Novak & Gowin, 1985), as well as for its potential to be analyzed through mixed-methods approaches (Dorough & Rye, 1997; Stoddart, Abrams, Gasper, & Canaday, 2000; Trochim, 1989; Truscott, Paulson, & Everall, 1999). See Figures 1 and 2 for sample pre- and postcourse concept maps.

Over the following 3 weeks, participants accessed the course module and the Web-based content contained within. They were given online instructions for navigating the previously described online activities and were assigned associated readings in the text. Expectations for completion were also provided on the course homepage. At the end of the 3-week period, students were expected to have finalized all course requirements.

Data collection continued after the students completed the module. Measures of learning gathered included construction of postcourse concept maps, as well as completion of a second multiple-choice test presented in a different sequence. Participants were interviewed regarding the nature of their experience in the online course. This interview was semistructured, followed the guidelines by Berg (1998), and contained a variety of questions, including participants’ reactions to the experience, the way they went about learning, and their motivation level to engage in the course (see Figure 3 for the interview protocol template). Lastly, documents were collected at this time that captured the organization and enactment of the online program. These documents included, but were not limited to, formally written course objectives, reading assignments, content from online activities, and the course designer’s documents pertaining to the program.

Data Analysis

Data were analyzed using methods thought to best answer the primary research questions: (a) What is the nature of the knowledge learned by participants enrolled in this online biology course? (b) How did the Web-based environment influence learning by participants?

Quantitative Data Analysis. Concept map content was represented in a quantitative fashion by using established scoring methods. In the first system, referred to in this study as the Stoddart Scoring Method, scores were calculated by assessing the validity of the connections made by students between concepts (Stoddart et al., 2000). These relationships were labeled as scientifically correct (and therefore consistent with course information) or scientifically inaccurate. To enhance reliability, the maps were evaluated separately by two researchers. Discrepancies in determining the validity of relationships formed between concepts were recorded and checked against written sources. In instances where the validity of the relationship was still in question, a content expert in the field was consulted to make a final determination. Final scores, represented as percentages, were calculated as the number of scientifically accurate relationships divided by the total number of connections formed by the student.

The second scoring scheme, referred to herein as the Alternate Scoring Method, considered the quantity of relevant information contained in maps (Dorough & Rye, 1997; Rafferty & Fleschner, 1993). In this approach, occurrences of concepts, relationships, examples, and branching pathways are recorded, assigned point values, and then totaled to represent the final map score. Two scoring methods were included to capture the differing perspectives each may provide regarding learning.
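The arithmetic behind the two scoring schemes can be sketched in code. This is a minimal illustration only, not the study’s instrument: the sample propositions, the validity judgments, and the point weights in the Alternate method are hypothetical placeholders (the actual weights used by Dorough and Rye are not reproduced here).

```python
# Illustrative sketch of the two concept map scoring schemes.
# Sample data and point weights are assumptions for demonstration.

def stoddart_score(propositions):
    """Stoddart Scoring Method: percentage of scientifically accurate
    relationships out of all connections formed by the student."""
    if not propositions:
        return 0.0
    accurate = sum(1 for p in propositions if p["valid"])
    return 100.0 * accurate / len(propositions)

# Hypothetical point values for the quantity-based Alternate method;
# the study's actual weighting scheme may differ.
WEIGHTS = {"concept": 1, "relationship": 2, "example": 1, "branch": 3}

def alternate_score(counts):
    """Alternate Scoring Method: total points for concepts, relationships,
    examples, and branching pathways recorded in a map."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Example map: 3 propositions, 2 judged scientifically accurate
props = [{"link": "mutation -> variation", "valid": True},
         {"link": "variation -> adaptation", "valid": True},
         {"link": "membranes -> ancient earth", "valid": False}]
print(round(stoddart_score(props), 1))   # percentage of valid links
print(alternate_score({"concept": 4, "relationship": 3,
                       "example": 1, "branch": 2}))
```

Note how the two schemes can move in opposite directions: a student who adds many concepts but links them inaccurately raises the Alternate score while lowering the Stoddart percentage, which is exactly the pattern reported in the findings below.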

Multiple-choice exam scores were also represented quantitatively. The pre- and postcourse versions of this assessment were expressed as percentages of correct answers and provided another outcome measure to consider during triangulation of data. These data were reported and compared descriptively by considering individual and group trends before and after instruction.

Qualitative Data Analysis. An important aspect of this study entailed the identification of themes related to the content addressed in the program and to the experiences of the teachers in the online program. To identify these themes, researchers used methods found in qualitative research. For the first area, the content themes, pilot data, formally written course objectives, reading assignments, content from online activities, and discussions with the course designer were all collected and reviewed repeatedly for categories (as recommended by Bogdan & Biklen, 1992). The emergent categories were divided into three major areas and, when appropriate, subdivided into more specific sections representing the key concepts contained within. For example, the broad content domain entitled “Origins” concerned concepts relevant to early life formation and contained the following subsections: Combination of Atomic Particles, Membranes, Cells, Prokaryote to Eukaryote Transition, and Uni-Cellular to Multi-Cellular Life Transition. Table 1 provides a comprehensive listing of these domains and subcategories of concepts for potential learning.

Table 1
Major Domains and Corresponding Subcategories of Knowledge Associated With Course Content.

Origins of Life: Combination of Atomic Particles, Membranes, Cells as Unit of Life, Prokaryote to Eukaryote Transition, Uni-Cellular to Multi-Cellular Life
Macro-Evolutionary Change: Ancient Earth, Environment, Metabolic Synthesis, Sexual Reproduction
Natural Selection: Mutation, Variation, Adaptation, Competition, Survival, Reproduction

With these general themes identified, concept maps and interview data from students were repeatedly examined by one researcher for descriptive patterns and themes (as recommended in Bogdan & Biklen, 1992). Because the concept map and interview responses were more open ended and difficult to anticipate, the data had to be examined inductively so that general themes could be formulated. The themes generated from these two groups of data were compared to one another through checklist matrices (Miles & Huberman, 1994) in order to understand the conceptual knowledge of the teachers as compared to the intentions of the course (see Figures 4 and 5). Ultimately, it was important to understand the inclusion or omission of key ideas by the teachers in the pre- and postassessments.

Collecting the different documents added a richness to the findings that was not always evident through the interviews. In addition, the documents served as a “validity check” (Kirk & Miller, 1986) of the assumptions emerging from constructed meanings. One example of how this process was useful in providing such confirmation concerned the learning of concepts associated with natural selection. Although these ideas were deemed to be central components of the course, interview transcripts indicated that most participants did not view the concepts associated with this content as important points learned during course experiences. Concept map analysis confirmed this finding, as the researchers concluded that these ideas were also largely absent from those documents.

Limitations

Various limitations are associated with the present study. The small number of participants limited the degree to which conclusions could be made from a case report such as this. Certain aspects of the course design exist as limitations including the short, 3-week time period allotted for the experience, as well as the fact that course content was not assessed for quality by an outside source. A standardized tool designed for this purpose may have provided greater confidence in determining that the course was adequately designed to accomplish its goals and objectives. Finally, some limitations exist concerning data analysis. Although three investigators collaborated in analyzing the content of concept maps, their individual beliefs, philosophies, and perspectives were a source of potential bias.

Findings

Quantitative Data

As shown in Table 2, most students (5 out of 7) demonstrated a pre to post increase in their multiple-choice exam score, with a pretest mean of 64.9% and a posttest mean of 74.9%. A comparison of means from Stoddart et al. (2000) concept map scores indicated no gains in this measure, with a precourse mean of 59.0% and a postcourse mean of 56.4%. Alternate map scores, on the other hand, showed a general trend of pre to post improvement, with a precourse mean of 41.9 points and a postcourse mean of 65.8 points.

A review of how these scoring methods assess learning helps to put these findings in perspective.  Stoddart scores reflect the validity of scientific relationships between map concepts, and Alternate scores largely indicate gains in the quantity of map content.  These results suggest that, although participants increased the number of concepts and connections included in their maps, they did not experience an increase in the depth of understanding of the relationships between those concepts.

Table 2
Pre to Post Changes in Exam and Concept Map Scores

Measure            Precourse Group Mean   Postcourse Group Mean
Exam Score         64.9%                  74.9%
Stoddart Score     59.0%                  56.4%
Alternate Score    41.9 points            65.8 points

Qualitative Data: Inclusion of Central Course Ideas and Associated Subcategories

Qualitative analysis of concept maps was performed to determine whether the participants included the key course ideas, as well as the subsets of these domains. When the larger domains of knowledge contained within the course were viewed as a whole, no clear patterns emerged for the areas of Origins of Life and Macro-Evolutionary Change beyond inconsistent inclusion of their respective subcategories. Analysis of concept maps examining learning of Natural Selection topics, however, presented a different picture. In all of these subcategories, the majority of students showed minimal postcourse evidence of assimilating these ideas.

When all subcategories were considered independent of their broader groupings, participants showed notable gaps in expected course knowledge. The subcategories comprised a possible total of 15 concepts that were well represented across course materials. Comparing pre- and postcourse concept maps allowed researchers to determine whether students who had not included a concept in their precourse map gained knowledge of the idea during course experiences, as measured by inclusion of the concept in their postcourse map. For 11 of the 15 subcategories, fewer than half of the students without prior knowledge of a concept included it in their final map. Table 3 lists a complete breakdown of concepts added as a result of course experiences.

Table 3
Summary of Learning in All Subcategories

Concept                                  % of Students Without Prior Knowledge of Concept Making Pre to Post Change

Origins of Life
  Combination of Atomic Particles        50%
  Membranes                              17%
  Cells as Unit of Life                  0%
  Prokaryote to Eukaryote Transition     57%
  Uni-Cellular to Multi-Cellular Life    25%

Macro-Evolutionary Change
  Ancient Earth                          0%
  Environment                            67%
  Metabolic Synthesis                    50%
  Sexual Reproduction                    33%

Natural Selection
  Mutation                               0%
  Variation                              0%
  Adaptation                             17%
  Competition                            17%
  Survival                               0%
  Reproduction                           0%

Qualitative Data: Interviews

One of the questions posed by this study concerns the manner in which the online environment influenced learning of the content. Interview data provide some insight into how these participants were consciously or subconsciously affected by the learning environment. Analysis of interview transcripts revealed two major themes describing this phenomenon – student attitudes toward online learning and the influence of the online environment on motivation.

Theme 1: Students indicating a strong inclination for either online or traditional learning reported utilizing resources that reflected this preference. Data from the interview transcripts concerning students’ use of course resources revealed an important pattern pertaining to participation: students gravitated toward the study aids that matched their stated preference, finding them not only more engaging but also more valuable in making sense of course concepts.

A strong example of this is Holly’s case. Holly indicated that she “likes taking online courses” and had previously done so. She reported that she “really enjoyed the online activities” and “liked the way the course was set up” when referring to the format of the Web-based quizzes, flashcards, and other interactive tools. She indicated that she regularly navigated the interactive learning tools provided by WebCT and was one of only three students to make multiple postings to the online bulletin board.  However, she spent minimal time on the readings.

Bonnie also voiced a positive opinion about the course and its format. Her preference for online courses was associated with being able to access the course at her convenience and having a proclivity toward independent learning.  These points are illustrated in the following quote:

I have a great deal of internal motivation for this particular topic and that’s internal, so I don’t need a person standing in front of me trying to motivate me with that or anything because I’ve been teaching for so long I think I’m able to take maybe what might be new information or even old information and be creative with it and do something new. . . .It (face to face instruction) would have been worse . . .because it means I would have had to be somewhere and I couldn’t have done it at 2 o’clock in the morning.

Though Bonnie’s tendencies toward computer-oriented resources were not quite as extreme as Holly’s, her use still favored them over others. Bonnie was the most active of all students in the online discussion forum, and she reported navigating many of the online activities, valuing their “interactive” and “participatory” components. For her, readings from the text were less utilized since she viewed them as review.

Another group of students seemed to prefer traditional settings and resources. Based on his course experiences, Alvin expressed a strong preference for face-to-face learning, as evidenced by his skepticism that “the online portion of the course does science” and his disappointment that he could not get the immediate feedback via the Internet that he generally needs. Correspondingly, he made greater use of the traditional resources, such as completing all text assignments.

Although he reluctantly completed the online quizzes (he felt he could have done so more efficiently in writing), his use of other online resources was minimal. After an unsuccessful effort to access the interactive activities, he did not attempt to do so for the remainder of the course. Alvin was also less involved in the online discussion, making minimal postings to the bulletin board. The following are some reasons he outlined for his lack of involvement with this aspect of the course:

I like to express an idea and hear what people think about it.  That was difficult online because you would type something in and there would be no immediate response.  When I type or write email it takes more energy than speaking.

Suzy was another such example. Representative themes from her interview included preferences for face-to-face experience and a dislike for the additional mental processing associated with the online learning environment. Specifically she stated, “The online thing doesn’t work for me. I need more face to face interaction.” When further probed as to why she felt this way, she indicated that asking questions online was time consuming as opposed to in a traditional classroom where “If I have a question, I just ask it.” Her use of course resources also matched her attitudes. She did all of the readings and completed most of the online quizzes, but admitted spending minimal time doing so. With regard to other online aspects of the class, she reported that she “skimmed the flashcards but nothing else” and also made minimal use of the online bulletin board.

Rob’s attitudes and patterns of use were similar to Suzy’s. He, too, was most active with regard to the readings and online quizzes but reported putting minimal effort into bulletin board postings and other online activities. His reasoning for not participating in online discussion, however, was that he had difficulty operating the online tools and was reluctant to contact the instructors for help.

Theme 2: The learning environment was not motivating to participants for program and personal reasons. All participants expressed difficulties with motivation to complete course activities. Interview responses coded to this category provide some perspective on this matter. The most frequently cited reason for lack of dedication was limited time. All of the in-service teachers were in the middle of a semester of instruction, which they indicated prevented them from becoming more active in the course. The two preservice teachers made similar claims regarding their education programs. In addition to work and school, subjects consistently expressed that they did not have time for the course due to other academic obligations or personal family commitments, which often resulted in postponing course assignments or putting minimal effort into their completion.

Another prominent rationalization for decreased involvement cited by five of the respondents concerned the absence of tangible extrinsic motivation in the form of grades, credits, or progress toward a degree. These individuals indicated that they would have been more active in the course had such benefits been attached to their performance. Holly provided a salient comment when she said that she often deprioritized the course since “it was not part of a program” in which she was enrolled and because she was “not earning a solid grade” for her efforts. This finding was unexpected considering that subjects were offered continuing education units for their participation – an aspect that was apparently not a sufficient motivator. Furthermore, when asked about typical motivators for learning, most individuals voiced ideas more consistent with intrinsic motivation, such as topics of interest, the usefulness of an experience, or the potential for learning something new.

Participants also made frequent references throughout the interviews to the lack of personal accountability associated with the course format. Many felt they would have been more thorough in completing course expectations if there had been “someone to answer to” or “consequences” for inadequate performance. One student stated that she did not do as much for the course because she “didn’t have to,” and another indicated that if course deadlines had been “less relaxed,” she might have been more diligent. Alvin in particular felt less compelled to fully engage because he did not feel accountable to a person he had never met: “It was difficult to motivate without someone to answer to. The stress of face to face learning causes students to perform, but I did not feel that this was present in the online situation.”

Other general reasons for lack of motivation were also offered.  Included in these was the fact that students were not required to pay for taking the course.  It was generally agreed that a personal monetary investment would have inspired individuals to try to extract more from the course. Some students indicated that they were less motivated because they did not plan to take the biology certification exam in the near future. The student most active in online discussion said she might have been more involved in other aspects of the course if the other students had made more bulletin board postings.

The sum total of these responses, in addition to the low use of the online discussion tools, supports the notion that these participants were not inspired to invest themselves fully in the process of learning in this environment. Despite the fact that some participants expressed discomfort with Web-based education, course evaluations were positive. All participants also expressed repeatedly during interviews that they felt the course was valuable and that they wished they had put more effort into it. For this reason, the researchers concluded that the course content was less of a factor in decreased motivation than the above-mentioned considerations.

Discussion

Considerations of Learning:  Quantity vs. Quality

In answering the first research question, which considers how and what participants learned as a result of course experiences, it is important to return to a mixed methods perspective and consider the conclusions possible from the triangulation of all forms of data. As is commonly reported in the literature (Hargis, 2001; Ostiguy & Haffer, 2001; Russell, 2001; Yildirim et al., 2001), most of these students improved their performance on a multiple-choice examination based on course content. Because this method is often criticized for its limitations in evaluating complex levels of understanding (Duke, 1999; Madaus, 1988; Shepard, 2000), the investigators provided additional perspectives by including concept mapping as a mode of assessment. The Alternate Scores, which in most cases increased after course experiences, were quantified based on the total number of relevant concepts, examples, and diverging map branches. These scores indicated that students increased their knowledge of course concepts and made additional connections between them. The Stoddart Scores, on the other hand, showed virtually no change from pre to post means. Because these are scored as a proportion of valid relationships relative to the total number of attempts, they are considered a representation of the quality of learning and new knowledge (Stoddart et al., 2000).

Examining the findings associated with the multiple-choice exams and concept map scores as a whole reveals a unique finding regarding the students’ knowledge. Specifically, although these students typically demonstrated gains in the terms, concepts, and connections between ideas associated with the course, the overall proportion of scientifically accurate relationships demonstrated in their maps did not improve. In short, they seemed to gain knowledge of concepts and terms but did not use them any more effectively after their online experience. Such gains are akin to a declarative or recall understanding (Smith & Ragan, 1993; Yildirim et al., 2001), which represents the more elementary stages of meaningful learning.

Considering the collective body of data associated with student learning, it may be concluded that, although many participants gained additional knowledge of concepts, ideas, and terms associated with the course, this information was largely declarative in nature. There was considerably less evidence of meaningful assimilation of concepts of greater complexity. The interpretation of these findings for this particular study through the lens of a mixed-methods approach contrasts with the more common conclusion that students in Web-based courses achieve significant learning outcomes (Russell, 2001).

The Influence of the Web-based Environment on Learning

Student Attitudes: “If I don’t like it, I might not do it.”

Two emergent themes from interview data provide perspective on why meaningful construction of knowledge among participants was somewhat limited. The first of these concerns the influence of student attitudes toward online learning. It has been argued that individuals’ behavior in a certain environment is influenced by their perception of how effective that setting is in helping them reach their goals (Bandura, 1997) and that the perceived usefulness of a technology will influence users’ behaviors with regard to the medium (Ajzen & Fishbein, 1980). These perspectives seemed to hold true for the participants of this study, as those individuals with less optimistic attitudes toward online learning were less likely to engage in the process. Interview transcripts commonly contained comments expressing that online learners missed face-to-face interaction or felt that they needed face-to-face interactions to maximize their learning. Noah (2001) agreed that “students who need the social context of face to face class meetings may not fare well entirely online.”

Multiple course participants indicated that they were inhibited by their limited ability to manipulate course tools and were, thus, frustrated by the inability to access various online resources. In accordance with self-efficacy theory, the aggravation felt by these individuals translated into a lack of engagement. They opted to forgo these and certain other segments of the course rather than seeking help, which was readily available through instructor contact and technical support. Interviews also revealed that students preferring traditional learning gravitated toward nontechnical or less technical resources, while individuals with positive views of Web-based learning were more likely to utilize these media. These findings are also consistent with the literature.

Student Motivation and the Web-Based Environment: “If I don’t have to do it, I might not.”

The findings interpreted in this section perhaps provide the greatest insight as to why participant learning was inconsistent. Based on the lack of reported engagement in online discussion and other activities, it can be concluded that as a group the participants did not put forth a maximum effort. Although in part, this lack of effort may be explained by considerations of attitudes, self-efficacy, and learning preferences, data also suggest that other factors contributed.

The most commonly reported reason that participants were less motivated to invest themselves in the course concerned priorities associated with work, school, and personal commitments. Online learners are often classified as overextended with regard to their life commitments (Johnson, 2002). The convenience of being able to access a course from their own homes (Leasure & Thievon, 2000) and, thus, fit more into their busy schedules is often a primary motivation for enrolling in Web-based courses. The paradox is that these individuals are already overscheduled but choose an online class for the flexible format that allows them to add yet more to their lives – often resulting in frustration or the need to withdraw (Johnson, 2002; Jung, Choi, Lim, & Leem, 2002).

Teachers, in particular, are considered to be overextended during the school year (Darling-Hammond & Cobb, 1996). Like some of the participants in this study, many are inspired by career ladder programs to pursue advanced degrees and take night classes (Arends & Winitzky, 1996), leaving little time to dedicate to other forms of professional development. The participants in this study seemed to follow a similar pattern.  They opted to take the class because it was free and convenient but admitted that, in the end, they were too busy to become as involved as they would have liked.

Another commonly stated reason for lack of involvement concerned the absence of external motivation. Most respondents indicated that the course would have been a greater priority had they been receiving a grade or university credit or were working toward a degree. Hathorn and Ingram (2002) highlighted the need for such external motivators in Web-based learning, as they encourage the consistent use of online media associated with Internet courses. Alternatively, the external rewards associated with online professional development may not be enough to engage teachers sufficiently in the experience. Neither the extrinsic reward of increased performance on a certification exam nor the intrinsic satisfaction of becoming better versed in biology content knowledge seemed to be sufficient motivation for the participants in this study.

The final reason given for scarce participation involved the absence of personal accountability. Interviewees repeatedly declared that they would have been more engaged had they had a face-to-face meeting with a person who would hold them responsible for completing their work. The literature concurs that self-motivation is a quality of utmost importance for distance learners (Hathorn & Ingram, 2002; Jung et al., 2002; Noah, 2001; Osborne, 2001; Watson & Rossett, 1999). These findings reinforce the importance of this source of inspiration from the perspective of both online learners and course designers. Students expecting success in this setting must be able to sustain their own motivation, and instructors should arrange circumstances so that their presence feels less removed.

Implications

The design of this study limits generalization of these findings, but the outcomes provide direction that may inform future efforts toward online teacher professional development. Although the small sample of subjects from which data were collected certainly may have influenced the findings, this situation still has the potential to reflect what may occur in a real-world online environment. Online learning modules for training and development are often designed so that learners may conveniently access them at the point of need. The quantity of individuals having simultaneous “point of need” for such experiences may indeed be limited and, therefore, may unfold much like this scenario. For this reason, the following suggestions may be relevant to course designers.

The fact that participants in this course were able to satisfy the instructor’s criteria for passing the experience but in most cases did not show evidence of meaningful learning suggests that designers of Web-based professional development courses need to provide experiences that equate to more than simply “online seat time.” Furthermore, both instructors and researchers attempting to define the nature of such approaches should consider multiple forms of assessment when drawing conclusions about online learning outcomes. The mixed methods approach utilized in this study provided a more complete picture of learning than might have been achieved using purely quantitative methods. As has been previously argued, future research efforts analyzing this environment may benefit from a greater emphasis on qualitative approaches (Windschitl, 1998).

Designers of online professional development experiences also need to consider factors maximizing engagement, personal accountability, and appropriate extrinsic motivation. These aspects are more easily attained in Web-based courses for university credit, but arranging such circumstances in independently pursued online development experiences is often more difficult. Instructors may, therefore, benefit from finding creative ways of inspiring participants to fully invest themselves in the process.

Finally, this study provides evidence that online learning is not appropriate for everyone. This perspective contrasts with the majority of the literature, which reports that online learners have chances for success equal to those of learners in face-to-face environments (Russell, 2001). The theory that learners thrive in environments most compatible with their learning styles and preferences (Cronbach, 1975) applies to Web-based settings. Because lack of self-motivation was shown in this and other studies to inhibit performance in Web-based settings (Jung et al., 2002; Noah, 2001; Osborne, 2001), individuals needing external prodding to fully engage seem less suited for this environment. Therefore, those involved with conducting online development programs may benefit from identifying participants who are most appropriate for these experiences.

Author Note

This study was funded in part through the Arizona Board of Regents: Dwight D. Eisenhower Science and Mathematics Program. The results herein represent the findings of the authors and do not necessarily represent the view of personnel affiliated with the Dwight D. Eisenhower Science and Mathematics Program.

References

Akanabi, L. (2000). Enhancing professional practice through WebCT: A model for preparing reading professionals. Kennesaw State College, Georgia: University System of Georgia. (ERIC Document Reproduction Service No. ED455242)

Arends, R., & Winitzky, N. (1996). Program structures and learning to teach. In F. Murray (Ed.), The teacher educator’s handbook (pp. 14-62). San Francisco: Jossey-Bass Publishers.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.

Baer, W. (1998). Will the Internet transform higher education? Washington, DC: The Aspen Institute. (ERIC Document Reproduction Service No. ED434551)

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.

Barkley, S., & Bianco, T. (2001). Online and onsite training: When to mix and match. Educational Technology, 41(4), 60-62.

Berg, B. L. (1998). Qualitative research methods for the social sciences. Needham Heights, MA: Allyn and Bacon.

Bogdan R. C., & Biklen, S. K. (1992). Qualitative research for education: An introduction to theory and methods. Boston: Allyn and Bacon.

Bostock, S. (1998). Constructivism in mass higher education: A case study. British Journal of Educational Technology, 29(3), 225-240.

Bowman, I., Boyle, B.A., Greenstone, K.A., Herndon, L.D., & Valente, A. (2000). Connecting teachers across continents through on-line reflection and sharing. TESOL Journal, 9(3), 15-18.

Bransford, J.D., Brown, A.L., & Cocking, R.R. (2000). How people learn:  Brain, mind, experience, and school. Washington DC: National Academy Press.

Canas, A.J., Ford, K. M., Novak, J.D., Hayes, P. Reichherzer, T.R., & Suri, N. (2001). Online concept maps: Enhancing collaborative learning by using technology with concept maps. The Science Teacher, 68(4), 49-51.

Carter, A. (2001). Interactive distance education: Implications for the adult learner. International Journal of Instructional Media, 28(3), 249-260.

Choy, S.P. (1993). America’s teachers: Profile of a profession. Washington, DC: National Center for Education Statistics, U.S. Department of Education.

Colgan, L., Higginson, W., & Sinclair, N. (1999). Transforming professional development:  An empirical study to determine the key aspects of electronic collaboration and social interaction in the elementary mathematics teaching community. The Alberta Journal of Educational Research, 45(3), 315-319.

Cook, T., & Campbell, D.T. (1979). Quasi-experimentation:  Design and analysis issues for field settings. Chicago: Rand-McNally College Publishing Company.

Cronbach, L. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.

Darling-Hammond, L., & Cobb, V. (1996). The changing context of teacher education. In F. Murray (Ed.), The teacher educator’s handbook (pp. 14-62). San Francisco: Jossey-Bass Publishers.

Davis, F.D., Bagozzi, R.P., & Warshaw, P.R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.

Dewar, T., & Whittington, D. (2000). Online learners and their learning strategies. Journal of Educational Computing Research, 23(4), 385-403.

Dorough, D.K., & Rye, J.A. (1997, January). Mapping for understanding: Using concept maps as windows to students’ minds. The Science Teacher, 64, 37-41.

Duke, D. (1999). Real learning: Do tests help? Action for Better Schools, 6(2), 57-82.

Eamon, D.B. (1999). Distance education: Has technology become a threat to the academy? Behavior Research Methods, Instruments and Computers, 31(2), 197-207.

Edwards, N., Hugo, K., Cragg, B., & Petersen, J. (1999). The integration of problem-based learning strategies in distance education. Nurse Educator, 24(1), 36-40.

Greene J.C., Caracelli, V.J., & Graham, W.F. (1989). Toward a conceptual framework for mixed method designs. Educational Evaluation and Policy Analysis, 11, 255-274.

Grundman, J.A., Wigton, R.S., & Nickol, D. (2000). A controlled trial of an interactive, Web-based virtual reality program for teaching physical diagnosis skills to medical students. Academic Medicine, 75(10), October Supplement, s47-s49.

Guba, E.G., & Lincoln, Y.S. (1994). Competing paradigms in qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.

Hargis, J. (2001). Can students learn science using the Internet? Journal of Research on Computing in Education, 33(4), 475-487.

Harsham, E. (1994). Psychological type on the electronic highway. Bulletin of Psychological Type, 17(3), 20-22.

Hathorn, L.G., & Ingram, A.L. (2002). Online collaboration:  Making it work. Educational Technology, 42(1), 33-40.

Herbert, J.M. (1999). An online learning community. The American School Board Journal, 186(3), 39-41.

Hewson, L., & Hughes, C. (1999). An online postgraduate subject in information technology for university teachers. Innovations in Education and Training International, 36(2), 106-117.

Hoey, J.J., Pettitt, J.M., & Brawner, C.E. (1998). Project 25: First-semester assessment – A report on the implementation of courses offered on the Internet. Raleigh, NC: North Carolina State University.

Igbaria, M. (1993). User acceptance of microcomputer technology:  An empirical test. OMEGA:  International Journal of Management Science, 21, 73-90.

Johnson, M. (2002). Introductory biology “online”: Assessing outcomes of two student populations. Journal of College Science Teaching, 31(5), 312-317.

Jonassen, D. (1993). A manifesto for a constructivist approach to technology in higher education.  In T. Duffy, D. Jonassen, & J. Lowyck (Eds.), Designing environments for constructivist learning (pp. 231-248). Berlin, New York: Springer-Verlag.

Jones, R. W. (1994, July). Performance and alternative assessment techniques: Meeting the challenge of alternative evaluation strategies. Paper presented at the International Conference on Educational Evaluation and Assessment, Pretoria, Republic of South Africa. (ERIC Document Reproduction Service No. ED380483)

Joo, Y., Bong, M., & Choi, H. (2000). Self-efficacy for self-regulated learning:  Academic self-efficacy, and Internet self-efficacy in Web-based instruction. Educational Technology Research and Development, 48(2), 5-17.

Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in Web-based instruction. Innovations in Education and Teaching International, 39(2), 153-162.

Kennedy, M. (1996). Research genres in teacher education. In F. Murray (Ed.), The teacher educator’s handbook (pp. 14-62). San Francisco: Jossey-Bass Publishers.

Kinzie, M.B., Delcourt, M.A.B., & Powers, S.M. (1994). Computer technologies: Attitudes and self-efficacy across undergraduate disciplines. Research in Higher Education, 35, 745-768.

Kirk, J., & Miller, M.L. (1986). Reliability and validity in qualitative research, Qualitative research methods series (Vol. 1). Newbury Park, CA: Sage.

Leach, J. (1997). English teachers on-line:  Developing a new community of discourse. English in Education, 31(2), 63-72.

Leasure, A.R., & Thievon, S.L. (2000). Comparison of student outcomes and preferences in a traditional vs. World Wide Web-based baccalaureate nursing research course. Journal of Nursing Education, 39(4), 149-154.

Liaw, S. (2001). Developing a user acceptance model for Web-based learning. Educational Technology, 41(6), 50-54.

Livengood, J. (1995). Revenge of the introverts. Computer Mediated Communication Magazine, 2(4), 8.

Madaus, G.F. (1988). The influence of testing on the curriculum. In  L. Tanner (Ed.), Critical issues in curriculum (pp. 83-121).  Chicago, Illinois: University of Chicago Press.

Mathison, C., & Pohan, C. (1999). An Internet-based exploration of democratic schooling within pluralistic learning environments. Educational Technology, 39(4), 53-58.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Morss, D.A. (1999). A study of student perspectives on web-based learning:  WebCT in the classroom. Internet Research:  Electronic Networking Applications and Policy, 9(5), 393-408.

Nasseh, B. (1998). Training and support programs, and faculty’s new roles in computer-based distance education in higher education institutions. Retrieved December 20, 2006 from http://www.bsu.edu/classes/nasseh/study/res98.html

Niederhauser, V., Bigley, M.B., Hale, J., & Harper, D. (1999). Cybercases: An innovation in Internet education.  Journal of Nursing Education, 38(9), 415-418.

Noah, C. (2001). Making the grade in distance learning. Public Libraries E-Libraries, 30-32.

Novak, J.D. (1981). Applying learning psychology as a philosophy of science to biology teaching. The American Biology Teacher, 43(1), 12-20.

Novak, J.D. (1988). Learning science and the science of learning. Studies in Science Education, 15, 77-101.

Novak, J.D., & Gowin, D.B. (1985). Learning how to learn. New York: Cambridge University Press.

Osborne, V. (2001). Identifying at-risk students in videoconferencing and Web-based distance education. The American Journal of Distance Education, 15(1), 41-54.

Ostiguy, N., & Haffer, A. (2001). Assessing differences in instructional methods: Uncovering how students learn best. Journal of College Science Teaching, 30(6), 370-374.

Palloff, R., & Pratt, K. (1999). Building learning communities in cyberspace. San Francisco: Jossey-Bass.

Rafferty, C.D., & Fleschner, L.K. (1993). Concept mapping:  A viable alternative to objective and essay exams. Reading Research and Instruction, 32(3), 25-34.

Rose, M., Frisby, A., Hamlin, M., & Jones, S. (2000). Evaluation of the effectiveness of a Web-based graduate epidemiology course. Computers in Nursing, 18(4), 162-167.

Russell, T. (2001). The no significant difference phenomenon. Montgomery, AL: International Distance Education Certification Center.

Sargeant, J.M., Purdy, R.A., Allen, M.J., Nadkarni, S., Watton, L., & O’Brien, P. (2000). Evaluation of a CME problem-based learning Internet discussion. Academic Medicine, 75(10), s50-s52.

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.

Shotsberger, P.G. (1999). The INSTRUCT Project: Web professional development for mathematics teachers. The Journal of Computers in Mathematics and Science Teaching, 18(1), 49-60.

Smith, P.L., & Ragan, T.J. (1993). Instructional design. New York: Macmillan.

Stoddart, T., Abrams, R., Gasper, E., & Canaday, D. (2000). Concept maps as assessment in science inquiry learning – A report of methodology. The International Journal of Science Education, 22(12), 1221-1246.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology. Thousand Oaks, CA: Sage Publications.

Trochim, W. (1989). An introduction to concept mapping for planning and evaluation. Evaluation and Program Planning, 12, 1-16.

Truscott, D., Paulson, B.L., & Everall, R.D. (1999). Studying participants’ experiences using concept mapping. The Alberta Journal of Educational Research, 45(3), 320-323.

United States Department of Education. (2002). No child left behind. Retrieved February 7, 2007, from http://www.ed.gov/nclb/landing.jhtml?src=pb

Urven, L.E., Yin, L.R., Eshelman, B.D., & Bak, J.D. (2001). Presenting science in a video-delivered, Web-based format. Journal of College Science Teaching, 30(3), 172-176.

Volery, T. (2001). Online education: An exploratory study into success factors. Journal of Educational Computing Research, 24(1), 77-92.

Warsila, R., & Lomaga, G. (2001). Screening prospective laboratory telecourse students. Journal of College Science Teaching, 30(3), 202-205.

Watson, J.B., & Rossett, A. (1999). Guiding the independent learner in Web-based training. Educational Technology, 39(3), 27-36.

White, E.M. (1992). Assessing higher order thinking and communication skills in college graduates through writing. Washington DC:  National Center for Educational Statistics. (ERIC Document Reproduction Service No. ED340767)

Windschitl, M. (1998). The W.W.W. and classroom research: What path should we take? Educational Researcher, 27, 28-33.

Yucha, C., & Princen, T. (2000). Insights learned from teaching pathophysiology on the World Wide Web. Journal of Nursing Education, 39(2), 68-72.

Yildirim, Z., Ozden, M.Y., & Aksu, M. (2001). Comparison of hypermedia learning and traditional instruction on knowledge acquisition and retention. The Journal of Educational Research, 94(4), 207-214.
Author Note:

Michael Lebec
Northern Arizona University
Mike.Lebec@nau.edu

Julie Luft
Arizona State University
julie.luft@asu.edu