
Developing Preservice Literacy Teachers’ Observation Skills: Two Stories, Two Technologies

by Elizabeth (Betsy) A. Baker, University of Missouri-Columbia; & Judy M. Wedman, University of Missouri

Abstract

Systematic observation is a foundational skill teachers use in order to document children’s reading development and plan developmentally appropriate instruction. However, a variety of challenges make it difficult for teacher educators to help preservice teachers develop systematic observation skills. The purpose of this study is to tell two stories of two technologies (multimedia and video) used to help preservice literacy teachers develop systematic observation skills. These stories include descriptions of each technology and the results of sequential mixed methods studies used to examine the preservice teachers’ development of systematic observation. Results indicate that the multimedia group showed similar or better performance than the video group on all measures. Discussion is offered to explore possible explanations for the findings and suggest further investigations.

Investigations into what distinguishes an expert from a novice consistently indicate that experts are able to observe and identify patterns significant to their field. For example, when expert chess players see a chess board, they can recognize salient information, identify patterns, and make sophisticated decisions about which chess piece to move next (Chase & Simon, 1973; deGroot, 1965). Similar abilities have been found, among others, in expert physicians, electronics technicians, physicists, architects, baseball players, and teachers (Berliner, 1986; Bransford, Franks, Vye, & Sherwood, 1989; Carter, Sabers, Cushing, Pinnegar, & Berliner, 1987; Chase & Chi, 1980; Egan & Schwartz, 1979; Leinhart & Greeno, 1986; Livingston & Borko, 1989; Norman, Jacoby, Feightner, & Campbell, 1979).

When expert literacy teachers observe a child reading, they are able to identify salient information about the child’s reading development and make instructional decisions that are developmentally appropriate for that child. In contrast, when preservice literacy teachers observe a child reading, they are likely to state that the child is a “good” or “bad” reader—with little evidence or ability to explain what makes this child’s ability good or bad. One of the challenges in teacher education, therefore, is to help preservice teachers develop observation skills that allow them to identify salient information about a child’s literacy development so they can make appropriate instructional decisions. This ability is referred to as “systematic observation.”

The purpose of this report is to tell two stories of two technologies used by teacher educators to address the challenge of developing systematic observation among preservice literacy teachers. These stories include descriptions of each technology and the results of sequential mixed methods studies used to examine the preservice teachers’ development of systematic observation. The importance of systematic observation to literacy education is discussed first, followed by a discussion of the challenges of helping preservice teachers develop systematic observation skills. Finally, two types of technologies and the reasons they are being used to address some of these challenges are described.

Systematic Observation

Standardized and criterion-referenced assessments are typically the means by which students’ literacy achievement is measured on national, state, and local levels. Such assessments provide political leaders, school district administrators, and parents with information that allows comparisons of students’ literacy achievement in relation to larger populations or defined levels (i.e., basic, proficient, or advanced). These assessments, however, often denote isolated skill achievement, may not reflect the curriculum taught in the classroom, and focus the teacher’s attention on teaching to the test rather than on meeting individual students’ literacy needs. As a result, standardized and criterion-referenced assessments do not reveal individual students’ growth and needs in language and literacy learning; they do not inform instructional decisions, nor do they indicate the learning occurring on a day-to-day or week-to-week basis (Wilde, 1996).

In order to overcome the limitations of standardized and criterion-referenced assessments, preservice teachers must learn to apply assessment skills that examine individual children’s literacy development on an ongoing and regular basis. Assessment skills, such as systematic observation, provide literacy teachers with the information needed for meeting all students’ literacy needs and for daily instructional decision making.

Literacy teachers can use systematic observation to assess how a child works on a literacy task. For example, through observation, information can be learned about a student’s (a) competencies and confusions, (b) strengths and weaknesses, (c) processes and strategies, and (d) understandings of literacy processes (Clay, 1993). Such information should be regularly recorded and used to scaffold (Bruner, 1978) and maximize each student’s literacy growth.

The literature is replete with studies and discussions of systematic observation. Clay (1993) described systematic observation from the standpoint of teachers using a variety of observation assessments to examine specific elements of young children’s literacy growth. Clay described the Observation Survey (i.e., running records, letter identification, concepts about print, word tests, writing, and hearing sounds) that emphasizes the operations and strategies used in early reading development. Careful record keeping of the child’s performance allows a teacher to tailor instruction so that the child acquires new literacy strategies and transfers their use from one situation to another.

Goodman (see Wilde, 1996) elaborated on systematic observation as a regular assessment practice and used the term “kidwatching” to describe the means by which teachers explore a child’s language development and knowledge of language. Kidwatching reveals up-to-date information about a child’s knowledge of language, as well as the role miscues play in language development. Goodman argued that the analysis of miscues provides the basis for discovering a child’s knowledge of reading and language. Miscue analysis allows teachers to examine a child’s successful and unsuccessful use of literacy strategies and then provide the support necessary for further development. As children try to make sense of and organize their knowledge, the teacher, through observation, can make instructional decisions that help children rethink and build their understanding of language and literacy.

Leu and Kinzer (1999) described a framework useful for teachers’ understanding of children’s reading processes. This framework lists the following components of reading: decoding knowledge, vocabulary knowledge, syntactic knowledge, discourse knowledge, metacognitive knowledge, automaticity, and affect. Although a variety of models commonly address similar components in various ways (see Ruddell & Unrau, 2004), it is argued that systematic observation would include information about each component. In other words, a literacy teacher should observe and document the entire range of reading components.

In addition, teachers engaged in systematic observation of children’s reading abilities should substantiate their observations with examples of what a child says or does with regard to each reading component. Teachers commonly use examples of children’s reading to document growth, reconsider their conceptions of children’s growth, and conduct conferences with the children, parents, and principal (Anderson, 2000). Given examples, the child, parents, and principal may be able to contribute confirming as well as disconfirming examples that, in turn, help all involved to understand the child’s reading progress. Teachers should be able to cite examples to demonstrate that their evaluation of a child is justified. Finally, systematic observation of reading involves not just a range of examples but also pulling these examples together for an overall understanding of a child’s reading abilities.

In summary, systematic observation has been described as an assessment process that enables teachers to understand children’s literacy knowledge in terms of what they know and what could be taught. Researchers offer several conclusions that help to focus a teacher’s understanding during systematic observation: (a) meaning is actively constructed during reading, (b) errors inform teachers about readers’ reading development and about how they interpret text, (c) readers use cueing systems and strategies for constructing meaning as they read, (d) all readers use reading strategies and cueing systems in similar ways to construct meaning, and (e) differences in readers’ backgrounds (i.e., culture, experience, language) influence meaning construction (Marek & Goodman, 1985). These conclusions are clearly evident in the systematic observation concepts just described.

Challenges Encountered in Teacher Education

Although systematic observation is a valuable skill to literacy teachers, teacher educators have a variety of challenges to overcome in order to help preservice literacy teachers develop such skills. For example, one strategy that teacher educators can use is to model systematic observation. Obviously, modeling requires that teacher educators, preservice teachers, and children be present simultaneously. This is a challenge because teacher preparation programs do not typically have school-aged children on campus. A teacher educator can arrange to observe in an elementary classroom with preservice teachers and thereby model systematic observation. It is, however, cumbersome for 20-40 preservice teachers to observe in one classroom of 20 or so children. This strategy, therefore, is problematic.

Teacher educators can ask cooperating teachers to model systematic observation during field experiences. However, unless there is consistent communication between the cooperating teachers and teacher educators, the cooperating teachers may not be modeling the systematic observation foci and strategies that the preservice teachers are learning in their teacher education courses. Further, because preservice teachers are commonly placed in several different schools for field experiences, a teacher educator may need to maintain consistent communication with 10-40 cooperating teachers (depending on the number of preservice teachers in a course and the ratio between cooperating teachers and preservice teachers). Maintaining the necessary communication with the numerous cooperating teachers can be a daunting challenge for the teacher educator.

Another alternative is for teacher educators to supervise preservice teachers’ field experiences and model systematic observation. However, because preservice teachers work in different classrooms (often in different schools), teacher educators may be able to travel only a handful of times during a semester to multiple field sites to observe each preservice teacher and model systematic observation.

This paper tells the stories of two technologies being used to address the aforementioned challenges. Story 1 involves the use of multimedia cases from a CD-based series entitled Children As Literacy Kases (ChALK, see http://web.missouri.edu/~umccoechalk/). Story 2 involves the use of video vignettes that demonstrate systematic observations. These technologies focused on individual children as they read and wrote in the classroom during literature, math, science, and social studies instruction. The instructors selected these technologies to address the aforementioned challenges. Specifically, using ChALK and the selected videos allowed the instructors to discuss systematic observation with preservice teachers without overwhelming a classroom of students with 20+ observers. Using these technologies allowed the instructors to model systematic observation without traveling to each preservice teacher’s field school. Because modeling could occur during university courses, the instructors did not have to ask cooperating teachers to model systematic observation. Herein, the modeling could be aligned with the courses and address specific strategies that may otherwise not be demonstrated during the limited time that preservice teachers are in the field.

Potential for Improving Teacher Education: Reasons to Use Technology

This section discusses a variety of reasons literacy teacher educators are turning to technology to improve teacher education. The two technologies described in this study were used for sociocultural reasons. Specifically, our culture is increasingly technological. Technology was used in these courses so that preservice teachers could see examples of how to incorporate technology into classroom settings. The National Center for Education Statistics (NCES; U.S. Department of Education, 2005) reported that in 2004 nearly 100% of U.S. public schools had Internet access. On the other hand, in 2001 only 33% of teachers reported being well prepared to use computers or the Internet in their classrooms. The U.S. Department of Education concluded that this lack of preparation often resulted in teachers not using the technology available in classrooms.

The U. S. Congress funded an initiative (Preparing Tomorrow’s Teachers to Use Technology, n.d.) to help teacher educators incorporate technology into their instruction and, thereby, help teachers feel more comfortable with technology and instruction with technology. Similarly, the National Council for Accreditation of Teacher Education (NCATE) determined that teacher education has an important role to play in helping K-12 teachers use technology in their classrooms. In 2001, NCATE added technology standards to their evaluation of teacher education programs (see http://www.ncate.org). The multimedia cases and video vignettes that are described in this study allowed preservice teachers to experience educational technologies.

The two technologies were also used for cognitive reasons. Specifically, each technology was used to overcome inert knowledge (Bereiter & Scardamalia, 1985; Bransford, 1989; Whitehead, 1929) regarding systematic observation by contextualizing observations via multimedia or video. There are anecdotal stories (e.g., Silverman & Welty, 1992), books (e.g., Atwell, 1987; Avery, 1993; Harp, 1993; Routman, 1994), and multimedia products (e.g., CTELL, see Teale, Leu, Labbo, & Kinzer, 2002, and Reading Classroom Explorer, see Hughes, Packard, & Pearson, 2000a, 2000b) that allow users to learn about ways to teach children to read and write. In other words, several instructional materials are available that allow preservice teachers to understand different ways to set up a literacy curriculum (e.g., Alverez et al., 2005). In contrast, the multimedia cases and video vignettes used in these courses were selected by the instructors so that preservice teachers could systematically observe children while they read and write. These technologies were used because their focus is on the child rather than on the teacher or the classroom.

Another cognitive reason these technologies were used was to provide anchored instruction (Cognition and Technology Group at Vanderbilt, 1991) regarding systematic observation. The preservice teachers could help one another think about systematic observation from each of their perspectives and, thereby, foster reflective thinking (Schon, 1983; Shulman, 1992) about systematic observation. Hughes et al. (2000a) asked preservice teachers to describe their experience with video vignettes. The participants reported that when classmates viewed the same vignettes, their class discussions were enriched because they shared a common anchored experience. Baker and Wedman (2000) found similar reports with a group of preservice teachers who used multimedia cases. Specifically, the participants stated that the use of multimedia cases in their course (which served as a shared anchor) helped them share and discuss their field experiences (which were different for each participant).

Finally, the two technologies were used for pedagogical reasons. The multimedia cases and video vignettes allowed instructors to shift from lectures about systematic observation to problem-based generative learning (Cammack & Holmes, 2002; Christenson, Gervin, & Sweet, 1991; Risko, McAllister, Peter, & Bigenho, 1994). In other words, the preservice teachers could observe the same child (via video or multimedia) and generate what they thought was salient information. Problem-solving was required because preservice teachers had to determine when and how children demonstrate reading abilities.

Hughes et al. (2000b) conducted a study in which teachers enrolled in a graduate literacy course had the option of using video vignettes to support their course work. They found that teachers who relied on the video to solve problems posed by the instructor were better able to support their claims about teaching reading. In other words, the video vignettes fostered problem-based generative learning. Baker and Wedman (2000) found that preservice teachers enrolled in a course using multimedia cases went from generating 42% of the discussion to generating 100% of the discussion within five class meetings. In other words, multimedia cases fostered generative learning experiences. In a phenomenological study, Baker (2005) found that 100% of a group of preservice teachers using multimedia cases perceived that they had grown as literacy teachers. In addition, they attributed their growth to the use of multimedia cases more than to their field experiences. The problem-solving opportunities are one reason the multimedia cases were valued.

Although the two technologies were used for sociocultural, cognitive, and pedagogical reasons, we wanted to know whether the preservice teachers developed systematic observation skills. The next section describes each technology and how it was used. The results of pretest/posttest measures of systematic observation are then reported.

The following questions guided this study:

  1. Did the ChALK users and video users observe a range of aspects of reading (adapted from Leu & Kinzer, 1999) on pretest and posttest measures?
  2. Did the ChALK users and video users substantiate their pretest/posttest observations?
  3. Did the ChALK users and video users describe an “overall” understanding of a child’s reading abilities?
  4. Did statistically significant differences exist between pretest and posttest measures for ChALK users? For video users?

Story 1: Using ChALK to Develop Systematic Observation Skills

Description of ChALK

ChALK (see http://web.missouri.edu/~umccoechalk/) is a multimedia software package that, at the time of this study, consisted of eight CD-ROMs containing reading and writing samples of three first-grade classmates: Helen, Kenneth, and Zane. These samples were collected in the children’s classroom from September through May of their first-grade year and capture reading and writing during literature, math, science, and social studies instruction.

The interface offers the following features: (a) list of child’s reading and writing samples, (b) video window, (c) scanned artifact window, (d) scenario window that contains an explanation of the video setting, (e) the ability to sort by date and content area, (f) the ability to create random access to portions of video clips we call “Bookmarks,” and (g) icons indicating whether the sample features reading or writing (see Figure 1). More specifically, the list of the child’s work features titles of what the child read or wrote, the date the child did the reading or writing, and the duration of the video associated with each reading or writing. Clicking on a title allows users to access video, scanned images, and scenarios pertaining to that title. Each title includes an icon designating whether it is a reading sample (icon of a book) or writing sample (icon of a pencil).

The video window allows users to see the edited digital video clips of the child reading or writing. The scanned artifact window allows users to see what the child was reading or writing. The scenario includes a description of the video setting and text from books the child is reading in the video. In addition, users can print out artifacts and scenarios and thereby create running records and anecdotal notes that can be systematically analyzed. The sorting feature lets users access the samples by date and content area. This feature allows users to pull up the child’s work from any given month.

Because reading and writing occur throughout the elementary curriculum (i.e., literature, math, science, and social studies), users can review the child’s reading and writing samples in a particular content area. The sort feature allows users to see a list of, for example, only math or only science items. Users can also combine the sort features. For example, users could sort for December social studies and see only those items. The Bookmark feature allows users to create a list of video segments to which they want to return without having to sort through all of the video again. For example, users may identify clips they want to discuss further with classmates or the instructor. They can create their own list of Bookmarks, click on any Bookmark, and randomly access the clips they want to review.

Figure 1. ChALK interface.
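As a rough sketch of the data model that the sort and Bookmark features imply, consider the following Python; the field names, dates, and sample entries are illustrative assumptions, not ChALK’s actual implementation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical metadata for one portfolio sample; ChALK's real data model is not published.
@dataclass
class Sample:
    title: str          # what the child read or wrote
    kind: str           # "reading" (book icon) or "writing" (pencil icon)
    recorded: date      # when the sample was collected
    content_area: str   # literature, math, science, or social studies
    video_seconds: int  # duration of the associated video clip

samples = [
    Sample("Lizard observations", "writing", date(1998, 12, 7), "science", 95),
    Sample("Community helpers book", "reading", date(1998, 12, 14), "social studies", 180),
]

def sort_samples(items, month=None, content_area=None):
    """Mimic the combined sort: filter by month and/or content area, order by date."""
    hits = [s for s in items
            if (month is None or s.recorded.month == month)
            and (content_area is None or s.content_area == content_area)]
    return sorted(hits, key=lambda s: s.recorded)

# "December social studies," as in the example above
december_ss = sort_samples(samples, month=12, content_area="social studies")

# A Bookmark list: (sample, start time in seconds) pairs for random access to clips
bookmarks = [(december_ss[0], 42)]
```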

How ChALK was Used

The ChALK-based section of preservice teachers used a computer-lab classroom throughout the semester. After the pretest was administered (class sessions 2 and 3) the instructor told the preservice teachers they would be using ChALK throughout the semester to develop systematic observation skills. The instructor modeled how to use ChALK by accessing Zane’s first reading sample, orally reading the related scenario, showing the video clip, and showing the related artifacts. A discussion ensued about the need for systematic observation, and the instructor recommended various books and chapters they could read in order to learn what to look for when observing children reading. During the next class, the preservice teachers shared what they read and started using the Zane CDs at their desks.

Throughout the semester, the ChALK users were asked to analyze and keep track of Helen’s, Kenneth’s, and Zane’s literacy development. At the beginning of the semester the observations were done in class. After the group was familiar with the interface, they analyzed specified ChALK segments for homework. Some participants chose to do the homework in small groups and others chose to do it independently.

The instructor scaffolded ChALK observations by providing study guides (Barnes, Christensen, & Hansen, 1994) that specified observation tasks. For example, the first study guide asked, “While watching Zane read and write in November, how would you describe his literacy abilities?” Another study guide asked, “It is January and you are Zane’s teacher. What would you plan for him tomorrow? Explain.” Later in the semester the study guide asked, “Based on your observations of Zane throughout the school year, how has Zane grown in his literacy abilities? Come to class ready to have an end-of-year conference with Zane’s parents.” The students responded to the study guides using analysis techniques such as anecdotal notes, checklists, and running records. During class, the ChALK users discussed their answers and cited data from ChALK that supported their conclusions about Helen’s, Kenneth’s, and Zane’s literacy development.

Story 2: Using Video to Develop Systematic Observation Skills

Description of Videos

The videos were noncommercial, unedited segments of first-grade children writing and reading in small-group and individual settings during classroom literacy activities. The segments, each focused on an individual child, were selected to provide examples of a range of reading and writing abilities and included the teacher working with the children in a variety of ways.

For example, one segment showed the teacher instructing a small group of emergent readers using a story from a basal reader. The teacher led the children through a picture walk, discussed specific words throughout the picture walk (i.e., compound words, words with endings), and read the story aloud with the children. This particular video segment focused on one child whose sight vocabulary and decoding skills were beginning to develop. The video was an over-the-shoulder view of the child who could be seen using picture and context clues to figure out unknown words, using his finger to point to each word, and covering parts of a word to decode it. The audio portion of the video also captured the child’s talk as he worked to recognize words and to respond to the teacher’s directions and questions. The child could also be heard reading the story aloud with the group.

A second video showed a child reading aloud to the teacher, who was completing a running record that included oral reading and comprehension. In this segment, the child read a storybook the teacher had identified as being appropriate for the child’s ability. After the child read the story aloud, the teacher engaged him in discussion by asking questions. The video recorded the child’s oral reading as well as all of the discussion occurring between the child and teacher.

A third video showed a child writing in the science area. The child was observing a lizard and writing his observations in a science notebook. He used invented spelling to write his words and also asked another student how to spell some words. The video included over-the-shoulder views of the child’s writing and the talk occurring between him and other students.

All video segments had characteristics similar to the three described above. In all instances, the segments were close-up shots focusing on one child, and the child’s reading and talk were clearly audible. The teacher was included in most of the segments either during instruction, conferencing, or assessing. The print material from which the child read was visible or available in hard copy to the preservice teachers. The child’s writing was also available in hard copy to preservice teachers.

How Videos Were Used

The videos were used to provide a classroom context for the course content. The content was organized by broad topics that included literacy theory, word identification, comprehension, writing, and assessment/evaluation. The instructor implemented each topic over several class sessions using the following format:

  1. The topic was introduced by the instructor who provided mini lectures about the content information.
  2. Instructional strategies were demonstrated.
  3. Video segments were used for in-class activities.
  4. The participants incorporated the instructional strategies and assessments into their field experiences.

Systematic observation of children’s literacy growth was emphasized throughout the course. Observation techniques included anecdotal notes written during the administration of running records and during teacher/student conferences. Observation included identifying characteristics such as a child’s literacy strengths, weaknesses, processes, attitudes, interests, and work habits.

For example, when the preservice teachers were learning to administer running records, the instructor first explained what a running record is used for, what it measures, and how it is administered and evaluated. The preservice teachers then practiced coding while listening to a child orally read a story on audiotape.

Next, the preservice teachers viewed the video segments two times, both during class time. The first viewing focused on the teacher’s technique for administering the assessment. The preservice teachers were asked to observe how the teacher related to the child, how she administered the running record, and how she discussed the story with the child. The preservice teachers wrote their observations while viewing the segment. The preservice teachers and the instructor then discussed their observations.

The second viewing of the same video segment focused on the child. The preservice teachers coded the story text as the child read it and wrote observations of the child’s reading behaviors. During the next class session, the instructor and preservice teachers discussed the scoring of the running record and the implications the evaluation had for instruction. The preservice teachers were assigned to complete a running record during their upcoming field experience and to submit it to the instructor for feedback.

Video segments were used during all of the broad topics included in the course. They were used (a) to help preservice teachers develop systematic observation skills that consisted of discovering a child’s literacy behaviors and instructional needs and (b) to examine a teacher’s literacy practice. The video segments were always viewed during class time followed by in-depth discussion guided by the instructor.

Research Design

This study used sequential mixed methods (Tashakkori & Teddlie, 1998). Specifically, qualitative data were collected (written answers to written open-ended questions). Using qualitative typological analyses (Hatch, 2002; LeCompte & Preissle, 1993), the written answers were broken into discrete data units, which were tallied and computed into frequency counts. Finally, to compare pre- and posttest scores, the frequency counts were converted to z-scores, and two-directional t-tests were computed.

To ensure validity, participants were randomly enrolled in one of three sections of the same block of courses, each of which adhered to the same course content and objectives. All instructors used the same instructional format, which included a variety of readings, demonstrations, and reflective writings directed toward particular objectives. All participants engaged in field experiences to plan and teach lessons to children in first or second grade classrooms. To develop systematic observation skills, one section used ChALK while another section used videos. The instructor of the third section opted not to participate in this study.

Participants and Setting

The participants (N = 54) were junior elementary education majors, of whom 49 were female and 5 were male. They attended a large midwestern U.S. university and met the criteria for admission into the College of Education, which included a minimum 2.75 grade point average, an ACT score of at least 21, completion of 8 hours of introductory education courses, and a minimum of 20 hours of observation in K-12 classrooms. Twenty-six participants were randomly enrolled in the ChALK-user section, while 28 participants were randomly enrolled in the video-user section.

Each section functioned as an intact cohort randomly enrolled in a 9-hour block of courses: Emergent Literacy (3 credit hours), Emergent Language (2 credit hours), Children’s Literature (2 credit hours), and field experience (2 credit hours). The focus of this block of courses was on literacy skills, assessment processes, and instructional strategies appropriate for teaching first-grade through third-grade children. In order to maintain content consistency in the courses across different sections, instructors of the same course in each block used the same course goals and instructional objectives to develop learning experiences.

For example, each instructor of the Emergent Literacy course used the following course topics to ensure that the preservice teachers’ learning experiences were focused toward the same literacy content: (a) the theoretical foundations that support literacy acquisition, (b) emergent reading and writing processes, (c) instructional strategies supporting emergent readers, (d) formal and informal assessment strategies, (e) writing for different purposes and audiences, (f) classroom management and organization, and (g) curriculum and teaching strategies based on students’ interests, cultural and ethnic backgrounds, and physical and mental abilities.

Field experience for both sections occurred in first- or second-grade classrooms in public school settings for a 2-hour period on each Tuesday and Thursday morning throughout the academic semester. During the field experience, the participants worked with small groups of children with whom they performed assessments (e.g., observations, running records, individual conferences) and planned and implemented literacy lessons. The lessons incorporated the literacy curriculum and teaching procedures learned in the literacy block of courses. After each field experience session, the participants wrote reflections examining the strengths and weaknesses of their lessons and observations of the children’s literacy abilities. The reflections were submitted to field experience supervisors, who observed the participants in the classroom and provided suggestions and comments for improvement.

The ChALK users developed systematic observation skills by discussing and analyzing the students shown via ChALK, while the video users developed observation skills via video segments. Both groups used their observations to identify students’ developing literacy strengths and needs and to discuss appropriate instruction.

Data Sources

Both groups took the same pretest and posttest at the beginning and end of the semester. The pretest and posttest involved watching the same 13-minute video of a first-grade child orally reading a story to his teacher. The child looked at all of the illustrations in the book, predicted what the book might be about, and read the book aloud to his teacher. The teacher asked the child about his word attack strategies when he got stuck on words and occasionally told him words he did not know. After viewing the video, the participants responded in writing to the question, “What do you notice about the child’s reading?” The participants had no time limit for their responses.

Data Analysis

To address the stated research questions, four measures of the data were analyzed (see Table 1). The first question focused on whether participants observed a range of the components of reading. To address this question, we used typological analyses (Hatch, 2002), in which the data were “divided into groups or categories on the basis of some canon for disaggregating the whole phenomenon under study” (LeCompte & Preissle, 1993, p. 257).

Table 1
Research Questions, Measures Analyzed, and Examples of Scoring

 

Research Question 1: Did the ChALK users and video users observe a range of aspects of reading (adapted from Leu & Kinzer, 1999) on pretest and posttest measures?
Measure Analyzed: A (number of aspects of reading a participant’s statements represented)
Case A: Sally identified 3 aspects of reading (Decoding, Semantic, and Syntactic); Tom identified 4 aspects of reading (Decoding, Semantic, Syntactic, and Pragmatic).

Research Question 2: Did the ChALK users and video users substantiate their pretest/posttest observations?
Measure Analyzed: B (number of examples a participant cited)
Case B: Sally cited 5 examples; Tom cited 17 examples.

Research Question 3: Did the ChALK users and video users describe an “overall” understanding of a child’s reading abilities?
Measure Analyzed: C (A + B)
Case C: Sally scored 8 (3 aspects + 5 examples); Tom scored 21 (4 aspects + 17 examples).

Research Question 4: Did statistically significant differences exist between pretest and posttest measures for ChALK users? For video users?
Measure Analyzed: D (total number of statements a participant made plus total number of examples cited)
Case D: Sally scored 11 because she made 6 statements and cited 5 examples; Tom scored 31 because he made 14 statements and cited 17 examples.

A modified version of Leu and Kinzer’s (1999) seven components of reading (see Appendix) was used. Specifically, the participants’ written responses were categorized according to the following aspects of reading: decoding, semantic, syntactic, pragmatic, metacognitive, affective, and automaticity. Then, each response was coded as a statement or as an example. Statements were defined as global or general statements that indicated recognition of an aspect of reading. For example, “He used the letters to sound out words,” was categorized as a statement about decoding. Examples were defined as a reference to a skill representative of a particular aspect of reading. For example, “He did not know the short sound of /u/,” was categorized as an example of a child’s decoding abilities. Throughout coding, the researchers refined classification characteristics for each aspect of reading. Herein, the participants’ observations were categorized according to whether they noted the child’s reading abilities for each component. This measure was labeled Measure A.
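A minimal sketch of this coding scheme, using the two data units quoted above, might look like the following Python; the dictionary layout and tally function are our illustration, not the researchers’ instrument.

```python
# Aspect labels from the modified Leu and Kinzer (1999) framework.
ASPECTS = {"decoding", "semantic", "syntactic", "pragmatic",
           "metacognitive", "affective", "automaticity"}

# Each discrete data unit carries an aspect code and a unit type (statement vs. example).
coded_units = [
    {"text": "He used the letters to sound out words.",
     "aspect": "decoding", "unit": "statement"},
    {"text": "He did not know the short sound of /u/.",
     "aspect": "decoding", "unit": "example"},
]

def tally(units):
    """Frequency of statements and examples per aspect of reading."""
    counts = {a: {"statement": 0, "example": 0} for a in sorted(ASPECTS)}
    for u in units:
        counts[u["aspect"]][u["unit"]] += 1
    return counts

print(tally(coded_units)["decoding"])  # {'statement': 1, 'example': 1}
```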

With regard to Measure A, each participant could score between 0 and 7 for making observations regarding 0-7 of the categories. In Case A (see Table 1), Sally made six statements. However, these statements addressed only three of the seven aspects of reading. Thus, for Measure A, Sally scored 3. It is possible that the child Sally was watching did not do anything to demonstrate his abilities in the other four aspects of reading, hence Sally’s omission. Systematic observations, however, should address what a child does and does not demonstrate (Wilde, 1996). Sally only noted what the child did; she neglected to take note of what the child did not do.

The second question focused on whether the participants substantiated their observations. Although teachers need to observe a range of children’s reading abilities (Measure A), they also need to take note of what a child says and does that represents development within each aspect of reading (Calkins, 2001). To address this question, the observations were broken into data units and then categorized as either (a) statement of reading component or (b) example of reading component. The number of examples each participant cited was used to calculate how well they substantiated their observations. This measure was labeled Measure B.

For example, in Case B (see Table 1), Sally cited five examples and Tom cited 17. It should be noted that when teachers make observations of a child’s automaticity, metacognition, and affect, they are referring to decoding, syntax, semantics, and pragmatics. In other words, a teacher might take note of a child’s automaticity with the initial sounds of “g,” metacognition about word order, and affect about how their background pragmatically relates to the story. In order to score each example cited by the participants as only one data point, the researchers scored such data points as examples of decoding, syntax, semantics, or pragmatics. Thus, the findings are reported in terms of how well participants scored in identifying examples of decoding, syntax, semantics, and pragmatics.

The third question asked about the participants’ overall observation abilities. To address this question, the measures used for the two previous questions (range of components plus examples of components) were combined. This measure was labeled Measure C. Analysis of this measure was conducted to determine whether statistically significant change occurred in the combined pretest and posttest scores. (Significance may be evident in the separate A and B scores but not in the combined score; conversely, significance may not occur in the separate measures but become evident when they are combined.) The combined scores were considered because it is preferable for a teacher observer to demonstrate the abilities represented in Measure A (range of reading abilities) as well as Measure B (citing examples), rather than one measure or the other. Measure C provided insights into these combined abilities. In Case C (see Table 1), Sally scored 8 (3 aspects + 5 examples), while Tom scored 21 (4 aspects + 17 examples).

The final question focused on statistically significant changes within each group between the pretest and posttest performances. To address this question, quantitative analyses were used, specifically, a two-group mixed design (Myers & Wells, 1995). For each group there were repeated measures (pre- and posttest) with the variables being analyzed within groups. This measure was labeled Measure D.

Unlike Measure A, which considered the number of aspects of reading rather than the number of statements, Measure D counted every statement and example. Although a teacher needs to observe a range of reading abilities (Measure A), it is also important to collect confirming and disconfirming information. In other words, a teacher should take note of all observable information related to, say, the child’s syntactic abilities. For instance, Sally made three statements about syntax: one may have referred to the child’s use of subject-verb word order, another may have mentioned the child’s ability to correctly substitute a known adjective for an unknown adjective, and the third may have noted the child’s inaccurate substitution of a verb for a noun. For Measure D, Sally’s statements were combined with the examples she cited; hence, Sally scored 11 while Tom scored 31.
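To make the scoring arithmetic concrete, the short Python sketch below reproduces Sally’s and Tom’s numbers from Table 1 (the function is ours; the coding itself was done by hand).

```python
def measures(n_aspects, n_statements, n_examples):
    a = n_aspects                   # Measure A: aspects of reading represented
    b = n_examples                  # Measure B: examples cited
    c = a + b                       # Measure C: aspects plus examples
    d = n_statements + n_examples   # Measure D: all statements plus all examples
    return a, b, c, d

print(measures(n_aspects=3, n_statements=6, n_examples=5))    # Sally: (3, 5, 8, 11)
print(measures(n_aspects=4, n_statements=14, n_examples=17))  # Tom:   (4, 17, 21, 31)
```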

Participants’ pretest and posttest responses were categorized according to the previously described definitions for Measures A, B, C, and D. Using the total frequency for each measure, a change score was computed for each measure. Each total was converted to a z-score, and a two-directional t-test was computed to determine whether or not significant differences existed between pretest and posttest performance. The .05 level of significance (critical value = 1.96) was adopted.
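The published analysis standardized the totals before testing, but assuming the two-directional t-test took the standard paired form on each participant’s change score $d$ (posttest minus pretest), the statistic for each measure would be

$$ t = \frac{\bar{d}}{s_d / \sqrt{n}} $$

where $\bar{d}$ is the mean change, $s_d$ is its standard deviation, and $n$ is the group size. As a rough check against Table 3, the ChALK group’s Measure B row ($\bar{d} = 2.17$, $s_d = 3.23$, $n = 23$) gives $t = 2.17/(3.23/\sqrt{23}) \approx 3.22$, consistent with the reported 3.23 once rounding is allowed for; this is a sketch of the computation, not necessarily the authors’ exact procedure.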

A series of steps was used to establish a dependable coding system. First, two researchers independently identified and categorized the statements and examples written by the ChALK users and the video users on the pretest and posttest measures. The categorizations were compared for agreement; then each statement or example for which a discrepancy occurred was reviewed and discussed. The researchers reviewed the discrepant responses and examined similar responses previously categorized until agreement was reached. When 100% interrater agreement was reached, frequencies and percentages were computed for each group’s pretest and posttest total statement categories and total example categories.
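A simple percent-agreement check of the kind this procedure implies could be sketched as follows; the coded units are invented for illustration.

```python
def percent_agreement(coder1, coder2):
    """Share of data units both coders placed in the same category."""
    assert len(coder1) == len(coder2)
    matches = sum(a == b for a, b in zip(coder1, coder2))
    return 100 * matches / len(coder1)

coder1 = ["decoding/statement", "decoding/example", "semantic/statement"]
coder2 = ["decoding/statement", "automaticity/example", "semantic/statement"]

# About 66.7% here; discrepant units are discussed and recoded until agreement is 100%.
print(percent_agreement(coder1, coder2))
```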

Results

Range of Observations

The first research question examined the extent to which the ChALK users and the video users observed a range of the seven aspects of reading on the pretest and posttest. In order to maintain discrete data points, the researchers categorized all statements and examples of metacognition, affect, and pragmatics according to whether they referred to the other aspects (decoding, semantics, syntax, automaticity). In other words, if a participant stated that the child proficiently decoded, the researchers categorized this as a decoding statement rather than an automaticity statement.

Results indicated that the participants in both groups observed instances of the decoding, semantic, and automaticity categories more frequently than the remaining categories on pretest and posttest measures. Table 2 provides the frequency and percent of statements and examples written by the ChALK users and the video users on pretest and posttest measures. The results are delineated by the seven aspects of reading described earlier. As stated, all examples of pragmatics, metacognition, and affect were categorized according to whether they referred to the other aspects (decoding, semantics, syntax, automaticity).

Table 2
Frequency and Percent and Totals of Statements and Examples on Pretest and Posttest Measures for ChALK and Video User Groups

Category          ChALK Pre    ChALK Post    Video Pre    Video Post
                  f (%)        f (%)         f (%)        f (%)
Measure A – Statements
Decoding          19 (26)      25 (21)       16 (17)      16 (17)
Semantic          23 (31)      30 (25)       18 (19)      29 (32)
Syntactic          2 (3)       15 (13)        6 (6)        3 (3)
Pragmatic          0 (0)        0 (0)         0 (0)        0 (0)
Metacognition      0 (0)        7 (6)         2 (2)        2 (2)
Affect            10 (15)       7 (6)         8 (9)        6 (7)
Automaticity      19 (26)      34 (29)       43 (46)      36 (39)
Total             73 (100)    118 (100)      93 (100)     92 (100)
Measure B – Examples
Decoding          15 (65)      59 (80)       16 (36)      16 (76)
Semantic           3 (13)       2 (3)         3 (7)        0 (0)
Syntactic          0 (0)        3 (4)         0 (0)        0 (0)
Pragmatic          0 (0)        0 (0)         0 (0)        0 (0)
Metacognition      0 (0)        5 (7)         0 (0)        0 (0)
Affect             0 (0)        0 (0)         1 (2)        0 (0)
Automaticity       5 (22)       3 (4)        24 (55)       5 (24)
Total             23 (100)     74 (100)      44 (100)     21 (100)
Measures A + B    96          192           137          113
Grand Total      111          208           175          139

Of the 73 total pretest statements written by the ChALK users, some were observations of the child’s use of semantic cues (23, 31%) and a few related to decoding (19, 26%) and automaticity (19, 26%). Of the 23 total pretest examples written by the ChALK users, most related to decoding (15, 65%), while a few related to semantics (3, 13%) and automaticity (5, 22%). The participants did not write statements categorized as pragmatic or metacognitive. They did not write examples categorized as syntactic.

Of the 117 total posttest statements written by the ChALK users, some were observations of automaticity (34, 29%), some were related to semantics (30, 26%), and a few were related to syntax (14, 12%). Of the 74 total posttest examples, most related to decoding (59, 80%). The participants did not write statements categorized as pragmatic or examples categorized as pragmatic or affective.

Of the 93 total pretest statements written by the video users, many were observations of automaticity (44, 46%), and a few referred to semantics (18, 19%) and decoding (16, 17%). Of the 44 total pretest examples, most related to automaticity (25, 55%) and some referred to decoding (14, 36%). The participants did not write statements categorized as pragmatic or examples categorized as syntactic, pragmatic, metacognitive, or affective.

Of the 90 total posttest statements written by the video users, some were observations of automaticity (35, 39%) and semantics (30, 33%), and a few were identified as decoding (18, 20%). Of the 21 total posttest examples, most focused on decoding (11, 76%) and a few referred to automaticity (9, 24%). As in the pretest, the participants did not write statements categorized as pragmatic or examples categorized as syntactic, pragmatic, metacognitive, or affective.

Substantiation of Observations

Question 2 examined the extent to which the preservice teachers in the ChALK user group and the video user group substantiated their statements by identifying examples of their observations. The findings indicated that both groups observed examples of decoding and automaticity on pre- and posttests. Neither group provided examples of syntactic aspects. (As stated previously, all examples of pragmatics, metacognition, and affect were categorized according to whether they referred to the other aspects―decoding, semantics, syntax, automaticity.) The data revealed that while the ChALK users provided more than three times the number of examples on the posttest over the pretest, the video users provided about one-half as many examples.

Of the 23 total pretest examples written by the ChALK users, most related to decoding (15, 65%), while a few related to semantics (3, 13%) and automaticity (5, 22%). Of the 74 total posttest examples, most related to decoding (59, 80%).

Of the 44 total pretest examples written by the video users, most related to automaticity (25, 55%) and some referred to decoding (17, 36%). The participants did not write examples categorized as syntactic. Of the 21 posttest examples, most focused on decoding (12, 76%) and a few referred to automaticity (9, 24%). As on the pretest, the participants did not write examples categorized as syntactic.

Overall Observations

The third research question examined the ChALK users’ and the video users’ overall observations of the child’s literacy processes. Analysis of the ChALK users’ pretest for Measure C (the combined total for Measures A and B) resulted in 96 statements and examples; analysis of the posttest resulted in 192 statements and examples. Analysis of the video users’ pretest resulted in 137 statements and examples; posttest results indicated 120 statements and examples.

Analysis of the ChALK users’ pretest for Measure D (the grand total of all statements and examples) resulted in 93 total statements and examples; posttest results included 189 total statements and examples. Analysis of the video users’ pretest for Measure D resulted in 137 statements and examples; posttest results indicated 114 statements and examples.

Change Between Pretest and Posttest Measures

The final question examined whether or not significant differences existed between pretest and posttest measures for the ChALK users and the video users in identifying and giving examples of the seven aspects of reading. Changes in scores on the measures from pretest to posttest for the groups are displayed in Table 3. It is interesting to note that the average (mean) change was not uniform within either group. For the ChALK users, the changes ranged from -4 to 11 (a spread of 15 points), while for the video users the spread was 20 points, from -13 to 7. For both groups, the largest spread was on Measure D and the smallest was on Measure A. Table 3 also reports the test statistic for the change on each measure. The table reports scores in their original metric, although the analysis used standardized scores.

Table 3
Change Between Pretest and Posttest for ChALK and Video Users on Measures A, B, C, and D

 

Change Measure   Minimum Raw Score   Maximum Raw Score   Mean    Standard Deviation   t
ChALK users (n = 23)
A                 -1                  4                   1.09    1.41                  3.69*
B                 -3                 10                   2.17    3.23                  3.23*
C                 -4                  9                   3.26    3.12                  5.10*
D                 -4                 11                   4.13    3.62                  5.18*
Video users (n = 26)
A                 -3                  3                    .38    1.55                  2.63*
B                 -6                  5                   -.62    2.04                 -1.55
C                 -7                  5                   -.31    2.74                  .54
D                -13                  7                   -.85    4.23                 -1.02
*p < .05
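The computations behind Table 3 can be illustrated with the following Python sketch; the change scores are invented, since the raw data are not published, and the paired pre/post t-test is computed as a one-sample test on the differences.

```python
import numpy as np
from scipy import stats

# Hypothetical Measure D change scores (posttest minus pretest) for one group.
change_d = np.array([3, 7, -2, 5, 11, 4, 1, -4, 6, 8, 2, 5])

print(change_d.min(), change_d.max())         # minimum and maximum raw change
print(change_d.mean(), change_d.std(ddof=1))  # mean and standard deviation of change
# A two-directional paired t-test on pre/post scores is equivalent to a
# one-sample t-test on the change scores against a mean of zero.
t, p = stats.ttest_1samp(change_d, popmean=0)
print(t, p)  # significant at the .05 level if p < .05
```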

The results indicated that significant differences existed between pretest and posttest scores on all measures (A, B, C, and D) for the ChALK users. For the video users, significant differences existed between pretest and posttest scores on Measure A only; no significant differences were found on the remaining three measures.

Summary and Discussion

In this study, two stories were told of teacher educators using technology to overcome challenges encountered while helping preservice teachers become systematic observers of children’s literacy abilities. Story 1 focused on the use of a multimedia-based technology (ChALK), while Story 2 highlighted the use of video-based technology. In summary, the findings indicate that both technologies were useful in helping preservice teachers become more cognizant of the range of factors they should consider when they systematically observe children reading. In addition, the findings indicate that the multimedia-based technology (ChALK) was useful in helping preservice teachers substantiate their observations, document an overall understanding of a child’s reading abilities, and demonstrate statistically significant improvement in systematic observation abilities.

Conversely, findings indicate that the video-based technology did not have an impact on preservice teachers’ ability to substantiate their observations, document an overall understanding of a child’s reading abilities, or demonstrate statistically significant improvement in systematic observation abilities for three of the four measures. In this section, a variety of possible explanations are offered for these findings, some limitations are discussed, and opportunities for further investigations are highlighted.

With regard to helping preservice teachers develop the ability to notice a wide range of children’s reading abilities (Measure A), both technologies appear to be useful. Herein, teacher educators who do not have access to a multimedia program such as ChALK, but who do have video of children reading, may find that such videos are sufficient to help preservice teachers understand that systematic observations are multifaceted (e.g., include more than phonics or comprehension).

There are many possible explanations for why the preservice teachers in the multimedia-based (ChALK) group showed statistically significant growth on each of the additional measures while the preservice teachers in the video-based group did not. For example, a situational explanation could be that the supervisor of the ChALK users required substantiation of their field experience observations while the supervisor of the video users did not. Based on informal interviews with both supervisors, there is no evidence that supervision was significantly different. However, such data were not collected for this study and, therefore, the impact of the field supervisors on the development of systematic observation is unknown. Further investigations may want to take such data into account.

Another explanation could be that the multimedia-users had exactly that: multimedia. Users could access video, simultaneously see the video and the book the children read in the videos, print out the texts the children read and take notes of what the children said while attempting to read, read about the context of the videos (scenarios), sort the portfolio samples by date and content area, and create their own bookmarks within and across the videos. The video users had similar data (video, printouts of what the child was reading, over-the-shoulder views so they could see in the video the books the children read, and an oral explanation of the context of the video). In some ways, the video-users did use multimedia. Specifically, they watched video, had printouts, and heard video contexts. Further investigations may want to consider the importance to users of the video, printouts, scenarios, etc., on the same screen. In other words, does it matter that the ChALK users did not have to switch between looking up to see the video, find their place again on the printouts in front of them, and so forth? Did the ability to simultaneously open all pertinent data on one screen facilitate the ChALK users’ abilities to make sense of the data? Did the features unique to the multimedia format (ability to make Bookmarks and sort) make a difference?

Another feature available to the multimedia users but not the video users was user control. Specifically, after the ChALK-based instructor showed her class how to use the interface, the users were then able to peruse the data as they so desired. Each preservice teacher was seated at a computer that had ChALK installed. While the instructor specified what part of the data to evaluate (e.g., November Social Studies samples), the users could access any of the data (e.g., October Social Studies, November Literature, etc.).

Further investigations may want to identify whether the users accessed data that were not specified for the task at hand. If the users looked back at other data or at coinciding data, maybe their ability to cite examples was enhanced. In addition, user control of the data related to a specified task may have been a factor. The video users watched the video together; they were not able to stop and start the video as they each saw fit. The multimedia users each had the ability to stop and start any of the data as they desired, because they each had their own computers. Further investigation of the significance of user control may provide pertinent insights.

In interviews with the instructors, it became evident that both instructors used about 35-40% of the semester to help the preservice teachers develop systematic observation skills. The ChALK-based instructor focused on systematic observation for the first 5 weeks of the semester and revisited some of the concepts, as there was time, throughout the remaining 10 weeks. The video-based instructor intertwined systematic observation throughout all 15 weeks. Therefore, the findings do not appear to be a result of the ChALK users spending more time developing systematic observation skills. However, upon closer investigation it became evident that the ChALK users were able to do ChALK-based homework because there were multiple copies of ChALK, while the video users did not do video-based homework. Hence, although in-class time may have been similar between groups, homework time may account for different amounts of time spent developing systematic observation skills. Further investigations could specifically account for amounts of in-class and homework time spent developing systematic observation skills.

Theories of anchored instruction argue that when learners have a common experience (an anchor), they can build on this experience and help one another consider the complexities and intricacies of the experience (Cognition and Technology Group at Vanderbilt, 1991). In other words, they can help one another be reflective (Schon, 1983). Both groups in this study had an anchor: the multimedia users experienced ChALK, while the video users experienced a variety of video segments. Why, then, did the multimedia-based anchor appear to be more effective than the video-based anchor? Further investigations could consider that not all anchors are equal. In this study, the multimedia-based anchor allowed users to follow three children through 8 months of their literacy development. Furthermore, these three children were in the same first-grade classroom. In other words, the multimedia-based anchor allowed users to build a coherent story of each child and an understanding of their first-grade classroom. On the other hand, the video-based anchor was a series of unrelated video segments designed to demonstrate ways to conduct systematic observation. This study may indicate that cohesive anchors are more effective than isolated anchors. This indication is significant to instructors who seek to provide meaningful anchored instruction.

Related to anchored instruction is case-based instruction (CBI; Merseth, 1997; Shulman, 1992). In fact, it is argued that one reason to use CBI in teacher education is that it provides an anchored experience (Baker & Wedman, 2000; Risko & Kinzer, 1994). Could the differences between groups in this study be related not merely to anchored instruction but also to the possibility that ChALK allowed users to build three cases of three first-grade children, while the video users had isolated video clips? Some argue that cases are different from demonstrations because cases provide a rich context that allows users to explore the complexities of the case. Others argue that demonstrations are a type of case (Lundeberg, Levin, & Harrington, 1999). This study may indicate that, regardless of whether demonstrations are a type of case, a rich context may be critical for users to substantiate their evaluation of a case. Further investigations into the richness of data provided in a case may help teacher educators develop a better understanding of what makes CBI more or less effective.

This study had the following limitations. The video used for the pretest and posttest was naturalistic; it did not purposely represent each component of reading, nor did it purposely emphasize each component equally. Hence, participants may have been influenced to write about what was explicitly demonstrated in the video while overlooking the range of components expert observers might note. Although systematic observation is critical to making appropriate instructional decisions, this study focused only on systematic observation. Other studies may want to consider further the impact of systematic observation on instructional decision-making. Finally, this study employed a typological analysis using a modified version of Leu and Kinzer's (1999) seven aspects of reading; other models could be used for typological analysis of the components of reading (see Ruddell & Unrau, 2004).

In conclusion, both technologies appeared to be useful in the development of systematic observation among preservice literacy teachers. The multimedia users appear to have developed additional skills (e.g., the ability to substantiate their observations); therefore, there may be good reason to explore further the instructional power of multimedia materials. With further investigation, researchers may be able to determine the importance of multimedia materials for the preparation of teachers, whether these materials need to form a case or can simply represent a compilation of demonstrations, whether other factors (e.g., field supervisor) are more significant than the types of technologies used in methods courses, and the impact of user control on learning.

References

Alvarez, M. C., Atkinson, T., Boling, E. C., Grisham, D. L., Labbo, L., Kinzer, C. K., & Risko, V. J. (2005). The transformation of teacher education through technology integration. Paper presented at the meeting of the National Reading Conference, Miami, FL.

Anderson, C. (2000). How’s it going? Portsmouth, NH: Heinemann.

Atwell, N. (1987). In the middle: Writing, reading, and learning with adolescents. Portsmouth, NH: Heinemann.

Avery, C. (1993). And with a light touch: Learning about reading, writing, and teaching with first graders. Portsmouth, NH: Heinemann.

Baker, E. A. (2005). Can preservice teacher education really help me grow as a literacy teacher?: Examining preservice teachers’ perceptions of multimedia case-based instruction. Journal of Technology and Teacher Education, 13, 415-431.

Baker, E. A., & Wedman, J. (2000). Lessons learned while using case-based instruction with preservice literacy teachers. In T. Shanahan & F. Rodriguez-Brown (Eds.), Forty-ninth National Reading Conference yearbook (pp. 122-136). Chicago: National Reading Conference.

Barnes, L. B., Christensen, C. R., & Hansen, A. J. (1994). Teaching and the case method (3rd ed.). Boston, MA: Harvard Business School.

Bereiter, C., & Scardamalia, M. (1985). Cognitive coping strategies and the problem of “inert” knowledge. In S. F. Chipman, J. W. Segal, & R. Glaser (Eds.), Thinking and learning skills: Current research and open questions (Vol. 2, pp. 65-80). Hillsdale, NJ: Erlbaum.

Berliner, D. C. (1986). In pursuit of the expert pedagogue. Educational Researcher, 15(7), 5-13.

Bransford, J. D., Franks, J. J., Vye, N. J., & Sherwood, R. D. (1989). New approaches to instruction: Because wisdom can’t be told. In S. Vosniadnou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 470-497). New York: Cambridge University Press.

Bruner, J. S. (1978). The role of dialogue in language acquisition. In A. Sinclair, R. J. Jarvella, & W. J. M. Levelt (Eds.), The child’s conception of language. Berlin: Springer-Verlag.

Calkins, L. (2001). The art of teaching writing. New York: Longman.

Cammack, D. W., & Holmes, J. T. G. (2002, November). Extending the potential of the Internet for higher education: Two research projects at Vanderbilt University’s Learning Technology Center. International Journal of Educational Technology, 3(1). Retrieved October 22, 2007, from http://www.ascilite.org.au/ajet/ijet/v3n1/cammack/index.html

Carter, K., Sabers, D., Cushing, K., Pinnegar, S., & Berliner, D. C. (1987). Processing and using information about students: A study of expert, novice, and postulant teachers. Teaching and Teacher Education, 3, 147-157.

Chase, W. G., & Chi, M. T. H. (1980). Cognitive skill: Implications for spatial skill in large-scale environments. In J. Harvey (Ed.), Cognition, social behavior, and the environment. Potomac, MD: Erlbaum.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55-81.

Christensen, C. R., Garvin, D. A., & Sweet, A. (Eds.). (1991). Education for judgment: The artistry of discussion leadership. Boston, MA: Harvard Business School.

Clay, M. M. (1993). An observation survey of early literacy achievement. Portsmouth, NH: Heinemann.

Cognition and Technology Group at Vanderbilt. (1991, May). Technology and the design of generative learning environments. Educational Technology, 34-40.

deGroot, A. D. (1965). Thought and choice in chess. The Hague: Mouton.

Egan, R. W., & Schwartz, B. (1979). Chunking in recall of symbolic drawings. Memory and Cognition, 7, 149-158.

Harp, B. (1993). Bringing children to literacy: Classrooms at work. Norwood, MA: Christopher-Gordon Publishers.

Hatch, J. A. (2002). Doing qualitative research in education settings. Albany: State University of New York Press.

Hughes, J. E., Packard, B. W., & Pearson, P. D. (2000a). The role of hypermedia cases on preservice teachers’ views of reading instruction. Action in Teacher Education, 22(2A), 24-38.

Hughes, J. E., Packard, B. W., & Pearson, P. D. (2000b). Pre-service teachers’ experiences using hypermedia and video to learn about literacy instruction. Journal of Literacy Research, 32(4), 599-629.

LeCompte, M. D., & Preissle, J. (1993). Ethnography and qualitative design in educational research (2nd ed.). San Diego, CA: Academic Press.

Leinhardt, G., & Greeno, J. G. (1986). The cognitive skill of teaching. Journal of Educational Psychology, 78, 75-95.

Leu, D. J., & Kinzer, C. K. (1999). Effective reading instruction K-8 (4th ed.). Englewood Cliffs, NJ: Merrill.

Livingston, C., & Borko, H. (1989). Expert-novice differences in teaching: A cognitive analysis and implications for teacher education. Journal of Teacher Education, 40(4), 36-42.

Lundeberg, M. A., Levin, B. B., & Harrington, H. L. (Eds.) (1999). Who learns what from cases and how? Hillsdale, NJ: Erlbaum.

Marek, A. M., & Goodman, K. S. (1985). Annotated miscue analysis bibliography (Occasional Paper No. 16). Tucson: Program in Language and Literacy, College of Education, University of Arizona. (ERIC Document Reproduction Service No. ED 275 998)

Merseth, K. K. (1997). Case studies in educational administration. New York: Longman.

Myers, J. L., & Wells, A. D. (1995). Research design and statistical analysis. Hillsdale, NJ: Erlbaum.

Norman, G. R., Jacoby, L. L., Feightner, J. W., & Campbell, E. J. M. (1979). Clinical experience and the structure of memory. Proceedings of the 18th annual conference on Research in Medical Education. Washington, DC: Association of Medical Colleges.

Preparing Tomorrow’s Teachers to Use Technology. (n.d.). Purpose. Retrieved September 27, 2007, from http://www.ed.gov/programs/teachtech/index.html

Risko, V. J., & Kinzer, C. K. (1994). Improving teacher education through dissemination of videodisc-based case procedures and influencing the teaching of future college professionals (Application No. P116A40242). Washington, DC: Fund for the Improvement of Postsecondary Education.

Risko, V. J., McAllister, D., Peter, J., & Bigenho, F. (1994). Using technology in support of preservice teachers’ generative learning. In E. C. Sturtevant & W. M. Linek (Eds.), Pathways for literacy: Learners teach and teachers learn (pp. 155-167). Pittsburg, KS: College Reading Association.

Routman, R. (1994). Invitations: Changing as teachers and learners K-12. Portsmouth, NH: Heinemann.

Ruddell, R., & Unrau, N. J. (2004). Theoretical models and processes of reading (5th ed.). Newark, DE: International Reading Association.

Schon, D. (1983). The reflective practitioner. New York: Basic Books.

Shulman, L. (1992). Toward a pedagogy of cases. In J. Shulman (Ed.), Case methods in teacher education (pp. 1-30). New York: Teachers College Press.

Silverman, R., & Welty, B. (1992). Education: Case studies for teacher problem solving. New York: McGraw-Hill Primis.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

Teale, W. H., Leu, D. J., Jr., Labbo, L. D., & Kinzer, C. (2002, April). The CTELL project: New ways technology can help educate tomorrow’s reading teachers. The Reading Teacher, 55(7). Retrieved September 23, 2007, from http://www.readingonline.org/electronic/elec_index.asp?HREF=/electronic/RT/4-02_Column/index.html

Tunmer, W. E., & Bowey, J. A. (1984). Metalinguistic awareness and reading acquisition. In W. E. Tunmer, C. Pratt, & M. L. Herriman (Eds.), Metalinguistic awareness in children: Theory, research, and implications (pp. 144-168). Berlin: Springer-Verlag.

U.S. Department of Education. (2005). Internet access in U.S. public schools and classrooms: 1994–2003 (NCES Report No. 2005–015). Retrieved September 23, 2007, from http://nces.ed.gov/surveys/frss/publications/2005015/

Whitehead, A. N. (1929). The aims of education. New York: Macmillan.

Wilde, S. (Ed.). (1996). Notes from a kidwatcher: Selected writings of Yetta M. Goodman. Portsmouth, NH: Heinemann.

Author’s Note:

Elizabeth A. Baker
University of Missouri-Columbia
[email protected]

 


 

Appendix

Definitions of Literacy Aspects Used to Categorize Preservice Teachers’ Written Statements and Examples

Category and Sample Responses

Decoding:
Statements indicate that words were sounded out or sound-symbol associations were applied.
Sample: “He knows to sound out words.”
Examples describe the use of skills such as matching between grapheme and phoneme, syllabication, root words, or words that look alike.
Sample: “In the word ‘across,’ he was able to eventually read the word because he was sounding every letter out.”

Semantic:
Statements identify instances of word or sentence meaning that was used to decode a word.
Samples: “He skipped words he didn’t know and read on to see what would make sense.” “He showed knowledge of semantics when he read goose instead of duck.”
Examples refer to a specific word or quote or a substitution of a known word for an unknown word.
Sample: “He is able to hear if a sentence makes sense.”

Syntactic:
Statements indicate that word order was used to decode words.
Sample: “He read, ‘The car stopped because the truck.’”
Examples indicate appropriate grammatical structure was used to guess unknown words.
Sample: *

Pragmatic:
Statements refer to the meanings and acceptability of phrases, sentences, and texts (Tunmer & Bowey, 1984).
Sample: “He is able to self-correct because he knows the sentence does not make sense or mean the correct thing.”
Examples: *

Metacognitive:
Statements indicate that thinking about what is known was used to solve a reading dilemma.
Sample: “He has trouble staying focused on what he is reading with the other distractions in the classroom.”
Examples: *

Affective:
Statements suggest that feelings were influencing the reading process (i.e., distractible, frustrated, uncomfortable).
Sample: “He is always able to pick out beginning sounds and uses several strategies to help him with unknown words.”
Examples: *

Automaticity:
Statements identify fluency that was/was not evident in the reading process (i.e., sight words, forgets same word he just read, uses strategies).
Examples

* No responses made