A Computer-Based Instrument That Identifies Common Science Misconceptions

by Timothy G. Larrabee, Oakland University; Mary Stein, Oakland University; & Charles Barman, Indiana University Purdue University Indianapolis

Abstract

This article describes the rationale for and development of a computer-based instrument that helps identify commonly held science misconceptions. The instrument, known as the Science Beliefs Test, comprises 47 items that target topics in chemistry, physics, biology, earth science, and astronomy. The use of an online data collection system aided in developing this instrument and in ascertaining its validity and reliability. Validity was also established through the use of expert panels, previously published items, and feedback from pilot tests. Using KR-21, internal consistency was established at 0.77, and a test-retest reliability coefficient of 0.776 indicated moderate reliability. As of December 2005, 1,071 respondents had participated in this study, including 17 college and university educators, 40 members of the general public, and 41 K-12 educators. Eighty-five graduate students, 254 K-12 students, and 634 undergraduates also took the survey. The instrument continues to be revised to clarify existing items and add new ones to further its usefulness.

The editors of Contemporary Issues in Technology and Teacher Education hereby retract this article, “A Computer-Based Instrument That Identifies Common Science Misconceptions” by Timothy Larrabee, Mary Stein, and Charles Barman. The article is being retracted because a substantively duplicate manuscript was subsequently published by the same authors in the Journal of Science Teacher Education, Issue 2, Volume 18, April 2007: “What Are They Thinking? The Development and Use of an Instrument That Identifies Common Science Misconceptions” by Mary Stein, Charles R. Barman, and Timothy Larrabee (pp. 233–241).

 

Although many instruments have been developed that target individuals’ misconceptions about a variety of specific science topics, an online instrument targeting a wide range of science beliefs has not yet been developed. As society becomes more at ease with the use of the Internet, the development of instruments that effectively use technology for educational research is needed. This article describes the rationale for and development of an easily administered instrument, known as the Science Beliefs Test, which helps researchers, science educators, and science teachers understand more about commonly held scientific misconceptions. A description of the Science Beliefs Test and an explanation of how its validity and reliability were established are included in this discussion.

Accessing Students’ Thinking About Science

Traditional Methods

Research on students’ beliefs and the alternative conceptions they may hold has a long history and continues to be of great interest. A variety of methods have been used to elicit students’ ideas, and these have been widely reported in the literature (e.g., Aron, Francek, Nelson, & Bisard, 1994; Haslam & Treagust, 1987; Osborne & Freyberg, 1985; Schoon, 1995; Trumper, 2001; Watts & Zylbersztajn, 1981). Many of these methods are not feasible in terms of the time and effort required for use in existing K-12 and preservice elementary education science classrooms. Moreover, many of the assessments focus on specific science topics rather than on a broad range of science conceptions, and they often test at greater depths than can be reached in general survey science courses. As a result, the authors became interested in developing an instrument, accessible from any computer, that would make effective use of existing technology in eliciting respondents’ beliefs about a wide range of science topics. The instrument would assist K-12 general science teachers, as well as preservice elementary education science educators, in identifying key misconceptions held by their students across the various science concepts presented in their curricula.

Haslam and Treagust (1987) noted that individual student interviews are often a useful way for researchers to identify students’ misconceptions in science; however, this methodology may not be as useful to teachers (Fensham, Garrard, & West, 1981; Peterson, Treagust, & Garnett, 1989). Typically, when students are interviewed, their responses are recorded, transcribed, and analyzed. As students become more adept at using the keyboard to express themselves in e-mails and instant messages, they will become more comfortable typing their responses to questions provided online, and their written responses will more closely approximate the verbal answers they may have given in a face-to-face interview.

Not only are current methods for eliciting students’ beliefs, such as interviewing and paper-and-pencil surveys, often cumbersome for teachers and scholars, they may also fail to be useful to the students as a means for encouraging thinking about their own ideas, the reasons for those ideas, and how their ideas may change as a result of instruction. Rosenfeld, Booth-Kewley, and Edwards (1993) reported that “responding on the computer may lead to higher levels of self-awareness,” and participants perceive online assessments as more useful and relevant (p. 498).

Odom and Barrow (1995) have advocated the development of paper-and-pencil tests to help classroom teachers diagnose misconceptions. Yet, administering and analyzing these assessments can be costly in terms of lost instructional time and money for printing and reprinting copies. Furthermore, there may be difficulties associated with collecting and analyzing the complete data set. In keeping with the concerns related to the difficulty of conducting personal interviews, as well as many other forms of data collection, we have developed an electronic instrument, the Science Beliefs Test, which aids in revealing science misconceptions.

Benefits of Online Surveys

The benefits of administering online instruments are well documented; Natal (1998) listed many of them. Respondents may complete online surveys at a time and place that is convenient for them, without having to travel to a specific location at a particular time. Students receive immediate feedback on their results at the conclusion of the exam. Students with special needs can take all the time they need to complete the assessment without feeling rushed by the instructor or classmates. Students who have grown up with computers often feel more comfortable composing responses online, and their typed responses are more legible than handwritten ones. Instructors do not have to give up instructional time and can instead use the time that in-class paper exams would consume to clarify students’ thinking about misconceptions identified by online assessments. Moreover, the collected data can be more easily analyzed.

Scholars benefit from the cost savings associated with not having to hire interviewers, transcribe tapes, or print paper surveys for large population samples. The surveys can also be disseminated to a wide range of participants and are not limited by geographic proximity or institutional interference in delivering the survey (Handwerk, Carson, & Blackwell, 2000; Schmidt, 1997). In addition to saving time, automatic data collection eliminates errors resulting from data entry (Rosenfeld et al., 1993).

A thorough review of the literature relating to the development of instruments used to determine misconceptions revealed that many researchers have emphasized the need for assessments that could be easily administered and used by classroom teachers. Although many of the existing instruments use a multiple-choice format, this format does not allow respondents to develop and express alternative responses that more fully reflect the range of their beliefs, including misconceptions, about a particular idea.

The Science Beliefs Test

Format

Our objectives in creating the Science Beliefs Test were to uncover prevalent misconceptions, as well as potential reasons for these misconceptions. Therefore, we decided to use a two-tiered instrument. The first tier consists of declarative statements to be marked true or false, and the second tier asks students to provide a written explanation to support the true/false response given for each item. The online collection process keeps a record of these explanations and provides a rich collection of data related to beliefs regarding specific scientific phenomena. This format also has practical classroom implications. It not only helps the teacher determine the extent to which particular misconceptions are held by students, but it also provides a mechanism for determining students’ underlying ideas. Moreover, it helps teachers understand when students are selecting the “right” answer but for the wrong reason(s) or, alternatively, the “wrong” answer but with a justified explanation. When the test is used to assess students’ prior knowledge, teachers are alerted to the most commonly held misconceptions and can adjust their instruction accordingly.
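To make the two-tier structure concrete, the following minimal Python sketch shows one way an item and a scored response might be represented. The class and field names are our own illustration, not the actual data structures behind the Science Beliefs Test.

# A minimal sketch of a two-tier item and a scored response; the class and
# field names are illustrative, not the test's actual data structures.
from dataclasses import dataclass

@dataclass
class TwoTierItem:
    number: int           # position in the 47-item instrument
    statement: str        # tier 1: the declarative statement
    correct_answer: bool  # keyed true/false answer

@dataclass
class ItemResponse:
    item: TwoTierItem
    answer: bool          # tier 1: respondent's true/false choice
    explanation: str      # tier 2: written justification of that choice

    def is_correct(self) -> bool:
        return self.answer == self.item.correct_answer

# Example using the revised Item 14 discussed later in this article.
item14 = TwoTierItem(
    number=14,
    statement=("When a book is at rest on a table (not moving), other than "
               "the force of gravity, there are no other forces acting on it."),
    correct_answer=False,  # the table also exerts an upward (normal) force
)
response = ItemResponse(item14, answer=False,
                        explanation="The table pushes up on the book.")
print(response.is_correct())  # True; the explanation is stored alongside it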

Instrument Development

Item selection and development was an iterative process. A thorough review of research on instruments that were developed to target alternative conceptions and misconceptions revealed that most instruments were developed for in-depth study of a specific concept, such as diffusion or chemical bonding, rather than a variety of science concepts crossing a range of science disciplines. K-12 general science classroom teachers and science educators, particularly those who teach elementary preservice teachers, have use for a broader instrument for both teaching and research purposes. The Science Beliefs Test was constructed to target a wide range of science concepts across science disciplines for that audience. We sought to maintain balance in the number of questions associated with the topics of life science, physics, chemistry, earth science, and astronomy. Moreover, we focused item development on concepts that appeared to be fundamental to developing higher levels of understanding related to a particular concept. The selection criteria were that (a) the item represented a basic understanding expected of scientifically literate adults, (b) the concept had previously been identified as problematic for learners in research on alternative conceptions and misconceptions, (c) the concept or topic was addressed in the National Science Education Standards (National Research Council [NRC], 1996), and (d) the item contributed to a balance of items across science areas.

Initially, 23 items were selected from existing instruments. These questions were converted into a true/false format before being administered to preservice teachers. This pilot test was designed to reveal (a) problems with the structure of the statements that might mislead participants, (b) the effectiveness of the two-tier design, and (c) science misconceptions commonly held by the participants.

Based on the results from the pilot test, we revised the initial set of questions and developed the full version of the Science Beliefs Test, a 48-item instrument targeting student beliefs in chemistry, physics, biology, earth science, and astronomy. The additional items were drawn from existing instruments, as well as from statements contained within the National Science Education Standards (NRC, 1996). After the Science Beliefs Test was fully developed, a panel of experts was used to determine its content validity. The full version of the test was piloted by administering it to a different set of preservice elementary teachers. Based on the results from this administration, one question was omitted, and others were revised for clarity. The resulting 47-item instrument was then placed online to test the data collection process (see https://www2.oakland.edu/secure/sbquiz).

Description

The first online page is a consent form; participants may respond to the questionnaire whether or not they provide consent to use the data for research purposes. If consent is given, the data is collected; if not, the program will not collect any of the data from the respondent’s answers. For those respondents choosing to participate in the study, two questions are asked that further determine whether the data should be collected: (a) has the subject responded to this instrument previously, and (b) did the respondent receive help while answering the questions? If there is an affirmative response to either of these questions, then data for that subject is not collected. Respondents also provide demographic and background information, including gender, grade level (if a student), grade level taught (if an educator), number of science courses taken since high school, and academic major (if currently enrolled in college). Upon completion of the 47-item survey, the instrument displays the respondent’s answers alongside the correct answers: correct responses are highlighted in green, incorrect responses in pink, and an overall percentage correct is calculated.
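A rough Python sketch of this gating and feedback logic follows; the function and parameter names are hypothetical, and the feedback routine reuses the ItemResponse sketch shown earlier.

# Sketch of the screening and feedback behavior described above; function
# and parameter names are illustrative, not the actual system's.
def should_collect(consented: bool, taken_before: bool,
                   received_help: bool) -> bool:
    """Data are kept only for consenting, first-time, unassisted respondents."""
    return consented and not taken_before and not received_help

def feedback(responses):
    """Per-item highlight colors and the overall percentage correct."""
    colors = ["green" if r.is_correct() else "pink" for r in responses]
    percent = 100.0 * sum(r.is_correct() for r in responses) / len(responses)
    return colors, percent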

This method allows researchers to collect information about science beliefs from a wide range of subjects with different backgrounds. It also allows those who are interested in the science beliefs of their students to easily access some of those existing ideas. By December 2005, data had been collected from 1,102 respondents; the following breakdown describes 1,071 of them. Of these 1,071 respondents, 17 (1.6%) teach at a college or university; 40 (3.7%) identified themselves as members of the general public; 41 (3.8%) are K-12 teachers; 42 (3.9%) are students in grades 6-8; 83 (7.7%) are students in grades 9-12; 85 (7.9%) are graduate students; 129 (12.0%) are students in grades K-5; and 634 (59.2%) are undergraduate students. Two hundred thirty-eight respondents (22.2%) are male; 833 (77.8%) are female.

It should be noted that for many of the items, there are ways of thinking about the declarative statement that would make an “incorrect” answer “correct.” This is one reason that the opportunity to include an explanation that corresponds with the respondent’s answer is so important. We continue to work to clarify each item; however, there will always be alternative interpretations of the statements and different ways of thinking about science that will lead to valid alternatives to the “correct” answers provided.

Researchers can easily access the data online to view participant responses and the explanations for these responses. The data can be quickly sorted to view only the explanations for (a) specific items, (b) correct responses, (c) incorrect responses, or (d) a specific time period. The data can be imported into a Microsoft Access file and then exported to other types of files for thorough analysis.
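A minimal Python sketch of such filtering follows, assuming each stored record carries an item number, a correctness flag, an explanation, and a timestamp; these field names are hypothetical, as the actual database schema is not described here.

# Sketch of the filtering options described above; the record fields
# ("item", "correct", "explanation", "timestamp") are hypothetical.
from datetime import datetime

def filter_explanations(records, item_number=None, correct=None,
                        start=None, end=None):
    """Yield explanations for a given item, correctness, and/or time period."""
    for rec in records:
        if item_number is not None and rec["item"] != item_number:
            continue
        if correct is not None and rec["correct"] != correct:
            continue
        if start is not None and rec["timestamp"] < start:
            continue
        if end is not None and rec["timestamp"] > end:
            continue
        yield rec["explanation"]

# For example: explanations for incorrect responses to Item 14 in fall 2005.
# fall_2005 = filter_explanations(records, item_number=14, correct=False,
#                                 start=datetime(2005, 9, 1),
#                                 end=datetime(2005, 12, 31))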

Reliability and Validity

Analysis of the instrument with respect to its reliability and validity is ongoing. As discussed previously, many of the items from the instrument were developed for use with other instruments that targeted science misconceptions. The content validity of many of these items had already been established. A number of the items were direct statements found within the National Science Education Standards (NRC, 1996), and the content validity of this document was also established by a panel of expert reviewers. Additionally, the instrument has been through several iterations of development, during which respondents provided written explanations that detailed their understandings of the items.

Through this process, it became evident when an item needed to be revised to enhance its validity. For example, Item 14 originally stated, “When a book is at rest on a table (not moving), there are no forces acting on it.” Analysis of subjects’ responses during the pilot study showed that the vast majority responded “False” but provided the explanation that gravity was acting on the book. Through this item, researchers sought to glean information about understandings of balanced forces. Thus, the item was revised to its present form: “When a book is at rest on a table (not moving), other than the force of gravity, there are no other forces acting on it.” Many of the items went through similar types of revisions in an effort to enhance the validity of the instrument.

Reliability was investigated on a number of levels. For example, when considering only true/false responses, the internal consistency (Kuder-Richardson, KR-21) of the instrument is 0.77. A test-retest administration of the true/false items was used to further establish evidence of reliability. Items were administered and re-administered to 30 students within a 2-week interval. No instruction about the science topics tested was presented during this time. The test-retest reliability coefficient for this procedure was 0.776, which Campbell et al. (1999) considered a moderate reliability estimate.
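For readers unfamiliar with these coefficients, the following Python sketch shows how KR-21 and a test-retest correlation are conventionally computed from total scores; the article does not specify the authors’ exact procedures, so this is illustrative only.

# Conventional computations of the two coefficients reported above; the
# authors' exact procedures are not specified, so this is illustrative.
def kr21(total_scores, k=47):
    """Kuder-Richardson 21: (k/(k-1)) * (1 - M*(k-M)/(k*var)),
    using the mean M and variance of total scores on k items."""
    n = len(total_scores)
    m = sum(total_scores) / n
    var = sum((s - m) ** 2 for s in total_scores) / n  # population variance
    return (k / (k - 1)) * (1 - (m * (k - m)) / (k * var))

def pearson_r(first, second):
    """Pearson correlation of scores from two administrations, the usual
    basis for a test-retest reliability coefficient."""
    n = len(first)
    m1, m2 = sum(first) / n, sum(second) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(first, second))
    v1 = sum((a - m1) ** 2 for a in first)
    v2 = sum((b - m2) ** 2 for b in second)
    return cov / (v1 * v2) ** 0.5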

Another component of the reliability of the instrument is the extent to which the explanations provided by the respondents “match” their true/false answers. An independent rater with expertise in science education was given a random set of 30 explanations for each item and asked to match them with the appropriate true or false response. That is, when reading only the explanation for a particular item, to what extent could the rater predict whether the subject had responded “True” or “False” to that item? The expert rater averaged over 90% correct matches between the explanations and each true/false item.

Next Steps

The Science Beliefs instrument appears to be a good instructional tool for science educators interested in uncovering students’ beliefs about specific areas in science. Moreover, the online administration of this instrument offers great potential to science educators and researchers who are interested in studying students’ misconceptions. The instrument will likely undergo continuous revision as items are clarified and added. Subsets of items in each science area (e.g., biology, chemistry, physics) will also be made available to science educators specializing in those areas. It appears that as a respondent proceeds through the 47-item instrument, the number of explanations and the extent to which ideas are described decrease. Respondents may become fatigued with explaining their thoughts in this format. By dividing the instrument into relevant subsets with fewer items, the number and depth of explanations may increase.

In addition to their use with adult respondents, many of the items included in the Science Beliefs Test may be useful to teachers of science at the elementary, middle, and high school levels. We have received several requests from school districts to use this instrument. Only time will tell how effective it is with students at these instructional levels.

References

Aron, R. H., Francek, M. A., Nelson, B. D., & Bisard, W. J. (1994). Atmospheric misconceptions: How they cloud our judgment. The Science Teacher, 61, 31-33.

Campbell, K. A., Rohlman, D. S., Storzbach, D., Binder, L. M., Anger, W. K., Kovera, C. A., Davis, K. L., & Grossmann, S. J. (1999). Test-retest reliability of psychological and neurobehavioral tests self-administered by computer. Assessment, 6(1), 21-32.

Fensham, P. J., Garrard, J., & West, L. W. (1981). The use of cognitive mapping in teaching and learning strategies. Research in Science Education, 11, 121-129.

Handwerk, P. G., Carson, C., & Blackwell, K. M. (2000, May). On-line vs. paper-and-pencil surveying of students: A case study. Paper presented at the annual forum of the Association for Institutional Research, Cincinnati, OH.

Haslam, F., & Treagust, D. F. (1987). Diagnosing secondary students’ misconceptions of photosynthesis and respiration in plants using a two-tier multiple choice instrument. Journal of Biological Education, 21(3), 203-211.

Natal, D. (1998, May). On-line assessment: What, why, how. Paper presented at the Technology Education Conference, Santa Clara, CA.

National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.

Odom, A. L., & Barrow, L. H. (1995). Development and application of a two-tier diagnostic test measuring college biology students’ understanding of diffusion and osmosis after a course of instruction. Journal of Research in Science Teaching, 32(1), 45-61.

Osborne, R., & Freyberg, P. (Eds.). (1985). Learning in science: The implications of children’s science. London: Heinemann.

Peterson, R. F., Treagust, D. F., & Garnett, P. (1989). Development and application of a diagnostic instrument to evaluate grade-11 and -12 students’ concepts of covalent bonding and structure following a course of instruction. Journal of Research in Science Teaching, 26(4), 301-314.

Rosenfeld, P., Booth-Kewley, S., & Edwards, J. E. (1993). Computer-administered surveys in organizational settings. American Behavioral Scientist, 36(4), 485-511.

Schmidt, W. C. (1997). World-wide web survey research: Benefits, potential problems, and solutions. Behavior Research Methods, Instruments, & Computers, 29(2), 274-279.

Schoon, K. J. (1995). The origin and extent of alternative conceptions in the earth and space sciences: A survey of preservice elementary teachers. Journal of Elementary Science Education, 7(2), 27-46.

Trumper, R. (2001). A cross-age study of senior high school students’ conceptions of basic astronomy concepts. Research in Science & Technological Education, 19(1), 97-109.

Watts, D. M., & Zylbersztajn, A. (1981). A survey of some children’s ideas about force. Physics Education, 16, 360-365.

 

Author Note

Timothy G. Larrabee
Oakland University
email: [email protected]

Mary Stein
Oakland University
email: [email protected]

Charles Barman
Indiana University Purdue University Indianapolis
email: [email protected]

 

 
