Willis, J. (2003). Commentary: Reflections on the relationship between ideology and research. Contemporary Issues in Technology and Teacher Education [Online serial], 3(1). Retrieved from https://www.citejournal.org/volume-3/issue-1-03/editorial/commentary-reflections-on-the-relationship-between-idealogy-and-research

Commentary: Reflections on the Relationship Between Ideology and Research

by Jerry Willis, Iowa State University

The Main Issue: Warrant

The main point of Lederman’s (2003) editorial on the nature of scientific research was that projects like the U.S. Department of Education’s What Works Clearinghouse (WWC) attempt to select and disseminate only information about how we should teach and learn that comes from research based on variations of the scientific method (for an expansion of this discussion, see the Appendix).

WWC and similar efforts thus attempt to create a classical link between research and practice: researchers discover the truth through controlled experiments and pass the implications on to practitioners, who are judged good practitioners to the extent that they follow the researchers’ directives.

Lederman believes the WWC is flawed for at least two reasons. First, he criticizes the underlying assumption of the project that “scientific evidence can only be provided by causal research designs (aka The Scientific Method).” I applaud and agree with Lederman’s suggestion that we use a much wider range of research methods and that we always keep in mind the foundational assumptions of the research frameworks we use.

Further, Lederman argued that the WWC assumes “research findings from studies of teaching and learning can be generalized freely across contexts and situations if derived from studies following causal designs.” This does, indeed, seem to be a foundational assumption of the WWC—that universals, once discovered, can be widely applied. This is one of the major rocks upon which naive positivism in the form of the Vienna Circle foundered. The results of any particular study may be due to many factors—instrumentation, a failure to separate theory from observation, measurement error, experimental bias, sampling error—that have nothing to do with the truth of the hypothesis under study. This led Sir Karl Popper to formulate an approach now called postpositivism, which rejected some of the confidence of the Vienna Circle’s logical empiricism (positivism) but maintained most of the tenets and strategies of the scientific method. This kinder, gentler positivism still dominates much of American experimental psychology, but today there are at least a hundred alternatives (including postmodern psychology) that are based on different paradigms, ask different questions, and use different research methods. That pattern is also reflected in education, where critical theory, postpositivism, and interpretivism are movements that guide the research of many groups of scholars.

To this point I agree with much of what Lederman has said, but I think he has not gone far enough. He presents the issues before us as a problem of limits. Some groups want to restrict our sources of information to the results of research based on “The Scientific Method,” and Lederman believes there are many other valid and useful sources of knowledge. I agree, but I think this issue is not the core one. The core is how we decide what warrants our attention and our belief.

With hundreds of thousands of studies published in education every year, there must be a way of deciding which studies to pay attention to and which to ignore. By limiting their focus to “scientific” studies, especially experimental and quasi-experimental research, the WWC has effectively eliminated about 97% of the educational literature. It can then concentrate on the remaining 3% and confidently report results that should be generalizable. What could be simpler? Base your directions to practitioners on the results of well-done “scientific research”!

As Lederman points out, this approach can be considered simplistic for many reasons. However, if we stop at this point in the analysis, we miss two important points. One is that ideology is what guides most decisions about what constitutes acceptable research, and the other is that different audiences for research have different ways of deciding what warrants their attention.

Ideology: We Are Self-Confirming Organisms

In my early years as a professor, I was unwittingly a participant in a study of how a scholar’s beliefs influence decisions about whether an article should be published or not. Professors from two theoretical camps received a paper that supposedly had been submitted to an annual that would be published in a few months. They were asked to review the paper and recommend whether it should be included or not. There were two versions of the paper. In one the theoretical beliefs of Reviewer Group A were supported by the results, and in the other version the beliefs of Group B were supported. The results of the study indicated that reviewers who read the version supporting their beliefs were more likely to recommend acceptance, while the opposite was true of reviewers who read a version with data that did not support their beliefs.

Today there are hundreds of studies of reviewer bias indicating that the ideology of the reviewer is a significant influence on whether a paper is accepted to a journal or not. This is a specific example of what may be a general characteristic of humans—we have lower standards for information that confirms beliefs we already hold and higher standards for accepting information that disconfirms our beliefs.

That characteristic is a foundational issue when it comes to research. Whether we are researchers, editors, reviewers, or consumers of research, we prefer studies that tell us we are right and our opponents are wrong. Our ideology is a basic, primitive source of guidance when we do research and when we select what warrants our attention. Put another way, we do not decide what type of research we will consume and then develop our ideology, our beliefs, from that reading. Instead, we begin with our ideology and then we select research based on our ideology. Do you doubt this assertion? Consider this quote from Todd Oppenheimer (1997) in an influential article he wrote for The Atlantic Monthly:

There is no good evidence that most uses of computers significantly improve teaching and learning, yet school districts are cutting programs — music, art, physical education — that enrich children’s lives to make room for this dubious nostrum, and the Clinton Administration has embraced the goal of “computers in every classroom” with credulous and costly enthusiasm.

Oppenheimer is an outspoken critic of computers in schools and his reading of the research literature is that computers are a “dubious nostrum.” Another critic, Larry Cuban (2001), wrote a book titled Oversold and Underused: Computers in the Classroom, and after reviewing the research, he concluded that computers in schools have had very little impact.

In another book, titled The Child and the Machine: How Computers Put Our Children’s Education At Risk, Armstrong and Casement (2000) said they had

discovered that what had been excluded from the debate was scientific evidence. Proponents often claimed this research bolstered the argument for computer-based education, but in reality it struck a far more cautious, if not critical note…. So far the most that can be said about computer-based instruction is that vast sums have been lavished on a technology whose educational potential has yet to be proven. (p. xii)

I have cited three extensive reviews of the research on computers in education concluding that we simply do not have the evidence that computers do anything good in education. Does that settle the matter? No. Consider these results from surveys of the literature on the use of computers in education.

Recent research consistently demonstrates the value of technology in enhancing student achievement. (Sivin-Kachala & Bialo, 1994).

A meta-analysis of the research literature showed “computer-assisted instruction in science education” is effective. (Bayraktar, 2002).

A review of the research on “computers as learning tools” concluded they are effective. (Schacter, 1999).

A meta-analysis concluded computers are powerful tools for reading instruction. (Soe, Koki, & Chang, 2000).

How can these two sets of conclusions be valid when they are supposedly based on the same research literature? The answer is complex but it is also simple. Simply put, we are self-confirming organisms, and we tend to find what we expect to find when we read and select research that warrants our attention. If I am correct, research will never “settle” disagreements about issues such as whether computers are effective in schools. However, research will often be used to support established positions.

This assertion is amply illustrated in the current literature, and I will cite only one example. In their book on how to fix American education, two conservative critics, David Kearns and James Harvey (2000), who have connections to recent Republican administrations, advocated their solution: standards and testing. The authors of A Legacy of Learning: Your Stake in Standards and New Kinds of Public Schools repeatedly say that their solutions are based on research. However, at about the same time this book was published by the Brookings Institution Press in Washington, Peter Sacks (1999) wrote Standardized Minds: The High Price of America’s Testing Culture and What We Can Do to Change It. In that book Sacks concluded,

The evidence revealed the very troubling and costly effects of our growing dependence on large-scale mental testing to assess the quality of schools, one’s merit for college, and a person’s aptitude for many different jobs. In light of the evidence, I was dumbfounded that mental testing was continuing to carve out an increasingly entrenched and unquestioned position in our schools, colleges, and workplaces. (p. xi)

Sacks was dealing with the same issues, and he sometimes used the same evidence, but the conclusions he drew from his review of the research are the opposite of Kearns and Harvey’s. Sacks is a liberal and Kearns and Harvey are conservatives. The difference in the two books was not research, it was the ideology of the authors.

If you still have doubts about the central role of ideology in the debates about education and educational technology, I urge you to read Gerald Bracey’s (2002) book on the attack on American public education. Bracey is a liberal commentator, and the book is a direct attack on the conservative Republican view of American education as broken and in need of fixing. Bracey goes to great pains to point out how the Republicans misinterpret research, suppress and reject research that goes against their education ideology, and ignore published studies that contradict their long-held policy. Once you have read Bracey, read Finn and Ravitch (1996) and learn how liberal teacher education professors have scorned good research on teaching methods they call “instructivism” and instead adopted the unproved and ill-conceived ideas of Dewey and constructivists.

Again, research is not the center of our major debates in education today—ideology is. And while both major sides, liberals and conservatives, accuse the other of ignoring or misconstruing the available research, that does not mean research is at the center of the debate. It is simply a weapon used to advance one side over the other.

Bridging the Gap

American education today is a battleground for two major ideologies, one conservative and one liberal (with a third, critical theory, playing a lesser role in the public debate). Advocates believe their side is right and the other wrong, and much of the debate, especially at the policy level, is couched in those terms. I believe it is quite possible that both sides in this ideological debate have something to say to American education. If we decide to play a role in the ideological debate, I think we have three main options for that role.

Radical Advocates (Political Officer)

In the movie The Hunt for Red October, the dual command structure of the Soviet military was highlighted. The captain of the submarine Red October was the ranking naval officer, but there was a political officer who also had considerable power. This pattern was also highlighted in Dr. Zhivago when the military commander and the political commander of a Red Army unit disagreed on whether Zhivago should be allowed to return home. The political officer was responsible for representing the “party line” and seeing to it that decisions were made according to party beliefs. In today’s discussions about education and educational technology many of us will play the role of political officer. We will adhere to the party line, express our views in terms of what the party says we should do, and advocate the views of the party. This practice tends to polarize communities. It only reinforces those who already agree and rarely changes the minds of those who disagree.

There are, however, two other roles you can play.

Opportunists

When I work in former Soviet Union countries I often come in contact with educators and politicians who have become ardent capitalists and advocates of Western education methods. For some of them this is a true conversion or an opportunity to express ideas that had been suppressed or hidden during the Soviet era. For many, however, it is opportunism. The wind changed and so did the loyalties of the opportunists. Some of these people were Young Communists a few years ago, and when that era passed they took Lenin’s picture and hammer and sickle emblems off their scarves. Not long after that, they were dressed in the power suits and ties of capitalism and were looking for deals.

Any change will attract opportunists. I suspect that some of the current interest in integrating technology into teacher education is because the U.S. Department of Education’s PT3 grant programs make it profitable. When that money runs out, we can expect some opportunists to drop that topic and move on to the next one with money behind it. I am not even sure that is bad. Opportunists work within the existing structure and manage to get resources to do things that might be impossible without their ability to take advantage of changing situations.

Translators and Interpreters

Another significant role to consider is that of translator and interpreter. Members of ideological camps tend to talk to each other and to read research that supports their views. They often find research from other traditions hard to follow and to understand. Just as two people who speak different languages need a translator to communicate, sometimes ideological groups need translators and interpreters to communicate.

Two conservatives talking about accountability and how computers can help make schools accountable for learning probably will not have difficulty communicating with each other. Adding a political officer from the liberal camp to the conversation is more likely to bring it to an abrupt halt than to advance understanding. What is needed is an interpreter who understands both ideologies and can translate concerns and research results from one camp into the language and context of the other.

This is probably the most difficult of the three roles but it is probably the most important if we are to take advantage of all the knowledge and expertise in our field. Unfortunately, the role of translator/interpreter is likely to be the most unappreciated and difficult to understand. It requires the widest range of skills and it may also call for a thick skin. On the other hand, successful interpreters are likely to find the role very rewarding.

In Summary

Norman Lederman’s paper on the nature of scientific research highlights a major issue not only in educational technology and teacher education but across the field of education. Research in the traditional “scientific method” mold is too narrow and limited to supply us with the rich and robust vein of understanding that we need. We must encourage and consume many forms of scholarship. However, I do not think research plays the central role in decision making that Lederman implies. It is, instead, a tool, or weapon, used to support ideology. It is ideology that is at the core of many education debates today, and the sooner we realize that, the sooner we can play a significant role in the determination of policy and practice.

Change research and you will not change ideology. Change ideology and research will follow the change in ideology.

References

Armstrong, A., & Casement, C. (2000). The child and the machine: How computers put our children’s education at risk. Beltsville, MD: Robins Lane Press.

Bayraktar, S. (2002). A meta-analysis of the effectiveness of computer-assisted instruction in science education. Journal of Research on Technology in Education, 34(2), 173-188.

Bracey, G. (2002). What you should know about the war against America’s public schools. Boston: Allyn and Bacon.

Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.

Finn, C.E., & Ravitch, D. (1996). Education reform 1995-1996: A report from the Educational Excellence Network to its Education Policy Committee and the American People. Indianapolis: Hudson Institute.

Kearns, D.T., & Harvey, J. (2000). A legacy of learning: Your stake in standards and new kinds of public schools. Washington, DC: Brookings Institution Press.

Oppenheimer, T. (1997). The computer delusion. The Atlantic Monthly [Electronic version]. Retrieved April 16, 2003, from http://www.theatlantic.com/issues/97jul/computer.htm

Sacks, P. (1999). Standardized minds: The high price of America’s testing culture and what we can do to change it. New York: HarperCollins.

Schacter, J. (1999). The impact of education technology on student achievement: What the most current research has to say. Milken Family Foundation. Retrieved April 8, 2003, from http://www.mff.org/publications/publications.taf?page=161

Sivin-Kachala, J., & Bialo, E. R. (1994). Report on the effectiveness of technology in schools, 1990-1994. Washington, DC: Software Publishers Association.

Soe, K., Koki, S., & Chang, J. M. (2000). Effect of computer-assisted instruction (CAI) on reading achievement: A meta-analysis. Retrieved April 8, 2003, from http://www.prel.org/products/Products/effect-cai.htm

Appendix

Further Reflections on the Nature of Scientific Research

Norman Lederman’s paper on ways of thinking about and doing research in education begins with a commentary on the movie Never Cry Wolf. That story, which did not receive the attention I think it should have, is about a research project.

Dr. Lederman provides an overview of the plot in his paper, so I will not repeat it here. I do, however, want to comment on two important implications of the movie. The first is that ideology guided the research project. The main character, a young scientist, was commissioned to do a certain research study because his mentor already “knew” what the results would be (that Arctic wolves were killing caribou and causing a significant and dangerous decline in the population). The researcher’s main job was to provide empirical evidence to support that known fact and then come up with ways of reducing the wolf population in order to save the caribou population. The research thus began with a preconceived notion about the place of wolves in the Arctic ecosystem, along with a very detailed set of assumptions about their hunting and eating patterns and family structure.

We could speculate for a long time on why the mentor was so sure wolves were the cause of the caribou problem. Perhaps it goes back to the steady diet of Disney movies to which many of us were exposed while growing up. Predators like wolves do not get much good press in those movies. And then there is the Big Bad Wolf fairy tale and the use of the term wolf to mean a predatory male on the prowl. No, wolves do not get a fair shake in the children’s media, and things are not much better for them when it comes to coverage in adult media.

Of course, we do not have to develop a theory about cultural bias against wolves to explain the mentor’s ideas. He could have simply based his belief on sound experience. Perhaps all his experience in the field pointed to wolves as the culprits, and the research was just a formality in establishing a truth that everyone in the field knew already.

That is not what happened, however. Our young researcher begins his lonely research in the Arctic, and as he gathers data he gradually comes to realize that wolves are not the source of the caribou decline. They mostly eat mice and other small rodents, which the researcher also samples to test their edibility. Wolves do sometimes bring down a caribou, but they mostly catch the weak and the ill, which means they probably enhance the survival probabilities of healthy caribou rather than reducing them.

How did this happen? How could the mentor have been so wrong? We will probably never know why, but we can be sure that this was not the first time research refuted a sure thing, nor will it be the last. We do research because we want to know more about the world we live in, and sometimes the results give us more confidence in what we believed in the first place. Sometimes the results cause us to question our closely held beliefs and even to reject them in favor of another way of explaining the world. That is the purpose of research.

A second aspect of the study in Never Cry Wolf is that the standard criteria of what is generally called the Scientific Method were violated many times. The researcher did not state a hypothesis, gather data, and then analyze the data to determine if the hypothesis was supported or not. Tyler, the researcher, changed his research method several times, and he changed the focus and purpose of his research. Though his research was successful (peers in his field agreed that he had discovered something about an important relationship that was not known before), it did not meet the criteria for “valid” research that tens of thousands of college students are taught in undergraduate and graduate research courses every year.

What are those standards? Here is my list of essential characteristics of research that follows the scientific method.

1. Objective—you must approach the research question in an objective way and avoid injecting any form of subjectivity into your research, because that can lead to bias. Tyler actually began the study with one subjective bias, borrowed from his mentor. He expected a certain set of results, and he set up a study to get those results. Then, as the study progressed, his subjective bias shifted to the opposite conclusion and he began to gather data that would support the opposite of his original belief.

2. Empirical—gather quantifiable data and analyze it according to established statistical methods so that your conclusions are based on data that can be replicated and tested by other researchers. Tyler’s most important data was not a set of numbers he could analyze with a standard statistical procedure such as a t-test or analysis of variance. Instead, his observations were his most important data. His data was predominantly qualitative, not quantitative.

3. Linear, preplanned, and structured—design your study beforehand and then execute it according to your plan. Once the data has been gathered, analyze it according to your plan and report the results. As noted earlier, Tyler’s original research plan was jettisoned and replaced by a series of new plans. Even the purpose of the research changed, and that called for new methods and new types of data.

4. Prefer controlled experimental methods—while other methods are acceptable, it is always best to conduct experiments under controlled conditions. Where that is not possible, use quasi-experimental methods, and where that is not possible, use correlational methods. Case studies are the least useful type of research. In essence, Tyler conducted what anthropologists would call ethnographic research and what educational researchers would describe as a case study. He used one of the weakest forms of research according to those who believe in “The Scientific Method.”

5. Search for universals: Laws, rules of behavior—The purpose of research in the social sciences is to predict and control behavior. You do that through discovering universal laws of behavior that allow us, if we know the context, to predict how an organism will behave. This is the only one of the six characteristics of the scientific method that actually fits Tyler’s research. He was looking for a general answer to the question of whether wolves are responsible for the decline in the caribou population.

6. Research is separate from and superior to practice—in the traditional scientific method, research is a complex and sophisticated activity that must be done by specialists, called researchers, in controlled contexts. Once the researcher has discovered universal laws, the implications of those laws are communicated to practitioners. “Good practice” in this model involves doing what researchers say you should do. Tyler comes close to meeting this criterion. He was a specially trained researcher and he did draw conclusions from his research that were intended to guide practitioners. However, in this case he and people like him were both researchers and practitioners. They played both roles, just as teachers who do an action research project in their classroom play both roles. Also, Tyler did not work in a controlled environment. He did his research in the natural environment and sacrificed strict control, but he gained a great deal. For example, he did not have to generalize from the behavior of wolves in captivity to wolves in the wild; he was studying in the very environment to which he wanted to generalize!

Never Cry Wolf is a movie that could be profitably viewed in many types of research classes. It illustrates the folly of teaching “the scientific method.” As Lederman points out, there is not one scientific method, there are many. Different fields of scholarship do not all adhere to the same “scientific method.” Rather, different disciplines have developed their own approach to research.

In the 20th century, for example, many American psychologists used tightly controlled experiments in artificial environments to study how children learn. At the same time, Jean Piaget in Switzerland was studying the behavior of his own children in semi- or uncontrolled contexts. Instead of preplanning his research, Piaget would often adapt his method and procedure based on the behavior of the children, and he often gathered qualitative data, such as comments from children about their learning, in preference to the quantitative data that American psychologists preferred.

Also in the 20th century Margaret Mead changed our perceptions of other cultures with reports of her ethnographic research of exotic Pacific island cultures, where she used participant observation as her primary research method. Her research environment was not even semi-controlled. It was the natural context, and thousands of researchers have followed in her footsteps to study everything from the Fox Indians in Iowa to the behavior of teachers and students in inner city schools.

Finally, social scientists like Max Horkheimer and Theodor Adorno helped found the Frankfurt School, the source of critical theory, and used a method of research that combined a revised Marxist ideology with many different methods to expand our understanding of everything from authoritarian personalities to democratic leadership.

All of these scholars have had a major impact on how we think about society, schools, and learning. Their descendants have contributed to our growing knowledge about technology and teacher education. If, however, we consider the six characteristics of the traditional scientific method (objective, empirical, linear and preplanned, a preference for experimental methods, the search for universals, and the research-practice relationship), there is not a single one that is common to all, or even most, of the research that has influenced education and the social sciences over the past 100 years. Not a one! Every one of these “requirements” has been ignored, violated, and directly opposed by thousands of scholars in many disciplines who have produced useful research that influenced teachers, administrators, and policy makers.

That list of characteristics makes up, at best, the guidelines for doing psychological research that were advocated by most of the major American centers of psychological research for part of the 20th century. And even while that model dominated American psychology there were advocates of other approaches. Kurt Lewin, for example, began talking about action research methods that were based on participatory models as early as 1948.

Jerry Willis
Curriculum & Instruction
Iowa State University
N155 Lagomarcino
Ames, IA 50011
Phone: (515) 294-2934
email: jwillis@iastate.edu