Sprague, D. (2006). Defining education research: Continuing the conversation (Republished). Contemporary Issues in Technology and Teacher Education [Online serial], 6(2). https://citejournal.org/volume-6/issue-2-06/editorial/article1-html-6

Defining Education Research: Continuing the Conversation

by Debra Sprague, Editor, Journal of Technology and Teacher Education, George Mason University

Abstract

A debate currently occurring in the research community centers on what qualifies as “high quality” education research.  This discussion was prompted by the U. S. Department of Education’s challenge to consider only “scientifically based research” in its funding and policies.  This article outlines some of the issues related to this topic.  It concludes with an invitation for interested researchers to continue this conversation.

 

This editorial was initially published in the Journal of Technology and Teacher Education (2006), 14(3), 431-439. It is republished with permission from the Society for Information Technology and Teacher Education.

Introduction

The Society for Information Technology and Teacher Education (SITE) sponsors two peer-reviewed professional journals.  The Journal of Technology and Teacher Education (JTATE) is an international publication that “serves as a forum for the exchange of knowledge about the use of information technology in teacher education” (http://www.aace.org/pubs/jtate/).  JTATE publishes research-based articles that explore the role of technology in teacher education.

The Contemporary Issues in Technology and Teacher Education Journal (CITE), an online journal, was established as a multimedia counterpart to JTATE.  CITE includes three major categories of articles:

  • Current Issues include theoretical discussions of technology and teacher preparation.
  • Current Practices provide shorter, up-to-the-minute snapshots of technology in practice.
  • Seminal Articles include previously published “classic” articles that have advanced the discussion of technology and teacher education.

Several professional organizations share responsibility for the editorial review of current issues related to their disciplines.  These professional organizations include the Association for the Education of Teachers in Science (AETS), the Association of Mathematics Teacher Educators (AMTE), the Conference on English Education (CEE), the National Council of Social Studies College and University Faculty Assembly (CUFA), and SITE (http://www.aace.org/pubs/cite/).

Both of these journals are interested in articles that are research based and provide empirical evidence of the benefits and limitations of technology.  However, a discussion currently occurring in the research community centers on what qualifies as “empirical evidence” and what methods constitute “scientifically based” research (Roblyer & Knezek, 2003; Dede, 2004; Hostetler, 2005; Thompson, 2005).

Defining the Conversation

In 2001 the United States (U. S.) Congress passed the No Child Left Behind Act (NCLB).  At that time, the U. S. Department of Education challenged the research community to consider what constitutes scientifically based research (SBR).  NCLB defined SBR as using empirical methods, randomized samples, and rigorous data analysis and measurements.  In addition, research needed to be easily replicated and generalizable (National Research Council, 2002).  This model is based on research conducted in the scientific and medical fields.  This challenge began a debate as to what constitutes “high quality” education research.

Although acceptable within laboratory settings, SBR raises issues of practicality when applied to classroom settings, in which not all variables can be controlled.  It is not always possible in real classrooms to assign students randomly to treatments for the purpose of conducting research.  Schools are often reluctant to participate in experimental research studies for fear that the study will distract students from learning the content needed to pass standardized tests.  Finding one classroom willing to participate can be difficult; finding two, so that one can serve as a control group, doubles the challenge.

There are also ethical concerns to be considered when using randomized samples.  How does one explain to parents that their child cannot use technology or specific software because the child is part of the control group?  One way is to let them know that their child will have an opportunity to use the technology at a later time.  This may alleviate the parents’ concerns, but children do not respond well to delayed gratification.  They do not understand why their friends are able to use technology and they are not.

The editors of the National Research Council report acknowledge these issues.  They state that the goal is not to require that all research be based on randomized samples, but to use them only when the research questions warrant it.  “For example, when well-specified causal hypotheses can be formulated and randomization to treatment and control conditions is ethical and feasible, a randomized experiment is the best method for estimating effects” (Feuer, Towne, & Shavelson, 2002, p. 8).
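To make the logic of a randomized experiment concrete, consider a minimal sketch in Python.  All of the numbers here, including the 5-point “true” effect and the simulated scores, are invented purely for illustration and are not drawn from any study cited in this article.  The point is simply that random assignment lets a plain difference in group means estimate a treatment effect:

    import random
    import statistics

    # Hypothetical post-test data: 40 students, half randomly assigned to use
    # a treatment (e.g., a new piece of software), half to a control group.
    random.seed(42)
    students = list(range(40))
    random.shuffle(students)                  # random assignment removes selection bias
    treatment, control = students[:20], students[20:]

    # Simulated scores: the treatment group is shifted up by an invented
    # "true" effect of 5 points.
    scores = {s: random.gauss(70, 10) for s in control}
    scores.update({s: random.gauss(75, 10) for s in treatment})

    # Because assignment was random, the difference in group means is an
    # unbiased estimate of the treatment effect.
    effect = (statistics.mean(scores[s] for s in treatment)
              - statistics.mean(scores[s] for s in control))
    print(f"Estimated treatment effect: {effect:.1f} points")

It is precisely this step of randomly shuffling students into groups that, as the previous section noted, is often neither feasible nor welcome in real classrooms.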

However, Maxwell (2004) questions the National Research Council’s (2002) assumption that SBR should be the preferred method for causal investigations.  He argues that this assumption is too narrow and philosophically outdated, and that it ignores the richness of information generated by qualitative research.  Rather than choosing between quantitative and qualitative research, Maxwell (2004) believes that “practitioners of both approaches will need to develop a better understanding of the logic and practice of the other’s approach, and a greater respect for the value of the other perspective” (p. 9).

Continuing the Conversation

With such differing views and concerns, what is a researcher to do?  This is a question we all need to wrestle with, especially those of us just beginning our research careers.  The editors of several of the education technology journals meet once a year, as part of the National Technology Leadership Summit, to discuss issues such as this.  Editorials addressing the implications of the U. S. Department of Education’s call for improving the quality of education research have appeared in several of these journals.  Recommendations from these editorials include:

  1. Defining a solid theoretical framework
  2. Developing clear and significant questions
  3. Developing clear and rigorous methods
  4. Developing clearly defined instruments that have well-established validity and reliability
  5. Conducting research that can easily be replicated by others
  6. Conducting research that allows for the possibility of predictions and generalizations (Thompson & Rodriguez, 2003-2004).

This section discusses each of these recommendations in terms of conducting research on the role of technology in teacher education.  An editor’s perspective is provided to assist authors with the publication process.

Defining a Solid Theoretical Framework

It is important to do a thorough review of the current literature so that a solid theoretical framework can be developed.  A review of the literature allows the researcher (and eventually the reader of the article) to understand the scholarly context of previous research on the topic.  However, a word of caution is in order.  The field of technology changes quickly.  It is essential to review current literature, published within the past three years, to be sure the information is not outdated.  This does not mean one should ignore the literature prior to this period, but an up-to-date understanding of this rapidly moving field is vital for developing a theoretical framework.

Developing Clear and Significant Questions

What is the purpose of the research?  What are you trying to determine?  Why is it important?  How will it benefit teacher educators?  These are essential questions to ask when developing the research questions, and they should be formulated before conducting the study.  For journals such as JTATE and CITE, Pollard and Pollard (2004-2005) advocate a research landscape that examines six areas:

  1. Learning – examine the relationship between technology and how people learn
  2. Teachers – develop models for preservice and inservice teachers to become effective users of technology
  3. Models/Strategies – develop technology-rich models to support student learning
  4. Assessment – develop appropriate methods and criteria for evaluating the effectiveness of instruction enhanced by technology
  5. Schools – investigate changes in the classroom, in teachers’ roles and schools due to the integration of technology
  6. Social Issues – investigate factors related to the digital divide

Policy issues are a possible seventh category.

Developing Clear and Rigorous Methods

The research methods chosen should emerge from the research questions.  “The question drives the methods, not the other way around.  The overzealous adherence to the use of any given research design flies in the face of this fundamental principle” (Feuer, Towne, & Shavelson, 2002, p. 8).  Articles rejected for publication often have a disconnect between the research methods and the questions: authors try to use a qualitative method when a quantitative method would be more appropriate, or vice versa.  Choosing the right method is essential to ensuring “high quality” research.

Attitude and belief surveys dominate much of the education technology literature.  Although these are important studies, what is even more vital is understanding how these attitudes and beliefs change the behavior of the individual and how these changes improve learning.  Case studies are also popular in the literature.  Although case studies provide a rich understanding of the issues, there is a danger that the “study” will become too descriptive and anecdotal.  Learning to write a case study so that its methods give readers some confidence in the generalizability of the findings is a skill that must be developed.

Developing Clearly Defined Instruments

Several standardized instruments have been developed to assess technology’s learning potential.  Some of these are documented in journal articles or are available on the Internet.  These instruments have established validity and reliability.  Using them, when appropriate, allows others to replicate the study.  However, if it is necessary to develop a new instrument, it is essential to include the validity and reliability scores when writing up the results.  When conducting a qualitative study, including the interview questions helps readers interpret the findings.
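As an illustration of what a reliability score can look like, the following sketch computes Cronbach’s alpha, one widely used measure of an instrument’s internal consistency.  Both the Python code and the Likert-scale responses are hypothetical, offered only to make the statistic concrete; alpha is not the only reliability statistic an author might report.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) matrix of item scores."""
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Five hypothetical respondents answering four Likert-scale items (1-5)
    responses = np.array([[4, 5, 4, 5],
                          [3, 3, 4, 3],
                          [5, 5, 5, 4],
                          [2, 3, 2, 2],
                          [4, 4, 5, 4]])
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # values near 1 indicate consistency

Reporting such a statistic alongside a new instrument gives reviewers, and anyone attempting a replication, a baseline for judging how dependable the instrument is.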

Conducting Research That Can Easily Be Replicated by Others

As stated in the previous section, using instruments that have well-established validity and reliability scores allows others to replicate the study.  Through replication, the validity of the findings is established.  Validity is important no matter what research method is used, as it allows the field to know that the findings are legitimate.

Conducting Research That Allows for the Possibility of Predictions and Generalizations

Generalization means that what occurs in one setting will produce the same results in another setting.  Research studies should describe the context of the settings and the demographics of the participants.  With this information, readers can determine whether the findings would hold in their own situation.  For example, if one is studying a model for improving preservice teachers’ understanding of technology integration and the model calls for two technology courses, that model will not easily generalize to other settings.  Most preservice teacher programs have difficulty maintaining even one technology course and would be unable to implement two.

Using Thompson and Rodriguez’s (2003-2004) recommendations as guidelines, one can take a quantitative, qualitative, or mixed-methods approach to education research.  To help guide the field, the editors of the education technology journals are exploring ways to mentor new researchers.  The American Educational Research Association (AERA) offers one such approach for mentoring young researchers.  At the AERA annual meeting, editors meet with authors who submitted a manuscript prior to the meeting.  The editor reviews the manuscript and then discusses the document with the author, providing feedback that will enable the author to improve the quality of the research and the quality of the writing.  The intent is to guide the author through the publishing process by providing individual feedback not always available through the normal review process.  SITE is exploring such a mentoring model for use during its annual meeting.

TappedIn

The editor of JTATE leads a monthly online discussion in TappedIn (http://www.tappedin.org).  TappedIn is a multiuser virtual environment in which teachers, librarians, university faculty, students, and researchers meet to share ideas and collaborate.  The environment includes text-based chat, private messaging, and threaded discussion boards in every room.  Participants can either log in as a guest or become a member (membership is free).  Conversation transcripts are automatically emailed to members upon completion of the session.

The monthly discussion, titled “Publishing Your Work,” provides the field with the opportunity to discuss issues related to publishing with JTATE’s editor.  (Please see TappedIn’s calendar at http://tappedin.org/tappedin/do/CalendarAction for the schedule of these discussions.)  The editor’s intent is to use this forum as a way to continue the conversation about the quality of education research.  Through guest speakers, some of whom have been referenced in this article, the field will have an opportunity to further explore this important issue.  All readers are invited to attend these monthly discussions and participate in this crucial conversation.

Conclusion

This article outlines some of the major areas of discussion regarding what counts as “high quality” education research.  Issues related to scientifically based research are presented.  Recommendations of what constitutes “high quality” research are offered.  The article concludes with an invitation to continue this conversation by participating in TappedIn online discussions with the editor of JTATE and other researchers.

References

Dede, C. (2004). If design-based research is the answer, what is the question? Journal of the Learning Sciences, 13(1), 105-114.

Feuer, M. J., Towne, L., & Shavelson, R. J. (2002, November).  Scientific culture and educational research.  Educational Researcher, 31(8), 4-14.

Hostetler, K. (2005, August/September).  What is “good” education research?  Educational Researcher, 34(6), 16-21.

Maxwell, J. A. (2004, March).  Causal explanation, qualitative research, and scientific inquiry in education.  Educational Researcher, 33(2), 3-11.

National Research Council (2002). Scientific research in education. Washington, DC: National Academy Press.

Pollard, C., & Pollard, R. (2004-2005, Winter).  Research priorities in educational technology: A Delphi study.  Journal of Research on Technology in Education, 37(2), 145-160.

Roblyer, M. D., & Knezek, G. A. (2003). New millennium research for educational technology: A call for a national research agenda. Journal of Research on Technology in Education, 36(1), 60-71.

Thompson, A. (2005, Summer).  Scientifically based research:  Establishing a research agenda for the technology in teacher education community. Journal of Research on Technology in Education, 37(4), 331-337.

Thompson, A., & Rodriguez, J. C. (2003-2004, Winter).  Scientifically based research: Technology in teacher education.  Journal of Computing in Teacher Education, 20(2), 50, 52.

U. S. Congress (2001). No Child Left Behind Act. http://www.ed.gov/nclb/landing.jhtml

 

Author Note:

Debra Sprague
George Mason University
[email protected]

 

 
