Handbook of Psychology, Volume 7: Educational Psychology



Educational/Psychological Intervention Research

JOEL R. LEVIN, ANGELA M. O’DONNELL, AND THOMAS R. KRATOCHWILL



CONTEMPORARY METHODOLOGICAL ISSUES: A BRIEF OVERVIEW
    Evidence-Based Treatments and Interventions
    Quantitative Versus Qualitative Approaches
RESEARCH METHODOLOGY AND THE CONCEPT OF CREDIBLE EVIDENCE
    Credible Versus Creditable Intervention Research
    Components of CAREful Intervention Research
THE CONCEPT OF EVIDENCE
    Good Evidence Is Hard to Find
    The Evidence of Intervention Research
    Additional Forms of Contemporary Intervention Research Evidence
    Summary
ENHANCING THE CREDIBILITY OF INTERVENTION RESEARCH
    Psychological/Educational Research Versus Medical Research
    A Stage Model of Educational/Psychological Intervention Research
    What Is Random in Randomized Classroom Trials Studies?
    Implementing a Randomized Classroom Trials Study
    Commitment of Federal Funds to Randomized Classroom Trials Research
    Additional Comments
    Closing Trials Arguments
REFERENCES

The problems that are faced in experimental design in the social sciences are quite unlike those of the physical sciences. Problems of experimental design have had to be solved in the actual conduct of social-sciences research; now their solutions have to be formalized more efficiently and taught more efficiently. Looking through issues of the Review of Educational Research one is struck time and again by the complete failure of the authors to recognize the simplest points about scientific evidence in a statistical field. The fact that 85 percent of National Merit Scholars are first-born is quoted as if it means something, without figures for the over-all population proportion in small families and over-all population proportion that is first-born. One cannot apply anything one learns from descriptive research to the construction of theories or to the improvement of education without having some causal data with which to implement it. (Scriven, 1960, p. 426)

Education research does not provide critical, trustworthy, policy-relevant information about problems of compelling interest to the education public. A recent report of the U.S. Government Accounting Office (GAO, 1997) offers a damning indictment of evaluation research. The report notes that over a 30-year period the nation has invested $31 billion in Head Start and has served over 15 million children. However, the very limited research base available does not permit one to offer compelling evidence that Head Start makes a lasting difference or to discount the view that it has conclusively established its value. There simply are too few high-quality studies available to provide sound policy direction for a hugely important national program. The GAO found only 22 studies out of hundreds conducted that met its standards, noting that many of those rejected failed the basic methodological requirement of establishing compatible comparison groups. No study using a nationally representative sample was found to exist. (Sroufe, 1997, p. 27)

Reading the opening two excerpts provides a sobering account of exactly how far the credibility of educational research is perceived to have advanced in two generations. In what follows, we argue for the application of rigorous research methodologies and the criticality of supporting evidence. And, as will be developed throughout this chapter, the notion of evidence—specifically, what we are increasingly seeing as vanishing evidence of evidence—is central to our considerable dismay concerning the present and future plight of educational research, in general, and of research incorporating educational and psychological treatments or interventions, in particular. We maintain that "improving the 'awful reputation' of education research" (Kaestle, 1993; Sroufe, 1997) begins with efforts to enhance the credibility of the research's evidence.

Author note: In 1999 Joel R. Levin and Angela M. O'Donnell published an article, "What to Do About Educational Research's Credibility Gaps?," in Issues in Education: Contributions from Educational Psychology, a professional journal with limited circulation. With the kind permission of Jerry Carlson, editor of Issues, major portions of that article have been appropriated to constitute the bulk of the present chapter.
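Scriven's complaint about the first-born statistic is at bottom a base-rate argument, and a little arithmetic makes it concrete. The sketch below is ours, not Scriven's, and the family-size distribution in it is entirely hypothetical; it simply shows why the 85 percent figure is uninterpretable without a population baseline.

```python
# A minimal sketch of Scriven's base-rate point, using hypothetical numbers.
# Among all children, the share who are first-born equals
# (number of families) / (number of children) = 1 / (mean children per family),
# assuming each family in the distribution has at least one child.

family_size_dist = {1: 0.40, 2: 0.35, 3: 0.15, 4: 0.10}  # hypothetical P(n children)

mean_children = sum(n * p for n, p in family_size_dist.items())
baseline_first_born = 1.0 / mean_children

print(f"Mean children per family:  {mean_children:.2f}")        # 1.95
print(f"Baseline first-born share: {baseline_first_born:.1%}")  # ~51.3%
# Only against such a baseline does "85 percent of National Merit Scholars
# are first-born" carry any evidential weight.
```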

Improving the quality of intervention research in psychology and education has been a primary goal of scholars and researchers throughout the history of these scientific disciplines. Broadly conceived, intervention research is designed to produce credible (i.e., believable, dependable; see Levin, 1994) knowledge that can be translated into practices that affect (optimistically, practices that improve) the mental health and education of all individuals. Yet beyond this general goal there has always been disagreement about the objectives of intervention research and the methodological and analytic tools that can be counted on to produce credible knowledge. One purpose of this chapter is to review some of the controversies that have befallen psychological and educational intervention research. A second, and the major, purpose of this chapter is to suggest some possibilities for enhancing the credibility of intervention research. At the very least, we hope that our musings will lead the reader to consider some fundamental assumptions of what intervention research currently is and what it can be.

CONTEMPORARY METHODOLOGICAL ISSUES: A BRIEF OVERVIEW

Although there is general consensus among researchers that intervention research is critical to the advancement of knowledge for practice, there is fundamental disagreement about the methodologies used to study questions of interest. These disagreements include such issues as the nature of participant selection, differential concerns for internal validity and external validity (Campbell & Stanley, 1966), the desirability or possibility of generalization, the appropriate experimental units, and data-analytic techniques, among others that are discussed later in this chapter.
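One of these disagreements, over the appropriate experimental units, lends itself to a brief illustration. The simulation below is our own hedged sketch with invented numbers: when intact classrooms rather than individual students are assigned to conditions, the defensible default is to analyze classroom means, so a study with 300 students spread over 12 classrooms contributes only 12 independent observations.

```python
# Hedged illustration (simulated data) of the units-of-analysis issue:
# classrooms are the units of assignment, so the 12 classroom means, not
# the 300 individual student scores, are the units of analysis.

import random
random.seed(1)

def classroom_means(n_classes, n_students, shift):
    """Simulate posttest means for classrooms whose members share class-level noise."""
    means = []
    for _ in range(n_classes):
        class_effect = random.gauss(0, 1)  # noise shared by everyone in the class
        scores = [shift + class_effect + random.gauss(0, 1) for _ in range(n_students)]
        means.append(sum(scores) / n_students)
    return means

treatment_means = classroom_means(n_classes=6, n_students=25, shift=0.5)
control_means = classroom_means(n_classes=6, n_students=25, shift=0.0)

# Any subsequent t test would be run on these 6 + 6 classroom means.
print([round(m, 2) for m in treatment_means])
print([round(m, 2) for m in control_means])
```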



Evidence-Based Treatments and Interventions

Of the major movements in psychology and education, few have stirred as much excitement or controversy as have recent efforts to produce evidence-based treatments. With its origins in medicine and clinical-trials research, the evidence-based movement spread to clinical psychology (see Chambless & Ollendick, 2001, for a historical overview; Hitt, 2001) and, more recently, to educational and school psychology (Kratochwill & Stoiber, 2000; Stoiber & Kratochwill, 2000). At the forefront of this movement has been the so-called quantitative/experimental/scientific methodology, featured as the primary tool for establishing the knowledge base for treatment techniques and procedures. This methodology has been embraced by the American Psychological Association (APA) Division 12 (Clinical Psychology) Task Force on Evidence-Based Treatments (Weisz & Hawley, 2001). According to the Clinical Psychology Task Force criteria for determining whether a treatment is evidence based, quantitative group-based and single-participant studies are the only experimental methodologies considered for a determination of credible evidence.

The School Psychology Task Force, sponsored by APA Division 16 and the Society for the Study of School Psychology, has also developed criteria for a review of interventions (see Kratochwill & Stoiber, 2001). The considerations of the School Psychology Task Force differ from those of their clinical psychology colleagues in at least two fundamental ways. First, the quantitative criteria involve a dimensional rating of various designs, including criteria of their internal validity, statistical conclusion validity, external validity, and construct validity. Thus, the evidence associated with each dimension is based on a Likert-scale rating, which places responsibility on the consumer of the information for weighing and considering the support available for the various interventions under consideration. Table 22.1 provides sample rating criteria for group-based interventions from the Procedural and Coding Manual for Review of Evidence-Based Interventions (Kratochwill & Stoiber, 2001).

A second feature that distinguishes the School Psychology Task Force considerations from previous evidence-based efforts is the focus on a broad range of methodological strategies to establish evidence for an intervention. In this regard, the School Psychology Task Force has developed criteria for coding qualitative methods in intervention research. At the same time, a premium has been placed on quantitative methodologies as the primary basis for credible evidence for interventions (see Kratochwill & Stoiber, 2000; Kratochwill & Stoiber, in press). The higher status placed on quantitative methods is not shared among all scholars of intervention research methodology and sets the stage for some of the ongoing debate, which is described next.



Quantitative Versus Qualitative Approaches

What accounts for the growing interest in qualitative methodologies? Recently, and partly as a function of the concern for authentic environments and contextual cognition (see Levin & O'Donnell, 1999b, pp. 184–187; and O'Donnell & Levin, 2001, pp. 79–80), there has been a press for alternatives to traditional experimental methodologies in educational research.



TABLE 22.1 Selected Examples of School Psychology Task Force Evidence-Based Intervention Criteria

I. General Characteristics
   A. Type of Basis (check all that apply)
      A1. ☐ Empirical basis
      A2. ☐ Theoretical basis
   B. General Design Characteristics
      B1. ☐ Completely randomized design
      B2. ☐ Randomized block design (between-subjects/blocking variation)
      B3. ☐ Randomized block design (within-subjects/repeated measures/multilevel variation)
      B4. ☐ Randomized hierarchical design
      B5. ☐ Nonrandomized design
      B6. ☐ Nonrandomized block design (between-subjects/blocking variation)
      B7. ☐ Nonrandomized design (within-subjects/repeated measures/multilevel variation)
      B8. ☐ Nonrandomized hierarchical design
   C. Statistical Treatment (check all that apply)
      C1. ☐ Appropriate units of analysis
      C2. ☐ Family-wise/experiment-wise error rate controlled
      C3. ☐ Sufficiently large N

II. Key Features for Coding Studies and Rating Level of Evidence/Support
    (3 = Strong Evidence, 2 = Promising Evidence, 1 = Weak Evidence, 0 = No Evidence)
   A. Measurement (check rating and all that apply): 3 2 1 0
      A1. ☐ Use of outcome measures that produce reliable scores for the population under study: Reliability = ____
      A2. ☐ Multimethod
      A3. ☐ Multisource
      A4. ☐ A case for validity has been presented
   B. ☐ Comparison Group (check rating): 3 2 1 0
      B1. Type of Comparison Group (check all that apply)
         B1.1. ☐ No intervention
         B1.2. ☐ Active Control (attention, placebo, minimal intervention)
         B1.3. ☐ Alternative Treatment
      B2. ☐ Counterbalancing of Change Agents
      B3. ☐ Group Equivalence Established
         B3.1. ☐ Random Assignment
         B3.2. ☐ Statistical Matching (ANCOVA)
         B3.3. ☐ Post hoc test for group equivalence
      B4. ☐ Equivalent Mortality, with
         B4.1. ☐ Low Attrition (less than 20% for posttest)
         B4.2. ☐ Low Attrition (less than 30% for follow-up)
         B4.3. ☐ Intent-to-intervene analysis carried out
      Key Findings: ____
   C. Key Outcomes Statistically Significant (check rating): 3 2 1 0
      C1. Key outcomes statistically significant (list only those with p ≤ .05)
   D. Key Outcomes Educationally or Clinically Significant (check rating): 3 2 1 0
      D1. Effect sizes [indicate measure(s) used]
   E. Durability of Effects (check rating): 3 2 1 0
      ☐ Weeks   ☐ Months   ☐ Years
   F. Identifiable Components (check rating): 3 2 1 0
   G. Implementation Fidelity (check rating): 3 2 1 0
      G1. ☐ Evidence of Acceptable Adherence
         G1.1. ☐ Ongoing supervision/consultation
         G1.2. ☐ Coding sessions
         G1.3. ☐ Audio/video tape
      G2. ☐ Manualization
   H. ☐ Replication (check rating and all that apply): 3 2 1 0
      H1. ☐ Same Intervention
      H2. ☐ Same Target Problem
      H3. Relationship Between Evaluator/Researcher and Intervention Program:
         ☐ Independent evaluation

Source: Adapted from Kratochwill & Stoiber (2001).
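To give a concrete feel for how dimensional ratings of this kind can be operationalized, the following toy function codes a study on the comparison-group dimension. It is our own sketch, not the Task Force's instrument: the StudyRecord fields, weights, and thresholds are invented for illustration, although the attrition cutoffs echo criteria B4.1 and B4.2 above.

```python
# Toy sketch (our invention, not the Task Force's instrument) of assigning a
# 0-3 rating on Table 22.1's comparison-group dimension.

from dataclasses import dataclass

@dataclass
class StudyRecord:
    random_assignment: bool     # B3.1: group equivalence by random assignment
    posttest_attrition: float   # B4.1: proportion lost by posttest
    followup_attrition: float   # B4.2: proportion lost by follow-up

def rate_comparison_group(s: StudyRecord) -> int:
    """Return a 0-3 rating loosely modeled on criteria B3 and B4."""
    score = 0
    if s.random_assignment:
        score += 2  # equivalence established by design
    if s.posttest_attrition < 0.20 and s.followup_attrition < 0.30:
        score += 1  # meets the table's low-attrition thresholds
    return min(score, 3)

study = StudyRecord(random_assignment=True,
                    posttest_attrition=0.12,
                    followup_attrition=0.25)
print(rate_comparison_group(study))  # -> 3 (strong evidence)
```

Note that the function returns a rating for one dimension only; weighing it against the other dimensions (measurement, significance, durability, fidelity, replication) is deliberately left to the consumer, which is precisely the design choice the dimensional scheme embodies.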



Concerns for external validity, consideration of the complexity of human behavior, and the emergence of sociocultural theory as part of the theoretical fabric for understanding educational processes have also resulted in the widespread adoption of more qualitative methods. In terms of Krathwohl's (1993) distinctions among description, explanation, and validation (summarized by Jaeger & Bond, 1996, p. 877), the primary goals of educational research, for example, have been to observe and describe complex phenomena (e.g., classroom interactions and behaviors) rather than to manipulate treatments and conduct confirming statistical analyses of the associated outcomes.

For the past 10 years or so, much has been written about differing research methodologies, the contribution of educational research to society, and the proper functions and purposes of scientific research (e.g., Doyle & Carter, 1996; Kaestle, 1993; Labaree, 1998; O'Donnell & Levin, 2001). Some of these disputes have crystallized into the decade-long debate about quantitative and qualitative methodologies and their associated warrants for research outcomes—a debate, we might add, that is currently thriving not just within education but within other academic domains of the social sciences as well (e.g., Azar, 1999; Lipsey & Cordray, 2000). As has been recently pointed out, the terms qualitative and quantitative are oversimplified, inadequate descriptors of the methodological and data-analytic strategies associated with them (Levin & Robinson, 1999).

The reasons for disagreements between quantitative and qualitative researchers are much more than a debate about the respective methodologies. They are deeply rooted in beliefs about the appropriate function of scientific research. Criticism of quantitative methodologies has often gone hand in hand with a dismissal of empiricism. Rejection of qualitative methodologies has often centered on imprecision of measurement, problems with generalizability, and the quality and credibility of evidence. Failures to resolve, or even to address, the issue of the appropriate research function have resulted in a limiting focus in the debate between qualitative and quantitative orientations, one that trivializes important methodological distinctions and purposes. Unfortunately, the debate has often been ill conceived and unfairly portrayed, with participants not recognizing advances that have been made in both qualitative and quantitative methodologies in the last decade. The availability of alternative methodologies and data-analytic techniques highlights a key issue among researchers regarding the rationale for their work and the associated direction of their research efforts. Wittrock (1994) pointed out the need for a richer variety of naturalistic qualitative and quantitative methodologies, ranging from case studies and observations to multivariate designs and analyses.

In addition, arguments about appropriate methodology have often been confused with a different argument about the nature of scholarship. Beginning with Ernest Boyer's (1990) book, Scholarship Reconsidered: Priorities of the Professoriate, institutions of higher education have sought ways to broaden the concept of scholarship to include work that does not involve generating new knowledge. This debate is often confused with the methodological debate between the respective advocates of qualitative and quantitative approaches, but an important feature of this latter debate is that it focuses on methods of knowledge generation (see also Jaeger, 1988).

RESEARCH METHODOLOGY AND THE CONCEPT OF CREDIBLE EVIDENCE

Our purpose here is not to prescribe the tasks, behaviors, or problems that researchers should be researching (i.e., the topics of psychological and educational intervention research). Some of these issues have been addressed by various review groups (e.g., National Reading Panel, 2000), as well as by task forces in school and clinical psychology. As Calfee (1992) noted in his reflections on the field of educational psychology, researchers are currently doing quite well in their investigation of issues of both psychological and educational importance. As such, what is needed in the future can be characterized more as refining rather than as redefining the nature of that research. For Calfee, refining means relating all research efforts and findings in some way to the process of schooling by "filling gaps in our present endeavors" (p. 165). For us, in contrast, refining means enhancing the scientific integrity and evidence credibility of intervention research, regardless of whether that research is conducted inside or outside of schools.



Credible Versus Creditable Intervention Research

We start with the assertion, made by Levin (1994) in regard to educational intervention research, that a false dichotomy is typically created to distinguish between basic (laboratory-based) and applied (school-based) research. (a) What is the dichotomy? and (b) Why is it false? The answer to the first question addresses the methodological rigor of the research conducted, which can be related to the concept of internal validity, as reflected in the following prototypical pronouncement: "Applied research (e.g., school-based research) and other real-world investigations are inherently complex and therefore must be methodologically weaker, whereas laboratory research can be more tightly controlled and, therefore, is methodologically stronger."

In many researchers' minds, laboratory-based research connotes "well controlled," whereas school-based research connotes "less well controlled" (see Eisner, 1999, for an example of this perspective). The same sort of prototypical packaging of laboratory versus classroom research is evident in the National Science Foundation's (NSF) 1999 draft guidelines for evaluating research proposals on mathematics and science education (Suter, 1999). As is argued in a later section of this chapter, not one of these stated limitations is critical, or even material, as far as conducting scientifically sound applied research (e.g., classroom-based research) is concerned.

The answer to the second question is that just because different research modes (school-based vs. laboratory-based) have traditionally been associated with different methodological-quality adjectives (weaker vs. stronger, respectively), that is not an inevitable consequence of the differing research venues (see also Levin, 1994; Stanovich, 1998, p. 129). Laboratory-based research can be methodologically weak and school-based research methodologically strong. As such, the methodological rigor of a piece of research directly dictates the credibility (Levin, 1994) of its evidence, or the trustworthiness (Jaeger & Bond, 1996) of the research findings and associated conclusions (see also Kratochwill & Stoiber, 2000). Research credibility should not be confused with the educational or societal importance of the questions being addressed, which has been referred to as the research's creditability (Levin, 1994). In our view (and consistent with Campbell & Stanley's, 1966, sine qua non dictum), scientific credibility should be first and foremost in the educational research equation, particularly when it comes to evaluating the potential of interventions (see also Jaeger & Bond, 1996, pp. 878–883).

With the addition of both substantive creditability and external validity standards (to be specified later) to scientifically credible investigations, one has what we believe to be the ideal manifestation of intervention research. That ideal surely captures Cole's (1997, p. 17) vision for the future of "both useful research and research based on evidence and generalizability of results." For example, two recent empirical investigations serve to punctuate the present points: both addressed the creditable instructional objective of teaching and improving students' writing, yet they did so through fundamentally different credible methodological approaches, one within a carefully controlled laboratory context (Townsend et al., 1993) and the other systematically within the context of actual writing-instructed classrooms (Needels & Knapp, 1994). Several examples of large-scale, scientifically credible research studies with the potential to yield educationally creditable prescriptions are provided later in this chapter in the context of a framework for conceptualizing different stages of intervention research.


