
Components of CAREful Intervention Research

In our view, credible evidence follows from the conduct of credible research, which in turn follows directly from Campbell and Stanley's (1966) methodological precepts (for a contrasting view, see the chapter by McCombs in this volume). The essence of both scientific research and credible research methodology can in turn be reduced to the four components of what Levin (1997b) and Derry, Levin, Osana, Jones, and Peterson (2000) have referred to as CAREful intervention research: Comparison, Again and again, Relationship, and Eliminate. In particular, it can be argued that evidence linking an intervention to a specified outcome is scientifically convincing if (a) the evidence is based on a Comparison that is appropriate (e.g., comparing the intervention with an appropriate alternative or nonintervention condition); (b) the outcome is produced by the intervention Again and again (i.e., it has been "replicated," initially across participants or observations in a single study and ultimately through independently conducted studies); (c) there is a direct Relationship (i.e., a connection or correspondence) between the intervention and the outcome; and (d) all other reasonable competing explanations for the outcome can be Eliminated (typically, through randomization and methodological care). Succinctly stated: If an appropriate Comparison reveals Again and again evidence of a direct Relationship between an intervention and a specified outcome, while Eliminating all other competing explanations for the outcome, then the research yields scientifically convincing evidence of the intervention's effectiveness.

As might be inferred from the foregoing discussion, scientifically grounded experiments (including both group-based and single-participant varieties) represent the most commonly accepted vehicle for implementing all four CAREful research components. At the same time, other modes of empirical inquiry, including quasi-experiments and correlational studies, as well as surveys, can be shown to incorporate one or more of the CAREful research components. In fact, being attuned to these four components when interpreting one's data is what separates careful researchers from not-so-careful ones, regardless of their preferred general methodological orientations.



THE CONCEPT OF EVIDENCE

Good Evidence Is Hard to Find

If inner-city second graders take piano lessons and receive exercises that engage their spatial ability, will their mathematics skills improve? Yes, according to a newspaper account of a recent research study ("Piano lessons, computer may help math skills," 1999). But maybe not, according to informed consumers of reports of this kind, because one's confidence in such a conclusion critically depends on the quality of the research conducted and the evidence obtained from it. How, then, can we be confident that whatever math-skill improvements were observed resulted from students' practicing the piano and computer-based spatial exercises, rather than from something else? Indeed, the implied causal explanation is that such practice served to foster the development of certain cognitive and neurological structures in the students, which in turn improved their mathematics skills: "When children learn rhythm, they are learning ratios, fractions and proportions. . . . With the keyboard, students have a clear visual representation of auditory space" (Deseretnews.com, March 15, 1999, p. 1). Causal interpretations are more than implicit in previous research on this topic, as reflected by the authors' outcome interpretations and even their article titles—for example, "Music Training Causes Long-Term Enhancement of Preschool Children's Spatial-Temporal Reasoning" (Rauscher et al., 1997).

In the same newspaper account, however, other researchers offered alternative explanations for the purported improvement of the musically and spatially trained students, including the enhanced self-esteem that they may have experienced from such training and the positive expectancy effects communicated from teachers to students. Thus, at least in the newspaper account of the study, the evidence offered to support the preferred cause-and-effect argument is not compelling. Moreover, a review of the primary report of the research (Graziano, Peterson, & Shaw, 1999) reveals that, in addition to the potential complicators just mentioned, a number of methodological and statistical concerns seriously compromise the credibility of the study and its conclusions, including nonrandom assignment of either students or classrooms to the different intervention conditions, student attrition throughout the study's 4-month duration, and an inappropriate implementation and analysis of the classroom-based intervention (to be discussed in detail in a later section). The possibility that music instruction combined with training in spatial reasoning improves students' mathematics skill is an intriguing one, and one with which we personally resonate. Until better controlled research is conducted and more credible evidence presented, however, the possibility must remain just that—see also Winner and Hetland's (1999) critical comments on this research, as well as the recent empirical studies by Steele, Bass, and Crook (1999) and by Nantais and Schellenberg (1999).

In both our graduate and undergraduate educational psychology courses, we draw heavily from the research, argument, and critical thinking concepts presented in three wonderfully wise and well-crafted books: How to Think Straight About Psychology (Stanovich, 1998), Statistics as Principled Argument (Abelson, 1995), and Thought and Knowledge: An Introduction to Critical Thinking (Halpern, 1996). Anyone who has not read these beauties should. And anyone who has read them and applied the principles therein to their own research should more than appreciate the role played by old-fashioned evidence in offering and supporting an argument, whether that argument is made in a research context or in an everyday thinking context. In a research context, a major theme of all three books—as well as of the Clinical Psychology and School Psychology Task Forces—is the essentiality of providing solid (our "credible") evidence to support conclusions about causal connections between independent and dependent variables. In terms of our present intervention research context and terminology, before one can attribute an educational outcome to an educational intervention, credible evidence must be provided that rules in the intervention as the proximate cause of the observed outcome, while at the same time ruling out alternative accounts for the observed outcome.

If all of this sounds too stiff and formal (i.e., too academic), and maybe even too outmoded (Donmoyer, 1993; Mayer, 1993), let us restate it in terms of the down-to-earth advice offered to graduating seniors in a 1998 university commencement address given by Elizabeth Loftus, an expert on eyewitness testimony and then president of the American Psychological Society:

There's a wonderful cartoon that appeared recently in Parade Magazine. . . . Picture this: mother and little son are sitting at the kitchen table. Apparently mom has just chided son for his excessive curiosity. The boy rises up and barks back, "Curiosity killed what cat? What was it curious about? What color was it? Did it have a name? How old was it?" I particularly like that last question. . . . [M]aybe the cat was very old, and died of old age, and curiosity had nothing to do with it at all. . . . [M]y pick for the one advice morsel is simple: remember to ask the questions that good psychological scientists have learned to ask: "What's the evidence?" and then, "What EXACTLY is the evidence?" (Loftus, 1998, p. 27)

Loftus (1998, p. 3) added that one of the most important gifts of critical thinking is "knowing how to ask the right questions about any claim that someone might try to foist upon you." In that regard, scientific research "is based on a fundamental insight—that the degree to which an idea seems true has nothing to do with whether it is true, and the way to distinguish factual ideas from false ones is to test them by experiment" (Loftus, 1998, p. 3). Similarly, in a recent popular press interview (Uchitelle, 1999), the economist Alan Krueger argued for continually challenging conventional wisdom and theory with data: "The strength of a researcher is not in being an advocate, but in making scientific judgments based on the evidence. And empirical research teaches us that nothing is known with certainty" (p. C10). Stanovich (1998), in advancing his fanciful proposition that two "little green men" residing in the brain control all human functioning, draws an analogy to other fascinating, though scientifically unsupported, phenomena such as extrasensory perception, biorhythms, psychic surgery, and facilitated communication, observing that "one of the most difficult things in the world [is to] confront a strongly held belief with contradictory evidence" (p. 29).

That intervention researchers are also prone to prolonged states of "evidencelessness" has been acknowledged for some time, as indicated in the following 40-year-old observation:

A great revolution in social science has been taking place, particularly throughout the last decade or two. Many educational researchers are inadequately trained either to recognize it or to implement it. It is the revolution in the concept of evidence. (Scriven, 1960, p. 426)

We contend that the revolution referred to by Scriven has not produced a corresponding revelation in the field of intervention research even (or especially) today. Consider, for example, the recent thoughts of the mathematics educator Thomas Romberg (1992) on the matter:

The importance of having quality evidence cannot be overemphasized. . . . The primary role of researchers is to provide reliability evidence to back up claims. Too many people are inclined to accept any evidence or statements that are first presented to them urgently, clearly, and repeatedly. . . . A researcher tries to be one whose claims of knowing go beyond a mere opinion, guess, or flight of fancy, to responsible claims with sufficient grounds for affirmation. . . . Unfortunately, as any journal editor can testify, there are too many research studies in education in which either the validity or the reliability of the evidence is questionable. (Romberg, 1992, pp. 58–59)

In the pages that follow, we hope to provide evidence to support Scriven's (1960) and Romberg's (1992) assertions about the noticeable lack of evidence in contemporary intervention research.



The Evidence of Intervention Research

The ESP Model

Lamentably, in much intervention research today, rather than subscribing to the scientific method's principles of theory, hypothesis-prediction, systematic manipulation, observation, analysis, and interpretation, more and more investigators are subscribing to what might be dubbed the ESP principles of Examine, Select, and Prescribe. For example, a researcher may decide to examine a reading intervention. The researcher may not have well-defined notions about the specific external (instructional) and internal (psychological) processes involved or about how they may contribute to a student's performance. Based on his or her (typically, unsystematic) observations, the researcher selects certain instances of certain behaviors of certain students for (typically, in-depth) scrutiny. The researcher then goes on to prescribe certain instructional procedures, materials and methods, or small-group instructional strategies that follow from the scrutiny.

We have no problem with the examine phase of such research, and possibly not even with the select phase of it, insofar as all data collection and observation involve selection of one kind or another. We do, however, have a problem if this type of research is not properly regarded for what it is: namely, preliminary-exploratory, observational hypothesis generating. Certainly in the early stages of inquiry into a research topic, one has to look before one can leap into designing interventions, making predictions, or testing hypotheses. To demonstrate the possibility of relationships among variables, one might also select examples of consistent cases. Doing so, however, (a) does not constitute sufficient evidence to document the existence of a relationship (see, e.g., Derry et al., 2000) and (b) can result in unjustified interpretations of the kind that Brown (1992, pp. 162–163) attributed to Bartlett (1932) in his classic study of misremembering. With regard to the perils of case selection in classroom-intervention research, Brown (1992) properly noted that

there is a tendency to romanticize research of this nature and rest claims of success on a few engaging anecdotes or particularly exciting transcripts. One of the major methodological problems is to establish means of conveying not only the selective and not necessarily representative, but also the more important general, reliable, and repeatable. (p. 173)

In the ESP model, departure from the researcher's originally intended purposes of the work (i.e., examining a particular instance or situation) is often forgotten, and prescriptions for practice are made with the same degree of excitement and conviction as are those based on investigations with credible, robust evidence. The unacceptability of the prescribe phase of the ESP research model goes without saying: Neither variable relationships nor instructional recommendations logically follow from its application. The widespread use of ESP methodology in intervention research, and especially in education, was appropriately criticized 35 years ago by Carl Bereiter in his compelling case for more empirical studies of the "strong inference" variety (Platt, 1964) in our field:

Why has the empirical research that has been done amounted to so little? One reason . . . is that most of it has been merely descriptive in nature. It has been a sort of glorified "people-watching," concerned with quantifying the characteristics of this or that species of educational bird. . . . [T]he yield from this kind of research gets lower year by year in spite of the fact that the amount of research increases. (Bereiter, 1965, p. 96)

Although the research names have changed, the problems identified by Bereiter remain, and ESP methodology based on modern constructs flourishes.

The Art of Intervention Research: Examples From Education

If many intervention research interpretations and prescriptions are not based on evidence, then on what are they based? On existing beliefs? On opinion? Any semblance of a model of research yielding credible evidence has degenerated into a mode of research that yields everything but. We submit as a striking example the 1993 American Educational Research Association (AERA) meeting in Atlanta, Georgia. At this meeting of the premier research organization of educators, the most promising new developments in educational research were being showcased. Robert Donmoyer, the meeting organizer, wanted to alert the world to the nature of those groundbreaking research developments in his final preconference column in the Educational Researcher (the most widely distributed research-and-news publication of AERA):

Probably the most radical departures from the status quo can be found in sessions directly addressing this year's theme, The Art and Science of Educational Research and Practice. In some of these sessions, the notion of art is much more than a metaphor. [One session], for example, features a theater piece constructed from students' journal responses to feminist theory; [another] session uses movement and dance to represent gender relationships in educational discourse; and [another] session features a demonstration—complete with a violin and piano performance—of the results of a mathematician and an educator's interdisciplinary explorations of how music could be used to teach mathematics. (Donmoyer, 1993, p. 41)

Such sessions may be entertaining or engaging, but are they presenting what individuals attending a conference of a professional research organization came to hear? The next year, in a session at the 1994 AERA annual meeting in New Orleans, two researchers displayed their wares in a joint presentation: Researcher A read a poem about Researcher B engaged in a professional activity; Researcher B displayed a painting of Researcher A similarly engaged. (The details presented here are intentionally sketchy to preserve anonymity.) Artistic? Yes, but is it research? Imagine the following dialogue: "Should the Food and Drug Administration approve the new experimental drug for national distribution?" "Definitely! Its effectiveness has been documented in a poem by one satisfied consumer and in a painting by another."

These perceptions of a scientific backlash within the research community may pertain not just to scientifically based research, but to science itself. In their book The Flight From Science and Reason, Gross, Levitt, and Lewis (1997) included 42 essays on the erosion of regard for rationalism in society. Among the topics addressed in these essays are attacks on physics and medicine, the influence of arguments against objectivity in the humanities, and questions about the scientific basis of the social sciences. Thus, the rejection of scientifically based knowledge in education is part of a larger societal concern. Some 30 years after making his case for strong-inference research in education (Bereiter, 1965), Carl Bereiter (1994) wrote the following in a critique of the current wave of postmodernist thought among researchers and educators alike:

This demotion of science to a mere cognitive style might be dismissed as a silly notion with little likelihood of impact on mainstream educational thought, but I have begun to note the following milder symptoms in otherwise thoroughly mainstream science educators: reluctance to call anything a fact; avoidance of the term misconception (which only a few years ago was a favorite word for some of the same people); considerable agonizing over teaching the scientific method and over what might conceivably take its place; and a tendency to preface the word science with Eurocentric, especially among graduate students. (Bereiter, 1994, p. 3)

What is going on here? Is it any wonder that scholars from other disciplines, politicians, and just plain folks are looking askance at educational research?



Labaree (1998) clearly recognized the issue of concern:

Unfortunately, the newly relaxed philosophical position toward the softness of educational knowledge . . . can (and frequently does) lead to rather cavalier attitudes by educational researchers toward [a lack of] methodological rigor in their work. As confirmation, all one has to do is read a cross-section of dissertations in the field or of papers presented at educational conferences. For many educational researchers, apparently, the successful attack on the validity of the hard sciences in recent years has led to the position that softness is not a problem to be dealt with but a virtue to be celebrated. Frequently, the result is that qualitative methods are treated less as a cluster of alternative methodologies than as a license to say what one wants without regard to rules of evidence or forms of validation. (Labaree, 1998, p. 11)

In addition to our having witnessed an explosion of presentations of the ESP, anecdotal, and opinion variety at the "nouveau research" AERA conferences (including presentations that propose and prescribe instructional interventions), we can see those modes of inquiry increasingly being welcomed into the academic educational research community—and even into journals that include "research" as part of their title:

When I first began presiding over the manuscript review process for Educational Researcher, for example, I received an essay from a teacher reflecting on her practice. My initial impulse was to reject the piece without review because the literary genre of the personal essay in general and personal essays by teachers in particular are not normally published in research journals. I quickly reconsidered this decision, however, and sent the paper to four reviewers with a special cover letter that said, among other things: ". . . The Educational Researcher has published pieces about practitioner narratives; it makes sense, therefore, to consider publishing narrative work . . . I will not cavalierly reject practitioners' narratives and reflective essays." . . . [I]n my cover letter, I did not explicitly invite reviewers to challenge my judgment (implicit in my decision to send work of this kind out for review) that the kind of work the manuscript represented—if it is of high quality—merits publication in a research journal. In fact, my cover letter suggested this issue had already been decided. (Donmoyer, 1996, pp. 22–23)

We do not disregard the potential value of a teacher's reflection on experience. But is it research? And on what research evidence is it based? We can anticipate the reader dismissing Donmoyer's (1996) comments by arguing that the Educational Researcher is not really a scholarly research journal, at least not exclusively so. It does, after all, also serve a newsletter function for AERA; in fact, in the organization's formative years (i.e., in the 1940s, 1950s, and 1960s), it was just that, a newsletter. Yet in his editorial, Donmoyer was referring to the Features section, a research-based section of the Educational Researcher, and he clearly regarded the contents of that section, along with the manuscripts suitable for it, in scholarly research terms. What is not research may soon be difficult, if not impossible, to define: At a 1999 AERA meeting session on research training, there was no clear definition of what research is.

