Participatory Simulations

Participatory Simulations, a project overseen by Uri Wilensky and Walter Stroup at Northwestern University, is a distributed computing environment built on the foundations of Logo and NetLogo that encourages learners to explore complex simulations collaboratively (Wilensky & Stroup, 1999). The project centers on HubNet, a classroom-based network of handheld devices that enables learners to participate in and collaboratively control simulations of dynamic systems. The emergent behavior of the system becomes the object of collective discussion and collaborative analysis.
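HubNet itself runs on NetLogo, but the underlying pattern (many networked clients each steering one agent inside a single shared model, whose aggregate behavior then becomes the object of class discussion) can be pictured with a short sketch in ordinary Python. The sketch below is illustrative only; the class names, client commands, and traffic rule are invented for this example and are not part of HubNet or NetLogo.

```python
# Illustrative sketch only: a toy "participatory simulation" in plain Python,
# not the HubNet/NetLogo implementation. Each participant steers one car in a
# shared traffic model; the class then examines the emergent aggregate behavior.
import random

class ParticipantCar:
    """One learner's car on a circular road (position in [0, road_len))."""
    def __init__(self, name, position, speed=1.0):
        self.name, self.position, self.speed = name, position, speed

    def apply_command(self, command):
        # Hypothetical commands a learner might send from a handheld client.
        if command == "speed_up":
            self.speed = min(self.speed + 0.5, 5.0)
        elif command == "slow_down":
            self.speed = max(self.speed - 0.5, 0.0)

def step(cars, road_len=100.0, safe_gap=2.0):
    """Advance the shared simulation one tick; braking when too close to the
    car ahead produces the stop-and-go waves the class can then discuss."""
    cars.sort(key=lambda c: c.position)
    for i, car in enumerate(cars):
        ahead = cars[(i + 1) % len(cars)]
        gap = (ahead.position - car.position) % road_len
        if gap < safe_gap:
            car.speed = 0.0  # forced stop: the source of emergent traffic jams
        car.position = (car.position + car.speed) % road_len

if __name__ == "__main__":
    cars = [ParticipantCar(f"student{i}", random.uniform(0, 100)) for i in range(10)]
    for tick in range(50):
        for car in cars:
            car.apply_command(random.choice(["speed_up", "slow_down", "keep"]))
        step(cars)
    print("Mean speed after 50 ticks:", sum(c.speed for c in cars) / len(cars))
```

In the real classroom system the steering decisions come from learners' handheld devices rather than from random choices, and the shared display, not a printout, is what the group analyzes together.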



Figure 16.7  The MacMOOSE client interface showing editing, browsing, and main interaction windows.




Figure 16.8  SimCalc’s interactive velocity lab, with animation and real-time graphs.



CoVis

CoVis (Collaborative Visualization), a project developed at Northwestern University in the 1990s, focused on science learning through projects using a telecommunications infrastructure, scientific visualization tools, and software to support collaboration among diverse schools in distributed locations (Edelson et al., 1996). Much of learners’ investigation centered on atmospheric and environmental studies, allowing wide-scale data sharing across the United States. Learners could then use sophisticated data analysis tools to visualize the data and draw conclusions. CoVis made use of a variety of networked software: collaborative “notebooks,” distributed databases, and system visualization tools, as well as the Web and e-mail. The goal of the CoVis project was for young people to study topics in much the same way as professional scientists do. See http://www.covis.nwu.edu/.



Network Science

In the late 1980s and 1990s a number of large-scale research projects explored the possibilities of connecting multiple classrooms across the United States for data sharing and collaborative inquiry (Feldman et al., 2000). Programs like National Geographic Kids Network (NGKNet), a National Science Foundation–funded collaboration between the National Geographic Society and TERC, reached thousands of classrooms and tens of thousands of students. TERC’s NGKNet provided curriculum plans and resources around issues such as acid rain, along with tools that facilitated large-scale data collection, sharing, and analysis of results. Other projects, such as Classroom BirdWatch and EnergyNet, focused on issues with comparable global significance and local implications, turning large numbers of learners into a community of practice doing distributed scientific investigation. Feldman, Konold, and Coulter noted that these large-scale projects call into question the notion of the individual child as scientist, pointing instead toward interesting models of collaborative engagement in science, technology, and society issues (pp. 142–143).

Virtual-U 

Developed by Linda Harasim and Tom Calvert at Simon Fraser University and the Canadian Telelearning National Centres of Excellence, Virtual-U is a Web-based course-delivery platform (Harasim, Calvert, & Groeneboer, 1996). Virtual-U aims to provide a rich, full-featured campus environment for learners, featuring a cafe and library as well as course materials and course-management functionality. See http://virtual-u.cs.sfu.ca/ and http://www.telelearn.ca/.

Tapped In

Tapped In (see Figure 16.9) is a multiuser online educational workspace for teachers and education professionals. The Tapped In project, led by Mark Schlager at SRI International, began in the late 1990s as a MOO (textual virtual reality) environment for synchronous collaboration and has since grown into a sophisticated (Web plus MOO) multimedia environment for both synchronous and asynchronous work, with a large and very active user population (Schlager & Schank, 1997). Tapped In uses a technological infrastructure similar to that of MOOSE Crossing but has a different kind of community of practice at work within it; Tapped In functions more like an ongoing teaching conference, with many weekly or monthly events, workshops, and happenings. Tapped In is an exemplary model of a multimode collaborative environment. See http://www.tappedin.sri.com/.

CoWeb

At Georgia Tech, Mark Guzdial and colleagues at the Collaborative Software Laboratory (CSL) have created a variety of software environments building on the original educational computing vision of Alan Kay in the 1970s (Kay, 1996): the computer can be a tool for composing and experiencing dynamic media. Growing from Guzdial’s (1997) previous work on the CaMILE project, a Web-based anchored collaboration environment, CSL’s CoWeb project explores possibilities in designing and using collaborative media tools online (Guzdial, 1999). CoWeb and other CSL work are largely based on the Squeak environment, a direct descendant of Alan Kay’s research at Xerox PARC in the 1970s. See http://coweb.cc.gatech.edu/csl.



MaMaMedia

The rationale of MaMaMedia, a company founded by MIT Media Lab graduate Idit Harel, is to enable young learners and their parents to participate in Web experiences that are safe, constructionist by nature, and educational. MaMaMedia maintains a filtered collection of dynamic Web sites aimed at challenging young children to explore, express, and exchange ideas (Harel’s three Xs). Harel’s (1991) book Children Designers lays the foundation for MaMaMedia and for research in understanding how children in rich online environments construct and design representations of their thinking. In Harel’s doctoral work, one young girl named Debbie was part of the experimental group at the Hennigan School, working with fractions in Logo. After several months of working on her project, she looked around the room and said, “Fractions are everywhere.” MaMaMedia enables thousands of girls and boys to be online playing games, learning how to think like Debbie, and participating in the vast MaMaMedia community. To join this constructionist community for kids and parents, go to http://www.mamamedia.com/.

Figure 16.9  The TAPestry interface to the Tapped In environment.

WebGuide

WebGuide, a Web-based collaborative knowledge-construction tool, was created by Gerry Stahl and colleagues at the University of Colorado (Stahl, 1999). WebGuide is designed to facilitate personal and collaborative understanding through mediating perspectivity via cultural artifacts; it acts as a scaffold for group understanding. WebGuide is a structured conferencing system supporting rich interlinking and information reuse and recontextualization, as well as multiple views on the structure of the information set. Learners contribute information from individual perspectives, but this information can later be negotiated and recollected in multiple contexts. See http://www.cs.colorado.edu/~gerry/webguide/.
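Stahl’s description can be pictured as a small data structure: contributions are stored once, tagged with a perspective, linked to one another, and re-collected into other perspectives as alternative views on the same information set. The Python below is a hypothetical illustration under those assumptions; the class and method names are invented and do not reflect WebGuide’s actual code or interface.

```python
# Illustrative sketch only, not WebGuide's implementation: a minimal model of
# perspective-tagged notes that can be linked and re-collected into other
# perspectives, giving multiple views onto one shared information set.
from dataclasses import dataclass, field

@dataclass
class Note:
    author: str
    perspective: str                           # e.g. "personal", "group", "class"
    text: str
    links: list = field(default_factory=list)  # references to related notes

class Workspace:
    def __init__(self):
        self.notes = []

    def contribute(self, author, perspective, text, links=()):
        """A learner adds information from an individual perspective."""
        note = Note(author, perspective, text, list(links))
        self.notes.append(note)
        return note

    def view(self, perspective):
        """One of several possible views onto the structure of the information set."""
        return [n for n in self.notes if n.perspective == perspective]

    def recontextualize(self, note, new_perspective):
        """Re-collect an existing contribution into another perspective by
        linking to it rather than copying its text."""
        alias = Note(note.author, new_perspective, note.text, [note])
        self.notes.append(alias)
        return alias

if __name__ == "__main__":
    ws = Workspace()
    personal = ws.contribute("lena", "personal", "Acid rain varies with wind patterns.")
    ws.recontextualize(personal, "class")      # negotiated into the class view
    print([n.text for n in ws.view("class")])
```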



Affective Computing and Wearables

A series of research projects under Rosalind Picard at the MIT Media Lab is aimed at investigating affective computing (Picard, 1997), the emotional and environmental aspects of digital technologies. Research areas include computer recognition of human affect, computer synthesis of affect, wearable computers, and affective interaction with computers. Jocelyn Schreirer conducted several experiments with advisors Picard, Turkle, and Goldman-Segall to explore how affective wearable technologies become expressive devices for augmenting communication. This relatively new area of research will undoubtedly prove significant for education as well as other applications because the affective component of computing has been overlooked until recently. See http://www.media.mit.edu/affect/.

WebCT

Originally developed in the late 1990s by Murray Goldberg at the University of British Columbia, WebCT has grown to be an enormously popular example of a course management system. What began as an easy-to-use Web-based courseware environment is now in use by more than 1,500 institutions. Indeed, it is so widespread among postsecondary institutions that WebCT, now a company, is almost a de facto standard for online course delivery. See http://www.webct.com.

CHALLENGING PARADIGMS AND LEARNING THEORIES

Cognition: Models of Mind or Creating Culture?

In this section, two challenging cognitive paradigms are discussed. The overriding question is whether cognition is best understood as a model of the mind or as a creation of culture.

From the Cognitive Revolution to Cultural Psychology

From the vantage point of the mid-1990s, Jerome Bruner looked back on the cognitive revolution of the late 1950s, which he helped to shape, and reflected on a lost opportunity. Bruner had imagined that the new cognitive paradigm would bring the search for meaning to the fore, distinguishing it from the behaviorism that preceded it (Bruner, 1990, p. 2). Yet as Bruner wrote, the revolution went awry—not because it failed, but because it succeeded:

Very early on, for example, emphasis began shifting from “meaning” to “information,” from the construction of meaning to the processing of information. These are profoundly different matters. The key factor in the shift was the introduction of computation as the ruling metaphor and computability as a necessary criterion of a good theoretical model. (p. 4)

The information-processing model of cognition became so dominant, Bruner argued, that the roles of meaning and meaning making ended up as much in disfavor as they had been under behaviorism. “In place of stimuli and responses, there was input and output,” and hard empiricism ruled again, with a new vocabulary but the same disdain for mentalism (Bruner, 1990, p. 7).

Bruner’s career as a theorist is itself instructive. Heralded by Gardner and others as one of the leading lights of 1950s cognitivism, Bruner has been one of a small but vocal group calling for a return to the role of culture in understanding the mind. This movement has been tangled up closely with the evolution of educational technology over the same period, perhaps illuminated in a pair of titles that serve as bookends for one researcher’s decade-long trajectory: Etienne Wenger’s (1987) Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge and his (1998) Communities of Practice: Learning, Meaning, and Identity.

Cognitive Effects, Transfer, and the Culture of Technology: A Brief Narrative

In his 1996 article, “Paradigm Shifts and Instructional Technology: An Introduction,” Timothy Koschmann began by identifying four defining paradigms of technology in education. In roughly chronological (but certainly overlapping) order, these are CAI, characterized by drill-and-practice and programmed instruction systems; ITS, which drew on AI research to create automated systems that could evaluate a learner’s progress and tailor instruction accordingly; the Logo-as-Latin paradigm, led by Papert’s microworld and children-as-programmers efforts; and CSCL, a socially oriented, constructivist approach that focuses on learners in practice, working in groups. Koschmann invoked Thomas Kuhn’s (1996) controversial notion of the incommensurability of competing paradigms:

Kuhn held that the effect of a paradigm shift is to produce a divided community of researchers no longer able to debate their respective positions, owing to fundamental differences in terminology, conceptual frameworks, and views on what constitutes the legitimate questions of science. (Koschmann, 1996, p. 2)

Koschmann’s analysis may well be accurate. The literature surrounding the effects that learning technology produces certainly displays examples of this incommensurability, even within the writings of individual theorists.

As mentioned earlier, Papert’s work with teaching children to program in Logo was originally concerned with bridging the gap between Piaget’s concrete and formal thinking stages, particularly with respect to mathematics and geometry. Over time, however, Papert’s work with children and Logo began to be talked about as “computer cultures” (Papert, 1980, pp. 22–23): Logo gave its practitioners a vocabulary, a framework, and a set of tools for a particular kind of learning through exploration. Papert envisaged a computer culture in which children could express themselves as epistemologists, challenging the nature of established knowledge. But although Papert’s ideas and the practice of Logo learning in classrooms contributed significantly to the esprit du temps of the 1980s, it was difficult for many mainstream educational researchers and practitioners to join the mindset that he believed would revolutionize learning.

A large-scale research project to evaluate the claims of Logo in classrooms was undertaken by Bank Street College in the mid-1980s. The Bank Street studies came to some critical conclusions about the work that Papert and his colleagues were doing (Pea & Kurland, 1987; Pea, Kurland, & Hawkins, 1987). Basically, the Bank Street studies concluded with a cautious note—that no significant effects on cognitive development could be confirmed—and called for much more extensive and rigorous research amid the excitement and hype. More broadly, the Bank Street publications fed into something of a popular backlash against Logo in the schools. A 1984 article in the magazine Psychology Today summarized the Bank Street studies and suggested bluntly that Logo had not delivered on Papert’s promises.

Papert responded to this critique (Papert, 1985) by arguing that the framing of research questions was overly simplistic. Papert chided his critics for looking for cognitive effects by isolating variables as if classrooms were treatment studies. Rather than asking “technocentric” questions such as “What is THE effect of THE computer?” (p. 23), Papert called for an examination of the culture-building implications of Logo practice, and for something he called “computer criticism,” which he proposed as akin to literary criticism.

Pea (1987) responded, claiming that Papert had unfairly characterized the Bank Street research (Papert had responded only to the Psychology Today article, not to the original literature) and arguing that as researchers they had a responsibility to adhere to accepted scientific methods for evaluating the claims of new technology. The effect of this exchange was to illuminate the vastly different perspectives of these researchers. Where Papert was talking about the open-ended promise of computer cultures, Pea and his colleagues, developmental psychologists, were evaluating the work from the standpoint of demonstrable changes in cognition (Pea & Kurland, 1987). Papert accused his critics of reductionism; Davy (1985), for his part, likened Papert to the proverbial man who looks for his keys under the streetlight because the light is better there.

Gavriel Salomon and Howard Gardner responded to this debate with an article that searched for middle ground (Salomon & Gardner, 1986): An analogy, they pointed out, could be drawn from research into television and mass media, a much older pursuit than educational computing and one in which Salomon was an acclaimed scholar. Salomon and Gardner argued that one could not search for independent variables in such a complex area; instead, they called for a more holistic, exploratory research program, one that took more than the overt effects of the technology into account.

Indeed, in 1991 Salomon and colleagues David Perkins and Tamar Globerson published a groundbreaking article that shed more light on the issue (Salomon et al., 1991). To consider the effects of a technology, one had to consider what had changed in the learner after using the technology, once the learner was working in its absence. The questions that arise from this are whether there is any cognitive residue from the prior experience and whether there is transfer between tasks. This is a different set of questions from those that arise when investigating the effects with technology, which demand a more decentered, system-wide approach, looking at the learner in partnership with the technology.

Although it contributed important new constructs and vocabulary to the issue, the Salomon et al. (1991) article is still deeply rooted in a traditional cognitive science perspective, like much of Pea’s research, taking first and foremost the individual mind as the site of cognition. Salomon, Perkins, and Globerson, all trained in cognitive psychology, warn against taking the “effects with” approach too far, noting that computers in education are still far from ubiquitous and that the search for the “effects of” is still key.

In a 1993 article Pea responded to Salomon et al. (1991) from yet another angle. Pea, then at Northwestern and working closely with his Learning Sciences colleagues, wrote on “distributed intelligence” and argued against taking the individual mind as the locus of cognition, criticizing Salomon and colleagues’ individualist notions of cognitive residue: “The language used by Salomon et al. (1991) to characterize the concepts involved in how they think about distributed intelligence is, by contrast, entity-oriented—a language of containers holding things” (Pea, 1993, p. 79). Pea, reviewing recent literature on situated learning and distributed cognition (Brown et al., 1996; Lave, 1988; Winograd & Flores, 1986), had exchanged his individualist cognitive science framework for a more “situative perspective” (Greeno, 1997, p. 6), while Salomon (1993) argued that cognition still must reside in the individual mind. It is interesting to note that in this exchange neither Salomon nor Pea seemed completely comfortable with the notion of culture making beyond its influence as a contributing factor to mind, artifacts, and other empirically identifiable constructs.



Bricolage and Meaning Making at MIT

Scholarship at MIT’s Media Lab was also changing in the early 1990s. The shift played out amid discussions of bricolage, computer cultures, relational approaches, the construction and sharing of public artifacts, and so on (Papert, 1980, 1991; Turkle, 1984, 1995), as well as amid the centered, developmental cognitive science perspective from which this work historically derives. Theorizing on epistemological pluralism, Turkle and Papert (1991) clearly revealed the tension between the cognitivist and situative perspectives: Papert and Turkle desired to understand the mind and simultaneously to reconcile how knowledge and meaning are constituted in community, culture, and technology. The cognitivist stance might well have been limiting for constructionist theory in the 1980s. Pea (1993) offered a critique of Papert’s constructionism from the standpoint of distributed intelligence:

Papert described what marvelous machines the students had built, with very little interference from teachers. On the surface, the argument was persuasive, and the children were discovering important things on their own. But on reflection, I felt this argument missed the key point about the invisible human intervention in this example—what the designers of LEGO and Logo crafted in creating just the interlockable component parts of LEGO machines or just the Logo primitive commands for controlling these machines. (p. 65)

Pea’s critique draws attention to the fact that what is going on in the Logo project exists partly in the minds of the children and partly in the Logo system itself—that they are inseparable. Pea’s later work pointed to distributed cognition, whereas the Media Lab’s legacy—even in the distributed constructionism of Mitchel Resnick and Uri Wilensky and in the social constructionism of Goldman-Segall—is deeply rooted in unraveling the mystery of the mind and its ability to understand complexity and complex systems. For example, whereas Resnick’s work explores ecologies of Logo turtles, it does not so much address ecologies of learners. Not until the late 1990s did the research at the Media Lab move toward distributed environments and the cultures and practices within them (Bruckman, 1998; Picard, 1997).

Learning, Thinking Attitudes, and Distributed Cognition

Understanding the nature of technology-based learning systems greatly depends on one’s conceptualization of how learning occurs. Is learning linear and developmental, or a more fluid, flexible (Spiro, Feltovich, Jacobson, & Coulson, 1991), and even random “system” of making meaning of experience? Proponents of stage theory have tried to show how maturation proceeds in logical, causal sequences according to observable stages in growth patterns, with the final stage being the highest and most coveted. Developmental theories, such as Freud’s oral, anal, and genital stages (Freud, 1952), Erikson’s eight stages of psychological growth from basic trust to generativity (Erikson, 1950), or Piaget’s stages from sensorimotor to formal operational thinking (see Gruber & Voneche, 1977), are based on the belief that the human organism must pass through these stages at critical periods in its development in order to reach full, healthy, integrated maturation, be it psychological, physical, spiritual, or intellectual.


