2.2. Peer review as a form of peer-driven formative assessment


In order to understand the theoretical basis of peer review for learning and as a form of assessment, it is helpful to understand its grounding in formative assessment. Formative assessment is generally regarded as any assessment that is used to shape and promote students' learning (Wiliam, Lee, Harrison, & Black, 2004). Often referred to as "assessment for learning," it may be contrasted with summative assessment, which is used to evaluate students' knowledge or skills at a given moment in time and which can serve the goals of accountability or verifying competency (Wiliam, 2018). Instructional feedback, that is, the information provided to students about their progress for the sake of improving future learning experiences, is viewed as central to formative assessment (Smith & Lipnevich, 2018). Peer-driven assessment is a type of formative assessment. Prior efforts to synthesize research on peer assessment for learning have determined that it can be effective for learning content and skills (Althauser & Darnall, 2001; Jensen & Fischer, 2005; Liu & Carless, 2006; Sun, Harris, Walther, & Baiocchi, 2015; Topping, 2005). Well-designed peer assessment activities are thought to reap the benefits of highly interactive classroom activities (Chi & Wylie, 2014). Peer review as a form of peer-driven assessment provides students an opportunity to expand upon and explore new perspectives on their work and that of their peers. In addition, some evidence suggests that it may enhance the efficacy of self-assessment, particularly within the knowledge or skill domain, as students become more cognizant of their strengths and find opportunities for self-improvement (DeGrez, Valcke, & Roozen, 2012; Falchikov & Goldfinch, 2000). In addition, the opportunity to analyze a peer's work provides students with exposure to a variety of exemplars (Mulder, Pearce, & Baik, 2014). Students often have difficulty determining what qualities make a final written paper very good or merely fair, particularly when they view only model examples. Thus, when students are able to compare different examples of work, it often becomes easier for them to identify strengths and weaknesses in their own work and that of their peers (Reinholz, 2015). In short, active engagement in a peer review activity, both in receiving feedback and in reviewing other students' work (Li, Liu, & Steckelberg, 2010), is thought to facilitate individualized learning through collaborative knowledge generation.

In addition to providing students with the opportunity to receive and give feedback on written products, the peer review activity also closely aligns with several goals outlined in the American Psychological Association (APA) Guidelines for the Undergraduate Major (APA, 2013), given that the activity facilitates both reflection on writing as a practice and communication with peers about writing. For example, students may "demonstrate effective writing for different purposes" (APA, 2013; Goal 4.1) by drafting the content of the written report and commenting reflectively on peers' work. The activity also encourages students to "interact effectively with others" (APA, 2013; Goal 4.3) by supporting a context that gives students an opportunity to explain ideas, question one another, and, in so doing, develop and refine their concepts of psychological science, as well as derive new and more effective ways of communicating such concepts in their own terms.
As such, peer review activities can achieve a wide range of instructional objectives set forth in the APA guidelines. In the next section, we outline some steps to consider when utilizing a peer review activity and how these considerations may align with the APA Guidelines for the Undergraduate Major (see Table 1 for a summary schematic).

Thinking through the Steps of a Peer Review

Multiple factors appear to contribute to the efficacy of peer review in the context of writing instruction. To highlight factors that should be considered when developing a peer assessment activity, Topping (1998) developed a typology of peer assessment in a post-secondary educational context that emphasizes 17 different sources of variation contributing to the success of a peer assessment activity. In later review articles, Topping (2001, 2005) emphasized 12 organizational aspects of learning that instructors should consider in planning a peer assessment activity. Building on this theoretical framework, Gielen, Dochy, and Onghena (2011) proposed organizing these different variables into five clusters. In the sections below, we focus on 15 of the variables most pertinent to in-class peer review of writing, organized around three key categories (i.e., making decisions about peer review, linking peer review to other elements of learning, and managing the assessment process). Figure 1 shows some basic instructional strategies for conducting a peer review. Additional resources for conducting an in-class peer review are available in the online Open Science Framework directory that serves as a companion to this chapter.
• Decide when to schedule the peer review, ideally well enough in advance so that students have time to revise their work.
• Provide assessment tools (e.g., grading rubrics) in advance so that students have a framework for understanding the purpose of the assignment.
• Set parameters for making the review constructive (e.g., the "Praise-Question-Polish" technique; Lyons, 1981); a minimal sketch of such a structured comment follows this list.
• Think about the structure of the peer review. Will the peer review take place in the classroom or online? Will students be working in pairs or in small groups? How long will students have to work with each other as reviewer and reviewee, and when will you know to nudge students to switch roles? Consider online peer review as a viable option if classroom time is limited. Several tools can make this possible, including a content management system's wiki, or Word or Google Docs with the track changes/suggestions feature turned on.
• Provide example checklists and clear instructions to guide students in providing feedback.
• Help students prioritize certain forms of feedback over others (e.g., content over mechanics, or vice versa).
• Monitor the activity and make sure to spend approximately the same amount of time with each student or group of students.
• Encourage students to incorporate feedback, both by scheduling due dates for drafts and by giving some form of incentive for revising their work. Are attendance and active participation in the peer review part of their grade, or are students expected to make a substantive revision as part of their grade?
• Discuss with students different approaches for incorporating suggestions and feedback into the final work, or for being prepared to provide a justification for declining feedback.
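The "Praise-Question-Polish" pattern mentioned in the list above can be made concrete as a simple data structure. The sketch below is a hypothetical illustration only; the class and field names are assumptions for this chapter and do not come from any cited study. It shows one way a reviewer's structured comments could be collected during an in-class or online review.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PQPComment:
    """One structured peer-review comment using the Praise-Question-Polish pattern."""
    praise: str    # something the writer did well
    question: str  # a genuine question the reviewer had while reading
    polish: str    # one concrete, actionable suggestion for revision

@dataclass
class PeerReview:
    reviewer: str
    author: str
    comments: List[PQPComment] = field(default_factory=list)

    def add_comment(self, praise: str, question: str, polish: str) -> None:
        self.comments.append(PQPComment(praise, question, polish))

# Example usage (invented names and comments):
review = PeerReview(reviewer="Student A", author="Student B")
review.add_comment(
    praise="Your thesis statement clearly signals the argument.",
    question="How does the second example support the thesis?",
    polish="Add a transition sentence linking paragraphs two and three.",
)
```

Keeping each comment in this three-part shape is one way to nudge reviewers toward constructive, revision-oriented feedback rather than unstructured criticism.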
Peer assessment has been the subject of considerable research interest over the last three decades, with numerous educational researchers advocating for the integration of peer assessment into schools and instructional practice. Research synthesis in this area has, however, largely relied on narrative reviews to evaluate the efficacy of peer assessment. Here, we present a meta-analysis (54 studies, k = 141) of experimental and quasi-experimental studies that evaluated the effect of peer assessment on academic performance in primary, secondary, or tertiary students across subjects and domains. An overall small to medium effect of peer assessment on academic performance was found (g = 0.31, p < .001). The results suggest that peer assessment improves academic performance compared with no assessment (g = 0.31, p = .004) and teacher assessment (g = 0.28, p = .007), but was not significantly different in its effect from self-assessment (g = 0.23, p = .209). Additionally, meta-regressions examined the moderating effects of several feedback and educational characteristics (e.g., online vs offline, frequency, education level). Results suggested that the effectiveness of peer assessment was remarkably robust across a wide range of contexts. These findings provide support for peer assessment as a formative practice and suggest several implications for the implementation of peer assessment into the classroom.
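For readers unfamiliar with the effect-size metric reported in the abstract above, the sketch below computes Hedges' g, the standardised mean difference with a small-sample correction, for two independent groups. The formula is the standard textbook one; the code and the sample values are illustrative assumptions and are not taken from the meta-analysis itself.

```python
import math

def hedges_g(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardised mean difference with the small-sample (Hedges) correction."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    d = (mean_treat - mean_ctrl) / sd_pooled            # Cohen's d
    correction = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)   # small-sample correction J
    return d * correction

# Invented example: a peer-assessment group vs. a no-assessment group
print(round(hedges_g(74.0, 70.5, 11.0, 12.0, 40, 40), 2))
```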

Feedback is often regarded as a central component of educational practice and crucial to students’ learning and development (Fyfe & Rittle-Johnson, 2016; Hattie and Timperley 2007; Hays, Kornell, & Bjork, 2010; Paulus, 1999). Peer assessment has been identified as one method for delivering feedback efficiently and effectively to learners (Topping 1998; van Zundert et al. 2010). The use of students to generate feedback about the performance of their peers is referred to in the literature using various terms, including peer assessment, peer feedback, peer evaluation, and peer grading. In this article, we adopt the term peer assessment, as it more generally refers to the method of peers assessing or being assessed by each other, whereas the term feedback is used when we refer to the actual content or quality of the information exchanged between peers. This feedback can be delivered in a variety of forms including written comments, grading, or verbal feedback (Topping 1998). Importantly, by performing both the role of assessor and being assessed themselves, students’ learning can potentially benefit more than if they are just assessed (Reinholz 2016).


Peer assessments tend to be highly correlated with teacher assessments of the same students (Falchikov and Goldfinch 2000; Li et al. 2016; Sanchez et al. 2017). However, in addition to establishing comparability between teacher and peer assessment scores, it is important to determine whether peer assessment also has a positive effect on future academic performance. Several narrative reviews have argued for the positive formative effects of peer assessment (e.g., Black and Wiliam 1998a; Topping 1998; van Zundert et al. 2010) and have additionally identified a number of potentially important moderators for the effect of peer assessment. This meta-analysis will build upon these reviews and provide quantitative evaluations for some of the instructional features identified in these narrative reviews by utilising them as moderators within our analysis.


Evaluating the Evidence for Peer Assessment


Empirical Studies
Despite the optimism surrounding peer assessment as a formative practice, there are relatively few control group studies that evaluate the effect of peer assessment on academic performance (Flórez and Sammons 2013; Strijbos and Sluijsmans 2010). Most studies on peer assessment have tended to focus on either students’ or teachers’ subjective perceptions of the practice rather than its effect on academic performance (e.g., Brown et al. 2009; Young and Jackman 2014). Moreover, interventions involving peer assessment often confound the effect of peer assessment with other assessment practices that are theoretically related under the umbrella of formative assessment (Black and Wiliam 2009). For instance, Wiliam et al. (2004) reported a mean effect size of .32 in favor of a formative assessment intervention but they were unable to determine the unique contribution of peer assessment to students’ achievement, as it was one of more than 15 assessment practices included in the intervention.

However, as shown in Fig. 1, there has been a sharp increase in the number of studies related to peer assessment, with over 75% of relevant studies published in the last decade. Although it is still far from being the dominant outcome measure in research on formative practices, many of these recent studies have examined the effect of peer assessment on objective measures of academic performance (e.g., Gielen et al. 2010a; Liu et al. 2016; Wang et al. 2014a). The number of studies of peer assessment using control group designs also appears to be increasing in frequency (e.g., van Ginkel et al. 2017; Wang et al. 2017). These studies have typically compared the formative effect of peer assessment with either teacher assessment (e.g., Chaney and Ingraham 2009; Sippel and Jackson 2015; van Ginkel et al. 2017) or no assessment conditions (e.g., Kamp et al. 2014; L. Li and Steckelberg 2004; Schonrock-Adema et al. 2007). Given the increase in peer assessment research, and in particular experimental research, it seems pertinent to synthesise this new body of research, as it provides a basis for critically evaluating the overall effectiveness of peer assessment and its moderators.


Fig. 1. Number of records returned by year. Data were collated by searching Web of Science (www.webofknowledge.com) for the keywords 'peer assessment', 'peer grading', 'peer evaluation', or 'peer feedback' and categorising by year.


Previous Reviews
Efforts to synthesise peer assessment research have largely been limited to narrative reviews, which have made very strong claims regarding the efficacy of peer assessment. For example, in a review of peer assessment with tertiary students, Topping (1998) argued that the effects of peer assessment are, ‘as good as or better than the effects of teacher assessment’ (p. 249). Similarly, in a review on peer and self-assessment with tertiary students, Dochy et al. (1999) concluded that peer assessment can have a positive effect on learning but may be hampered by social factors such as friendships, collusion, and perceived fairness. Reviews into peer assessment have also tended to focus on determining the accuracy of peer assessments, which is typically established by the correlation between peer and teacher assessments for the same performances. High correlations have been observed between peer and teacher assessments in three meta-analyses to date (r = .69, .63, and .68 respectively; Falchikov and Goldfinch 2000; H. Li et al. 2016; Sanchez et al. 2017). Given that peer assessment is often advocated as a formative practice (e.g., Black and Wiliam 1998a; Topping 1998), it is important to expand on these correlational meta-analyses to examine the formative effect that peer assessment has on academic performance.
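As a minimal illustration of the accuracy check described above (correlating peer and teacher marks for the same performances), the sketch below computes a Pearson correlation on invented scores. It assumes Python 3.10+ for statistics.correlation and is not data from any of the cited meta-analyses.

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented marks for the same ten essays
peer_scores    = [72, 65, 80, 58, 90, 77, 61, 85, 70, 68]
teacher_scores = [70, 60, 84, 55, 88, 75, 65, 90, 72, 64]

r = correlation(peer_scores, teacher_scores)
print(f"Peer-teacher correlation: r = {r:.2f}")
```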

In addition to examining the correlation between peer and teacher grading, Sanchez et al. (2017) additionally performed a meta-analysis on the formative effect of peer grading (i.e., a numerical or letter grade was provided to a student by their peer) in intervention studies. They found that there was a significant positive effect of peer grading on academic performance for primary and secondary (grades 3 to 12) students (g = .29). However, it is unclear whether their findings would generalise to other forms of peer feedback (e.g., written or verbal feedback) and to tertiary students, both of which we will evaluate in the current meta-analysis.


Moderators of the Effectiveness of Peer Assessment


Theoretical frameworks of peer assessment propose that it is beneficial in at least two respects. Firstly, peer assessment allows students to critically engage with the assessed material, to compare and contrast performance with their peers, and to identify gaps or errors in their own knowledge (Topping 1998). In addition, peer assessment may improve the communication of feedback, as peers may use similar and more accessible language, as well as reduce negative feelings of being evaluated by an authority figure (Liu et al. 2016). However, the efficacy of peer assessment, like traditional feedback, is likely to be contingent on a range of factors including characteristics of the learning environment, the student, and the assessment itself (Kluger and DeNisi 1996; Ossenberg et al. 2018). Some of the characteristics that have been proposed to moderate the efficacy of feedback include anonymity (e.g., Rotsaert et al. 2018; Yu and Liu 2009), scaffolding (e.g., Panadero and Jonsson 2013), quality and timing of the feedback (Diab 2011), and elaboration (e.g., Gielen et al. 2010b). Drawing on the previously mentioned narrative reviews and empirical evidence, we now briefly outline the evidence for each of the included theoretical moderators.

Role
It is somewhat surprising that most studies that examine the effect of peer assessment tend to only assess the impact on the assessee and not the assessor (van Popta et al. 2017). Assessing may confer several distinct advantages, such as drawing comparisons with peers' work and increased familiarity with evaluative criteria. Several studies have compared the effect of assessing with being assessed. Lundstrom and Baker (2009) found that assessing a peer's written work was more beneficial to students' own writing than being assessed by a peer. Meanwhile, Graner (1987) found that students who were receiving feedback from a peer and acted as an assessor did not perform better than students who acted as an assessor but did not receive peer feedback. Reviewing peers' work is also likely to help students become better reviewers of their own work and to revise and improve their own work (Rollinson 2005). While, in practice, students will most often act as both assessor and assessee during peer assessment, it is useful to gain a greater insight into the relative impact of performing each of these roles, both for practical reasons and to help determine the mechanisms by which peer assessment improves academic performance.


Peer Assessment Type


The characteristics of peer assessment vary greatly both in practice and within the research literature. Because meta-analysis is unable to capture all of the nuanced dimensions that determine the type, intensity, and quality of peer assessment, we focus on distinguishing between what we regard as the most prevalent types of peer assessment in the literature: grading, peer dialogs, and written assessment. Each of these peer assessment types is widely used in the classroom and often in various combinations (e.g., written qualitative feedback in combination with a numerical grade). While these assessment types differ substantially in terms of their cognitive complexity and comprehensiveness, each has shown at least some evidence of impacting academic performance (e.g., Sanchez et al. 2017; Smith et al. 2009; Topping 2009).

Freeform/Scaffolding


Peer assessment is often implemented in conjunction with some form of scaffolding, for example, rubrics and scoring scripts. Scaffolding has been shown to improve both the quality of peer assessment and the amount of feedback assessors provide (Peters, Körndle, & Narciss, 2018). Peer assessment has also been shown to be more accurate when rubrics are utilised; for example, Panadero, Romero, and Strijbos (2013) found that students were less likely to overscore their peers when a rubric was used.


Online
Increasingly, peer assessment has been performed online, due in part to the growth in online learning activities as well as the ease with which peer assessment can be implemented online (van Popta et al. 2017). Conducting peer assessment online can significantly reduce the logistical burden of implementing peer assessment (e.g., Tannacito and Tuzi 2002). Several studies have shown that peer assessment can effectively be carried out online (e.g., Hsu 2016; Li and Gao 2016). Van Popta et al. (2017) argue that the cognitive processes involved in peer assessment, such as evaluating, explaining, and suggesting, play out similarly in online and offline environments. However, the social processes involved in peer assessment (e.g., collaborating, discussing) are likely to differ substantively between online and offline peer assessment, and it is unclear whether this might limit the benefits of peer assessment through one or the other medium. To the authors' knowledge, no prior studies have compared the effects of online and offline peer assessment on academic performance.

The goal of this study is to provide meaningful instruction to students on how to effectively review and provide feedback on their peers' writing in the classroom. Writing is something students and professionals do every day. If individuals are unable to write effectively, personal and professional relationships will be affected negatively. The low percentage of students demonstrating proficiency in writing needs to be addressed. Writing effectively can improve communication skills in general. If the issue of writing proficiency is not solved, particularly with technology and online correspondence playing such a large role in today's world, these students will struggle to succeed professionally. Writing is a skill; it also helps predict academic success and plays a substantial role in civic life and the global economy (Graham & Perin, 2007). In a world where the economy is already struggling and many are without jobs, it is essential that students improve their writing skills. Peer evaluation in writing will give students the opportunity to learn from one another through writing, and students will have an opportunity to look at the rubric multiple times to ensure their understanding (Crossman & Kite, 2012). Peer evaluation is also versatile: it can involve the entire class reviewing one document, small groups working together on a document, or student-to-student review of each other's work. Also, when students write for the teacher, this means they are writing only for a grade (Holley, 1990). When students use peer review, they learn to write for multiple audiences. Students will also gain a sense of camaraderie in that they will enjoy reading and offering advice on peers' writing. When students are able to effectively organize their thoughts, put thoughts in writing, and then defend their ideas with specific examples, they will have developed the skills of analyzing a source and supporting their ideas with specific evidence. Evidence reported above indicates low rates of writing proficiency for fourth, eighth, and twelfth graders (Persky, Daane, & Jin, 2003). The numbers are strikingly low, especially since many of these students are graduating from high school and continuing on to either a two- or four-year school. A potential solution to the writing proficiency issue is guided peer evaluation.
Methods

The researcher conducted a literature search pertaining to peer evaluation in the classroom, with the findings demonstrating an overall positive impact. Peer evaluation improves students' writing; the information students gain from peer evaluation increases compared with a typical lecture-and-test class; students' attitudes about peer evaluation are overall positive; and students value being the evaluator in the peer evaluation process. All of the research gathered for this study was peer reviewed, with the majority of the sources dating from within the past ten years. The research design used for the proposed study was a qualitative case study. Methods of data collection were observations in the classroom, formal essays, and conferences with the students. A college-readiness and peer evaluation process survey was also used to gather data. Different methods of data collection were used for the research questions defined in this study. One research question asks how peer evaluation influenced students' writing skills in the classroom. To determine these factors, the researcher had students complete an essay without any peer evaluation. Before the next essay was due, the researcher provided students with an instructional packet to train students in peer evaluation and guided students through the process. Each day, the researcher would provide students with an example from the packet so students could become familiar with it. All essay grades pre- and post-instruction were recorded, analyzed, graphed, and coded. Another research question asks how peer evaluation influences students' understanding of the information learned. To gather information about this question, the researcher conducted individual conferences with the students in the classroom, where students would fix three conventional or writing-process errors within one of their essays. The researcher began with a list of questions to ask the students and then followed up with them based on their responses as the conference took place. Conferences were recorded, transcribed, and coded. Other research questions ask what students' perceptions are about preparation for writing effectively in college-level courses following instruction using peer evaluation, and also what parts of the peer evaluation process students value most in the classroom. The researcher requested that students participate in a survey about college readiness, writing, and peer evaluation to gather data. The data for this study were collected from a large suburban high school in the upper Midwest. The researcher used data collected from response essays from twenty-two twelfth-grade students to analyze the strengths and areas of improvement needed for each individual student. Throughout this process, interventions also included various grammar lessons involving sentence structure, verb usage, and active and passive voice. Overall, the researcher collected a variety of data, including the following: daily, students corrected sentences from past student samples; once this was complete, students went through the peer evaluation process; after this, they completed a conference with the teacher where they selected three sentences from their own writing, previously identified by the teacher, to verbally correct and rewrite. Last, the students took a survey directly related to the peer evaluation process and focused on whether or not they felt better prepared for college-level writing.

The researcher analyzed the data by creating pre- and post-instruction charts following the students' progress through the process of writing.
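The analysis described above centres on comparing essay grades recorded before and after the peer-evaluation instruction. The sketch below is a hypothetical re-implementation of that kind of pre/post comparison with invented scores; it is not the researcher's actual analysis or data.

```python
from statistics import mean, stdev

# Invented essay scores (0-100) for the same students before and after
# the peer-evaluation instruction
pre  = [68, 72, 55, 80, 63, 70, 58, 77, 66, 74]
post = [73, 78, 61, 84, 70, 72, 65, 82, 71, 79]

# Per-student gain, then simple summary statistics for a pre/post chart
gains = [b - a for a, b in zip(pre, post)]
print(f"Mean pre-instruction score : {mean(pre):.1f}")
print(f"Mean post-instruction score: {mean(post):.1f}")
print(f"Mean gain                  : {mean(gains):.1f} "
      f"(SD = {stdev(gains):.1f}, n = {len(gains)})")
```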

3.1. Methodology

In order to determine the effect of peer evaluation on student achievement in the classroom and whether or not students feel prepared to write at the college level, the researcher deemed it necessary to study twelfth-grade students in a college preparatory and composition course. This chapter outlines this research, including the sample, research context, and research design. Chapter One listed four research questions that shaped the purpose of this study:
1. How does peer evaluation influence students' writing skills in the classroom?
2. How does peer evaluation influence students' understanding of the information learned?
3. What are students' perceptions about preparation to write effectively for college-level courses following instruction using peer evaluation?
4. What types of peer evaluation do students value the most in the classroom?
The literature review shed light on some of these questions. The research outlined below was designed to understand them in more detail.

Many studies have been conducted on the effects of peer feedback given by one student writer to another, both in L1 and L2. Peer feedback, or peer review, has been found to be beneficial when used correctly, assisting writing teachers in providing feedback on each and every student's piece of writing. Rollinson (2005) believes in a process approach to writing and that writing should involve multiple drafts; he believes that peers can provide useful responses at different stages of drafting. Hansen and Liu (2005) feel that peer feedback in writing classrooms is beneficial because it allows writing teachers to help their students not only receive more feedback on their work but also get more practice with a range of skills important in the development of language and writing ability, such as meaningful interaction with peers, greater exposure to ideas, and new perspectives on the writing process. Although getting feedback from peers is time consuming, it can provide many benefits to both receivers and givers. Rollinson (2005) stated that not only would the students who receive feedback benefit, but the students who provide feedback would also learn to provide critical responses and consequently become more self-reliant, able to self-edit their own work. In fact, Lundstrom and Baker (2009) discovered in their study that reviewers showed more significant improvement in their own writing compared with receivers who depended solely on their peers' feedback to improve their writing.

2.2 Guided Peer Feedback

According to Rollinson (2005), in order for peer feedback to be effective, the students involved have to be given pre-training in the techniques of providing useful reviews, because leaving students on their own to comment on others' work without proper guidance will not be beneficial. For instance, comments such as "I don't like your ideas" and "I disagree with your points" are not constructive and thus would not be beneficial. Instead, students should be trained to look at the important aspects of the essay, such as the thesis statement and the topic sentences. Stanley (1992) and Zhu (1995), as cited in Min (2006), conducted studies on the effects of feedback training in their freshman composition classes. Stanley's study was done in the ESL classroom, while Zhu's study focused on L1 peer reviewers. Both studies reported that training had significant effects on the quantity and quality of peer feedback.
Hui-Tzu Min (2006) conducted research to examine the impact of trained peer reviewers' feedback on EFL college students' revision types and quality. It was discovered that after training, students incorporated a significantly higher number of reviewers' comments into their revisions compared with pre-training. In addition, peer-triggered revisions comprised 90% of the total revisions, and the number of revisions with enhanced quality was also significantly higher than before training. Therefore, it was concluded that trained peer reviewers' feedback can directly and positively affect the types and quality of EFL students' revisions.

2.3 Providing Feedback via an Online Medium

The use of electronic media as learning tools has become popular since the introduction of the Internet. Their use is not restricted by time or distance, and thus they provide flexibility and convenience to both students and teachers. Bee-Lay and Yee Ping (1991) conducted a study on two groups of students from Singapore and Canada who used electronic mail as a medium of communication; they discussed two books from the two countries, and both groups benefited from the online discussion. Fizler (1995) carried out a study on the effectiveness of using e-mail to teach English and found that students' writing skills as well as their motivation improved. The use of electronic mail as a learning tool has been shown to be effective in improving students' writing skills, whether individually or in groups, as demonstrated by Karnedi (2004), who conducted a study on the effectiveness of tutoring via electronic mail to enhance writing skills. In that study, feedback was given by the tutor through electronic mail; Karnedi concluded that the advantages of electronic tutorials using e-mail outweigh the disadvantages and proposed their use to enhance writing skills. Ertmer, Richardson, Belland, et al. (2007) conducted a study on students' perceptions of the value of giving and receiving peer feedback regarding the quality of discussion postings in an online course. It was discovered that although the students placed higher value on the instructor's feedback, the interview data showed that the participants valued the peer feedback process and benefited from having to give and receive peer feedback. They also found that feedback given by their peers helped them improve the quality of the feedback they in turn provided to others. A study conducted by Guardo and Shi (2007) on ESL students' experience of online peer feedback discovered that e-feedback eliminates logistical problems while retaining some of the best features of traditional written feedback, including a text-only environment that pushes students to write balanced comments with an awareness of the audience's needs, and an anonymity that allows peers to provide critical feedback on each other's writing.

Procedure

At the beginning of the study period, a questionnaire was distributed to the participants to collect their demographics and to find out whether they had Facebook accounts. The participants who did not have Facebook accounts were instructed to sign up, and they were given time to familiarize themselves with Facebook features, especially Facebook Notes. The main researcher in this study had close contact with the participants, as he was the class teacher and the participants were his students. In this study, the teacher played the role of participant observer.
There were three main stages involved in this study: instruction in an academic writing process, feedback training, and a feedback exercise. Since the focus of this study is the planning stage of academic writing, the discussion of instruction focuses only on the process of developing an outline of an academic essay. The participants went through normal classroom instruction on the process of writing an outline, which included formulating thesis statements and developing main ideas and supporting points at various levels of support in parallel structure. The mechanics of outlining, such as the numbering system, were also emphasized. At the end of the first stage of the study, which took three two-hour lessons, the participants were instructed to write outlines in pairs based on topics of their choice. The ten outlines that they prepared were submitted to the teacher via e-mail. The outlines were graded, and the marks were used as pre-test marks for this study. Before carrying out the online feedback exercise via Facebook Notes, a short feedback training session was conducted to prepare the students. This training, the second stage of the study, involved raising the participants' awareness of what constitutes good or weak feedback. For this purpose, three models of both types of feedback were discussed in class. Next, the participants were given an in-class exercise in which they practised providing written feedback on three excerpts: a thesis statement, a topic sentence, and a paragraph. Their feedback was later discussed to check and validate their understanding. The third and last stage of the study was the feedback exercise conducted online via Facebook Notes. Six outlines were randomly chosen by the teacher, and the participants (twelve students) who wrote these outlines were considered the experimental group; their essays would be posted and reviewed by their peers. The remaining eight students were treated as the control group, and although they participated in providing feedback to their peers, their essays were not reviewed. Three tasks were designed to address the three main parts of an outline. The first task (Task 1) asked the peer reviewers to look at the development of the thesis statement. The second task (Task 2) asked them to examine the development of the topic sentences from the thesis statement and the coherence of the essay. Finally, the third task (Task 3) asked the reviewers to look at a particular paragraph in the outline and to comment on its coherence, including the relationship between the topic sentence and the supporting details in that paragraph. The online feedback exercise started in the third week of the study. Six outlines were posted on the teacher's Facebook Notes, with two outlines addressing each task mentioned above. Each outline posted was accompanied by instructions carefully designed to ensure that participants were able to respond to the tasks given. Since the outlines were posted on the teacher's Facebook Notes, the students had to add the teacher as a friend in order to have access to his Notes. Because Facebook is an openly accessible social networking website, everyone who is the teacher's friend can access his Notes.
Therefore, the teacher had to group the students in this study into a specific group, and every task posted was set to a privacy setting in which only the group members could view and respond to the instructions posted on the Notes. The students were assigned to a group called EM3F, and to avoid overlapping concepts in the discussion later, the students' feedback is termed "comments," in line with the term used in Facebook. Once everything was set, the students were instructed to proceed with their online feedback exercise. The instructions clearly stated that they were to respond to the three tasks involved, where two outlines were used for each task, and that they had to give their feedback by posting comments on every task. Finally, the LIKE function was used by the teacher to highlight important or useful comments for the rest of the students to take note of. The findings above show the potential benefits of using Facebook Notes as a platform for guided peer feedback at the planning stage of the academic writing process. It can be concluded from this exploratory study that students, with guidance from the writing teacher, can provide constructive feedback to their peers. However, caution has to be exercised by the teacher in designing the tasks to guide the peer reviewers; instructions should be clear and not too wordy. The problem of providing timely and effective feedback normally faced by writing teachers can also be addressed, as they can access the Notes anytime and anywhere. Although the number of useful comments was lower than the number of less useful ones, it was evident that the feedback exercise successfully facilitated the learning process. The results of the study show that the reviewers whose writing was not posted to be reviewed also benefited from the exercise, as they learned to be more effective in self-editing their own work. The study also demonstrated that limited class time could be extended by letting students conduct their learning in their own time outside class. The data collected from this study were less than expected due to the participants' low proficiency level; perhaps richer data could be gathered if participants with a higher proficiency level were used. In conclusion, ESL teachers now have another teaching tool at their disposal to help them make their writing classes more interesting and effective. Thus, teachers all over the world are encouraged to make use of this online utility as a tool not only for giving feedback but for language learning as a whole.
feedback and critical peer feedback may greatly facilitate students in improving their writing skills. In addition, in their quasi-experimental study comparing three methods for teaching student writing, Plutsky and Wilson (2004) found that peer feedback helped students become proficient writers. More importantly, most students view peer feedback as being as effective as the instructor's. Jacobs et al. (1998) found that nearly the same percentage (93%) of their EFL students in Hong Kong and Taiwan said they would like to receive peer feedback as one kind of feedback. According to Wakabayashi (2013), through peer feedback learners engage in critical evaluation of peer texts for the purpose of exchanging help for revision. This is because learners can learn more about writing and revision by reading others' drafts critically, their awareness of what makes writing successful and effective can be enhanced, and, lastly, learners eventually become more autonomous writers (Maarof et al., 2011).
Advantages of peer feedback
Peer feedback has been advocated in several studies for a number of benefits. For example, Hyland (2000) mentions that peer feedback encourages students to participate in classroom activity and makes them less passively teacher-dependent. Yarrow and Topping (2001: 262) claim that peer feedback plays a pivotal role in "increased engagement and time spent on-task, immediacy and individualization of help, goal specification, explaining, prevention of information processing overload, promoting, modeling and reinforcement". Moreover, using peer feedback can lead to less writing apprehension and more confidence, as well as establish a social context for writing. Yang et al. (2006) also add that peer feedback is beneficial in developing critical thinking, learner autonomy, and social interaction among students. More importantly, the practice of peer feedback allows students to receive more individual comments, as well as giving reviewers the opportunity to practice and develop different language skills (Lundstrom and Baker, 2009).
Disadvantages of peer feedback
Despite its perceived benefits, some researchers found that peer feedback was viewed with skepticism and produced few benefits. A number of studies challenged the strong positive claims about peer review and cautioned that some peers are likely to comment on surface errors and give advice that does not help revision. In research on the impact of peer and teacher feedback on the writing of secondary school EFL students in Hong Kong, Tsui and Ng (2000) discovered that all students preferred teacher feedback to peer feedback. The main reason is that they assume the teacher is the one who is qualified to provide them with useful comments, so the teacher is seen as the only source of authority for giving suitable comments. Saito and Fujita (2004) report that a number of studies indicate there are several biases associated with peer feedback, including friendship, reference, purpose (development vs. grading), feedback (effects of negative feedback on future performance), and collusive (lack of differentiation) biases. Another issue of concern is that most peer responses focused on the product rather than the processes of writing, and many students in L2 contexts focused on sentence-level errors (local errors) rather than on content and ideas (global errors) (Storch, 2004).
A rubric is a type of scoring guide that articulates the specific components of an assignment and the expectations for each, providing a consistent basis for assessment. Rubrics can be used for a variety of assignments: research papers, group projects, portfolios, and presentations.
Why Use Rubrics? 
Rubrics help instructors: 

  • Assess assignments consistently from student to student.

  • Save time in grading, both short-term and long-term. 

  • Give timely, effective feedback and promote student learning in a sustainable way. 

  • Clarify expectations and components of an assignment for both students and course teaching assistants (TAs). 

  • Refine teaching methods by evaluating rubric results. 

Rubrics help students: 

  • Understand expectations and components of an assignment. 

  • Become more aware of their learning process and progress. 

  • Improve work through timely and detailed feedback. 

Considerations for Using Rubrics 
When developing rubrics consider the following:

  • Although it takes time to build a rubric, time will be saved in the long run as grading and providing feedback on student work will become more streamlined.

  • A rubric can be a fillable PDF that can easily be emailed to students.

  • Rubrics are most often used to grade written assignments, but they have many other uses: 

    • They can be used for oral presentations. 

    • They are a great tool to evaluate teamwork and individual contribution to group tasks. 

    • Rubrics facilitate peer-review by setting evaluation standards. Have students use the rubric to provide peer assessment on various drafts. 

    • Students can use them for self-assessment to improve personal performance and learning. Encourage students to use the rubrics to assess their own work. 

    • Motivate students to improve their work by using rubric feedback to resubmit their work incorporating the feedback. 

Getting Started with Rubrics 

  • Start small by creating one rubric for one assignment in a semester.

  • Ask colleagues if they have developed rubrics for similar assignments or adapt rubrics that are available online. For example, the AACU has rubrics for topics such as written and oral communication, critical thinking, and creative thinking. RubiStar helps you to develop your rubric based on templates.

  • Examine an assignment for your course. Outline the elements or critical attributes to be evaluated (these attributes must be objectively measurable). 

  • Create an evaluative range for performance quality under each element; for instance, “excellent,” “good,” “unsatisfactory.” 

  • Add descriptors that qualify each level of performance: 

    • Avoid using subjective or vague criteria such as “interesting” or “creative.” Instead, outline objective indicators that would fall under these categories. 

    • The criteria must clearly differentiate one performance level from another. 

    • Assign a numerical scale to each level (a minimal sketch of a simple rubric as a scoring structure follows this list).

  • Give a draft of the rubric to your colleagues and/or TAs for feedback. 

  • Train students to use your rubric and solicit feedback. This will help you judge whether the rubric is clear to them and will identify any weaknesses. 

  • Rework the rubric based on the feedback. 
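Putting the steps above together, a simple analytic rubric can be represented as a set of criteria, labelled performance levels, and a numerical scale. The sketch below is illustrative only; the criteria, level labels, and point values are invented and do not come from any cited source.

```python
# A simple analytic rubric: each criterion has labelled levels with point values.
RUBRIC = {
    "Thesis statement": {"excellent": 4, "good": 3, "unsatisfactory": 1},
    "Organization":     {"excellent": 4, "good": 3, "unsatisfactory": 1},
    "Use of evidence":  {"excellent": 4, "good": 3, "unsatisfactory": 1},
    "Mechanics":        {"excellent": 2, "good": 1, "unsatisfactory": 0},
}

def score_submission(ratings: dict) -> int:
    """Total the points for a submission rated against each rubric criterion."""
    return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

# Example: a peer reviewer's ratings for one draft (invented)
ratings = {
    "Thesis statement": "good",
    "Organization": "excellent",
    "Use of evidence": "good",
    "Mechanics": "unsatisfactory",
}
max_points = sum(max(levels.values()) for levels in RUBRIC.values())
print(f"Total: {score_submission(ratings)} / {max_points}")
```

Keeping the criteria and point values explicit in one place makes it easier to share the same rubric with students, peer reviewers, and TAs, and to tally scores consistently.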

When students write for a limited audience (the teacher), they do not experiment with different writing styles. Students write to fulfill the expectations of the teacher; therefore, their writing is not genuine and is often boring (Pianko & Radzik, 1980). Peer evaluation gives students an opportunity to write for a variety of persons: their peers. When students write for a wider audience, they develop a greater awareness of the complexity of writing and the need to fully and clearly develop their thoughts (Pianko & Radzik, 1980). Peer evaluation reinforces the writer's obligation not just to express himself or herself but, more importantly, to communicate meaningfully to a reader, by providing an opportunity to rehearse before a live student audience (Cooper, 1986). Another benefit of peer evaluation is the confidence developed in detecting one's own errors. Self-editing means figuring out what one really means to say, getting it clear in one's own mind, and getting it into the best words while throwing away the rest (Elbow, 1973). Editing another's paper helps in the recognition of common errors, which causes the student to be more critical of his or her own paper (Pasternack, 1981). Studies show that students actually enjoy the opportunity to critique peer papers; they appreciate the opportunity to work together and do not abuse it (Guinagh & Birkett, 1982). Students value the response from their peers and consider their judgements to be impartial and accurate (Pianko & Radzik, 1980).

Implementation

Setting up a peer evaluation program in the classroom would not be without difficulty; however, if done properly, problems would be minimal. Developing peer evaluation skills in students is a long-term process (Collins, 1984). The process needs to begin before the actual implementation of the program. By writing specific comments about the content of student essays, teachers begin to model the evaluation process for the students. One way of getting the program started is to bring in sample papers and tape recordings of actual peer editing sessions. The entire class could read, listen to, and discuss the process of evaluating writing. With this kind of practice, teachers can deal with questions or fears about peer feedback and point out the suggestions that are helpful and those that are not (George, 1984). The first step in getting students involved would be to plan a group evaluation of an anonymous example essay. Students would be encouraged to make suggestions and comments for improvement. Teachers can elicit positive responses to this activity by praising specific suggestions and illustrating how suggestions improve the essay. Further practice would be given when the class is broken into small groups: each group would be assigned a sample essay to evaluate and revise, and the teacher would then be free to help guide the evaluation process as he or she met with each group. Students can be taught to grade papers accurately and reliably by having them focus on certain aspects of the paper to evaluate each time they read it, including grammar, wording, organization, and development of ideas (Guinagh & Birkett, 1982). To prevent students from writing just a pleasant comment or two, or from being too harsh in their criticisms, students could be graded periodically on the quality of their evaluations (Pasternack, 1981). The student could also read aloud his or her own paper for the peer editor.
This would involve the student in self-editing and provide the peer editor with additional information from which to make comments, since the writer would be present to explain (George, 1984). To help alleviate the fear of writing criticism, the instructor should illustrate the steps of his or her own writing and rewriting process. This would allow students to see the thinking process involved in writing on a concrete, personal level. The instructor might ask the students to comment on his or her personal evaluation of his or her own writing, or on the instructor's evaluation of an anonymous work. The instructor should praise responses that show encouragement and respect for the writer (Collins, 1984). Once peer evaluation is incorporated into the writing program, the teacher may want to vary the individual groups or student pairs to determine what works best. As part of the class requirement, students should be graded on the quality of their evaluative comments. In order to provide measurable guidelines, the instructor should develop a student evaluation sheet for students to use as a checklist when evaluating another's writing (see Appendix A). The successful completion of these steps for using peer evaluation in the classroom can determine whether or not this system can, in fact, improve student writing. Through peer evaluation, students are urged to form a personal, meaningful understanding of writing. When this is achieved, students can better improve their own writing (Collins, 1984).

Results

In a peer editing study conducted by Weeks and White (1982), it was found that students progressed in the area of mechanics and in the overall fluency of writing. The peer editing group was more motivated and enthusiastic about writing because of the opportunity to peer edit, and the students voluntarily increased the length of their compositions weekly. As indicated, implementing a peer evaluation program could provide benefits that are well worth the efforts it would require. Evaluating the writing of peers helps students develop analytical and critical thinking abilities (Broon, 1984). Trained editors not only grade competently and reliably, but also write better as a result of their practice (Thompson, 1981). Peer editors develop an enthusiasm for and confidence in writing. Most importantly, they will begin to take their writing and the writing of others seriously (Weeks & White, 1982). Peter Elbow illustrates these points in his book Writing Without Teachers: "These readers give you better evidence of what is unclear in your writing. They're not just telling you the places where they think your writing is awkward because it doesn't conform to their idea of what good writing is. They are people telling you where you actually confused them." (p. 47) Students can become proficient in the peer evaluation process when careful planning and supervision are provided. The following chapters suggest one method used to incorporate the peer editing process in the English classroom.

Chapter III: Design of the Study

The ultimate goal of peer evaluation is to improve student writing. During the program, students should develop their own writing and gain respect for the individual process of writing. To achieve these positive outcomes, teachers must give special consideration to the planning involved in starting the program.

Procedures

The participants in this study were fifty-five eighth graders from a suburban junior high school in Jacksonville, Florida.
Thirty-five students represented the experimental group, and twenty students were in the control group. The groups were heterogeneously grouped according to ability, sex, and race. Both groups were given the same creative writing assignments. Ten papers were selected from each group to be evaluated by three other English instructors at the junior high school. Four creative writing assignments were given to both groups over a three-week period. The ten students from each group whose papers were selected were determined by the teacher to have varying degrees of writing ability. The final copies of the writing assignments were photocopied, and the English instructors were given the unmarked photocopies to evaluate and score. The English instructors were not aware of which papers were from the control group and which were from the experimental group. Errors made in capitalization, punctuation, usage, and spelling were noted. Overall quality of the content of the papers was rated using a holistic assessment (i.e., content, organization, development). The instructors graded the photocopies using this same method and were asked to assign the papers a numerical score ranging from one to five; a score of 1 was considered poor, and a score of 5 excellent. Training sessions on the peer editing process were provided for the experimental group. The teacher displayed various anonymous essays on the overhead projector and explained the steps involved in evaluating writing. As the essays were read aloud, specific comments were made about the content, and the students were asked for further suggestions. Mechanical errors were circled, and the teacher pointed out that many mechanical errors in writing made reading the essay difficult. Prior to this, both the experimental and control groups had received the same training in mechanics and the composition process. Upon completion of the creative writing assignments, the control group's papers were evaluated and commented upon by the teacher, while the writing of the experimental group was peer edited. Students were grouped in pairs to evaluate one another's papers, and a peer editor's guide was given to each partner to assist in the editing process (see Appendix). The results of the study were determined by the scores given to the photocopies that the three English instructors were asked to evaluate. Informal class observations of the success of the program and evaluation of the writing assignments by the teacher were also noted.

Recommendations for Implementation

For the purposes of the study, each paper was eventually evaluated by the instructor. When utilizing the peer evaluation program during the school year, the instructor would not be expected to do this. Rather, once the students become proficient in the peer evaluation process, the instructor could choose just one in five writing assignments to evaluate him- or herself. This would reduce the teacher's paper-grading load greatly while still giving the students plenty of practice in writing and receiving valuable feedback from their peers. To begin this program in the classroom, the instructor must model the peer evaluation process for the students and provide guidance as the students evaluate one another's papers. One full class period should be provided for the students to do the evaluations and discuss their recommendations with one another.
In order to discourage students from forming cliques and to add variety and new insights, peer partners should be reassigned at intervals. Allowing students to view their own progress is also crucial to the success of the program. The students' writing assignments should be kept in a folder; a grade should be assigned for having completed all the assignments, and the folders could then be distributed to the students at the end of each grading period. Seeing their own improvement in writing will convince the students of the importance of editing and revising their work.


