A review of approaches to assessing writing at the end of primary education
5.2.2 Item types and prompts
Assessment purpose and construct also have implications for the appropriateness of different item types. For example, where an assessment focuses only on specific technical skills, multiple-choice items (including matching-type items) or short-answer questions may be sufficient. However, where an assessment focuses on writing in a more complete sense, extended responses may be better at allowing pupils to demonstrate higher-level compositional skills. Nevertheless, this does not mean that an assessment must use just 1 method, as a combination of approaches could be used. Throughout this report, discussions have tended to treat the different types of assessment in mutually exclusive terms. While this is often the case (eg most international jurisdictions have chosen just 1 method for their summative assessments), different methods can be used in combination to provide a more comprehensive account of proficiency in writing. Examples include the JDA (Ontario, Canada) and the CAASPP (California, USA), which incorporate a mixture of multiple-choice and extended-response items. As mentioned in Section 2, taking a combined approach was also promoted by the task group set up to help design the first National Curriculum Assessments in England (TGAT, 1988), which may help explain why England has used a combination of external testing (the early writing tests, and the current grammar, punctuation, and spelling test) and teacher assessment since the introduction of the National Curriculum in 1988.

The intended skill coverage of an assessment will also have implications for the writing and setting of prompts, where these are used to elicit extended pieces of writing. For example, relatively simple, open-ended prompts may be sufficient for the assessment of some technical skills, as the content of the writing produced may be less relevant than its technical features. However, assessment of persuasive or narrative writing might require prompts designed to elicit persuasive arguments or story-telling (Weigle, 2002, Chapter 5). Care may need to be taken to avoid overly complicated prompts, however, where one wants to avoid outcomes being affected by pupils’ reading abilities (Weigle, 2002, Chapter 5). Where pupils are allowed a choice between different prompts or tasks, care also needs to be taken to ensure that the optional routes have comparable demands (Bramley & Crisp, 2019).

5.2.3 Marking/grading/judging

The reliability and validity of an assessment’s outcomes are largely influenced by how the assessment is marked, graded or judged. The approach taken should be derived from the intended purpose of the assessment and the construct being assessed. For example, while any assessment method could allow the lowest-attaining pupils to be separated from the highest-attaining, different methods offer different degrees of information about pupils, schools and the jurisdiction. Reliable and valid judgement is desirable in any assessment, but may be particularly important in high-stakes contexts, as it would be unfair to base high-stakes decisions about individuals (pupils or teachers) or schools on invalid and/or unreliable outcomes. As noted earlier, while different methods tend to be discussed in largely mutually exclusive terms, a combination of methods could be used. For example, Heldsinger and Humphry (2010, p. 14) argued that the best method of establishing validity would be to combine comparative judgements with those based upon mark schemes, to “cross-reference the two sources of information… and to identify anomalous information… in the interests of individual students”.