An Introduction to Applied Linguistics
Norbert Schmitt (ed.), An Introduction to Applied Linguistics. Routledge, 2010.
particularly in a foreign language setting.

Questions

• Describe the rhetorical situation for this writing task. Who is the author? Who are the readers? What genre is being used? What pieces of information does the author need to provide?
• How well established does this writer sound? A novice researcher? An experienced researcher? A well-established authority? What are some of the textual features that gave you a sense of the author’s level of expertise?
• How well does the author relate local issues to the international audience?
• Overall, how effective do you think this proposal is in responding to the rhetorical situation? What aspects of the proposal are particularly effective? What aspects of the text could be improved?
• Suppose the writer of the proposal has asked you to read and comment on the proposal before submitting it. Provide one page of written feedback for the writer.

Assessment

Carol A. Chapelle, Iowa State University
Geoff Brindley, Macquarie University

What is Language Assessment?

In the context of language teaching and learning, ‘assessment’ refers to the act of collecting information and making judgements about a language learner’s knowledge of a language and ability to use it. Although some people consider ‘testing’ and ‘assessment’ to be synonymous (Clapham, 1997), many use the latter term in a broader sense to include both formal measurement tools, which yield quantifiable scores, and other types of qualitative assessment, such as observation, journals and portfolios (Davies, Brown, Elder, Hill, Lumley and McNamara, 1999: 11). What unifies the variety of tests and assessments is that they all involve the process of making inferences about learners’ language capacity on the basis of ‘observed performance’. Despite this common feature, assessment practices vary according to the purpose for which assessment information is required.
Broadfoot (1987) identifies a number of broad purposes for educational assessment:

• ‘Assessment for curriculum’ (providing diagnostic information and motivating learners).
• ‘Assessment for communication’ (informing certification and selection).
• ‘Assessment for accountability’ (publicly demonstrating achievement of outcomes).

In language programmes, one purpose-related distinction that has conventionally been made is between ‘proficiency assessment’, which is concerned with measuring a person’s general ability, typically for selection decisions, and ‘achievement assessment’, which focuses on determining what has been learned as part of a specific programme of instruction, usually for assigning marks.

Assessment purpose is closely tied to the ‘stakes’ attached to testing, and it therefore governs the type of assessment tool that is used and the resources that are invested in its development. In ‘high-stakes’ situations where the results of assessment may have a significant effect on test-takers’ lives (for example, selection for university entry), the instrument should have been developed with great care by suitably qualified professionals and subjected to rigorous piloting and validation. In this and other testing situations, many stakeholders are involved in language assessment, either as those who construct and/or research tests and assessment tools (for example, test development agencies, curriculum developers, teachers, university researchers), as test-takers (students hoping to be certified for a job) or as ‘consumers’ of assessment information (for example, policy-makers, government officials, educational administrators, parents, employers and the media). In recent years, language tests have begun to be used by governments and policy-makers for an increasingly wide range of purposes, including citizenship and immigration decisions and teacher certification.
This has increased the need for language testing researchers to explore and understand the ways in which test scores are used, and much has therefore been written in recent years about the intersection of language assessment with language policy (McNamara and Roever, 2006; Spolsky, 2009). One of the important insights to emerge from this line of research is that the stakeholders in language testing are likely to have different and, at times, conflicting perspectives on the role and purpose of assessment in language programmes, which, according to some writers, can lead to a disproportionate emphasis on assessment for accountability (McKay, 2000; Menken, 2008). For this reason, it has been suggested that the process of test development needs to become more democratic and to involve a wide range of stakeholders so as to ensure fairness to all (Brindley, 1998; Shohamy, 2006). However, the ideal of involvement needs to be balanced against the realities of implementation and the technical concerns of validity and reliability, which language assessment experts are able to address.

Fundamental Issues in Language Assessment

On the surface, language assessment may appear to be a simple process of writing test questions and scoring examinees’ responses. However, in view of the important uses that are made of test scores, as well as the complexity of language, the process must be examined more carefully in order to understand the technical issues in language assessment. Figure 15.1 illustrates one way of conceptualizing the factors that come into play in this more complex view of language assessment. The writing of test questions needs to be seen as a choice about the ‘test method’ that is most appropriate for obtaining ‘examinee’s language performance’ relevant to the specific language capacities of interest. As the dotted line in Figure 15.1 suggests, the examinee’s language capacity of interest lies behind the choice concerning what performance should be elicited.
Examinees’ language performance is scored, and the result is the test score, which is assumed to bear some systematic relationship to the language performance, that is, to summarize the quality of the performance in a relevant way. The dotted line between the score and the ‘examinee’s language capacities’ denotes the assumption that the examinee’s score is related to the capacities that the test was intended to measure. That test score is used for some purpose, typically for making a ‘decision about the examinee’, but it might also be used for other purposes, such as allowing examinees to make decisions about their own subsequent study or classifying participants for research on second language acquisition.

The connections among the components in Figure 15.1 form the basis for the more complex view of language assessment that professionals work with, and these concepts should be sufficient for readers new to language assessment to grasp the fundamentals of the process. The examinee’s language capacities refer to the ‘construct’ (the examinee’s knowledge and abilities) that the test is intended to measure. The ‘test method’ is what the test designer specifies in order to elicit a particular type of performance from the examinee. The test score, which serves as a summary of performance used for decision-making, requires ‘validation’, which refers to the justification of the interpretation made of the test scores and their use. Let us look at each of these three concepts in turn.
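The chain from scored performance to a decision about the examinee can be made concrete with a deliberately simplified sketch. The items, answer key, and cut scores below are entirely hypothetical, and real high-stakes tests involve far more elaborate scoring models and validation; the sketch only illustrates the logic of summarizing performance as a score and mapping that score onto a decision.

```python
# Hypothetical example: dichotomous item scoring and a cut-score
# placement decision. All data and thresholds are invented.

def total_score(responses, answer_key):
    """Sum of correct answers: each item scored 1 (correct) or 0 (incorrect)."""
    return sum(1 for given, correct in zip(responses, answer_key) if given == correct)

def placement_decision(score, cut_scores):
    """Map a total score onto a placement level using ordered cut scores."""
    level = "Beginner"
    for threshold, label in cut_scores:
        if score >= threshold:
            level = label
    return level

# One examinee's performance on a hypothetical 10-item test
key       = ["a", "c", "b", "d", "a", "b", "c", "d", "a", "b"]
responses = ["a", "c", "b", "d", "b", "b", "c", "a", "a", "b"]

score = total_score(responses, key)  # 8 items correct
cuts = [(4, "Elementary"), (7, "Intermediate"), (9, "Advanced")]
print(score, placement_decision(score, cuts))  # 8 Intermediate
```

Note that the cut scores themselves embody an interpretation of what the score means about the examinee's capacities; justifying that interpretation is precisely what the chapter calls 'validation'.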