An Introduction to Applied Linguistics
Norbert Schmitt (ed.), An Introduction to Applied Linguistics (Routledge, 2010)
Figure 15.5 Dimensions defining the options for construct definition
An example of the former would be ChemSPEAK (Douglas and Selinker, 1993), a test that requires examinees to perform tasks using the lexico-grammatical constructions of chemistry in order to test their ability to speak about chemistry. The test does not, however, ask examinees to simulate the performance of ‘doing’ chemistry. An example of a ‘general performance’ test would be the oral proficiency interview of the American Council on the Teaching of Foreign Languages (ACTFL). It requires examinees to engage in the kind of conversation about themselves and their families that might come up in a social situation. In contrast to the specific language content of the various modules of the Occupational English Test, this test avoids requiring the examinee to talk about field-specific topics, so that the test score can be used to indicate capacity for speaking performance in general.

Construct theory is obviously slippery conceptual business, and it needs to be anchored in the practices of test design and empirical research. Some steps toward understanding construct theory from the perspective of test design have appeared recently (Read and Chapelle, 2001), but perhaps the most sweeping impact on rethinking construct definition is coming from the use of technology for developing test methods. Bachman’s (2000) review of the state of the art of language testing at the turn of the century included the following observation on the contribution of technology: ‘... the new task formats and modes of presentation that multi-media computer-based test administration makes possible ... may require us to redefine the very constructs we believe we are assessing’ (Bachman, 2000: 9). Today test developers regularly take into account the role of technology both in the way they define constructs and in the test methods they develop (Chapelle and Douglas, 2006).
For example, score interpretation for a test of writing which requires learners to compose their response at the keyboard needs to include the construct of composing at the keyboard. Similarly, score interpretation for a test of listening comprehension that provides video and audio input for the examinee needs to take into account the learner’s ability to productively use the visual channel in addition to the aural one.

Test Methods

Having defined assessment as ‘the act of collecting information and making judgements’, we can define test methods as the systematic procedures set out for collecting information and making judgements for a particular assessment event. Language testers consider test methods as a set of procedures and describe them as sets of characteristics rather than by cover-all terms such as ‘multiple-choice’. Multiple-choice refers to only one characteristic of a test – the manner in which the examinee responds – but any given testing event is composed of a number of other factors which should be expected to affect performance. Ideally, test performance (as outlined in Figure 15.1) would be free from any outside influence. However, test methods do affect test performance in various ways. Douglas (1998) theorizes how test methods affect test performance, suggesting a series of processes through which the test-taker perceives cues in the test method, interprets them and uses them to set goals for task completion, as illustrated in Figure 15.6. Consistent with Bachman (1990) and Bachman and Palmer (1996), Douglas (1998) suggests that these strategies are key to the manifestation of