Measuring Student Knowledge and Skills


Figure 7.
A real but not authentic task
[Figure: pictured purchases with price tags of 44.00 $ and 30.00 $. Task text: How much does a T-shirt cost? How much is a soda? Show the reasoning that led you to these answers.]
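Read as algebra, the task in Figure 7 is a pair of simultaneous linear equations. A minimal worked sketch, assuming (hypothetically, since the picture itself is not reproduced in this extract) that the two printed prices are the totals for two T-shirts plus two sodas and for one T-shirt plus two sodas:

    2t + 2s = 44.00    (assumed: two T-shirts and two sodas)
    t  + 2s = 30.00    (assumed: one T-shirt and two sodas)

Subtracting the second equation from the first gives t = 14.00, and substituting back gives 2s = 30.00 - 14.00 = 16.00, so s = 8.00. The shopping context is real, but the question asked is the standard school exercise of solving a linear system, which is presumably what the caption means by real but not authentic: in an actual shop one would simply read the price tags.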


Hypothetical contexts will have a place in OECD/PISA if they are mathematically interesting and relevant. The use of mathematics to explain hypothetical scenarios and explore potential systems or situations, even if these cannot be carried out in reality, is one of its most powerful features.
Task formats
When assessment instruments are devised, the impact of the format of the tasks on student performance, and hence on the definition of the construct being assessed, must be carefully considered. This issue is particularly pertinent in a project such as OECD/PISA, in which the large-scale, cross-national context places serious constraints on the range of feasible item formats.

As in the case of the reading literacy domain, OECD/PISA will assess mathematical literacy through a combination of items with multiple-choice, closed constructed-response and open constructed-response formats. Appendix 2 discusses a broader range of formats that might be used when mathematics becomes a major domain in the second survey cycle.
Discussing the second IEA mathematics study, Travers and Westbury (1989) state: “The construction and selection of multiple-choice items was not difficult for the lower levels of cognitive behaviour – computation and comprehension.” But, they continue, “difficulties were presented at the higher levels”. There is a place for the multiple-choice format (see Figure 10 for an example), but only to a limited extent and only for the lowest-level goals (or behaviours) and learning outcomes. For higher-order goals and more complex processes, other test formats should be preferred, the simplest being open questions.
Closed constructed-response items pose questions similar to those of multiple-choice items, but students are asked to produce a response that can easily be judged as either correct or incorrect. When responses are not machine-marked, this is a preferred format for assessing Competency Class 1, because guessing is unlikely to be a concern and there is no need to provide distractors (which themselves influence the construct being assessed). For example, the problem in Figure 11 has one correct answer and many possible incorrect answers.
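Guessing behaves very differently across the two formats: on a k-option multiple-choice item, blind guessing is expected to score 1/k (25 per cent with four options), whereas the chance of guessing a free numeric value is negligible. Marking a closed constructed-response item can nevertheless be automated; the sketch below is illustrative only (the function name and the tolerance policy are assumptions, not part of the OECD/PISA framework):

    # Illustrative sketch: automated marking of a closed constructed-response
    # item with a single numeric key. The names and the tolerance policy are
    # hypothetical, not taken from the OECD/PISA marking guides.

    def mark_numeric_response(response: str, key: float, tol: float = 0.0) -> bool:
        """Judge a free numeric response as correct or incorrect."""
        try:
            value = float(response.strip().replace(",", "."))  # accept decimal commas
        except ValueError:
            return False  # non-numeric responses are simply incorrect
        return abs(value - key) <= tol

    # Example: an item whose key is 8.00.
    assert mark_numeric_response("8.00", 8.00)
    assert mark_numeric_response(" 8,0 ", 8.00)
    assert not mark_numeric_response("eight", 8.00)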
Open constructed-response items require a more extended response from the student, and the process of producing a response is likely to involve higher-order activities. Often such items not only ask the student to produce a response, but also require the student to show the steps taken or to explain how the answer was reached.
Figure 8.
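Scoring such extended responses is usually rubric-based, awarding partial credit for partially correct work rather than applying a single binary key. A hedged sketch of what such a rubric might look like in code (the levels, codes and descriptions below are invented for illustration and are not drawn from the actual marking guides):

    # Illustrative sketch of partial-credit scoring for an open
    # constructed-response item. Rubric levels are hypothetical examples.

    from dataclasses import dataclass

    @dataclass
    class RubricLevel:
        code: str         # code a marker records for the response
        score: int        # credit awarded
        description: str  # what a response at this level looks like

    RUBRIC = [
        RubricLevel("2", 2, "correct answer supported by valid reasoning"),
        RubricLevel("1", 1, "valid reasoning shown, but a computational slip"),
        RubricLevel("0", 0, "neither the answer nor the reasoning is adequate"),
    ]

    def score_for(code: str) -> int:
        """Map a marker's rubric code to the credit it carries."""
        for level in RUBRIC:
            if level.code == code:
                return level.score
        raise ValueError(f"unknown rubric code: {code}")

    # A marker reads the response, assigns a code, and the credit follows.
    assert score_for("1") == 1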
