3.1. Assessing writing.


Writers at the Superior level demonstrate a high degree of control of grammar and syntax, of both general and specialized/professional vocabulary, of spelling or symbol production, of cohesive devices, and of punctuation. Their vocabulary is precise and varied. Writers at this level direct their writing to their audiences; their writing fluency eases the reader’s task. Writers at the Superior level do not typically control target-language cultural, organizational, or stylistic patterns. At the Superior level, writers demonstrate no pattern of error; however, occasional errors may occur, particularly in low-frequency structures. When present, these errors do not interfere with comprehension, and they rarely distract the native reader (p. 11).

In this description, there is mention of grammar, syntax, lexis, spelling, cohesion, fluency, style, organization, and other factors. Because of the complex nature of the construct of writing proficiency, it is important to more fully understand the pieces that contribute to the whole in order to assess it more accurately. Specifically, in relation to lexis, the TOEFL iBT rubric mentions appropriate word choice and idiomaticity, and the ACTFL rubric mentions control of general and specialized vocabulary as well as lexical precision and variation as part of writing proficiency.
Additionally, lexical knowledge has consistently been shown to account for a large amount of variance in writing proficiency scores. For instance, Stæhr (2008) showed that as much as 72% of the variance in writing proficiency could be accounted for by vocabulary knowledge. Others found weaker, but still highly meaningful, results: Milton, Wade, and Hopkins (2010) and Miralpeix and Muñoz (2018) found that vocabulary size accounted for 57.8% and 32.1% of the variance in writing proficiency scores, respectively. Therefore, in the present study, we explore the relationship between the construct of writing proficiency and an aspect of lexical knowledge, lexical diversity (LD), using a battery of LD measures in an English for Academic Purposes setting. While several other studies relating writing proficiency and LD exist (see, e.g., Yu, 2010; Gebril & Plakans, 2016; González, 2017; Crossley & McNamara, 2012; Wang, 2014), the purpose of this study is to build on this literature by: 1) using multiple LD measures separately and in combination, 2) exploring the factor of timing condition, and 3) building understanding of the relationship between different LD measures in writing assessment.
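As a rough point of reference (and assuming simple one-predictor models, in which the proportion of variance explained is simply the square of the correlation coefficient), the figures above translate into correlations of roughly r = √R²: √0.72 ≈ .85, √0.578 ≈ .76, and √0.321 ≈ .57 between the vocabulary measure and the writing score.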
The study of LD and how it relates to language ability has existed for over a century. Thomson and Thompson (1915) first proposed an empirical method for using a person’s vocabulary usage patterns to estimate his or her language knowledge. Twenty years later, Carroll (1938) introduced the term “diversity of vocabulary” and defined it as “the relative amount of repetitiveness or the relative variety in vocabulary” (p. 379). Although there remains debate on the definition of LD, for the purposes of this study, we adhere to the definition of Malvern, Richards, Chipere, and Durán (2004), who state that LD is the range or variety of vocabulary within a text. More recently, Vidal and Jarvis (2020) define lexical diversity as “the variety of words used in speech or writing”, which is also consistent with how LD will be considered in this study. That is, it is synonymous with lexical variation, and it is an aspect of lexical richness and complexity.
When LD first began to be studied, the most frequently used measure was the Type-to-Token Ratio (TTR) (Johnson, Fairbanks, Mann, & Chotlos, 1944; Osgood & Walker, 1959); however, as research in the field has progressed, the validity of TTR as a measure of LD has been repeatedly challenged, and new measures of LD have been proposed and validated. One of the first alternative measures proposed was vocabulary diversity (vocd-D), which relies on the mathematical modeling of the introduction of new words into longer and longer texts (Malvern & Richards, 2002). Then, the measure of textual LD (MTLD), which analyzes the LD of a text without being impacted by its length, was introduced (McCarthy, 2005). The Moving-Average Type-to-Token Ratio (MATTR), which calculates TTR through a series of moving windows and takes the average, making the measure unaffected by text length, was also created (Covington & McFall, 2010). The reliability and validity of these measures have been tested several times over, and they have begun to replace TTR as the standard for LD measurement (Authors, xxxx; Fergadiotis, Wright, & Green, 2015; McCarthy & Jarvis, 2010; Jarvis, 2013a).
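To make the differences among these indices concrete, the sketch below gives minimal, illustrative implementations of TTR, MATTR, and a simplified one-directional MTLD. It is not the code used in any of the studies cited (published work typically relies on dedicated tools that also handle tokenization and lemmatization); the window size, the conventional 0.72 threshold, and the toy sentence are shown only for demonstration.

```python
# Minimal, illustrative implementations of three LD indices discussed above:
# TTR, MATTR (Covington & McFall, 2010), and a simplified one-directional
# MTLD (McCarthy, 2005). A sketch for demonstration, not the studies' code.

def ttr(tokens):
    """Type-to-Token Ratio: number of unique words divided by total words."""
    return len(set(tokens)) / len(tokens)

def mattr(tokens, window=50):
    """Moving-Average TTR: mean TTR over every window of a fixed size,
    which makes the score largely independent of overall text length."""
    if len(tokens) <= window:
        return ttr(tokens)
    windows = [tokens[i:i + window] for i in range(len(tokens) - window + 1)]
    return sum(ttr(w) for w in windows) / len(windows)

def mtld_forward(tokens, threshold=0.72):
    """Simplified forward-only MTLD: count the 'factors', i.e. stretches of
    text over which the running TTR stays above the threshold, then divide
    the token count by that (possibly fractional) factor count."""
    factors, types, start = 0.0, set(), 0
    for i, tok in enumerate(tokens):
        types.add(tok)
        if len(types) / (i - start + 1) <= threshold:
            factors += 1                  # a full factor is complete; reset
            types, start = set(), i + 1
    if start < len(tokens):               # credit leftover text as a partial factor
        leftover_ttr = len(types) / (len(tokens) - start)
        factors += (1 - leftover_ttr) / (1 - threshold)
    return len(tokens) / factors if factors else float("nan")

sample = "the quick brown fox jumps over the lazy dog and the dog barks".split()
print(ttr(sample), mattr(sample, window=5), mtld_forward(sample))
```

Because MATTR averages TTR over fixed-size windows and MTLD normalizes by how quickly the running TTR decays, both are far less sensitive to text length than raw TTR, which is the practical reason they have begun to displace it.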
Over time, LD measurement has steadily progressed, and currently many LD measures are used for measuring the construct (e.g., Maas, MTLD-MA, MTLD-BID, and many others) (Kyle, Crossley, & Jarvis, 2021; McCarthy & Jarvis, 2010). Many studies have compared the values produced by these different measures to assess how well and how similarly they assess the phenomenon of lexical diversity (e.g., Fergadiotis et al., 2015; McCarthy & Jarvis, 2010). In a validation study of MTLD, McCarthy and Jarvis (2010) compared it to several other metrics: vocd-D, TTR, Maas, Yule’s K, and the HD-D index. The study examined several aspects of the different measures using assessments of convergent validity, divergent validity, internal validity, and incremental validity. MTLD performed well across all four types of validity and appeared to capture unique lexical information (i.e., volume, abundance, evenness, dispersion, disparity, and specialness; see Jarvis, 2013b; Kyle et al., 2021), as did Maas and vocd-D (or HD-D). This finding led McCarthy and Jarvis (2010) to recommend assessing LD through all three measures, rather than through any one index. In a comparison of vocd-D, Maas, MTLD, and MATTR, Fergadiotis et al. (2015) found that MTLD and MATTR were stronger indicators of LD than Maas and vocd-D, which appeared to be affected by construct-irrelevant confounding sources.
As these measures are all intended to assess the same phenomenon (i.e., LD), one could reasonably expect similar validities and indicative abilities; however, based on these studies, it appears that the different LD measures do not necessarily correlate well with one another and that using multiple measures might be a better approach to measuring LD than using one alone. For example, McCarthy and Jarvis (2010) found that three measures capture unique lexical information. However, this practice has yet to gain universal or even widespread acceptance among researchers. Thus, the relationship between these measures and the construct of LD that they are intended to measure appears to remain unresolved, as does the exact nature of how the measures correspond to each other.

As the study of LD has progressed, it has frequently been correlated with language proficiency. For example, Jarvis (2002) used best-fitting curves to analyze written texts produced by Finnish and Swedish L2 English learners and native English speakers. He found two curve-fitting formulas that produced accurate models for the type-token curves of 90% of the texts. Further analysis revealed a clear relationship between LD and instruction, which typically correlates with proficiency level. In a study of L2 writing samples, Crossley and Salsbury (2012) divided a selection of texts into beginning, intermediate, and advanced categories based on TOEFL and ACT ESL scores. They then performed a discriminant function analysis, using the computational tool Coh-Metrix, to predict proficiency level based on breadth of lexical knowledge, depth of lexical knowledge, and access to core lexical items.
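Crossley and Salsbury’s analysis was carried out with Coh-Metrix; purely to illustrate the general shape of such a discriminant function analysis (not to reproduce their feature set or data), a sketch with scikit-learn might look like the following, where the feature columns and labels are invented placeholders standing in for breadth, depth, and access measures.

```python
# Hedged sketch of a discriminant function analysis that predicts proficiency
# level from lexical features. All numbers and labels are invented placeholders;
# the cited study derived its features with Coh-Metrix, not with this code.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: [lexical diversity, mean word frequency, hypernymy] -- stand-ins
# for breadth of knowledge, depth of knowledge, and access to core lexical items.
X = np.array([
    [35.0, 3.2, 1.4],   # beginning writers: lower diversity, more frequent words
    [38.0, 3.1, 1.5],
    [55.0, 2.8, 1.9],   # intermediate
    [58.0, 2.7, 2.0],
    [80.0, 2.3, 2.6],   # advanced: higher diversity, rarer words
    [85.0, 2.2, 2.7],
])
y = np.array(["beginning", "beginning", "intermediate", "intermediate",
              "advanced", "advanced"])

dfa = LinearDiscriminantAnalysis().fit(X, y)

print(dfa.score(X, y))                     # accuracy on the training data
print(dfa.predict([[60.0, 2.6, 2.1]]))     # classify a new, unseen text
```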

3.2. Simple Ways to Assess the Writing Skills of Students with Learning Disabilities.


A teacher's first responsibility is to provide opportunities for writing and encouragement for students who attempt to write. A teacher's second responsibility is to promote students' success in writing. The teacher does this by carefully monitoring students' writing to assess strengths and weaknesses, teaching specific skills and strategies in response to student needs, and giving careful feedback that will reinforce newly learned skills and correct recurring problems. These responsibilities reveal, upon inspection, that assessment is clearly an integral part of good instruction. In their review of the existing research on effective instruction, Christenson, Ysseldyke, and Thurlow (1989) found that, in addition to other factors, the following conditions were positively correlated with pupil achievement:

  • The degree to which there is an appropriate instructional match between student characteristics and task characteristics (in other words, teachers must assess the student's prior knowledge and current level of skills in order to match them to a task that is relevant and appropriate to their aptitudes);

  • The degree to which the teacher actively monitors students' understanding and progress; and

  • The degree to which student performance is evaluated frequently and appropriately (congruent with what is taught).

Assessment, therefore, is an essential component of effective instruction. Airasian (1996) identified three types of classroom assessments. The first he called "sizing-up" assessments, usually done during the first week of school to provide the teacher with quick information about the students at the start of instruction. The second type, instructional assessments, is used for the daily tasks of planning instruction, giving feedback, and monitoring student progress. The third type he referred to as official assessments, which are the periodic formal functions of assessment for grouping, grading, and reporting. In other words, teachers use assessment for identifying strengths and weaknesses, planning instruction to fit diagnosed needs, evaluating instructional activities, giving feedback, monitoring performance, and reporting progress. Simple curriculum-based methods for assessing written expression can meet all these purposes.

Process, product, and purpose


Curriculum-based assessment must start with an inspection of the curriculum. Many writing curricula are based on a conceptual model that takes into account process, product, and purpose. This conceptual model, therefore, forms the framework for the simple assessment techniques that follow.

Simple ways to assess the process


The diagnostic uses of assessment (determining the reasons for writing problems and the student's instructional needs) are best met by looking at the process of writing, i.e., the steps students go through and strategies they use as they work at writing. How much planning does the student do before he or she writes? Does the student have a strategy for organizing ideas? What seem to be the obstacles to getting thoughts down on paper? How does the student attempt to spell words he or she does not know? Does the student reread what he or she has written? Does the student talk about or share the work with others while writing it? What kinds of changes does the student make to a first draft?
In order to make instructionally relevant observations, the observer must work from a conceptual model of what the writing process should be. Educators have reached little consensus regarding the number of steps in the writing process. Writing experts have proposed as few as two (Elbow, 1981) and as many as nine (Frank, 1979). Englert, Raphael, Anderson, Anthony, and Stevens (1991) provided a model of a five-step writing process using the acronym POWER: Plan, Organize, Write, Edit, and Revise. Each step has its own substeps and strategies that become more sophisticated as the students become more mature as writers, accommodating their style to specific text structures and purposes of writing. Assessment of the writing process can be done through observation of students as they go through the steps of writing.
Having students assess their own writing process is also important for two reasons. First, self-assessment allows students an opportunity to observe and reflect on their own approach, drawing attention to important steps that may be overlooked. Second, self-assessment following a conceptual model like POWER is a means of internalizing an explicit strategy, allowing opportunities for the student to mentally rehearse the strategy steps. Figure 1 is a format for both self-observation and teacher observation of the writing process following the POWER strategy. Similar self-assessments or observation checklists could be constructed for other conceptual models of the writing process.

