Using authentic materials to develop listening comprehension in the



Table 3. Analysis of Data

Research Question 1: What are the influences of aural authentic materials on the listening comprehension in students of English as a second language?
Sources of Data and Data Types:
1. Interview 1 with students: key words from questions #7, 8, 9, 10, 11, 13, 14, 16
2. Interview 2 with students: key words from questions #1, 2, 3, 4
3. Interview with teacher: key words from question #8
4. Class observation: number of times students responded to teacher's questions and instructions
5. Self-evaluation questionnaire: number of students for each answer

Research Question 2: What kinds of learning strategies are most frequently used by ESL students listening to aural authentic materials in the classroom?
Sources of Data and Data Types:
1. Interview 1 with students: key words from questions #12, 15
2. Interview 2 with students: key words from question #6
3. Interview with teacher: key words from questions #6, 7
4. Class observation: frequency count of different learning strategies
5. Language learning strategy questionnaire: number of responses along scale points of each strategy

Research Question 3: What are the influences of aural authentic materials on ESL students' attitudes towards learning English?
Sources of Data and Data Types:
1. Interview with students: key words

Validity and Reliability
Validity is the degree to which an instrument measures what it is intended to measure; reliability, on the other hand, is the degree to which the same analysis procedure gives consistent results (Gay, 1996). The validity and reliability agreement worksheets for the current study are presented in Appendix D.
To establish face validity, 100 isolated, unambiguous events of classroom behavior, derived from transcripts of the class videotaping (see Appendix E), were coded once at the beginning and once at the end of the data coding process. The frequency counts of coded events for each category of classroom behavior obtained from the first coding were compared with those from the second coding; a simple percentage agreement of 98% was found between the two codings.
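As an illustration only (not part of the original study), the simple percentage agreement reported here can be thought of as the share of the 100 events that received the same code in both passes. A minimal Python sketch, using hypothetical category labels, might look like this:

# Minimal sketch (hypothetical data): simple percentage agreement between
# two codings of the same set of classroom events.
def percent_agreement(first_coding, second_coding):
    """Percentage of events assigned the same code in both passes."""
    matches = sum(a == b for a, b in zip(first_coding, second_coding))
    return matches / len(first_coding) * 100

first_pass = ["answering", "following_instruction", "not_attending", "answering"]
second_pass = ["answering", "following_instruction", "not_attending", "nodding"]
print(f"{percent_agreement(first_pass, second_pass):.0f}% agreement")  # 75% agreement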
The transcript of the same 100 isolated, unambiguous events of classroom behavior from the class videotaping was also submitted to a criterion observer for coding, in order to establish the construct validity of the coding categories. The criterion observer in this study was a Ph.D. candidate in Education and Human Resource Development at the George Washington University who had worked as a research assistant and had experience in classroom observation. The researcher and the criterion observer went over the coded data on which they did not agree. For example, the researcher coded the event in which students looked at the board when the teacher told them to look at what she wrote there as students following the teacher's instruction, whereas the criterion observer coded it as students looking at the board while the teacher talked about what she wrote. The event in which students read the material while the teacher was talking was coded by the researcher as students not paying attention; the criterion observer, however, could not decide whether the event should be categorized as paying attention or not paying attention while listening. Percent agreement between the researcher's and the criterion observer's coded data was 96%.

To demonstrate observer reliability, the researcher examined the relationship between the codings of the class observation (N = 2,017) and the class videotaping (N = 838); the correlation coefficient was .94. To establish interrater reliability, the researcher coded events from three 10-minute segments of the videotaped session (see Appendix F) and submitted a transcript to the criterion observer for verification of the accuracy of the coding. Agreement of 90% was found between the researcher's and the criterion observer's coded data. Disagreements occurred across several categories; for instance, classroom events in which students looked at written material while the teacher was talking were generally coded by the criterion observer as students not paying attention, whereas events in which students nodded their heads when the teacher asked a question were coded by the researcher as students answering questions.
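For illustration only, and assuming the correlation was computed over per-category frequency counts from the two coding sources (the counts below are hypothetical, not the study's data), the calculation could be sketched as follows:

# Minimal sketch (hypothetical counts): correlating per-category frequencies
# from the live class observation with those from the videotape coding.
from statistics import correlation  # Pearson's r; available in Python 3.10+

observation_counts = [620, 540, 310, 547]  # e.g., answering, following instructions, ...
videotape_counts = [255, 230, 125, 228]    # same categories, coded from the videotape
r = correlation(observation_counts, videotape_counts)
print(f"r = {r:.2f}")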


When a large number of classroom events are coded over a period of time, a coder has a tendency to drift from one code to another. To control for observer drift, the researcher coded a segment of the videotaped session once at the beginning of the data coding process and once at the end. Percentages were calculated for the coded data in each category, and the simple percent agreement between the first and second codings was then computed for each category. Intraobserver reliability was established with 96% agreement between the two codings.
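Again purely as an illustration (hypothetical counts, not the study's data), one way to carry out the per-category comparison described above is to compute the percentage of codes falling into each category for both passes and then compare them:

# Minimal sketch (hypothetical counts): comparing the per-category share of
# codes at the beginning and at the end of the coding process.
first = {"answering": 40, "following_instruction": 35, "not_attending": 25}
second = {"answering": 42, "following_instruction": 34, "not_attending": 24}

for category in first:
    p1 = first[category] / sum(first.values()) * 100
    p2 = second[category] / sum(second.values()) * 100
    print(f"{category}: first {p1:.1f}%, second {p2:.1f}%, difference {abs(p1 - p2):.1f}%")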
