Aimuratova Nilufar Tengel kizi. Simultaneous interpreter: difficulties and prospects of the profession



The structure of the research. The dissertation comprises an introduction, three chapters, a conclusion, the list of used literature and an appendix.
The first chapter presents the literature review and several definitions of the concept of slang. It then provides information about the creation and origin of slang words, as well as the peculiar linguistic features of youth slang.
The second chapter is devoted to studying slang in depth. We distinguish several types of slang, which can be divided into national and internet slang, and discuss the functions of slang, the advantages and disadvantages of its use, and the methods by which slang words are formed.
The third chapter is a practical one, intended to conduct a survey eliciting the opinions of teachers and students at one of the universities of the Republic of Uzbekistan about slang words. The teachers’ and students’ responses and attitudes toward slang are then analysed.
Every chapter closes with a brief summary of the major ideas involved, and at the end, an overall conclusion of the paper presents the final inferences that resulted from the conducted study.
The conclusion offers a brief summary of the theoretical part, the results of the questionnaire, and teachers’ and students’ feedback on slang.
The list of used literature enumerates the authors whose theoretical works and books were used in producing this dissertation research.
The appendix contains samples of the questionnaire.

CHAPTER I. MAIN THEORETICAL ASSUMPTIONS OF THE RESEARCH
1.1 Simultaneous interpretation as a complex task
In English, the words ‘translation’ and ‘translating’ are often used as umbrella terms covering both written translation and interpreting, while the words ‘interpretation’ and ‘interpreting’ are generally used to refer to the spoken and/or signed translation modalities only. Similar ambiguities are found in other languages, including French, with traduire and interpréter, German, with übersetzen and dolmetschen, and Spanish, with traducir and interpretar. Two further points need to be made here. The first is that translation also includes a hybrid, ‘sight translation’, which is the translation of a written text into spoken or signed speech; simultaneous interpreting with text, discussed later in this chapter, combines interpreting and sight translation. The second is that while the literature generally keeps the world of spoken (and written) languages strictly separate from the world of signed languages, in this paper, both will be considered: both deserve attention and share much common ground when looking at the simultaneous interpreting mode.
In the world of interpreting, the consensus is that there are two basic interpreting modes: simultaneous interpreting, in which the interpreter produces his/her speech while the interpreted speaker is speaking/signing – though with a lag of up to a few seconds – and consecutive interpreting, in which the speaker produces an utterance, pauses so that the interpreter can translate it, then produces the next utterance, and so on.
In everyday interaction between hearing people who speak different languages and need an interpreter, consecutive interpreting is the natural mode: the speaker makes an utterance, the interpreter translates it into the other language, then there is either a response or a further utterance by the first speaker, after which the interpreter comes in again, and so on.
In interaction between a hearing person and a deaf person, or between two deaf persons who do not use the same sign language (American Sign Language, British Sign Language, French Sign Language etc.), simultaneous interpreting is more likely to be used. Conference interpreters also make a distinction between what they often call ‘long/true consecutive’, in which each utterance by a speaker is rather long, up to a few minutes or longer, and requires note-taking by the interpreter, and ‘short consecutive’ or ‘sentence-by-sentence consecutive’, in which each utterance is short, one to a few sentences, and interpreters do not take notes systematically.

Simultaneous with text is clearly in the simultaneous mode, because the interpreter translates while the original speech (the ‘source speech’) is being delivered. The hybrid sight translation is more difficult to classify. On the one hand, it entails translation after the source text has been produced, which suggests it should be considered a consecutive translation mode. On the other, the sight translator translates while reading the text, which suggests simultaneous reception and production operations.


Simultaneous interpreting is conducted with or without electronic equipment
(microphones, amplifiers, headsets, an interpreting booth), generally in teams of at least two interpreters who take turns interpreting every thirty minutes or so, because it is widely assumed that the pressure is too high for continuous operation by just one person.

In conference interpreting between spoken languages, speakers generally speak into a microphone, the sound is electronically transmitted to an interpreting booth where the interpreter listens to it through a headset and interprets it into a dedicated microphone, and the target language speech is heard in headsets by listeners who need it.


Sometimes, portable equipment is used, without a booth. When simultaneously interpreting for radio or television, listeners/viewers hear the interpreted speech through the radio or TV set’s loudspeakers rather than through a headset, but the principle remains the same.




Sometimes, no equipment at all is used, and the interpreter, who sits next to the listener who does not understand the speaker’s language, interprets the speech into the listener’s ears in what is called ‘whispered (simultaneous) interpreting’.

When interpreting between a signed language and a spoken language or between two signed languages, no equipment is used in dialogue settings. In video relay interpreting, a deaf person with a camera is connected through a telephone or other telecommunications link to a remote interpreting center where interpreters generally sit in booths and send their spoken interpretation of the output back to a hearing interlocutor and vice-versa. In the USA, such a free interpreting service has been available to all for a number of years and has allegedly had major implications for deaf signers (e.g. Keating & Mirus, 2003; Taylor, 2009).


To users of interpreting services, the main advantage of simultaneous interpreting over consecutive interpreting is the time gained, especially when more than two languages are used at a meeting. Its main drawbacks are its higher price and lack of flexibility. The former is due both to the cost of the interpreting booth and electronic equipment generally required for simultaneous and to the rule that, with the exception of very short meetings, simultaneous interpreters work in teams of at least two people – at least in interpreting between spoken languages – whereas for consecutive interpreting assignments, interpreters tend to accept working alone more often. The lack of flexibility is due to the use of booths and electronic equipment in usual circumstances, which restricts mobility and requires a certain layout of the conference room.




In simultaneous interpreting, the interpreter sits in an interpreting booth, listening to the speaker through a headset and interprets into a microphone while listening. Delegates in the conference room listen to the target-language version through a headset.

Simultaneous interpreting is also done by signed language interpreters (or interpreters for the deaf) from a spoken into a signed language and vice versa. Signed language interpreters do not sit in the booth; they stand in the conference room where they can see the speaker and be seen by other participants.


Whispered interpreting is a form of simultaneous interpreting in which the interpreter does not sit in a booth in the conference room, but next to the delegate who needs the interpreting, and whispers the target-language version of the speech into the delegate's ear.

None of these modes of interpreting is restricted to the conference setting. Simultaneous interpreting, for instance, has been used in large conferences and forums, and whispered interpreting may be used in a business meeting.


Conference interpreters, in a way, become the delegates they are interpreting. They speak in the first person when the delegate does so, rather than translating along the lines of 'He says that he thinks this is a useful idea...'


The conference interpreter must empathize with the delegate and put themselves in the speaker's shoes.

The interpreter must be able to work in two modes: consecutive interpretation and simultaneous interpretation. In the first of these, the interpreter listens to the totality of the speaker's comments, or at least a significant passage, and then reconstitutes the speech with the help of notes taken while listening; the interpreter is thus speaking consecutively to the original speaker. Some speakers prefer to talk for just a few sentences and then invite the interpreter to translate. In that case the interpreter can perhaps work without notes and rely solely on memory to reproduce the whole speech.


However, a conference interpreter should be able to cope with speeches of any length; they should develop the techniques of interpreting.



In practice, if interpreters can do a five-minute speech satisfactorily, they should be able to deal with any length of speech.
It is also clear that conference interpreters work in 'real time'. In simultaneous, by definition, they cannot take longer than the original speaker, except for odd seconds. Even in consecutive they are expected to react immediately after the speaker has finished, and their interpretation must be fast and efficient. This means that interpreters must have the capacity not only to analyze and resynthesise ideas, but also to do so very quickly.

In most cases nowadays simultaneous interpreting is done with the appropriate equipment: delegates speak into microphones, which relay the sound directly to interpreters seated in sound-proofed booths listening to the proceedings through earphones; the interpreters in turn speak into a microphone which relays their interpretation over a dedicated channel to headphones worn by delegates who wish to listen to the interpreting. However, in some cases, such equipment is not available, and simultaneous interpreting is whispered: one of the participants speaks and an interpreter simultaneously whispers into the ear of the one or at most two people who require interpreting services.


Clearly, simultaneous interpreting takes up less time than consecutive. Moreover, with simultaneous it is much more feasible to provide multilingual interpreting, with as many as six languages (UN) or even eleven (European Union). Given this advantage and the widening membership of international organizations, more and more interpreting is being done in simultaneous.


How is simultaneous interpreting done?

Is simultaneous interpreting possible at all? One of the early objections to simultaneous interpreting between two spoken languages was the idea that listening to a speech in one language while simultaneously producing a speech in another language was impossible.


Intuitively, there were two obstacles. Firstly, simultaneous interpreting required paying attention to two speeches at the same time (the speaker’s source speech and the interpreter’s target speech), whereas people were thought to be able to focus on only one at a time because of the complexity of speech comprehension and speech production. The second, not unrelated to the first, was the idea that the interpreter’s voice would prevent him/her from hearing the voice of the speaker – later, Welford (1968) claimed that interpreters learned to ignore the sound of their own voice (see Moser, 1976, p. 20). Interestingly, while the debate was going on among spoken language conference interpreters, there are no traces in the literature of anyone raising the case of signed language interpreting, which presumably was done in the simultaneous mode as a matter of routine and showed that attention could be shared between speech comprehension and speech production.
As the evidence in the field showed that simultaneous interpreting was possible between two spoken languages, from the 1950s on, investigators began to speculate on how this seemingly unnatural performance was made possible and how interpreters distributed their attention most effectively between the various components of the simultaneous interpreting process (see Barik, 1973, quoted in Gerver, 1976, p. 168). One idea was that interpreters use the speaker’s pauses, which occur naturally in any speech, to cram in much of their own (‘target’) speech (see Goldman-Eisler, 1968; Barik, 1973).
However, in a study of recordings of 10 English speakers from conferences, Gerver found that only 4% of the pauses lasted more than 2 seconds and only 17% lasted more than 1 second. Since usual articulation rates in such speeches range from close to 100 words per minute to about 120 words per minute, it would be difficult for interpreters to utter more than a few words during such pauses, which led him to the conclusion that their use for producing the target speech could only be very limited (Gerver, 1976, pp. 182-183). He also found that even though interpreters listened to the source speech and produced the target speech simultaneously 75 percent of the time on average, they interpreted correctly more than 85 percent of the source speech.
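Gerver's reasoning can be checked with simple arithmetic. The sketch below (an illustration of the calculation, not part of Gerver's study) converts the articulation rates quoted above into the number of words an interpreter could fit into a pause of a given length:

```python
# How many words fit into a speaker's pause at typical conference
# articulation rates (100-120 words per minute)?

def words_per_pause(pause_seconds: float, words_per_minute: float) -> float:
    """Maximum number of words an interpreter could utter during a pause."""
    return words_per_minute / 60.0 * pause_seconds

for wpm in (100, 120):
    for pause in (1.0, 2.0):
        print(f"{wpm} wpm, {pause:.0f}-second pause: "
              f"~{words_per_pause(pause, wpm):.1f} words")
# At 100-120 wpm, even a 2-second pause accommodates only about 3-4 words,
# far too few to carry a full target-language utterance.
```

Since so few pauses last even one second, this supports Gerver's conclusion that pauses alone cannot explain simultaneous production.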
There are no longer doubts about the genuine simultaneousness of speaking and listening during simultaneous interpreting – though most of the time, at micro-level, the information provided in the target speech lags behind the speaker’s source speech by a short span. Anticipation also occurs – sometimes, interpreters actually finish their target language utterance before the speaker has finished his/hers. According to Chernov (2004), such anticipation, which he refers to as “probabilistic prognosis”, is what makes it possible to interpret in spite of the cognitive pressure involved in the exercise.
Basically, the simultaneous interpreter analyzes the source speech as it unfolds and starts producing his/her own speech when s/he has heard enough to start an idiomatic utterance in the target language. This can happen after a few words have been produced by the speaker who is being translated, or a phrase, or more rarely a longer speech segment.

For instance, if, in a conference, after a statement by the Chinese representative, the British speaker says “I agree with the distinguished representative of China”, interpreters can generally anticipate and even start producing their target language version of the statement as soon as they have heard “I agree with the distinguished”, with little risk of going wrong. In other cases, the beginning of the sentence is ambiguous, or they have to wait longer before they can start producing their translation because the subject, the object and the verb occupy different positions in the source and target languages.


One of the earliest and most popular theories in the field, Interpretive Theory, which was developed at ESIT, France, by Danica Seleskovitch and Marianne Lederer in the late 1960s and early 1970s (e.g. Israël & Lederer, 2005), presents the interpreting process in both consecutive and simultaneous as a three-phase sequence. The interpreter listens to the source speech ‘naturally’, as in everyday life, and understands its ‘message’, which is then ‘deverbalized’, i.e. stripped of the memory of its actual wording in the source speech. This idea was probably inspired by psychologists, in particular Sachs (1967), who found that memory for the form of text decayed rapidly after its meaning was understood. The interpreter then reformulates the message in the target language from its a-lingual mental representation (see Seleskovitch & Lederer, 1989). Central to this theory is the idea that interpreting differs from ‘transcoding’, i.e. translating by seeking linguistic equivalents in the target language (for instance lexical and syntactic equivalents) to lexical units and constituents of the source speech as it unfolds. While the theory that total deverbalization occurs during interpreting has been criticized, the idea that interpreting is based more on meaning than on linguistic transcoding of form is widely accepted. As explained later, this is particularly important in simultaneous, where the risk of language interference is high.

Lay people often ask how simultaneous interpreters manage to translate highly technical speeches at scientific and technical conferences. Actually, the language of specialized conferences is not particularly complex in terms of syntax, much less so than the language of non-technical flowery speeches, and its main difficulty for interpreters is its specialized lexicon. The relevant terminology needs to be studied before every assignment, which can be done with the appropriate documents, and interpreters tend to prepare ad hoc glossaries for specialized meetings.


Language is not the only challenge that simultaneous interpreters face. There are also cultural challenges, social challenges, affective challenges having to do with their role as message mediators between groups with different cultures and sometimes different interests, as witnesses of events and actions about which they may feel strongly, as persons whose social and cultural status and identity can be perceived differently by the principals in the interpreter-mediated communication and by themselves, but these challenges are not specific to simultaneous interpreting and will not be discussed here.


The main cognitive challenge of simultaneous interpreting is precisely the high pressure on the interpreter’s mental resources which stems from the fact that they must understand a speech and produce another at the same time at a rate imposed by the speaker. A more detailed analysis of the nature of this challenge is presented in Section 4.3. At this point, suffice it to say that interpreters have always been aware of the fact that the difficulty was considerable as soon as the speech was delivered rapidly, and that interpreters could not always cope (see for example George Mathieu’s statement made in 1930 as quoted in Keiser, 2004, p. 585; Herbert, 1952; Moser, 1976; Quicheron, 1981).


The practical consequence of this challenge is the presence of errors, omissions and infelicities (e.g. clumsy wording or syntax) in the simultaneous interpreters’ production. How many there are in any interpreted speech or statement is a topic that interpreters are reluctant to discuss. It depends on a number of factors, including the interpreter’s skills and experience, features of the speech (see the discussion of problem triggers in the next section) and environmental conditions such as the quality of the sound (or image) which reach the interpreter, background noise, the availability of information for thematic and terminological preparation, and probably language-pair specific features. In many cases, interpreters are able to translate a speaker’s statement faithfully and in idiomatic, sometimes elegant language, but in other cases, which are far from rare, errors, omissions and infelicities (EOIs) can be numerous. In a study of authentic online simultaneous interpretations of President Obama’s inaugural speech in January 2009 by 10 professional interpreters working into French, German or Japanese, Gile (2011) found 5 to 73 blatant errors and omissions over the first 5 minutes of the speech. In other words, these experienced, proficient interpreters made on average from 1 to more than 14 blatant meaning errors or omissions every minute when translating a difficult, but not extraordinarily difficult speech.
How this affects the comprehension of the speaker’s message and intentions by users remains to be investigated. Some EOIs may have little or no impact, for instance if they affect speech segments which are highly redundant or of little relevance to the message, while others may deprive the users of important information – for example if numbers measuring the financial performance of a company are omitted or translated incorrectly. The number of EOIs is therefore not a sufficiently reliable metric of the amount of information actually transmitted to users in the target language, but the image of the simultaneous interpreter producing a very faithful and idiomatic version of the source speech at all times is clearly not a realistic one.
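The per-minute rates quoted from Gile (2011) follow from a simple conversion of his raw counts, which can be sketched as follows (the figures are those reported in the study; the function is merely an illustration of the arithmetic):

```python
# Converting Gile's (2011) raw counts of blatant errors and omissions
# observed over the first 5 minutes of the speech into per-minute rates.

def rate_per_minute(error_count: int, duration_minutes: float) -> float:
    """Average number of errors/omissions per minute of interpreting."""
    return error_count / duration_minutes

low_count, high_count, minutes = 5, 73, 5.0
print(f"Lowest observed rate:  {rate_per_minute(low_count, minutes):.1f} per minute")
print(f"Highest observed rate: {rate_per_minute(high_count, minutes):.1f} per minute")
# 5 errors over 5 minutes -> 1.0 per minute; 73 over 5 -> 14.6 per minute,
# matching the "1 to more than 14" range stated in the text.
```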

The Effort Model of Simultaneous Interpreting


In the late 1970s and early 1980s, Gile observed, through their EOIs, the difficulties that even highly experienced interpreters with an excellent reputation encountered, reflected upon his own interpreting experience, started reading the literature on cognitive psychology and developed a set of ‘Effort Models’ of interpreting to account for the problems which occurred regularly in the field (e.g. Gile, 2009). The Effort Model for simultaneous interpreting conceptualizes SI as consisting of four ‘Efforts’:


The Reception Effort, which encompasses all mental operations involved in perceiving and understanding the source speech as it unfolds, including the perception of the speech sounds – or signs when working from a signed language – and of other environmental input such as documents on screen or reactions of other people present, the identification of linguistic entities from these auditory or visual signals, their analysis leading to a conclusion about their meaning.


The Production Effort, which encompasses all mental operations leading from decisions on ideas or feelings to be expressed (generally on the basis of what was understood from the source speech) to the actual production of the target speech, be it spoken or signed, including the selection of words or signs and their assembly into a speech, self-monitoring and correction if required.


The Memory Effort, which consists in storing for a short period of up to a few seconds information from the source speech which has been understood or partly understood and awaits further processing or needs to be kept in memory until it is either discarded or reformulated into the target language.


The Coordination Effort, which consists in allocating attention to the other three Efforts depending on the needs as the source and target speeches unfold.


Increasingly, speakers read texts. When these are provided to the interpreters as well, the resulting interpreting mode is called ‘simultaneous with text’. In simultaneous with text, the Reception Effort is actually composed of a Listening Effort and a Reading Effort. This distinction is based firstly on the fact that one relies on sound and the other on visual signals, which means that at least at the perception stage, different processes are involved, and secondly because speakers often depart from the written text and modify, add or omit segments, which forces interpreters to either use the oral signal only or to attend to both the text and the speaker’s voice. They generally do the latter, because read speeches tend to have a prosody and a rhythm that make them more difficult to follow than adlibbed speeches (see Déjean Le Féal, 1978), and having a text is a help – though the additional Effort entails additional cognitive pressure.


Two further Efforts were added later for the case of an interpreter working from a spoken language into a sign language (on the basis of input from Sophie Pointurier-Pournin – see Pointurier-Pournin, 2014). One is the SMS Effort, for Self-Management in Space: besides paying attention to the incoming speech and their own target language speech, interpreters need to be aware of spatial constraints and position themselves physically so as to be able to hear the speaker and see material on screen if available, and at the same time remain visible to the deaf audience without standing out and without causing disturbance to the hearing audience.


The other is the ID Effort, for Interaction with the deaf audience: deaf people often sign while an interpreter is working, either making comments to each other or saying something to the interpreter, for instance asking him/her to repeat or explain or making a comment about the speech being interpreted. This is a disturbance factor for the interpreter, whose attention is distracted from focusing on the incoming speech and outgoing speech.


All these Efforts include non-automatic components: in other words, they require attentional resources (see Gile, 2009). For instance, in the Reception Effort, some processing capacity is required to identify linguistic units from the sounds or visual signals, and more capacity is required to make sense of those linguistic units. In the Production Effort, the retrieval of lexical units, be they spoken or signed, can also require processing capacity, especially in the case of less frequently occurring words. So does the assembly of lexical units into idiomatic utterances, especially under possible interference from the source language. More importantly, the total processing capacity required for all these Efforts tends to be close to the total available attentional resources – close enough for interpreters to be destabilized and risk saturation and EOIs in their output when they mismanage their resources (for instance by lagging too far behind the speaker or focusing too strongly on producing an elegant target speech) and when they encounter certain difficulties in the speech itself, the so-called ‘problem triggers’. This fragile situation in which simultaneous interpreters find themselves is captured by the ‘Tightrope Hypothesis’, which Gile assumes to be the main source of EOIs in interpreting (Gile, 2009).
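The resource constraint underlying the Tightrope Hypothesis can be sketched as a simple capacity check. The numeric values below are invented purely for illustration; Gile's model does not assign numbers to the Efforts, it only posits that total demand normally runs close to total available capacity:

```python
# Illustrative sketch of the processing-capacity constraint behind the
# Tightrope Hypothesis: the sum of the capacity demanded by the Efforts
# must stay within the interpreter's total available capacity.
# All numeric values are invented for illustration.

def is_saturated(requirements: dict, available: float) -> bool:
    """True if total demand across all Efforts exceeds available capacity."""
    return sum(requirements.values()) > available

available_capacity = 1.0  # normalized total attentional resources

# Demand near (but under) capacity - the interpreter is "on the tightrope":
normal = {"reception": 0.35, "production": 0.30,
          "memory": 0.20, "coordination": 0.10}

# A problem trigger (e.g. a fast, dense passage) raises Reception demand:
trigger = {**normal, "reception": 0.55}

print("normal speech saturated:", is_saturated(normal, available_capacity))
print("problem trigger saturated:", is_saturated(trigger, available_capacity))
# Under normal conditions total demand (0.95) stays just under capacity;
# a modest extra demand pushes it over, and EOIs become likely.
```

The point of the sketch is that even a small additional load on any one Effort can tip the total over the limit, which is why errors cluster around problem triggers rather than being spread evenly through the speech.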



