Aimuratova Nilufar Tengel kizi. Simultaneous interpreter: difficulties and prospects of the profession


Effect of excessive focus on text in simultaneous with text
Having the text of an address which a speaker reads available in the booth has both advantages and drawbacks. The attractiveness of the text to interpreters stems from the fact that it makes the linguistic content of the speech easily available to them in print (save for rare handwritten texts), with virtually no effort required for the linguistic identification of the signals and more freedom to manage the Reading part of the Reception Effort, while the Listening part is entirely paced by the speaker. As a result, they are tempted to sight-translate the text and use the speech sounds for the sole purpose of checking the speaker's progression in the text. This entails two risks: one is that the speaker's deviations from the text, such as skipped parts and added comments, can be missed because not enough attention is devoted to the Listening component; the other is that difficulties in sight translation slow down the interpreter while the speaker is speeding ahead, with the associated lag and the consequences described earlier.
Lexical problems
Problem triggers in simultaneous interpreting
Simultaneous interpreting, while complex and difficult, is feasible provided a certain number of conditions are met. Interpreters need to be proficient in the languages they interpret between; they need to prepare the subject matter to be discussed; and they need to have access to as much meeting-related visual and auditory information as possible. Even then, however, certain features of the input have been identified as constituting particular problem triggers for simultaneous interpreters. These problem triggers do not appear to be limited to factors inhibiting ordinary language comprehension; rather, they only come to bear when multilingual language comprehension and language production overlap. They include, but are not limited to, the speed and density of the input, the presence of numbers in the source speech, complex syntactic structures and speakers' accents. What follows is a short discussion of how these triggers might affect simultaneous interpreting but not ordinary language comprehension.
Speed and density. Because of their interaction, the speed and density of the source speech should probably be considered together rather than separately: while discourse with low lexical density presented at a high speaking rate can be perceived as slow, discourse with high lexical density presented at a low speaking rate can be perceived as fast. Speed and density have been shown not to hinder ordinary language comprehension. Specifically, speaking rates between 100 and 200 words per minute are considered normal (Mayer, 1988), while discourse presented at up to 500 words per minute does not seem to significantly affect comprehension. Early practitioners recommended 100 to 120 words per minute as the ideal speaking rate for simultaneous interpreting, while AIIC advocates a speaking rate in a similar range (AIIC, n.d.). In reality, however, these recommended speaking rates are often substantially exceeded. At the Human Rights Council of the United Nations, for example, speaking rates have been found to average 150 words per minute and to reach almost 190 words per minute (Barghout et al., 2012). Discourse presented at this speed may well push the human brain to and beyond its limits and has been found to cause omissions, substitutions and pronunciation errors and to decrease anticipation accuracy.
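The two measures discussed above can be quantified for any timed transcript segment. The following Python sketch is purely illustrative and not drawn from the studies cited here; the simple stop-word list, the sample segment and the seven-second duration are assumptions made for the example.

```python
# Illustrative sketch: computing speaking rate (words per minute) and a rough
# lexical density (share of content words) for a timed transcript segment.
# The stop-word list is a simplifying assumption for demonstration only.

STOP_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "at",
    "is", "are", "was", "were", "be", "been", "it", "that", "this", "for",
}

def speaking_rate_wpm(transcript: str, duration_seconds: float) -> float:
    """Words per minute for a segment of known duration."""
    return len(transcript.split()) / (duration_seconds / 60.0)

def lexical_density(transcript: str) -> float:
    """Share of content (non stop-word) tokens in the segment."""
    words = [w.strip(".,;:!?").lower() for w in transcript.split()]
    content = [w for w in words if w and w not in STOP_WORDS]
    return len(content) / len(words) if words else 0.0

segment = ("The committee adopted the resolution on budgetary discipline "
           "and requested a revised implementation report for next session.")
print(f"rate: {speaking_rate_wpm(segment, 7.0):.0f} wpm")      # about 146 wpm
print(f"density: {lexical_density(segment):.2f}")              # about 0.65
```

A dense, terminology-heavy segment delivered at 146 words per minute may load the interpreter more heavily than a redundant segment delivered faster, which is why the two measures are best read together.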
Numbers are also known to cause problems for simultaneous interpreters, possibly because they differ from ordinary verbal input in at least three respects: conceptual substrate, frequency and imageability. Unlike ordinary words, numbers are not typically linked to any sort of conceptual representation (with noteworthy exceptions such as well-known dates), they often cannot be anticipated, they do not usually contain redundant information and they are generally not imageable beyond their visual numerical form. Language comprehension normally depends on the fact that, in order to produce meaningful words and sentences within a given language, sounds can only be combined in a finite number of ways. This means that a comprehender will encounter particular sequences of sounds, words or phrases more frequently than others. Owing to the incremental nature of language comprehension, the human brain exploits these frequency patterns to anticipate possible continuations of words and sentences and, by doing so, makes the process more efficient. Numbers, on the other hand, can be expressed in almost infinite combinations. They are therefore not subject to the same kind of frequency-contingent constraints and do not allow the same kind of anticipatory processing, which increases the processing load on the brain. Moreover, two or more words of a sentence might contain the same or very similar information, a phenomenon known as redundancy. Again, the human brain takes advantage of this phenomenon, as it is likely that information already processed can be integrated at less processing cost. During the processing of numbers, however, such redundancy does not exist, as every digit expresses a unique meaning. Finally, we know that imageable words, i.e. words describing concrete concepts that can readily be imagined, are easier to process than non-imageable words. Given that, with few exceptions, numbers are not imageable, it stands to reason that they might be inherently more difficult to process than other words. From this short discussion, we may conclude that numbers are more difficult to comprehend than ordinary verbal input. It seems plausible to assume that in a cognitively demanding task such as simultaneous interpreting, this difficulty is exacerbated. This conclusion is supported by experimental evidence showing that the quality of simultaneous interpretation decreases significantly in segments containing numbers.
From early reading comprehension studies, we know that some syntactic structures, e.g. nesting structures, are more difficult to process than others. Other structures, such as verb-final sentences (in languages allowing such constructions), counter-intuitively do not seem to affect the comprehension process. In fact, reading studies show an increase in reading speed (and therefore probably facilitated processing) towards the end of verb-final structures. It is thus likely that by the time readers arrive at the end of a sentence, lexical, contextual, computational and frequency-related constraints will have allowed the brain to narrow down the possibilities for the sentence-final verb to a minimum. Simultaneous interpreters who produce elements in their output before they have been uttered in the input, and who by doing so anticipate the speaker, may therefore merely verbalize elements that any ordinary comprehender would also have conceptualized.
To conclude from this evidence that syntactic differences between languages are irrelevant or negligible for the simultaneous interpreting process would be tenuous, however. Unlike readers or ordinary comprehenders, simultaneous interpreters rarely have the benefit of hearing an entire sentence up to the last missing word before starting their interpretation. Given that the average lag between the original input and the simultaneous interpreter's output is approximately two to four seconds, such syntactic differences are indeed likely to cause an increase in processing load, as supported by empirical evidence.
In order to discuss the difficulty that accented source speech may represent during simultaneous interpreting, it is important to define the concept unambiguously. In fact, while linguists use the notion of "accent" to refer to phonological and phonetic variations of speech, and thus to its phonological and prosodic features, professional interpreters often seem to include lexical and syntactic deviations in their understanding of foreign accents.
In practical terms, it may be true that non-native speakers display all the above deviations in their speech. In order to isolate individual problem triggers, however, it is important to differentiate between accents in the strict sense of the term, and grammatically, stylistically and idiomatically non-standard forms of a language. This is particularly true for English, which has become increasingly prevalent as the language used by speakers (regardless of their native language) at international conferences. English spoken by non-native speakers has recently been labeled English as a lingua franca (ELF), with some of its proponents advocating its recognition as a language in its own right (for an overview see Channing, 2005). Such non-standard forms of English constitute a challenge for simultaneous interpreters because they already represent a compound of potential individual problem triggers (e.g., non-standard use of lexicon, grammar, syntax, style, intonation and accent). At the same time, however, its heterogeneous nature makes ELF impossible to define conclusively and thus impossible to isolate as an individual problem trigger. I will therefore limit the discussion of accents to two types of speakers: firstly, proficient speakers of a standard variety of a language who, while producing grammatically and syntactically correct sentences, can be identified as non-native speakers on account of phonological and phonetic deviations; and secondly, native speakers of unfamiliar (regional) varieties of a language, such as Scottish English or Australian English. We know, for instance, that native listeners make more comprehension mistakes when listening to speakers of a second language and that speech processing is less efficient for unfamiliar native accents. This decline in processing efficiency is rather subtle in quiet conditions but becomes more noticeable under adverse listening conditions, possibly because phonetic cues relevant for comprehension are masked (ibid.). As simultaneous interpreting is a task that takes place under adverse listening conditions, with interpreters having to listen to both the original input and their own output, comprehension is likely to be negatively affected by accented speech. This rationale is borne out by empirical studies showing that interpreting accuracy decreases significantly with phonetic and prosodic deviations.
Stylistic problems
Many scholars [Bassnett, 2014: 134; Valdeon, 2017: 117; Baker, 2009: 186] note that socio-political translation remains a sphere that has not been studied fully in translation science. Bassnett argues that a socio-political text is created for a specific audience and that the main focus is directed to the study of the relationship with that audience through the analysis of class, gender, political and other factors. Higher education institutions that train translators, interpreters and linguists include courses in the translation of socio-political vocabulary in their curricula. It should be noted that most of the available manuals contain only practical material. It is also important to note that the features of socio-political translation are presented mainly in educational literature based on material from the Romance languages. The aim of the article is to investigate the theoretical foundations of socio-political discourse translation, in particular the translation of socio-political vocabulary, and to describe its characteristic features. Scholars have identified linguistic, textual, pragmatic, cultural, semiotic and extralinguistic problems in socio-political discourse translation, as well as problems related to the understanding of the text, which can be observed in the translation process as pauses, the translator's reference to dictionaries, the use of translation strategies, omissions in translation, corrections, commenting on the source, etc. The study of the scientific literature has shown that translation problems are analyzed in different aspects even within the framework of socio-political translation.
The linguistic problems faced by translators of socio-political texts are associated with a number of other problems (for example, how to convey signs of political and cultural difference in the target language and how to meet the expectations of the target audience), i.e. elements in the text that go beyond purely linguistic understanding become crucial. However, translators may also have difficulties when translating words and phrases that represent concepts existing in the source language but absent in the target language (political realia, non-equivalent vocabulary, neologisms, etc.). The analysis of English socio-political discourse and its official translations revealed three groups of units: words, phrases and sentences which have equivalent (dictionary) correspondences (cold war – холодная война, the Senator – сенатор, vice president – вице-президент); those for which the translator tries to find an equivalent from the set of meanings given in the dictionary or suggested by the context (rhetorical fire – сильное желание); and those for which the translator creates an equivalent of his or her own, i.e. translates a word that is not present in dictionaries, which requires the translator to apply certain interlanguage transformations ("get-out-the-vote" operations – активные действия, ориентированные на участие в процессе голосования).
1. "What does Washington do in between pointless votes to end Obamacare?" [Stromberg, URL, 2013] – Может ли Вашингтон таким образом положить конец бесполезным голосованиям против "Obamacare"?
2. "The White House keeps changing Obamacare" [Kliff, URL, 2013] – Белый дом продолжает вносить изменения в Obamacare.
3. "He has tried, so far unsuccessfully, to abolish Obamacare" – Он пытался, пока безуспешно, отменить Obamacare.
In modern media translations, this neologism is often left untranslated. In some cases, translators use explanations (see example 3). We reckon that not all representatives of Russian-speaking culture are aware of such foreign-language inclusions. To translate the word "Obamacare" correctly, let us break it down into its parts: Obama (the last name of the former US President) + care (care, attention, concern). The neologism is formed by analogy with the word healthcare, in which -care is a component of compound words with the meaning of attention, supervision or protection. The meaning of "Obamacare" is therefore the health policy pursued by Barack Obama.
4. "One administration official, who dealt with Kumar during the fiscal-cliff talks, called him an "evil genius" whom the White House never trusted totally but viewed as a principled opponent who knew the policy, had "a nose for the deal" and was always genuinely trying to defuse the bomb du jour" [Montgomery, URL, 2013].
The appearance of the term fiscal cliff, which refers to the situation associated with tax increases and spending cuts in the United States [Geiko, 2013: 60], is associated with the name of a political show called "Fiscal Cliff", which is mostly known to representatives of English-speaking culture.
5. "It would be wise for them, no matter how many consecutive years they have spent on the front benches, to take breaks to refresh themselves" – Для них было бы лучше сделать перерыв, чем соревноваться с его профессиональным опытом в Кабинете министров.

Simultaneous interpretation has been strongly linked to technology since its early days, serving speakers of different languages who participate in conferences and need to understand each other.
The emergence of new technological tools has meant the interpretation sector has been evolving and adapting to the increasing need for proximity and connection between people and services worldwide.
The advances in technology have, however, generated speculation within the industry about the future of simultaneous interpretation at conferences. Will it be an opportunity or a challenge for interpreters?
Simultaneous interpretation: a technology-dependent profession
Simultaneous interpreting is the most recent mode of interpreting and it is, at root, closely linked to technology, even dependent on its use for the success of the services provided by interpreters.
It first appeared in 1925 as an answer to the strenuous effort of consecutive interpreting (taking notes of a speech and translating it at the end) in the political meetings of the League of Nations.
The simultaneous interpreting proposal submitted by the US businessman Edward Filene suggested setting up a booth with microphones attached to headphones of the participants through an amplifier.
The idea was accepted and this system was successfully used in the League of Nations and later on by the United Nations and at the Nuremberg Trials, after World War II.
These interpreters were not initially isolated in a booth: they were organised into groups of three, behind a glass panel without sound insulation and with only poor-quality microphones and headphones.
The interpreters, who had no specific training and whose equipment still had to be tested, had to adapt to this innovation, which made available technology that eventually aided their work and almost replaced consecutive interpreting.
The natural evolution from early simultaneous interpreting at conferences was to place the interpreters in a space that is not shared with the speakers, i.e. the remote interpretation of the speeches and interventions at the conference.
Remote interpretation gained ground, especially from the 2000s onwards, with the introduction of better audio connections and video transmission, and in response to space constraints that required interpreters to be located in other rooms.
The present and the future of conference interpreting
Simultaneous interpreting at conferences is today radically different from that of the initial decades, even though technology has always been present: in the soundproof booths, in the interpreting consoles and in the audio connection to the headphones.
Over nearly one hundred years of simultaneous interpreting, interpreters have been adopting new technologies in their lives and in their work, innovating processes and creating new possibilities for remote interpretation.
Computerized systems have revolutionized remote interpreting, allowing interpreters to have more control over their notes and to link monitors showing video of the speakers and auxiliary presentations.
Internet use has, of course, become widespread in the preparation for interpreting assignments at conferences, for updating language skills and for providing access to dictionaries and other online terminology sources.
An interpreter can use a tablet or smartphone, both easy to carry and silent, to access note-taking apps and useful terminology for interpreting work on very specific topics with difficult vocabulary.
Technology also serves simultaneous interpreting by breaking down physical barriers, since the interpreter and the speakers can be a great distance apart. The internet can be used to establish a video and audio connection for the interpreting.
This is remote interpretation taken to another level: the interpreter does not have to travel to the venue, which saves time and money, and performs the interpretation through a video conference link as though present in the same room.
This method is widely used for less common languages and in hospitals, for example for patients who do not speak the language of the country, or even in courts, in trials of foreign defendants.
Other remote interpreting systems have been developed, such as video-based interpreting software in which participants in a web or telephone conference connect from a device and choose the language they want to speak in.
All communicate and have access to the communications of the other speakers in their own language, without any additional hardware or the assistance of an interpreter. These innovative systems work with the aid of machine interpreting software.
Machine interpreting is based on previously translated terms and a history that is being built up and improved, combining two existing technologies: machine translation and voice recognition software.
The technology captures the voice, converts the content into text, translates it and then formulates the content through an electronic voice in the target language.
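The pipeline described here chains three stages: speech recognition, machine translation and speech synthesis. The Python sketch below is purely illustrative and not drawn from any specific product; the SpeechRecognizer, Translator and SpeechSynthesizer interfaces are hypothetical placeholders for whichever engines a real system would plug in.

```python
# Illustrative sketch of the machine-interpreting pipeline described above:
# voice -> text (recognition), text -> text (translation), text -> voice
# (synthesis). All three interfaces are hypothetical placeholders, not the
# API of any real product.
from typing import Protocol


class SpeechRecognizer(Protocol):
    def transcribe(self, audio: bytes, language: str) -> str: ...


class Translator(Protocol):
    def translate(self, text: str, source: str, target: str) -> str: ...


class SpeechSynthesizer(Protocol):
    def synthesize(self, text: str, language: str) -> bytes: ...


def machine_interpret(audio: bytes,
                      source_lang: str,
                      target_lang: str,
                      asr: SpeechRecognizer,
                      mt: Translator,
                      tts: SpeechSynthesizer) -> bytes:
    """Chain the three stages: captured voice in, electronic voice out."""
    text = asr.transcribe(audio, language=source_lang)            # voice -> text
    translated = mt.translate(text, source=source_lang,
                              target=target_lang)                 # text -> text
    return tts.synthesize(translated, language=target_lang)       # text -> voice
```

In practice each stage adds latency and potential errors, which helps explain the limitations in speed and quality discussed below.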
Such machine interpreting software is multiplying, and it is moving ever closer to simultaneous interpreting at conferences as its processing speed increases and it becomes progressively better.
Despite the limitations in terms of efficiency and above all quality that still exist, interpreters see these technologies as an opportunity in the present and a challenge to be faced, due to the possibilities of greater automation in the future.

