Chapter II. Analysis of the Functional Aspect of Speech Sounds

2.1 Aspects of the Phoneme and their Critical Analysis

Speech sounds can be analysed from the viewpoint of three aspects: (1) acoustic, (2) physiological and articulatory, (3) functional.

Phonetics is connected with linguistic and non-linguistic sciences: acoustics, physiology, psychology, etc. Speech sounds have a number of physical properties, the first of which is frequency, i.e. the number of vibrations per second.

The vocal cords vibrate along the whole of their length, producing the fundamental frequency, and along varying portions of their length, producing overtones, or harmonics. When the vibrations produced by the vocal cords are regular, they create the acoustic impression of voice or musical tone. When they are irregular, noise is produced. When there is a combination of tone and noise, either tone or noise prevails: when tone prevails over noise, sonorants are produced; when noise prevails over tone, voiced consonants are produced.

The complex range of frequencies which make up the quality of a sound is known as the acoustic spectrum.

The second physical property of sound is intensity. Changes in intensity are perceived as variation in the loudness of a sound. The greater the amplitude of vibration, the greater the intensity of a sound; the greater the pressure on the ear-drums, the louder the sound. Intensity is measured in decibels (dB).
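The decibel scale mentioned above is logarithmic: the level in dB is proportional to the logarithm of the ratio between a sound's amplitude and a reference amplitude. A minimal sketch of this relationship (the function name and the choice of reference are illustrative, not part of the source text):

```python
import math

def relative_intensity_db(amplitude, reference_amplitude):
    """Relative sound level in decibels (dB) from an amplitude ratio.

    Doubling the amplitude raises the level by about 6 dB;
    a tenfold increase in amplitude raises it by exactly 20 dB.
    """
    return 20.0 * math.log10(amplitude / reference_amplitude)

print(relative_intensity_db(10.0, 1.0))           # tenfold amplitude: 20 dB
print(round(relative_intensity_db(2.0, 1.0), 2))  # doubled amplitude: ~6.02 dB
```

This is why loudness comparisons in phonetic measurement are reported as ratios in dB rather than as raw pressure values.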

Although acoustic descriptions, definitions and classifications of speech sounds are considered to be more precise than articulatory ones, they are practically inapplicable in language teaching, because the acoustic features of speech sounds cannot be seen directly or felt by the language learner. Acoustic descriptions, however, can be applied in the fields of technical acoustics. They are also of great theoretical value. The research work done in acoustic phonetics is connected with 1) the methods of speech synthesis and perceptual experiment for the study of cues of phonemic distinctions and for the exploration of differences in tone and stress; 2) the design of speech-recognizing machines, the teaching of languages, and the diagnosis and treatment of pathological conditions involving speech and language.



The future work in acoustic phonetics will be connected with brain functioning and artificial intelligence. «Experimentation will involve the whole of speech programming and processing, including the relations between the acoustic level of speech and operations at the grammatical, syntactical, lexical and phonological levels.»12 Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or in the case of sign languages, the equivalent aspects of sign. Phoneticians—linguists who specialize in phonetics—study the physical properties of speech. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved, such as how humans plan and execute movements to produce speech (articulatory phonetics), how different movements affect the properties of the resulting sound (acoustic phonetics), or how humans convert sound waves to linguistic information (auditory phonetics). Traditionally, the minimal linguistic unit of phonetics is the phone—a speech sound in a language—which differs from the phonological unit of phoneme; the phoneme is an abstract categorization of phones.

Phonetics broadly deals with two aspects of human speech: production—the ways humans make sounds—and perception—the way speech is understood. The communicative modality of a language describes the method by which the language is produced and perceived. Languages with oral-aural modalities such as English produce speech orally (using the mouth) and perceive speech aurally (using the ears). Sign languages, such as Auslan and ASL, have a manual-visual modality, producing speech manually (using the hands) and perceiving speech visually (using the eyes). ASL and some other sign languages have in addition a manual-manual dialect for use in tactile signing by deafblind speakers, where signs are produced with the hands and perceived with the hands as well.

Language production consists of several interdependent processes which transform a non-linguistic message into a spoken or signed linguistic signal. After identifying a message to be linguistically encoded, a speaker must select the individual words—known as lexical items—to represent that message in a process called lexical selection. During phonological encoding, the mental representations of the words are assigned their phonological content as a sequence of phonemes to be produced. The phonemes are specified for articulatory features which denote particular goals, such as closed lips or the tongue in a particular location. These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly the intended sounds are produced.
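The pipeline just described can be caricatured as two lookups in sequence: a message concept selects a lexical item, and the lexical item is then given its phonological form. A minimal sketch under invented data (both dictionaries and the transcriptions in them are illustrative, not drawn from the source):

```python
# Toy model of the production stages described above. The lexicon and the
# phonological forms are hypothetical examples, not a real speech model.
LEXICON = {"greeting": "hello", "farewell": "goodbye"}          # lexical selection
PHONOLOGICAL_FORMS = {                                           # phonological encoding
    "hello":   ["h", "ə", "l", "oʊ"],
    "goodbye": ["g", "ʊ", "d", "b", "aɪ"],
}

def encode(message):
    """Map a message to a phoneme sequence via lexical selection
    followed by phonological encoding."""
    word = LEXICON[message]
    return PHONOLOGICAL_FORMS[word]

print(encode("greeting"))
```

The real process is of course not a pair of table lookups, but the sketch makes the interdependence of the stages concrete: phonological encoding operates on the output of lexical selection, not on the message directly.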

These movements disrupt and modify an airstream which results in a sound wave. The modification is done by the articulators, with different places and manners of articulation producing different acoustic results. For example, the words tack and sack both begin with alveolar sounds in English, but differ in how far the tongue is from the alveolar ridge. This difference has large effects on the air stream and thus the sound that is produced. Similarly, the direction and source of the airstream can affect the sound. The most common airstream mechanism is pulmonic—using the lungs—but the glottis and tongue can also be used to produce airstreams.

Language perception is the process by which a linguistic signal is decoded and understood by a listener. In order to perceive speech the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes, and words. In order to correctly identify and categorize sounds, listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories. While certain cues are prioritized over others, many aspects of the signal can contribute to perception. For example, though oral languages prioritize acoustic information, the McGurk effect shows that visual information is used to distinguish ambiguous information when the acoustic cues are unreliable.

Modern phonetics has three main branches:



  • Articulatory phonetics which studies the way sounds are made with the articulators

  • Acoustic phonetics which studies the acoustic results of different articulations

  • Auditory phonetics which studies the way listeners perceive and understand linguistic signals.

2.2 The Functions of the Phoneme

The opposition of phonemes serves to distinguish the meaning of the whole phrase as well. For example, "He was heard badly" and "He was hurt badly".

Thus we may say that the phoneme can fulfill the distinctive function. Secondly, it is material, real and objective. It is realized in the speech of all English-speaking people in the form of speech sounds; that is, the allophones belonging to the same phoneme are not identical in their articulatory content, though there remains some phonetic similarity between them. The allophones which do not undergo any distinguishable changes in the chain of speech (dark, done, dot, etc.) are called principal. There are quite predictable changes in the articulation of allophones that occur under the influence of the neighbouring sounds (God thanks, riddle, etc.) in different phonetic situations. Such allophones are called subsidiary. The examples below show the articulatory modifications of the phoneme [d] in various phonetic contexts:


  • [d] is slightly palatalized before front vowels: deal, did;

  • [d] is pronounced without any plosion before another stop: bedtime, bad dog, good pat;

  • [d] is pronounced with nasal plosion before the nasal sonorants [n] and [m]: garden, admire, could not, etc.;

  • the plosion is lateral before the lateral sonorant [l]: middle, rapidly, good luck, etc.;

  • when [d] is followed by the labial [w] it becomes labialized: dweller, etc.;

  • when [d] is followed by the interdental [θ] or [ð] it becomes dental: bad things, read the text, etc.;

  • in initial position [d] is partially devoiced: doctor, deep, etc.;

  • in intervocalic position, or when followed by a sonorant, [d] is fully voiced: leader, drip;

  • in word-final position it is voiceless: road, caused, etc.

Thus, allophones of the same phoneme never occur in similar phonetic contexts; they are quite predictable according to the phonetic environment, but they cannot differentiate meanings.13
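Because the subsidiary allophones listed above are fully predictable from their phonetic context, they can be modelled as a simple lookup from context to realization. A toy sketch of that idea follows; the context labels and realization strings are illustrative simplifications invented for this example, not a complete rule system:

```python
# Context-to-allophone table for /d/, summarizing the list above.
# Labels and descriptions are illustrative, not a formal feature system.
D_ALLOPHONES = {
    "before front vowel":            "slightly palatalized [d]",   # deal, did
    "before another stop":           "[d] with no audible plosion",# bedtime
    "before nasal [n]/[m]":          "[d] with nasal plosion",     # garden
    "before lateral [l]":            "[d] with lateral plosion",   # middle
    "before labial [w]":             "labialized [d]",             # dweller
    "before interdental [θ]/[ð]":    "dental [d]",                 # bad things
    "word-initial":                  "partially devoiced [d]",     # doctor
    "intervocalic or before sonorant": "fully voiced [d]",         # leader
    "word-final":                    "voiceless [d]",              # road
}

def realization_of_d(context):
    """Return the predicted allophone for a named context,
    falling back to the principal allophone."""
    return D_ALLOPHONES.get(context, "principal allophone [d]")

print(realization_of_d("before lateral [l]"))
```

The determinism of the lookup mirrors the point of the passage: given the environment, the allophone is predictable, so the variants carry no distinctive (meaning-differentiating) load.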

2.3 The Main Problems of Phonological Analysis

Some scholars consider that the frequency of occurrence of phonemes and phonemic clusters may be a factor of stability in language, in the sense that frequent phonemes resist modification and modify the rare ones.

Consequently, the main problems of phonological analysis are as follows:

a) the identification of the phonemic inventory for each individual language;

b) the identification of the inventory of phonologically relevant (distinctive) features of a language;

c) the interrelationships among the phonemes of a language.

There is one more big problem in phonology: the theory of distinctive features.

It was originated by N.S. Trubetzkoy and developed by such foreign scientists as R. Jakobson, C.G.M. Fant, M. Halle, N. Chomsky, P. Ladefoged, H. Kučera, G.K. Monroe, and many Soviet/Russian phonologists, such as L.R. Zinder, G.S. Klychkov, V.Ya. Plotkin, Steponavičius and many others.

The first problem of phonological analysis is to establish the phonemes in a definite language. This can be carried out only by phonological analysis based on phonological rules. There are two methods of doing this: the distributional method and the semantic method.

The distributional method is based on the phonological rule that different phonemes can freely occur in one and the same position, while allophones of one and the same phoneme occur in different positions and, therefore, cannot be phonologically opposed to each other. The aim of the phonological analysis is, firstly, to determine which differences of sounds are phonemic (i.e. relevant for the differentiation of the phonemes) and which are non-phonemic, and, secondly, to find the inventory of the phonemes of this or that language.

"A number of principles have been established for ascertaining the phonemic structure of a language. For an unknown language the procedure of identifying the phonemes as the smallest language units has several stages.

1) The first step is to determine the minimum recurrent segments (segmentation of the speech continuum) and to record them graphically by means of allophonic transcription. To do this an analyst gathers a number of sound sequences with different meanings and compares them. For example, the comparison of [stɪk] and [stæk] reveals the segments [ɪ] and [æ]; the comparison of [stɪk] and [spiːk] reveals the segments [st] and [sp]; and the further comparison of these two with [tɪk] and [tæk], [sɪk] and [sæk] splits these segments into the smaller segments [s], [t], [p]. If we try to divide them further, there is no comparison that allows us to divide [s], [t] or [p] in two, and we have therefore arrived at the minimal segments. From this it follows that it is possible to single out the minimal segments by opposing them to one another in the same phonetic context or, in other words, in sequences which differ in one element only.

2) The next step in the procedure is the arranging of sounds into functionally similar groups. We do not know yet which sounds are contrastive in this language and which sounds are merely allophones of one and the same phoneme. There are two widely used methods of finding this out: the distributional method and the semantic method."14
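The segmentation step above rests on finding sequences that differ in exactly one element: minimal pairs. That comparison is mechanical enough to sketch in code. In this toy version (the lexicon, its segmentations, and the function name are invented for illustration), words are stored as tuples of segments and every pair differing in a single segment is reported:

```python
def minimal_pairs(lexicon):
    """Find pairs of transcribed words that differ in exactly one segment.

    `lexicon` maps a word to its transcription as a tuple of segments.
    Each pair found shows that the two differing segments contrast,
    i.e. realize different phonemes.
    """
    words = list(lexicon)
    pairs = []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            t1, t2 = lexicon[w1], lexicon[w2]
            if len(t1) == len(t2) and sum(a != b for a, b in zip(t1, t2)) == 1:
                pairs.append((w1, w2))
    return pairs

# Hypothetical mini-lexicon echoing the examples in the passage.
lexicon = {
    "stick": ("s", "t", "ɪ", "k"),
    "stack": ("s", "t", "æ", "k"),
    "sick":  ("s", "ɪ", "k"),
    "sack":  ("s", "æ", "k"),
}
print(minimal_pairs(lexicon))
```

Running this on the mini-lexicon pairs stick with stack and sick with sack, both hinging on the [ɪ]/[æ] difference, which is exactly the kind of evidence the procedure uses to establish /ɪ/ and /æ/ as distinct phonemes.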

The following outline provides an overview of theoretical and practical aspects of the different (initial) stages of phonological analysis: carrying out preliminary inquiries, performing phonetic transcription and analysis, and getting the analysis started. This outline should not be regarded as exhaustive and will need to be adapted to specific situations. It is a general plan of action.15

When seen from the point of view of the phonology, the suprasegmental features (i.e., pitch, duration, intensity) are involved in the realization of the prosodic structure of languages. This structure consists of the prosodic hierarchy and an autosegmental tonal structure. The prosodic hierarchy consists of a set of hierarchical constituents: mora, syllable, foot, phonological word, phonological phrase, intonational phrase, utterance, while additionally clitic group and accentual phrase have been argued for. These constituents may define the context of segmental or prosodic processes like consonant assimilation and stress shift, while the higher ones are characterized by preboundary lengthening. The tonal structure is a string of tones parallel to the segmental string. If tone enters into the specification of (also segmentally specified) morphemes, the language is a tone language. Languages vary in tonal density, or the number of locations where tone is specified. Pitch accent languages specify one location in the word for lexical tone. Focus, the marking of information status, is frequently expressed prosodically, for example, through phrasing, special pitch accents, or deaccentuation. The tonal phonology of languages will leave speakers free to use pitch variation for the expression of universal, ethologically developed meanings in the phonetic implementation.16

The study of a language's particular sound properties has a number of aspects. First, it must be able to characterize the language's inventory: which phonetically possible sound types occur in utterances in the language? Second, it must characterize matters of contrast: which of the phonetic differences that occur in the language can serve to distinguish utterances (words, sentences, etc.) from one another? Third is the matter of contextual limitation: even though some property P occurs in language L, are there some environments from which P is excluded? And when P is apparently a property of (some part of) some element but that element occurs in a position from which P is excluded, what other property—if any—appears in its place? And finally, there is the description of alternation: when the ‘same’ linguistic element appears in different overt forms in different environments, what systematic differences occur? What conditions govern the range of phonetically distinct forms that can count as the ‘same’ word, morpheme, etc.? Different answers to these questions yield different phonological theories.

It should be noted that the present article is limited to the sound structure of spoken languages, and ignores the expression systems of manual or signed languages (see Sign Language). This is misleading in important respects; research has shown that most of the basic principles of spoken language phonology are also characteristic of the organization of the expression systems of signed languages as well (Coulter 1993). Just as words are composed of sounds, and sounds of component properties, signs are also composed from structured, language-particular systems of more basic constituent elements. Units such as the syllable have close parallels in signed languages. While there are clear differences that depend on modality, these appear on further examination to be relatively superficial. A comprehensive theory of phonology as a part of the structure of natural language ought to take these broader issues into account. Until quite recently, however, the possibility of deep structural parallels between speaking and signing has not been raised, and the discussion below reflects this (undoubtedly unfortunate) limitation as well.

Prior to the early twentieth century, studies of sound in language concentrated on the ways in which sounds are made (articulatory phonetics), often confusing the letters of a language's writing system with its sounds. Toward the end of the nineteenth century, however, increasing sophistication of measurement techniques made it possible to explore a much wider range of differences among sounds, and to lay out the structure of speech in vastly greater detail. Somewhat ironically, perhaps, the explosion of data which resulted from the ability of phoneticians to measure more and more things in greater and greater detail began to convince them that they were on the wrong track, at least as far as increasing their understanding of particular languages was concerned.

Much of what was found, for example, involved the observation that speech is continuous, such that whatever is going on at any particular moment is at least a little different from what has gone on just before and what will go on immediately afterwards. A full characterization of an utterance as a physical event requires the recognition of a potentially unlimited number of distinct points in time, but it is clear that our understanding of an utterance as a linguistic event is hindered, rather than helped, by the recognition of this continuous character of speech. Speech normally is represented as a sequence of a small number of discrete segments, strung out in consecutive fashion like beads on a string; such a segmental representation vastly facilitates the discovery of regularity and coherence, but it must be emphasized that there is no direct warrant for it in either the acoustic or the articulatory observable data of speech, and it constitutes a fairly abstract (though undoubtedly appropriate) theory of how sound is organized for linguistic purposes.

It is clear that the role of particular sound differences varies considerably from one language to another. Thus, in English, the vowel sound in the word bad is much longer than that in bat (more than half again as long), but such a difference in length is always predictable as a function of the following sound, and never serves by itself to distinguish one word from another. In Tahitian, in contrast, essentially the same difference in length is the only property distinguishing, for example, paato ‘to pick, pluck’ from pato ‘to break out.’ A theory of sound that attends only to physical properties has no way of clarifying the quite different functions these properties may have across various languages. This is not to suggest that phonetics is wrong, but rather that there is more to be said.

The great Swiss linguist Saussure (1916) was the first to stress that in order to understand the role of sound in language it is necessary to focus not (just) on the positive properties of sounds, but on their differences. He suggested that in the study of individual languages, as opposed to general phonetics, utterances should be characterized in such a way that two such representations might differ only in ways that could potentially correspond to a difference between two distinct messages in the language in question. Thus, since long and short vowels never (by themselves) distinguish two distinct utterances in English, the difference should not be indicated in that language; while for Tahitian, it must be. A representation with this property will be called phonological; it will obviously be specific to a particular language, and the distinctive elements that appear in it can be called the phonemes of that language.

While de Saussure enunciated this principle quite forcefully and persuasively, he provided few specific details of just what a phonological representation should look like. There are in fact a variety of ways in which his insight could potentially be realized, and much subsequent discussion in phonology hinges on these differences of interpretation.

Various individual investigators arrived at conclusions similar to de Saussure's about the importance of attention to language-particular sound contrasts. One of these was the Polish linguist Baudouin de Courtenay (1972), whose work actually antedated de Saussure's, but attracted little attention due to his isolation in Kazan. He developed a sophisticated view of the relation between phonetics and phonology both in individual grammars and in linguistic change. As transmitted by his later students, Baudouin's views on the nature of the phoneme constituted an important strand in thinking about language as this developed in Russian linguistics in the early years of the twentieth century. This, in turn, provided the background from which the work associated with the Linguistic Circle of Prague grew in the 1920s and 1930s.

Two of the most prominent members of the Prague Circle were Trubetzkoy (1939) and Jakobson (1941). In their studies of Slavic languages and their histories, they stressed the notion that the collection of potentially contrastive sound types in a language was not simply an inventory, but a highly structured system. This system is organized in terms of a small number of mutually orthogonal dimensions (such as voicing, stop vs. continuant, nasality, etc.), each of which serves in parallel fashion as the basis of multiple contrasts. The notion that the fundamental terms of sound structure in language are these properties themselves and not (or at least not only) the complete sounds they characterize has remained an important component of most subsequent theorizing. The analysis of phonological structure in terms of its constituent basic contrasts, in turn, has served as a model for a variety of other disciplines in the Humanities and the Social Sciences apart from linguistics.

Early thinking about sound structure in America was dominated by the anthropological interests of Franz Boas, and focused on an accurate rendering of the sound contrasts in the comparatively ‘exotic’ indigenous languages of the new world. Boas's student Edward Sapir, however, was concerned to place the study of language in the broader context of an understanding of the human mind and society. As such, he stressed (Sapir 1925) the notion that the elements of sound contrast in a language should be regarded as having a primarily mental reality, part of the speaker/hearer's cognitive organization rather than as external, physical events.

The rise of positivist views of science in the 1930s, and especially of behaviorist psychology, made Sapir's sort of mentalism quite marginal, and replaced it with more rigorous operational procedures for investigating notions of contrast. Especially associated with the ideas of Bloomfield (1933) and later structuralists such as Bloch and Harris (1951), the result was a theory of the phoneme based exclusively (at least in principle) on a set of mechanical manipulations of corpora of observed linguistic data, from which a set of contrasting minimal elements was to be derived. The central notion of this theory was a phonemic representation related to surface phonetic form in a way that would later be formulated explicitly as a condition of Bi-uniqueness: the requirement that given either a phonetic or a phonemic representation of an utterance in a given language, that could be converted uniquely into the other (disregarding free variation) without additional information. The perceived rigor of this notion led to its being widely taken as a model not only for the study of other areas of linguistic structure, but for other sciences as well.

The phonemic theories of American Structuralists provided a way to characterize linguistic contrasts, the inventories of sound types used in a given language, and the ways sounds can be combined into larger structures, but other aspects of sound structure were less satisfactorily accommodated within those views. In particular, questions of the ways in which unitary meaningful elements change in shape according to their sound context (or ‘allomorphy’: see Morphology in Linguistics) failed to receive systematic treatment. Since any difference among sounds that could serve to contrast linguistic elements was ipso facto a difference between irreducibly basic terms, there was really no way to express the notion that a single item could take a variety of forms (as in the prefixes of inefficient, imprecise, irregular, illegal, etc.) except by simply listing the variants. Such a list is undoubtedly appropriate for cases such as the forms of English to be (am, are, is, was, were, etc.) which are unrelated to one another in form; but in many other cases, the variation is transparently systematic and a function of the sounds in the element's environment. This sort of variation was recognized by structuralist phonologists, but relegated to marginal status.

Beginning with work of Morris Halle, a student of Jakobson, linguists began to question the centrality of surface contrasts in sound structure. The result was a new view that allowed morphophonemic regularities as well as more superficial ones to be accommodated within a phonological description. The success of this more abstract notion of sound structure in dealing with hitherto irresolvable problems in the description of stress (see Suprasegmentals) contributed greatly to its success, and the resulting theory of Generative Phonology as developed in the work of Halle together with Noam Chomsky rapidly became the dominant view in the field by the middle of the 1960s.

A basic insight in the development of Generative Phonology was the proposal that it is not only the representation of linguistic elements in terms of basic contrasts that matters: an adequate theory must characterize what a speaker knows about the sound system of the language, and that includes regularities of variation and alternation as well as inventories of basic elements. Combining these two aspects of phonological knowledge required the explicit recognition of a system of rules (expressions of regular patterns of contrast and variation in sound shape) in addition to the theory of representations. Developing an adequate theory of phonological rules, in turn, necessitated a notion of phonological representation that was related to surface phonetic reality in a much more complex and indirect way than the phonemic representations of structuralist linguistics. The central problems of phonological theory came to be articulated in terms of the theory of rules, their nature, and their interaction, and the underlying phonological representations that need to be posited in order to allow them to be expressed in their full generality.

"In analyzing speech, phoneticians carry out both phonetic and phonological analyses. Phonetic analysis is concerned with the articulatory and acoustic characteristics of particular sounds and their combinations. Phonological analysis is concerned with the role of those sounds in communication. The main problems in phonological analysis are as follows: 1. The establishment of the inventory of phonemes of a certain language. (The inventory of phonemes of a language comprises all the phonemes of that language. Every language has its own inventory of speech sounds that it uses to contrast meaning. English has one of the larger inventories among the world's languages. Cantonese has up to 52 vowels when vowel + tone combinations are considered. Many languages include consonants not found in English.) 2. The establishment of the phonologically relevant (distinctive) features of a language. 3. The interrelationships among the phonemes of a language."17

CONCLUSION



To sum up, spoken language communication is arguably the most important activity that distinguishes humans from non-human species. This paper provides an overview of the review papers that make up this theme issue on the processes underlying speech communication. The volume includes contributions from researchers who specialize in a wide range of topics within the general area of speech perception and language processing. It also includes contributions from key researchers in neuroanatomy and functional neuro-imaging, in an effort to cut across traditional disciplinary boundaries and foster cross-disciplinary interactions in this important and rapidly developing area of the biological and cognitive sciences. While many animal species communicate and exchange information using sound, humans are unique in the complexity of the information that can be conveyed using speech, and in the range of ideas, thoughts and emotions that can be expressed.

