The difference between Phonetic and Phonological transcriptions
Plan:
Abstract:
Keywords:
Introduction
Main part:
1. There are three main differences between phonemic and phonetic transcriptions
2. Some of the most prominent differences between phonetics and phonology can be elaborated as follows
3. What is phonological transcription and what are its rules

Conclusion:


References:

Abstract
Phonetics and phonology are two distinct but equally important fields. Phonology is a subfield of linguistics that studies how sounds work in a language, or in languages in general, at the mental and abstract level, while phonetics studies the physiological and acoustic nature of sounds. The term phonetics comes from the Greek phōnḗ, meaning "sound" or "voice". Phonetics is the study of the physical sounds of human language: it is the branch of linguistics responsible for studying the production and perception of the sounds of a language in relation to their physical manifestations.
In other words, phonetics is the study of human speech sounds; it deals with the production, or articulation, of speech sounds.
Keywords
phonology, phonetics, physiological and acoustic nature of sounds, perception of sounds, phonetic transcription, the IPA, interpretation, stressed syllable, homonyms

Introduction


As far as I know, there are three main differences between phonemic and phonetic transcriptions:


Phonetic transcriptions deal with phones, or sounds, which can occur across different languages and speakers of those languages all over the world. Phonemic transcriptions, on the other hand, deal with phonemes, which, if replaced, can change the meaning of the words that contain them; for example, /bɪt/ and /pɪt/.
Phonetic transcriptions provide more details on how the actual sounds are pronounced, while phonemic transcriptions represent how people interpret such sounds.
We use square brackets to enclose phones or sounds and slashes to enclose phonemes. So, for instance, it would be incorrect to place the aspirated allophone of the phoneme /t/ between slashes; it should go inside square brackets, because it is not a phoneme (which means that swapping it for the unaspirated variant wouldn't change the meaning of the word that contains it).
Your characterisation is basically right, but it could be emended a bit to reflect different ideologies of phonology (since there isn't just one). The most important question is whether or not there are "phones" (or whether phones have a radically different status). A "phone" might be a real mental unit (not physically tangible), or it might be a conventionalised representation of a part of an acoustic waveform. Substance-free theories especially favor the latter interpretation, and SPE-era generativists favor the former (though neither particularly favors the use of the term "phone"). SPE theoreticians do hold that "phones" are comparable across languages, so that you can say that this "m" and that one are "the same", but the substance-free response to that claim is the same as a phonetician's response would be: producing a graph of the differences between the waveforms.
I don't think the "change the meaning of words" characterization of the phoneme is right, in that it misses the essential property of phonemes and focuses on a possible consequence. Phonemes are whatever underlying sounds exist in a language; they are the building blocks of lexical and grammatical formatives. Other sounds come into existence by the application of rules. That is it. The question then arises as to how we know what those underlying sounds are, and a number of operational tests have been called upon: one of them is asking whether there are any two words that differ in just the choice of a particular pair of sounds (i.e. a "minimal pair"). Typically, people conceptualize "two words" as having different meanings. So the "change meaning" characterization is sufficient for phoneme identification, but it isn't necessary.
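To make the minimal-pair test concrete, here is a minimal sketch in Python (not from the original text): the tiny word list is invented, and comparing strings code point by code point stands in for a proper segment-by-segment comparison.

```python
from itertools import combinations

# Toy phonemic word list, invented for illustration.
WORDS = ["bɪt", "pɪt", "bæt", "pæt", "bɛt"]

def minimal_pairs(words):
    """Return word pairs that differ in exactly one segment."""
    pairs = []
    for w1, w2 in combinations(words, 2):
        if len(w1) != len(w2):
            continue
        diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
        if len(diffs) == 1:  # exactly one differing segment: a minimal pair
            pairs.append((w1, w2, diffs[0]))
    return pairs

for w1, w2, (s1, s2) in minimal_pairs(WORDS):
    print(f"/{w1}/ vs /{w2}/ -> /{s1}/ and /{s2}/ appear to contrast")
```

Finding such a pair is sufficient evidence for a contrast; as noted above, though, failing to find one does not prove that two sounds belong to the same phoneme.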
There are two types of phonetic transcription:
· Broad phonetic transcription
· Narrow phonetic transcription
It is important to understand the difference between a narrow transcription and a broad one. The term narrow is applied to a transcription which contains a certain amount of phonetic detail: the narrower a transcription is, the more phonetic detail it contains and the more diacritics and special symbols it requires. This kind of transcription is a phonetic transcription and is placed between square brackets ([...]). A broad transcription shows an absence of phonetic detail. The broadest transcription contains only phonemes; it is referred to as a phonemic transcription and is written between slashes (/.../).
Phonetic transcription may aim to transcribe the phonology of a language, or it may go further and specify the precise phonetic realisation. In all systems of transcription we may therefore distinguish between broad transcription and narrow transcription. Broad transcription indicates only the more noticeable phonetic features of an utterance, whereas narrow transcription encodes more information about the phonetic variation of the specific allophones in the utterance. The difference between broad and narrow is a continuum. One particular form of broad transcription is a phonemic transcription, which disregards all allophonic differences and, as the name implies, is not really a phonetic transcription at all, but a representation of phonemic structure.
For example, one particular pronunciation of the English word little may be transcribed using the IPA as /ˈlɪtəl/ or [ˈlɪɾɫ̩]; the broad, phonemic transcription, placed between slashes, indicates merely that the word ends with the phoneme /l/, but the narrow, allophonic transcription, placed between square brackets, indicates that this final /l/ ([ɫ]) is dark (velarized).
The advantage of the narrow transcription is that it can help learners to get exactly the right sound, and allows linguists to make detailed analyses of language variation. The disadvantage is that a narrow transcription is rarely representative of all speakers of a language. Most Americans and Australians would pronounce the /t/ of little as a tap [ɾ]. Some people in southern England would say /t/ as [ʔ] (a glottal stop) and/or the second /l/ as [w] or something similar. A further disadvantage in less technical contexts is that narrow transcription involves a larger number of symbols that may be unfamiliar to non-specialists.
The advantage of the broad transcription is that it usually allows statements to be made which apply across a more diverse language community. It is thus more appropriate for the pronunciation data in foreign-language dictionaries, which may discuss phonetic details in the preface but rarely give them for each entry. A rule of thumb in many linguistic contexts is therefore to use a narrow transcription when it is necessary for the point being made, but a broad transcription whenever possible. Broad phonetic transcription does not attempt to record the extremely large number of idiosyncratic or contextual variations in pronunciation that occur in normal speech, nor does it attempt to describe the individual variations that occur between speakers of a language or dialect. The goal of a broad transcription is to record the phonemes that a speaker uses rather than the actual spoken variants of those phonemes that are produced when a speaker utters a word.
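As a rough illustration of how a narrow transcription can be collapsed toward a broad one, here is a minimal Python sketch; the allophone-to-phoneme table is a toy stand-in, since real phonemicization needs language-specific, context-sensitive rules rather than a character table.

```python
import unicodedata

# Toy allophone-to-phoneme map, invented for this example only.
ALLOPHONE_TO_PHONEME = {
    "ɾ": "t",  # alveolar tap, as in American English "little"
    "ɫ": "l",  # dark (velarized) l
    "ʔ": "t",  # glottal stop for /t/ in some accents
}

def broaden(narrow: str) -> str:
    """Strip combining diacritics and map allophone symbols to phonemes."""
    decomposed = unicodedata.normalize("NFD", narrow)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return "".join(ALLOPHONE_TO_PHONEME.get(ch, ch) for ch in stripped)

# [ˈlɪɾɫ̩] -> /ˈlɪtl/; recovering the schwa of /ˈlɪtəl/ would need a
# further rule restoring a vowel before the formerly syllabic l.
print(f"/{broaden('ˈlɪɾɫ̩')}/")
```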

I would not agree with your second interpretation. Actual sound, which is studied in phonetics, is both produced and perceived in real life. Perception involves hearing, and perception is broader than the concept of "phoneme" (e.g. it also applies to cognitive states in humans that arise from being exposed to non-linguistic and non-human sound). Interpretation likewise is something that we do to anything that we have perceived. In ordinary language, we would talk of how speakers "interpret" a given acoustic waveform as referring to "do" vs. "two". What you are saying in point 2 most closely resembles the distinction between phonetics and phonology, but that's more at the level of introductory linguistics, where we teach sound bites that are memorable but not really true.


Phonologists and phoneticians generally (though not universally) recognize a distinction between the discrete, symbolic and the continuous, physical aspects of speech: formant measurements are an example of the latter, and the vowel distinction [i] vs. [ɪ] is an example of the former. Any transcription is, by necessity, a discrete symbolic representation. Up to a point, a transcription can give you more detail about the physical sound, but there is a huge limit on the precision possible with a transcription, which numeric measurements go far beyond.
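The discrete/continuous split can be illustrated with a toy classifier that maps continuous formant measurements onto the two symbolic labels; the centroid values below are rough textbook-style figures, not measurements from any particular study.

```python
import math

# Rough (F1, F2) centroids in Hz for two vowel categories; illustrative only.
CENTROIDS = {"i": (280, 2250), "ɪ": (400, 1920)}

def classify(f1: float, f2: float) -> str:
    """Collapse a continuous measurement onto the nearest symbolic label."""
    return min(CENTROIDS, key=lambda v: math.dist((f1, f2), CENTROIDS[v]))

print(classify(300, 2200))  # -> 'i'
print(classify(390, 1950))  # -> 'ɪ' : infinitely many (F1, F2) points
                            #    reduce to just two discrete symbols
```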
The use of square vs. slash brackets is also problematic. I will disregard the problem that many people just don't care what brackets are used and use them indiscriminately. Square brackets represent something that is closer to the physical output, and slash brackets are used to represent something that is further from the physical output. The standard generative convention is that slash brackets go around underlying forms and square brackets go around surface forms. Underlying is not the same as phonemic (note that generative phonology à la Halle rejects the concept of "phoneme" as a distinct representational level). This does lead to a quandary in representing intermediate forms, for example /apa/ → aba → [ab], where aba is neither the underlying form nor the surface form.
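The three-level picture can be mimicked with a small sketch of ordered rule application; the rule contexts (intervocalic voicing feeding final-vowel deletion) are hypothetical reconstructions of the /apa/ → aba → [ab] example, not rules of any particular language.

```python
import re

# Hypothetical ordered rules for the /apa/ -> aba -> [ab] derivation.
RULES = [
    ("intervocalic voicing", lambda f: re.sub(r"(?<=a)p(?=a)", "b", f)),
    ("final vowel deletion", lambda f: re.sub(r"a$", "", f)),
]

def derive(underlying: str) -> str:
    form = underlying
    print(f"/{form}/")  # underlying form: slash brackets
    for name, rule in RULES:
        new_form = rule(form)
        if new_form != form:
            print(f"  --{name}--> {new_form}")  # intermediate: no brackets
        form = new_form
    print(f"[{form}]")  # surface form: square brackets
    return form

derive("apa")  # prints /apa/, then aba and ab, then [ab]
```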
You may have noticed that in American English there is a difference in the pronunciation of /t/ in "militaristic" vs. "capitalistic": it is aspirated in "militaristic" and flapped in "capitalistic". The governing factor is stress and syllable position, and it reduces to the fact that "militaristic" is from "military" while "capitalistic" is from "capital" -- the stress patterns of those words differ. So the distinction is derived by applying the relevant rules to get ˈmɪlɪˌtɛri+ɪstɪk, whence ˈmɪlɪˌtʰɛri+ɪstɪk, vs. ˈkæpɪtl̩+ɪstɪk, whence ˈkʰæpɪtl̩+ɪstɪk. But -istic also causes a stress shift, and thus you get a surface contrast in aspiration vs. flapping. The phonetic outputs are [ˌkʰæpɪtl̩ˈɪstɪk] and [ˌmɪlɪtʰɛˈrɪstɪk]. The intermediate form contains a non-phoneme, so it shouldn't be in slash brackets, but it isn't an actually pronounced form, so it shouldn't be in square brackets either.
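Here is a minimal sketch of the stress-conditioned aspiration step in these derivations, assuming the simplified environment "a voiceless stop immediately after a stress mark"; the real rule refers to syllable structure, not to transcription symbols.

```python
import re

def aspirate(form: str) -> str:
    """Aspirate p, t, k when they immediately follow a stress mark
    (a deliberate simplification of the stressed-syllable-onset rule)."""
    return re.sub(r"(?<=[ˈˌ])([ptk])", r"\1ʰ", form)

print(aspirate("ˈmɪlɪˌtɛri+ɪstɪk"))  # -> ˈmɪlɪˌtʰɛri+ɪstɪk
print(aspirate("ˈkæpɪtl̩+ɪstɪk"))    # -> ˈkʰæpɪtl̩+ɪstɪk
```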
If "phones" or "allophones" i.e. actual physical outputs are the things in square brackets, then rules of grammar cannot refer to them, because rules of grammar are mental operations on mental objects. A reasonable amount of experimental evidence is emerging to show that the notion of "allophone" or "phone" is suspect, in that it conflates two different things. One is that languages can have distinct sets of sounds which are not yet exploited to make lexical distinctions (so that one sound is a rule-derived variant of the other). Vowel nasalization in Sundanese is a prime example. The other is that languages can have physical variations of sounds which are gradient in degree and timing, which resemble categorial distinctions in sounds in other languages -- patterns of nasal airflow in English are a prime example. Nasal airflow patterns in English really require physiological investigation because you can't hear the point at which air flows through the nose, or when it reaches its peak. You can, however, hear in Sundanese whether a vowel is nasal or oral, just as you can hear that distinction in French (the difference being that in French, vowel nasalization is phonemic, but it is not in


