The Concept of Discourse Transcription: Using Jefferson Transcription to Teach Spoken Language

Abstract
This article examines the Jefferson Transcription System as a valuable tool for teaching spoken language in English, Uzbek, and Russian. It presents discourse transcription as a method that captures not only what is said but also how it is said, including such details as pauses, overlaps, intonation, and stress. The article highlights the system's usefulness in pedagogical contexts, where it gives language learners a deeper and more nuanced understanding of authentic spoken interaction. By carefully documenting features that are often lost in standard orthographic transcriptions, the Jefferson system makes it possible to analyze turn-taking, repair, and other conversational phenomena crucial to the development of communicative competence. The discussion covers specific Jefferson transcription conventions and demonstrates their practical application in multilingual language classrooms, fostering a more critical and detailed approach to mastering spoken language. In addition, the article argues that using this system supports the development of listening skills, pragmatic awareness, and intercultural competence by allowing learners to engage with the dynamics of real conversation and better understand speakers' communicative intentions.
Keywords: spoken language, transcription, pragmatic competence, Jefferson transcription, social interaction

What is discourse transcription? Discourse transcription can be defined as the process of creating a representation in writing of a speech event so as to make it accessible to discourse research (Du Bois et al. forthcoming a). What it means to make the event “accessible to discourse research” will of course depend on what kinds of research questions one seeks to answer. Although speech events are always viewed through the lens of some theory, one can try to ensure that the theory – the framework for explanation and understanding – does justice to the spoken reality, or rather, to selected aspects of it. The process of discourse transcription is never mechanical, but crucially relies on interpretation within a theoretical frame of reference to arrive at functionally significant categories, rather than raw acoustic facts (cf. Ochs 1979, Ladefoged 1990, Du Bois et al. forthcoming a). The nature of discourse transcription is necessarily shaped by its end.

Why transcribe? Transcription documents language use, but language use is attested equally in written discourse, which has the advantage of being easy to obtain without transcribing. What makes speaking worth the extra effort? Spoken language differs in structure from written language, in ways that remain surprisingly little studied: many aspects of spoken grammar, meaning, and even lexicon remain to be documented.
Moreover, it is in spoken discourse that the process of the production of language is most accessible to the observer. Hesitations, pauses, glottal constrictions, false starts, and numerous other subtle evidences observable in speech but not in writing provide clues to how participants mobilize resources to plan and produce their utterances, and to how they negotiate with each other the ongoing social interaction. Prosodic features like accent and intonation contour provide important indicators of the flow of new and old information through the discourse (Chafe, 1987). And the moment-by-moment flux of speech displays a rich index of the shifting social interactional meanings that participants generate and attend to, as well as of the larger dimensions of culture embodied in social interactions (Goodwin, 1981). A transcription of spoken discourse can provide a broad array of information about these and other aspects of language, with powerful implications for grammar, semantics, pragmatics, cognition, social interaction, culture, and other domains that meet at the crossroads of discourse. But discourse transcription cannot be equated with simply writing down speech, because there is not, nor ever can be, a single standard way of putting spoken word to paper.
An oral historian, a phonetician, a journalist, and a dialectologist will all produce very different renderings of the same recording, and a discourse researcher's transcription will differ yet from all of these. Indeed, because these other methodologies for writing speech are available, the discourse transcriber can defer to them when necessary, relying on the relevant specialist, equipped with the appropriate analytical framework and notational conventions, to deal with those features which are specific to his or her domain of inquiry. This is not to say that the domain of discourse is insulated from, say, phonetics, since clearly a small detail of pronunciation can ironically reverse a conveyed meaning, or even signal a speaker’s alignment with one conversational participant against a third. But at least the predictable regularities of phonetics, phonology, grammar and lexicon can generally be assumed to have been described elsewhere (perhaps even in a descriptive grammar written by the same researcher wearing another hat), so that these facts need not be recapitulated in the discourse transcription. The discourse transcription is designed for answering some but not all questions about speaking, and will necessarily contain both more and less information than another discipline’s representation of the same event.
What kinds of speech events will be transcribed? A conversation, classroom lecture, committee meeting, political speech, service encounter, or even just a few words exchanged hastily in the hallway might form the object of scrutiny. Each of these speech event types presents somewhat different demands but a good transcription system should be able to accommodate, with adaptation, if need be, the full range of events that are likely to be of interest. In many respects the most challenging case is the free-wheeling multi-party conversation, and any system that can meet its vast demands will have passed the severest test, and positioned itself well to handle other speech events that may be encountered. Who will use the transcriptions? Discourse researchers, of course, in all their variety. But these days their interest in discourse is shared by an ever-widening circle. Grammarians and general linguists use transcriptions as sources of linguistic data on a range of topics, and to follow the action in theories grounded in discourse; computational linguists use them to test speech recognition protocols against actual language use; language teachers use them to illustrate realistic uses of spoken language; social scientists use them for understanding the nature of social interaction; curious folks find it intriguing to look closely at how people really talk; and the students of any of these may use transcriptions to learn more about their field of study. And, as we shall see, one of the most important groups of users is the transcribers themselves. A good transcription system should be flexible enough to accommodate the needs of all of these kinds of users.
How will people use the transcriptions? The most fundamental thing they do is to read them, perhaps browsing through a transcription (or a stack of them) to look for a particular phenomenon or pattern, or to formulate a hypothesis. This requires the transcription not only to present the needed information but to present it in a way that is easily assimilated. Second, many users will want to search the transcription using a computer in conjunction with various kinds of data management software, which may include a word processor, a database manager, and a concordance maker, among other things. For this, the transcription should make all the necessary distinctions in ways that ensure that searches will be exhaustive and economical (Edwards 1989). It would be hard to overestimate the impact of the microcomputer on discourse transcription design; many of the possibilities, and many of the constraints, that are spoken about in this article would not exist if the computer had not in recent years made itself such an indispensable tool for many, though certainly not all, types of discourse research. This is not to say that the needs of the discourse researcher should be bent to the requirements of the machine; as Edwards has persuasively argued (1989), and as I will reaffirm below, an aware and purposeful pursuit of certain basic design principles can ensure from the outset that it is the computer that adjusts to the needs of the researcher. Finally, one key function that is often overlooked is embedded in the transcribing process itself. Through the experience of transcribing the transcriber is constantly learning about discourse, not only gaining skill in discriminating the categories implicit in the transcription system but also acquiring a vivid image of the conversational reality that he or she is seeking to represent.
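As a minimal illustration of the kind of computer-aided searching mentioned above (a hypothetical sketch, not a tool referenced in the text; the function name `kwic` and its parameters are assumptions for this example), a keyword-in-context concordance over transcript lines can be built in a few lines of Python:

```python
import re

def kwic(transcript_lines, keyword, width=30):
    """Return keyword-in-context (KWIC) entries for every hit of `keyword`.

    `transcript_lines` is a list of plain transcript lines; `width` is the
    number of characters of context kept on each side of the hit.
    """
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    hits = []
    for lineno, line in enumerate(transcript_lines, start=1):
        for m in pattern.finditer(line):
            left = line[max(0, m.start() - width):m.start()]
            right = line[m.end():m.end() + width]
            # Pad the context so hits line up in a column when printed.
            hits.append((lineno, left.rjust(width), m.group(0), right.ljust(width)))
    return hits

lines = [
    "A: well I (0.5) I think we should go,",
    "B: [yeah] I think so too.",
]
for lineno, left, word, right in kwic(lines, "think"):
    print(f"{lineno:>3} {left} {word} {right}")
```

Each hit is printed with its line number and aligned left and right context, which is exactly the browsing-friendly presentation a concordance maker provides.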
To the extent that there is more going on than the transcription system can capture, it is the transcriber, immersed in the recorded speech event and grounded in discourse theory, who is in a position to rectify this, to advance the potential of the transcription system and its theoretical framework. Although transcribing is sometimes thought of as a kind of manual labor, merely a necessary means of producing certain valuable end products, in reality the process itself has tremendous potential for enlightening its practitioners, and for generating the level of keen perception and intimate knowledge that can translate into theoretical insight and new research directions. With this in mind, the transcription system should contribute to making the transcribing process a valuable experience in itself. The system should be convenient and comfortable to use, reasonably easy to learn, and through its implicit categories it should promote insightful perception and classification of discourse phenomena, which in the end may feed back into advances in the system itself.
In the intricate tapestry of human communication, spoken discourse stands as a dynamic, multifaceted phenomenon. Yet, in conventional language pedagogy, its richness is often flattened by reliance on simplistic orthographic transcriptions. These conventional methods, while serving to capture the literal words, frequently overlook the vibrant, often subtle, non-lexical elements that imbue natural conversation with meaning, emotion, and interactional prowess. This is precisely where Jefferson Transcription, a highly detailed and rigorously developed system rooted in Conversation Analysis (CA), emerges as an invaluable tool, offering an unprecedented microscope for educators and learners to dissect, understand, and ultimately master authentic spoken interaction. For nations like Uzbekistan, actively engaged in enhancing communicative competence and critical thinking skills among their citizens, the strategic implementation of Jefferson Transcription holds the key to a truly transformative approach to language education.
Jefferson Transcription operates on the premise that every observable detail in spoken interaction is potentially meaningful. It meticulously chronicles not just the utterances themselves, but the precise timing and manner of their delivery. This goes far beyond the semantic content to encompass:
- The system precisely marks pauses of varying durations (micropauses, timed pauses, extended silences) and instances of overlap and latching. These seemingly minor details are, in fact, the very building blocks of turn-taking. By visualizing these temporal dynamics, students can discern how speakers navigate conversational space, negotiate who speaks when, and how silences can be laden with meaning (e.g., signaling hesitation, topic shift, or even a challenge). The absence of a discernible gap (latching) reveals a seamless, often cooperative, transition, while overlaps can indicate enthusiastic agreement, competitive vying for the floor, or simultaneous expressions of emotion.
- Jeffersonian notation meticulously captures the intonational contours (rising, falling, level arrows) and pitch shifts (up/down arrows) that shape the emotional landscape of speech. Furthermore, volume variations (capitalization for increased loudness, degree signs for quiet speech) and emphatic stress (underlining) are precisely marked. These features are critical for conveying sincerity, sarcasm, questioning, assertion, or emotional states. Students learn that “Yes” with a falling intonation signifies agreement, while “Yes?” with a rising intonation denotes a question or surprise, allowing them to decode and replicate the full spectrum of communicative intent.
- The system integrates symbols for a range of non-linguistic vocalizations, such as laughter (h), in-breaths (.hhh), out-breaths (hhh), sniffles, and even creaky voice (#word#) or shaky voice (~word~). These paralinguistic cues are not mere incidental noises; they are integral to the meaning-making process, conveying emotional states, marking transitions, or even serving as interactional responses. By analyzing these elements, learners grasp the holistic nature of spoken communication, understanding how the entire vocal apparatus contributes to the message.
- Jefferson Transcription meticulously documents cut-offs (-), hesitations (um, uh), and speech rate variations (>word<, <word>). These features offer a window into the speaker's cognitive processes during real-time speech production. Students can observe how speakers self-correct, plan their utterances, or articulate their thoughts under pressure. This provides valuable insights into fluency, disfluency, and the very real-time, often imperfect, nature of spontaneous talk.
By meticulously capturing these elements, Jefferson Transcription transforms ephemeral spoken interactions into tangible, analyzable data, allowing for a level of detailed scrutiny unachievable with simpler transcription methods.
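To make the conventions above concrete, the following Python sketch counts a simplified subset of Jeffersonian marks in a single transcript line. The symbol inventory and the regular expressions are illustrative assumptions for this example, not the full system (which, for instance, marks overlap onsets and offsets across speaker lines):

```python
import re

# A simplified, illustrative subset of Jeffersonian conventions.
FEATURES = {
    "timed_pause": r"\((?:\d+\.)?\d+\)",  # (0.5), (2.0)
    "micropause":  r"\(\.\)",             # (.)
    "overlap":     r"\[[^\]]*\]",         # [yeah]  (simplified: same line)
    "latching":    r"=",                  # =
    "cut_off":     r"\w+-(?=\s|$)",       # wor-
    "in_breath":   r"\.h{2,}",            # .hhh
    "quiet":       r"°[^°]+°",            # °word°
}

def feature_counts(line):
    """Count occurrences of each marked feature in one transcript line."""
    return {name: len(re.findall(pat, line)) for name, pat in FEATURES.items()}

line = "A: I- (0.5) I mean .hhh [we could]= °maybe° go (.) now"
print(feature_counts(line))
```

A counter like this can help students check their own transcripts for consistency, or tally how often a given feature occurs across a recorded conversation.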
The profound level of detail offered by Jefferson Transcription translates into a myriad of pedagogical benefits for teaching spoken discourse:
- Beyond simply identifying words, students are trained to listen for how words are delivered. The explicit notation of pauses, overlaps, intonation, and emphasis compels learners to attend to these critical acoustic features. This cultivates a heightened sensitivity to the subtle cues that often carry significant interactional weight, enabling them to discern politeness, sarcasm, certainty, or hesitation that might be missed in a purely orthographic transcript. This deep listening is foundational for effective communication.
- Traditional pronunciation instruction often focuses on isolated sounds and word stress. Jefferson Transcription provides a visual blueprint for connected speech, rhythm, and intonation patterns as they occur in natural conversation. Students can analyze how native speakers use intonation to ask questions, express surprise, or convey finality. They can observe where emphasis falls naturally and how speech rate varies. This allows for targeted practice in mimicking natural prosodic contours, leading to more authentic and intelligible speech, moving them beyond an accented, albeit grammatically correct, delivery.
- Understanding what to say is only half the battle; knowing how to say it appropriately in different social contexts is crucial. Jeffersonian transcripts are a goldmine for teaching pragmatic competence. Students can examine how participants accomplish social actions like making requests, giving compliments, offering apologies, or even disagreeing politely. They observe how timing, intonation, and hesitation can soften a request or make a criticism more palatable. This helps learners internalize the unwritten rules of social interaction, enabling them to navigate complex communicative situations with greater confidence and appropriateness, a critical skill for interactions in diverse social and professional settings within Uzbekistan and beyond.
- One of the most powerful applications of Jefferson Transcription is enabling students to transcribe their own spoken interactions (e.g., group discussions, role-plays, short interviews). This process fosters remarkable self-awareness. By seeing their own speech on paper with all its pauses, overlaps, and intonational patterns, learners can identify their personal communication habits, areas of disfluency, or pragmatic missteps. This reflective practice leads to highly targeted self-correction and a deeper understanding of their own evolving communicative competence, promoting true learner autonomy.
- Unlike carefully scripted textbook dialogues, Jeffersonian transcripts allow educators to bring authentic, naturally occurring spoken data into the classroom. This exposes students to the “messiness” of real conversation – false starts, repairs, overlaps, and non-standard grammar – which are all integral parts of everyday talk. Analyzing such data prepares students for the unpredictable nature of real-world communication, bridging the gap between classroom learning and practical application. This is particularly valuable in a multilingual context like Uzbekistan, where learners may encounter a variety of speech patterns and code-switching phenomena in daily life.
- The act of transcribing using Jeffersonian conventions is a demanding intellectual exercise. It requires meticulous attention to detail, careful observation, and constant analytical decision-making. Students are not just copying words; they are interpreting and representing interactional phenomena. This process hones their analytical abilities, critical observation skills, and their capacity to infer meaning from subtle cues, preparing them not just for language use but for broader academic and professional endeavors.
- For learners who are highly visual, Jefferson Transcription provides a concrete, graphical representation of speech that can be challenging to grasp purely aurally. It makes the ephemeral tangible, offering a different pathway to understanding complex linguistic phenomena. This visual aid can be particularly beneficial for students who might struggle with rapid auditory processing, providing a structured framework for analyzing the flow of conversation at their own pace.
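The self-transcription practice described above lends itself to simple quantitative self-audits. As a hedged sketch (the marker set `HESITATIONS` is an illustrative assumption, not a standard inventory), a learner could compute a hesitation rate from their own transcript:

```python
import re

HESITATIONS = {"um", "uh", "er", "erm"}  # illustrative marker set

def disfluency_rate(transcript):
    """Hesitation markers per 100 words in a learner's own transcript."""
    # Keep only word-like tokens; Jeffersonian symbols such as (0.8) are ignored.
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    if not words:
        return 0.0
    markers = sum(1 for w in words if w in HESITATIONS)
    return 100.0 * markers / len(words)

sample = "um I think (0.8) we should uh maybe start with the um introduction"
print(f"{disfluency_rate(sample):.1f} hesitations per 100 words")
```

Tracking such a figure across several recordings gives students a concrete, if rough, measure of their developing fluency to pair with the qualitative analysis of their transcripts.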
While the benefits are compelling, it's important to acknowledge the practical challenges of integrating Jefferson Transcription:
- Both teachers and students will require initial training and practice to become proficient in using the conventions. The transcription process itself is significantly more time-consuming than standard orthographic transcription.
It’s unlikely that Jefferson Transcription would replace all forms of spoken discourse practice. Instead, it is most effectively used for focused analysis of specific, short segments of authentic data, perhaps as part of a dedicated pragmatics or advanced speaking course.
- Access to clear audio/video recordings of natural conversation is essential, as is the time and technological support to process and share these with students.
Despite these hurdles, the profound insights gained from engaging with Jefferson Transcription make the investment worthwhile. For educators in Uzbekistan, committed to producing graduates with robust communicative abilities and critical thinking skills, strategically incorporating this powerful tool can be a game-changer. By moving beyond the surface level of spoken words to unravel the intricate dance of interaction, we empower learners to become not just speakers, but truly competent, nuanced, and insightful communicators in an increasingly interconnected world.
References
Chafe, W. (1987). Cognitive constraints on information flow. In R. S. Tomlin (Ed.), Coherence and grounding in discourse (pp. 21-51). Amsterdam: John Benjamins.
Du Bois, J. W., Schuetze-Coburn, S., Paolino, D., & Cumming, S. (forthcoming a). Outline of discourse transcription. In J. A. Edwards & M. D. Lampert (Eds.), Talking data: Transcription and coding in discourse research. Hillsdale, NJ: Lawrence Erlbaum Associates.
Edwards, J. A. (1989). Transcribing natural discourse. Paper presented at the International Pragmatics Conference, Antwerp.
Edwards, J. A. (forthcoming). Principles for the transcription of discourse. In J. A. Edwards & M. D. Lampert (Eds.), Talking data: Transcription and coding in discourse research. Hillsdale, NJ: Lawrence Erlbaum Associates.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Ladefoged, P. (1990). A course in phonetics (3rd ed.). San Diego, CA: Harcourt Brace Jovanovich.
Ochs, E. (1979). Transcription as theory. In E. Ochs & B. B. Schieffelin (Eds.), Developmental pragmatics (pp. 43-72). New York: Academic Press.
Copyright (c) 2025 Nezire Abduramanova

This work is licensed under a Creative Commons Attribution 4.0 International License.