Machine Transcription for Call Center Efficiency


Explore the benefits of machine transcription in call centers for improving processes like scripted responses, identifying new questions, and monitoring agent performance. Learn how developing a transcription baseline helps evaluate machine transcription accuracy, enhancing customer experience. Discover the role of machine transcription in the 2020 Census operations and how it aids in automating processes to handle high call volumes effectively.




Presentation Transcript


  1. Developing a Manual Spanish Transcription Baseline to Evaluate Machine Transcription of Call Center Calls
Marcus Berger, Betsar Otero Class, Crystal Hernandez
Center for Behavioral Science Methods, U.S. Census Bureau
FedCASIC, Virtual Conference, April 11, 2023
Disclaimer: This presentation is released to inform interested parties of research and to encourage discussion. The views expressed are those of the authors and not those of the U.S. Census Bureau. The presentation has been reviewed for disclosure avoidance and approved under CBDRB-FY23-CBSM002-011.

  2. Advantages of Machine Transcription
Machine transcriptions are useful for keeping records of otherwise unwritten products: podcasts, interviews, and other recordings; legal depositions; call centers.
Transcriptions are also needed for machine learning models that improve processes like:
- Finding the correct scripted answer to a caller's question
- Identifying new questions that callers might have
- Monitoring agent performance and script readability

  3. Machine Transcription
As part of 2020 Census operations, we collected audio data from calls made to our Census Questionnaire Assistance (CQA) Centers. These are call centers across the country with live agents providing assistance in English and 12 non-English languages; the different available languages had separate phone lines.
Due to the volume of calls, manual transcription is not feasible. Machine transcription will aid in analysis of these calls to automate processes and improve the customer experience (we have another presentation on this tomorrow).

  4. Developing a Machine Transcription Baseline
We tested several machine transcription models to determine which model offered the best transcriptions in English and Spanish. We needed a high-quality, human-generated transcript to use as a baseline to compare against; this provides a ground truth against which to categorize errors. We compare manual and machine transcription based on word error rate.
Example - manual baseline: "U.S. Census Bureau"; machine transcription: "U.S. Sentence Bureau".
As part of an earlier phase of this project, we found the best model for calls in English.
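The word error rate comparison can be made concrete with a small sketch. The function below is not the authors' tooling, just a minimal word-level Levenshtein implementation; the example strings are the ones from the slide.

```python
# Minimal word error rate (WER) sketch: word-level Levenshtein distance between
# a machine transcript and the manual baseline, divided by the baseline length.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Slide example: "Census" misrecognized as "Sentence" is 1 error in 3 words.
print(word_error_rate("U.S. Census Bureau", "U.S. Sentence Bureau"))  # ~0.33
```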

  5. Developing a Machine Transcription Baseline
Spanish transcription: we manually transcribed segments of Spanish-language calls made to CQA Centers during 2020 Census operations, pulled from over 400,000 total Spanish-language calls (over 350,000 stateside, over 50,000 from Puerto Rico).
We transcribed 30-second call segments:
- 1 hour of audio from stateside callers and Customer Service Representatives (CSRs), for a total of 120 segments
- 15 minutes of audio from the Puerto Rican Spanish line, for a total of 30 segments
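As a rough illustration of how such a stratified draw could be scripted (an assumption on our part, not the authors' actual sampling code; the ID lists and seed are placeholders), one might sample call IDs separately for each line:

```python
import random

# Hypothetical sketch: draw calls separately for the stateside and Puerto Rico
# Spanish lines, matching the segment counts reported on the slide.
def sample_calls(stateside_ids: list, puerto_rico_ids: list, seed: int = 2020) -> dict:
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {
        "stateside": rng.sample(stateside_ids, 120),     # ~1 hour of 30-second segments
        "puerto_rico": rng.sample(puerto_rico_ids, 30),  # ~15 minutes of 30-second segments
    }
```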

  6. Selecting Call Segments
[Diagram: 30-second segments selected from the incoming call (respondent) and Customer Service Rep sides of the audio.]
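A sketch of how 30-second segments might be cut from a call recording is shown below, using the pydub library (an assumption; the file names are hypothetical, and the actual CQA processing pipeline is not described in the slides).

```python
from pydub import AudioSegment  # assumes pydub (and ffmpeg) is available

def thirty_second_segments(path: str) -> list:
    """Split one call recording into consecutive 30-second segments."""
    audio = AudioSegment.from_file(path)
    step_ms = 30 * 1000  # pydub indexes audio in milliseconds
    return [audio[start:start + step_ms] for start in range(0, len(audio), step_ms)]

# Hypothetical usage: export one segment for manual transcription.
segments = thirty_second_segments("call_0001.wav")
segments[0].export("call_0001_seg00.wav", format="wav")
```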

  7. Developing a Machine Transcription Baseline
We had three Spanish-speaking transcribers work to manually transcribe each of these 30-second segments:
- Each segment was manually transcribed by one transcriber
- Once complete, each segment was reviewed by a second transcriber
- Any disagreement between transcribers was adjudicated by the third transcriber
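A minimal sketch of that three-role workflow (not the authors' tooling) might route each segment like this, consulting the adjudicator only when the first two transcribers disagree:

```python
from typing import Callable

def resolve_transcript(transcriber_text: str, reviewer_text: str,
                       adjudicate: Callable[[str, str], str]) -> str:
    """Return the agreed transcript for one 30-second segment.

    The third transcriber (adjudicate) is called only when the original
    transcriber and the reviewer disagree, compared word by word.
    """
    if transcriber_text.split() == reviewer_text.split():
        return transcriber_text
    return adjudicate(transcriber_text, reviewer_text)
```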

  8. Guidelines for Transcription
We did need to follow certain conventions to match the machine transcription:
- No use of punctuation
- Proper use of accent marks
- Spelling out the names of numbers and letters, e.g. "Twenty twenty" instead of "2020"
- Filler words: we established a shared document with our own standardized spellings of filler words, e.g. "Umm", "Hmm", "Ahh"
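These conventions can be captured in a small normalization step. The sketch below is illustrative only (the filler spellings and the num2words dependency are assumptions; the slide's shared document governs the real spellings), but it shows the idea: strip punctuation while keeping accented letters, spell out digits in Spanish, and map fillers to standardized forms.

```python
import re
from num2words import num2words  # assumed helper for spelling out numbers in Spanish

# Example standardized filler spellings; the real list lived in the shared document.
FILLERS = {"um": "umm", "hm": "hmm", "ah": "ahh"}

def normalize(text: str) -> str:
    # Spell out digit strings, e.g. "2020" -> "dos mil veinte" (a year might instead
    # be read digit-pair by digit-pair, so this is only approximate).
    text = re.sub(r"\d+", lambda m: num2words(int(m.group()), lang="es"), text)
    # Drop punctuation; \w is Unicode-aware in Python 3, so accented letters survive.
    text = re.sub(r"[^\w\s]", " ", text)
    # Standardize filler words.
    return " ".join(FILLERS.get(w.lower(), w) for w in text.split())
```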

  9. Example of Manual vs. Machine Transcription
Manual Spanish transcript: "pero acá arriba dice seleccione una o más casillas y anote los orígenes para este Censo los orígenes hispanos no son razas entonces no puedo poner hispano don-" (roughly: "but up here it says select one or more boxes and write in the origins; for this Census, Hispanic origins are not races, so I can't put Hispanic, don-").
Machine transcription: "pero oh acá riba dice selecciona una masca silla la note los orígenes para este censo los orígenes hispanos no son razas entonces no podr poner hispano ___"
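To see where the two transcripts diverge, a word-level alignment can be computed. The snippet below is a sketch using Python's standard difflib, not the authors' error-categorization code, run on a short excerpt of the example above.

```python
import difflib

manual = "seleccione una o más casillas y anote los orígenes".split()
machine = "selecciona una masca silla la note los orígenes".split()

# Each non-"equal" opcode is a point where the machine transcript departs from
# the manual baseline and would count toward the word error rate.
for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, manual, machine).get_opcodes():
    if op != "equal":
        print(op, manual[i1:i2], "->", machine[j1:j2])
```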

  10. Language Misidentification
We did encounter some call segments on the Spanish line that were not in Spanish: some segments were in English, and others had no audio at all. In these cases, we searched for adjacent call segments and transcribed the nearest segment with Spanish audio. In cases where this was not possible, we replaced the segment with data from a new call.
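One possible way to flag such segments automatically (an assumption on our part, not a procedure described in the slides) is to run a text-based language detector over each segment's machine transcript:

```python
from langdetect import detect  # assumes the langdetect package; operates on text, not audio

def needs_replacement(segment_transcript: str) -> bool:
    """Flag a Spanish-line segment whose transcript is empty (no audio)
    or whose detected language is not Spanish."""
    text = segment_transcript.strip()
    if not text:
        return True  # no usable audio/transcript for this segment
    return detect(text) != "es"
```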

  11. Difficulties Encountered
- Some calls included code-switching, both within and across sentences
- Different dialects among speakers from different backgrounds
- Capturing stutters and incomplete words (different machine transcription models might handle stutters differently)

  12. Procedure Summary
Select a representative array of audio segments; ensure enough audio is selected to be able to train the model.
Round 1 of transcription: initial transcription. Develop standardized spellings for filler words, and ensure all transcribers understand the conventions and formatting needed to match the machine transcription model.
Round 2 of transcription: review. Ensure each transcription is reviewed by a second transcriber.
Round 3 of transcription: adjudication (if necessary).

  13. Future Research
We would like to continue this research in other non-English, non-Spanish languages:
- Would any of these procedures change in character-based languages, e.g. spelling out the names of letters and numbers?
- For languages with multiple writing systems (e.g. Simplified and Traditional Chinese), is there a preference for which system is used for the base transcription?
- While we used this process for transcription, can the same process be used to establish a base for machine translation?

  14. Thank you!
Marcus Berger
Research Sociolinguist
Language and Cross-Cultural Research Group
marcus.p.berger@census.gov
