Automatic speech recognition (ASR) technology has made conversations more accessible with live captions in remote conferencing software, mobile applications, and head-worn displays. However, to maintain real-time responsiveness, live caption systems often display interim predictions that are updated as new utterances are received. This can cause text instability (a “flicker” where previously displayed text is updated, shown in the captions on the left in the video below), which can impair users’ reading experience due to distraction, fatigue, and difficulty following the conversation.
In “Modeling and Improving Text Stability in Live Captions”, presented at ACM CHI 2023, we formalize this problem of text stability through several key contributions. First, we quantify text instability with a vision-based flicker metric that uses luminance contrast and the discrete Fourier transform. Second, we introduce a stability algorithm that stabilizes the rendering of live captions via tokenized alignment, semantic merging, and smooth animation. Finally, we conducted a user study (N=123) to understand viewers’ experience with live captioning. Our statistical analysis demonstrates a strong correlation between our proposed flicker metric and viewers’ experience. Furthermore, it shows that our proposed stabilization techniques significantly improve viewers’ experience (e.g., the captions on the right in the video above).
Raw ASR captions vs. stabilized captions.
Metric
Inspired by earlier work, we propose a flicker-based metric to quantify text stability and objectively evaluate the performance of live captioning systems. Specifically, our goal is to quantify the flicker in a grayscale live caption video. We achieve this by comparing the difference in luminance between the individual frames that constitute the video (see the figures below). Large visual changes in luminance are obvious (e.g., the addition of the word “bright” in the bottom figure), but subtle changes (e.g., an update from “… this gold. Nice..” to “… this. Gold is nice”) may be difficult for readers to discern. However, converting the change in luminance into its constituent frequencies exposes both the obvious and the subtle changes.
Thus, for each pair of contiguous frames, we convert the difference in luminance into its constituent frequencies using the discrete Fourier transform. We then sum over each of the low and high frequencies to quantify the flicker in that pair. Finally, we average over all frame pairs to obtain a per-video flicker score.
For instance, we can see below that two identical frames (top) yield a flicker of 0, whereas two non-identical frames (bottom) yield a non-zero flicker. Note that higher values of the metric indicate more flicker in the video, and thus a worse user experience, than lower values.
Illustration of the flicker metric between two identical frames.
Illustration of the flicker metric between two non-identical frames.
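To make the computation concrete, the sketch below shows one way such a frame-pair score could be implemented with NumPy. It is illustrative only: the exact frequency-band split, weighting, and normalization used in the paper may differ.

```python
import numpy as np


def flicker_score(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Flicker between two contiguous grayscale frames.

    Takes the per-pixel luminance difference, converts it to the frequency
    domain with a 2D discrete Fourier transform, and sums the magnitudes of
    the resulting frequency components. (The metric described above treats
    low and high frequencies separately; here they are simply summed together.)
    """
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    spectrum = np.fft.fft2(diff)
    return float(np.abs(spectrum).sum())


def video_flicker(frames: list[np.ndarray]) -> float:
    """Average the per-pair flicker over all contiguous frame pairs."""
    pairs = list(zip(frames, frames[1:]))
    if not pairs:
        return 0.0
    return float(np.mean([flicker_score(a, b) for a, b in pairs]))
```

As in the figures above, two identical frames yield a score of 0, and any visible update yields a positive score.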
Stability algorithm
To improve the stability of live captions, we propose an algorithm that takes as input the already rendered sequence of tokens (e.g., “Previous” in the figure below) and the new sequence of ASR predictions, and outputs an updated, stabilized text (e.g., “Updated text (with stabilization)” below). It considers both the natural language understanding (NLU) aspect and the ergonomic aspect (display, layout, etc.) of the user experience in deciding when and how to produce a stable updated text. Specifically, our algorithm performs tokenized alignment, semantic merging, and smooth animation to achieve this goal. In what follows, a token is defined as a word or punctuation mark produced by ASR.
We show (a) the previously rendered text, (b) the baseline layout of the updated text without our merging algorithm, and (c) the updated text as generated by our stabilization algorithm.
Our algorithm addresses the challenge of producing stabilized updated text by first identifying three classes of changes, highlighted in red, green, and blue below (a simple code representation of these classes follows the figure):
- Red: Addition of tokens to the end of previously rendered captions (e.g., “How about”).
- Green: Addition / deletion of tokens in the middle of already rendered captions.
  - B1: Addition of tokens (e.g., “I” and “friends”). These may or may not affect the overall comprehension of the captions, but may lead to layout changes. Such layout changes are not desired in live captions, as they cause significant jitter and a poorer user experience. Here “I” does not add to the comprehension but “friends” does. Thus, it is important to balance updates with stability, especially for B1-type tokens.
  - B2: Removal of tokens, e.g., “in” is removed in the updated sentence.
- Blue: Re-captioning of tokens. This includes token edits that may or may not affect the overall comprehension of the captions.
  - C1: Proper nouns, e.g., “disney land” is updated to “Disneyland”.
  - C2: Grammatical shorthands, e.g., “it is” is updated to “It was”.
Classes of changes between previously displayed and updated text.
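As a lightweight illustration (not the paper’s actual data structures), these classes could be captured in a simple enum that the merging logic branches on:

```python
from enum import Enum, auto


class ChangeClass(Enum):
    """Classes of changes between previously rendered and updated captions."""
    A_APPEND = auto()         # red: tokens appended to the end
    B1_MID_ADDITION = auto()  # green: tokens added mid-caption
    B2_MID_REMOVAL = auto()   # green: tokens removed mid-caption
    C1_PROPER_NOUN = auto()   # blue: proper-noun re-captioning
    C2_GRAMMAR = auto()       # blue: grammatical re-captioning
```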
Alignment, merging, and smoothing
To maximize text stability, our goal is to align the old sequence with the new sequence using updates that make minimal changes to the existing layout while ensuring accurate and meaningful captions. To achieve this, we leverage a variant of the Needleman-Wunsch algorithm with dynamic programming to merge the two sequences depending on the class of tokens as outlined above (a minimal alignment sketch follows the list below):
- Case A tokens: We directly add case A tokens, adding line breaks as needed to fit the updated captions.
- Case B tokens: Our preliminary studies showed that users preferred stability over accuracy for previously displayed captions. Thus, we only update case B tokens if the updates do not break the existing line layout.
- Case C tokens: We compare the semantic similarity of case C tokens by transforming the original and updated sentences into sentence embeddings and measuring their dot product, and we update them only if they are semantically different (similarity < 0.85) and the update will not cause new line breaks.
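The sketch below illustrates the alignment step with a textbook Needleman-Wunsch implementation over tokens, together with the case C rule (dot product of sentence embeddings, threshold 0.85, no new line breaks). It is a simplified illustration under stated assumptions: `embed` is a placeholder for whatever sentence-embedding model is used (assumed to return unit-normalized vectors), and the production algorithm’s scoring and merging heuristics are richer than shown here.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # case C rule described above


def align_tokens(old_tokens, new_tokens, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment of old vs. new caption tokens.

    Returns (old_token, new_token) pairs; None marks an insertion or a
    deletion, which the merging step can then treat as a case A, B, or C
    change depending on where it falls and what it replaces.
    """
    n, m = len(old_tokens), len(new_tokens)
    # Dynamic-programming score matrix.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if old_tokens[i - 1] == new_tokens[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # match / mismatch
                              score[i - 1][j] + gap,      # deletion
                              score[i][j - 1] + gap)      # insertion
    # Trace back from the bottom-right corner to recover the alignment.
    aligned, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if old_tokens[i - 1] == new_tokens[j - 1] else mismatch):
            aligned.append((old_tokens[i - 1], new_tokens[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            aligned.append((old_tokens[i - 1], None))
            i -= 1
        else:
            aligned.append((None, new_tokens[j - 1]))
            j -= 1
    return list(reversed(aligned))


def should_recaption(old_sentence, new_sentence, embed, causes_line_break):
    """Case C: re-caption only if the two sentences are semantically
    different (similarity below the threshold) and the edit does not
    introduce a new line break."""
    similarity = float(np.dot(embed(old_sentence), embed(new_sentence)))
    return similarity < SIMILARITY_THRESHOLD and not causes_line_break
```

For example, `align_tokens("how about dinner".split(), "how about a dinner".split())` pairs up the shared tokens and flags “a” as a mid-caption insertion (a case B1 change).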
Finally, we leverage animations to reduce visual jitter. We implement smooth scrolling and fading of newly added tokens to further stabilize the overall layout of the live captions.
User evaluation
We conducted a user study with 123 participants to (1) examine the correlation of our proposed flicker metric with viewers’ experience of live captions, and (2) assess the effectiveness of our stabilization techniques.
We manually selected 20 videos on YouTube to obtain broad coverage of topics, including video conferences, documentaries, educational talks, tutorials, news, comedy, and more. For each video, we selected a 30-second clip with at least 90% speech.
We prepared four types of live caption renderings to compare:
- Raw ASR: raw speech-to-text results from a speech-to-text API.
- Raw ASR + thresholding: only display an interim speech-to-text result if its confidence score is higher than 0.85 (see the sketch after this list).
- Stabilized captions: captions using our algorithm described above with alignment and merging.
- Stabilized and smooth captions: stabilized captions with smooth animation (scrolling + fading) to assess whether a softened display experience helps improve the user experience.
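For illustration, the thresholding baseline amounts to a simple display rule. The function and parameter names below are hypothetical, assuming the speech-to-text API reports a confidence score with each result and that final results are always shown:

```python
CONFIDENCE_THRESHOLD = 0.85


def caption_to_display(result_text: str, confidence: float, is_final: bool):
    """Raw ASR + thresholding: always show final results, and show interim
    results only when their confidence exceeds the threshold."""
    if is_final or confidence > CONFIDENCE_THRESHOLD:
        return result_text
    return None  # suppress low-confidence interim results
```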
We collected user ratings by asking the participants to watch the recorded live captions and rate their assessments of comfort, distraction, ease of reading, ease of following the video, fatigue, and whether the captions impaired their experience.
Correlation between flicker metric and user experience
We calculated Spearman’s coefficient between the flicker metric and each of the behavioral measurements (values range from -1 to 1, where negative values indicate a negative relationship between the two variables, positive values indicate a positive relationship, and zero indicates no relationship). Shown below, our study demonstrates statistically significant (p < 0.001) correlations between our flicker metric and users’ ratings. The absolute values of the coefficients are around 0.3, indicating a moderate relationship.
Behavioral Measurement | Correlation to Flickering Metric*
--- | ---
Comfort | -0.29
Distraction | 0.33
Easy to read | -0.31
Easy to follow videos | -0.29
Fatigue | 0.36
Impaired Experience | 0.31
Spearman correlation tests of our proposed flickering metric. *p < 0.001.
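For reference, correlations of this kind can be computed with SciPy’s `spearmanr`; the arrays below are illustrative placeholders, not data from the study.

```python
from scipy import stats

# Illustrative placeholders: per-clip flicker scores and 1-7 comfort ratings.
flicker = [0.12, 0.45, 0.33, 0.80, 0.21, 0.64]
comfort = [6, 3, 4, 2, 5, 3]

rho, p_value = stats.spearmanr(flicker, comfort)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```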
Stabilization of live captions
Our proposed technique (stabilized smooth captions) received consistently better ratings, significant as measured by the Mann-Whitney U test (p < 0.01 in the figure below), on five of the six aforementioned survey statements. That is, users considered the stabilized captions with smoothing to be more comfortable and easier to read, while feeling less distraction, fatigue, and impairment of their experience than with the other types of rendering.
User ratings from 1 (Strongly Disagree) to 7 (Strongly Agree) on survey statements. (**: p<0.01; ***: p<0.001; ****: p<0.0001; ns: non-significant)
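Similarly, a pairwise comparison between two rendering conditions can be run with SciPy’s Mann-Whitney U test; the rating lists below are illustrative placeholders rather than study data.

```python
from scipy import stats

# Illustrative 1-7 Likert ratings for two rendering conditions (placeholders).
raw_asr = [3, 4, 2, 3, 5, 4, 3, 2]
stabilized_smooth = [6, 5, 7, 6, 5, 6, 7, 5]

u_stat, p_value = stats.mannwhitneyu(stabilized_smooth, raw_asr,
                                     alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```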
Conclusion and future directions
Text instability in live captioning significantly impairs users’ reading experience. This work proposes a vision-based metric to model caption stability that correlates with users’ experience in a statistically significant way, and an algorithm to stabilize the rendering of live captions. Our proposed solution can potentially be integrated into existing ASR systems to enhance the usability of live captions for a variety of users, including those with translation needs or those with hearing accessibility needs.
Our work represents a substantial step towards measuring and improving text stability. It could be extended to include language-based metrics that focus on the consistency of the words and phrases used in live captions over time. Such metrics may provide a reflection of user discomfort as it relates to language comprehension and understanding in real-world scenarios. We are also interested in conducting eye-tracking studies (e.g., videos shown below) to track viewers’ gaze patterns, such as eye fixations and saccades, allowing us to better understand which types of errors are most distracting and how to improve text stability for those.
Illustration of tracking a viewer’s gaze when reading raw ASR captions.
Illustration of tracking a viewer’s gaze when reading stabilized and smoothed captions.
By improving text stability in live captions, we can create more effective communication tools and improve how people connect in everyday conversations in familiar or, through translation, unfamiliar languages.
Acknowledgements
This work is a collaboration across multiple teams at Google. Key contributors include Xingyu “Bruce” Liu, Jun Zhang, Leonardo Ferrer, Susan Xu, Vikas Bahirwani, Boris Smus, Alex Olwal, and Ruofei Du. We wish to extend our thanks to our colleagues who provided assistance, including Nishtha Bhatia, Max Spear, and Darcy Philippon. We would also like to thank Lin Li, Evan Parker, and the CHI 2023 reviewers.