Live Caption, first introduced at Google I/O 2019 as a Pixel 4 exclusive, is a game-changing addition to the suite of accessibility features built into Android 10. It allows users who are deaf or hard of hearing to follow along with video content as Android generates captions in real time. Now the feature may be getting ready to make the leap from smartphones to computers: according to a new code commit to the Chromium Gerrit, work is underway to bring Live Caption to Chrome.
According to the commit, SODA (the Speech On-Device API) is a crucial component of the Chrome team’s work. This lightweight model, which is stored entirely on the device, transcribes speech as it happens in real time, piecing together exactly what’s being said.
While the feature initially launched as a Pixel 4 exclusive, it quickly made its way to the Pixel 3 and 3a as well. At this point, it’s safe to say that Google has plans to expand Live Caption, as this news comes hot on the heels of the recent announcement that the Galaxy S20 would be the first non-Pixel device to support Live Caption.
Considering that Chrome is one of Google’s most widely used products, the addition of a source-agnostic caption system would be invaluable not only to users who depend on accessibility services but also to the millions of others who use the browser. However, the feature still appears to be in the early stages of development, so we may be waiting a while before we get to try it out for ourselves.