Apple launched its Podcasts transcription feature in March as part of its massive iOS 17.4 update. Here's why it took Apple six years to perfect the feature and make it a reality.
If you're one of the 15% of Americans who struggle with some form of hearing difficulty, you probably know how useful features like closed captions are when watching movies, TV, or even listening to music. But, until recently at least, you may have felt left out of the podcast craze.
Apple wanted to change that.
"Our goal is obviously to make podcasts more accessible, more immersive," Ben Cave, Apple's global head of podcasts, told The Guardian.
But getting there wasn't easy. Cave notes that Apple had set some high expectations for itself and wanted to provide users with accurate, easy-to-follow transcripts.
The road to Apple Podcast transcripts
Apple's journey to transcripts began in 2018 with its indexing software, designed to help users search for a specific podcast based on a line they remembered from an episode.
"What we did then is we offered a single line of the transcript to give users context on a result when they're searching for something in particular," Cave says. "There's a few different things that we did in the intervening seven years, which all came together into this [transcript] feature."
Expanding upon that feature meant that Apple needed to figure out how to display the transcripts to listeners.
"In this case, we took the best of what we learned from reading in Apple Books and lyrics in Apple Music," Apple's senior director of Global Accessibility Policy & Initiatives, Sarah Herrlinger, told The Guardian.
This meant borrowing features like time-synced highlighting from Apple Music, along with font and color schemes from Apple Books.
While it took Apple six years to nail down transcription, it's worth noting that, as is often the case, Apple is already doing more with the feature than its competitors. Amazon Music, for example, has offered transcripts since 2021, but they're only available for original programs and select popular shows.
Spotify launched its AI-powered transcription feature with word-by-word highlighting in September 2023, but it's only available for Spotify originals and shows hosted directly on its platform.
Apple starts by transcribing every new episode uploaded to its platform. Over time, the entire back catalog will be transcribed as well, though the company hasn't said exactly when older episodes will get transcripts.
"We wanted to do it for all the shows, so it's not just for like a narrow slice of the catalog," says Cave.
"It wouldn't be appropriate for us to put an arbitrary limit on the number of shows that get it ... We think that's important from an accessibility standpoint because we want to give people the expectation that transcripts are available for everything that they want to interact with."
And the disability community has noticed, too. Many activists have said they'd rather wait for a fully functional feature than deal with something rushed out the door.
Experts have pointed to YouTube as an example of how not to launch a product. YouTube's auto-generated closed-caption tool launched in 2009 and has been laughably bad at times.
As is often the case, Apple has found that an accessibility feature helps not only its target audience but podcast listeners as a whole.
"We often find that by building for the margins, we make a better product for the masses," says Herrlinger. "Other communities will find those features and find ways to use them in some cases where we know this could benefit someone else."
Apple's transcript feature for Apple Podcasts is available in English, French, Spanish, and German, and can be viewed on the iPhone, iPad, and even macOS Sonoma via the Podcasts app.