The Way Siri Learns New Languages

Stephen Nellis, writing for Reuters, shares an interesting look into Apple’s method for teaching Siri a new language:

At Apple, the company starts working on a new language by bringing in humans to read passages in a range of accents and dialects, which are then transcribed by hand so the computer has an exact representation of the spoken text to learn from, said Alex Acero, head of the speech team at Apple. Apple also captures a range of sounds in a variety of voices. From there, an acoustic model is built that tries to predict word sequences.

Then Apple deploys “dictation mode,” its speech-to-text translator, in the new language, Acero said. When customers use dictation mode, Apple captures a small percentage of the audio recordings and makes them anonymous. The recordings, complete with background noise and mumbled words, are transcribed by humans, a process that helps cut the speech recognition error rate in half.

After enough data has been gathered and a voice actor has been recorded to play Siri in a new language, Siri is released with answers to what Apple estimates will be the most common questions, Acero said. Once released, Siri learns more about what real-world users ask and is updated every two weeks with more tweaks.
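Acero’s claim that human transcription helps “cut the speech recognition error rate in half” is presumably referring to word error rate (WER), the standard metric for speech recognition: the word-level edit distance between a system’s transcript and a human reference, divided by the number of reference words. The sketch below shows how that metric is computed; the example utterances are hypothetical and not from Apple’s data.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance (substitutions,
    insertions, deletions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)


# Hypothetical example: a noisy first-pass transcript vs. the same
# utterance after retraining on human-corrected transcripts.
reference = "set a timer for twenty minutes"
before = "set a time for plenty minutes"   # 2 errors out of 6 words
after = "set a timer for twenty minute"    # 1 error out of 6 words

print(f"WER before: {wer(reference, before):.0%}")  # ~33%
print(f"WER after:  {wer(reference, after):.0%}")   # ~17%, roughly halved
```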

The report also shares that one of Siri’s next languages will be Shanghainese, a dialect of Wu Chinese spoken in Shanghai and the surrounding areas. It will join the 21 languages Siri currently speaks, localized across a total of 36 countries.

Debating the strengths and weaknesses of Siri has become common practice in recent years, particularly as competing voice assistants from Amazon, Google, and Microsoft have grown more intelligent. But one area where Siri has long held the lead over its competition is the sheer variety of languages it supports. It doesn’t seem like Apple will be slowing down in that regard.