Posts tagged with "siri"

The Way Siri Learns New Languages

Stephen Nellis, writing for Reuters, shares an interesting look into Apple’s method for teaching Siri a new language:

At Apple, the company starts working on a new language by bringing in humans to read passages in a range of accents and dialects, which are then transcribed by hand so the computer has an exact representation of the spoken text to learn from, said Alex Acero, head of the speech team at Apple. Apple also captures a range of sounds in a variety of voices. From there, an acoustic model is built that tries to predict word sequences.

Then Apple deploys “dictation mode,” its speech-to-text translator, in the new language, Acero said. When customers use dictation mode, Apple captures a small percentage of the audio recordings and makes them anonymous. The recordings, complete with background noise and mumbled words, are transcribed by humans, a process that helps cut the speech recognition error rate in half.

After enough data has been gathered and a voice actor has been recorded to play Siri in a new language, Siri is released with answers to what Apple estimates will be the most common questions, Acero said. Once released, Siri learns more about what real-world users ask and is updated every two weeks with more tweaks.
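
As a rough, hypothetical illustration of the dictation-mode step Acero describes (sampling a small percentage of recordings and anonymizing them before human transcription), here's a minimal Swift sketch. The types, names, and sampling logic are mine, not Apple's actual tooling:

```swift
import Foundation

// Hypothetical stand-ins for the anonymized sampling step described above.
struct DictationRecording {
    let audio: Data
    let deviceIdentifier: String?   // dropped when the sample is anonymized
}

struct TranscriptionCandidate {
    let audio: Data                 // audio only; no identifiers are retained
}

/// Keeps roughly `rate` (e.g. 0.01 for 1%) of incoming recordings and strips
/// anything that could tie the audio back to a user before it goes to human
/// transcribers.
func sampleForTranscription(_ recordings: [DictationRecording],
                            rate: Double) -> [TranscriptionCandidate] {
    recordings
        .filter { _ in Double.random(in: 0..<1) < rate }
        .map { TranscriptionCandidate(audio: $0.audio) }
}
```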

The report also shares that one of Siri’s next languages will be Shanghainese, a dialect of Wu Chinese spoken in Shanghai and surrounding areas. It will join the 21 languages Siri currently speaks, which are localized across a total of 36 different countries.

Debating the strengths and weaknesses of Siri has become common practice in recent years, particularly as competing voice assistants from Amazon, Google, and Microsoft have grown more intelligent. But one area where Siri has long held the lead over its competition is support for a wide variety of languages. It doesn’t seem like Apple will be slowing down in that regard.

Permalink

Upcoming watchOS 3.2 Includes New Theater Mode and Siri Improvements

Alongside beta versions of iOS, macOS, and tvOS, Apple today announced the release of the first beta of watchOS 3.2. The beta has yet to appear on Apple’s developer portal, but it should be available soon. Besides the standard bug fixes and performance improvements, this update includes a couple of new features, one of which is called Theater Mode. From Apple’s developer release notes:

Theater Mode lets users quickly mute the sound on their Apple Watch and avoid waking the screen on wrist raise. Users still receive notifications (including haptics) while in Theater Mode, which they can view by tapping the screen or pressing the Digital Crown.

This sounds like an interesting new option that could be useful in scenarios besides being at the movie theater. Personally, I’m likely to use Theater Mode when I wear my Apple Watch overnight for sleep tracking. My normal practice is to turn off Raise to Wake in the Settings app before going to bed, but this could prove an easier method.

Besides Theater Mode, the most significant change in 3.2 is a set of enhancements to Siri. Last year, iOS 10 improved Siri by enabling it to handle queries from third-party apps that fit into specific categories:

  • Messaging
  • Payments
  • Ride booking
  • Workouts
  • Calling
  • Searching photos

Though all of those areas could be handled by Siri on iOS 10, Siri on Apple Watch was previously only able to direct you to your iPhone to perform those actions. But with watchOS 3.2, that is no longer the case, as Siri on the Watch is now able to perform these third-party requests.
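
For the curious, these domains map to intent-handling protocols in Apple's Intents framework (SiriKit). Below is a rough sketch of a handler for the messaging domain; the class name and logic are illustrative only, and a real Intents extension would validate recipients and content against its own data:

```swift
import Intents

// A minimal, illustrative SiriKit messaging handler.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Ask Siri to prompt the user again if no message text was captured.
    func resolveContent(for intent: INSendMessageIntent,
                        with completion: @escaping (INStringResolutionResult) -> Void) {
        if let text = intent.content, !text.isEmpty {
            completion(.success(with: text))
        } else {
            completion(.needsValue())
        }
    }

    // Hand the resolved message off to the app and report success to Siri.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // The actual send would happen here (shared container, API call, etc.).
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

On watchOS 3.2, a handler like this can service a request made from the wrist directly instead of bouncing the user to their iPhone.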

watchOS 3.2 will likely see a public release this spring, after a couple of months of beta testing.


Steven Aquino on AirPods and Siri

Some interesting thoughts about the AirPods by Steven Aquino. In particular, he highlights a weak aspect of Siri that isn’t usually mentioned in traditional reviews:

The gist of my concern is Siri doesn’t handle speech impediments very gracefully. (I’ve found the same is true of Amazon’s Alexa, as I recently bought an Echo Dot to try out.) I’m a stutterer, which causes a lot of repetitive sounds and long breaks between words. This seems to confuse the hell out of these voice-driven interfaces. The crux of the problem lies in the fact that if I don’t enunciate perfectly, which leaves several seconds between words, the AI cuts me off and runs with it. Oftentimes, the feedback is weird or I’ll get a “Sorry, I didn’t get that” reply. It’s an exercise in futility, sadly.
[…]
Siri on the AirPods suffers from the same issues I encounter on my other devices. It’s too frustrating to try to fumble my way through if she keeps asking me to repeat myself. It’s for this reason that I don’t use Siri at all with AirPods, having changed the setting to enable Play/Pause on double-tap instead (more on this later). It sucks to not use Siri this way—again, the future implications are glaringly obvious—but it’s just not strong enough at reliably parsing my speech. Therefore, AirPods lose some luster because one of its main selling points is effectively inaccessible for a person like me.

That’s a hard problem to solve in a conversational assistant, and exactly the kind of Accessibility area where Apple could lead over other companies.

Permalink

Phil Schiller on How the iPhone Changed Apple

Steven Levy, writing for Backchannel, interviewed Apple’s Phil Schiller for the tenth anniversary of the iPhone’s introduction:

“If it weren’t for iPod, I don’t know that there would ever be iPhone,” he says. “It introduced Apple to customers that were not typical Apple customers, so iPod went from being an accessory to Mac to becoming its own cultural momentum. During that time, Apple changed. Our marketing changed. We had silhouette ads with dancers and an iconic product with white headphones. We asked, ‘Well, if Apple can do this one thing different than all of its previous products, what else can Apple do?’”

In the story, Schiller also makes an interesting point about Siri and conversational interfaces after being asked about Alexa and competing voice assistants:

“That’s really important,” Schiller says, “and I’m so glad the team years ago set out to create Siri — I think we do more with that conversational interface than anyone else. Personally, I still think the best intelligent assistant is the one that’s with you all the time. Having my iPhone with me as the thing I speak to is better than something stuck in my kitchen or on a wall somewhere.”
[…]
“People are forgetting the value and importance of the display,” he says. “Some of the greatest innovations on iPhone over the last ten years have been in display. Displays are not going to go away. We still like to take pictures and we need to look at them, and a disembodied voice is not going to show me what the picture is.”

Permalink

AirPods, Siri, and Voice-Only Interfaces

Ben Bajarin makes a strong point on using Siri with the AirPods:

There is, however, an important distinction to be made where I believe the Amazon Echo shows us a bit more of the voice-only interface and where I’d like to see Apple take Siri when it is embedded in devices without a screen, like the AirPods. You very quickly realize, the more you use Siri with the AirPods, how much the experience today assumes you have a screen in front of you. For example, if I use the AirPods to activate Siri and say, “What’s the latest news?” Siri will fetch the news then say, “Here is some news — take a look.” The experience assumes I want to use my screen (or it at least assumes I have a screen near me to look at) to read the news. Whereas, the Amazon Echo and Google Home just start reading the latest news headlines and tidbits. Similarly, when I activate Siri on the AirPods and say, “Play Christmas music”, the query processes and then plays. Where with the Echo, the same request yields Alexa to say, “OK, playing Christmas music from top 50 Christmas songs.” When you aren’t looking at a screen, the feedback is important. If I was to ask that same request while I was looking at my iPhone, you realize, as Siri processes the request, it says, “OK” on the screen but not in my ear. In voice-only interfaces, we need and want feedback that the request is happening or has been acknowledged.

Siri already adapts to the way it’s activated – it talks more when invoked via “Hey Siri” as it assumes you’re not looking at the screen, and it uses UI elements when triggered from the Home button.

Currently, activating Siri from AirPods yields the same feedback as the “Hey Siri” method. I wonder if a future version of Siri will talk even more when it detects AirPods in your ears, since that would mean only you can hear its responses.
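
To make the distinction concrete, here's a purely hypothetical sketch (not an Apple API) of how an assistant might pick between spoken and on-screen feedback based on how it was activated:

```swift
// Hypothetical model of the behavior described above; not Apple's implementation.
enum ActivationSource {
    case homeButton        // the user is presumably holding the phone and looking at it
    case heySiri           // hands-free; the screen may not be visible
    case airPodsDoubleTap  // audio-only; currently treated like "Hey Siri"
}

struct FeedbackPolicy {
    let speaksConfirmation: Bool   // e.g. "OK, playing Christmas music"
    let showsOnScreenUI: Bool
}

func feedbackPolicy(for source: ActivationSource) -> FeedbackPolicy {
    switch source {
    case .homeButton:
        // A screen is assumed to be in front of the user: favor visual results.
        return FeedbackPolicy(speaksConfirmation: false, showsOnScreenUI: true)
    case .heySiri, .airPodsDoubleTap:
        // No screen assumed: confirm the request out loud.
        return FeedbackPolicy(speaksConfirmation: true, showsOnScreenUI: false)
    }
}
```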

Permalink

MKBHD Compares Siri and Google Assistant

This is a good video by Marques Brownlee on where things stand today between Siri (iOS 10) and the Google Assistant (running Android Nougat on a Google Pixel XL). Three takeaways: Google Assistant is more chatty than old Google Voice Search; Google still seems to have an edge over Siri when it comes to follow-up questions based on topic inference (which Siri also does, but not as well); and Siri holds up well on most of the types of questions Brownlee asks.

In my daily experience, however, Siri still falls short on basic tasks too often (two examples) and deals with questions inconsistently. There is also, I believe, a perception problem with Siri in that Apple fixes obvious Siri shortcomings too slowly or simply isn’t prepared for new types of questions – such as asking how the last presidential debate went. In addition, being able to text with Google Assistant in Allo for iOS has reinforced a longstanding wish of mine – the ability to converse silently with a digital assistant. I hope Siri gets some kind of textual mode or iMessage integration in iOS 11.

One note on Brownlee’s video: the reason Siri isn’t as conversational as Google Assistant is the way Brownlee activates it. When invoked with the Home button (or by tapping the microphone icon), Siri assumes the user is looking at the screen and provides fewer audio cues, prioritizing visual feedback instead. If Brownlee had opened Siri using “Hey Siri” hands-free activation, Siri would likely have been just as conversational as Google. I prefer Apple’s approach here – if I’m holding a phone, it means I can look at the UI, and there’s no need to speak detailed results aloud.

Permalink

Siri and the Suspension of Disbelief

Julian Lepinski has a thoughtful response to last week’s story by Walt Mossberg on Siri’s failures and inconsistencies. In particular, he examines the way Siri handles failed queries:

Apple’s high-level goal here should be to include responses that increase your faith in Siri’s ability to parse and respond to your question, even when that isn’t immediately possible. Google Search accomplishes this by explaining what they’re showing you, and asking you questions like “_Did you mean ‘when is the debate’?_” when they think you’ve made an error. Beyond increasing your trust in Siri, including questions like this in the responses would also generate a torrent of incredible data to help Apple tune the responses that Siri gives.

Apple has a bias towards failing silently when errors occur, which can be effective when the error rate is low. With Siri, however, this error rate is still quite high and the approach is far less appropriate. When Siri fails, there’s no path to success short of restarting and trying again (the brute force approach).

The comparison between conversational assistants and iOS’ original user interface feels particularly apt. It’d be helpful to know what else to try when Siri doesn’t understand a question.
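
As a hypothetical illustration of Lepinski's point (this is not how Siri works today), a “Did you mean…?” fallback can be as simple as suggesting the closest known query instead of replying “Sorry, I didn’t get that”:

```swift
// Toy "did you mean" fallback: suggest the nearest known query rather than
// failing silently. Standard edit distance, computed one row at a time.
func editDistance(_ a: String, _ b: String) -> Int {
    let a = Array(a.lowercased()), b = Array(b.lowercased())
    guard !a.isEmpty else { return b.count }
    guard !b.isEmpty else { return a.count }
    var row = Array(0...b.count)
    for i in 1...a.count {
        var previous = row[0]
        row[0] = i
        for j in 1...b.count {
            let current = row[j]
            row[j] = a[i - 1] == b[j - 1]
                ? previous
                : min(previous, row[j], row[j - 1]) + 1
            previous = current
        }
    }
    return row[b.count]
}

func didYouMean(_ heard: String, knownQueries: [String]) -> String? {
    let best = knownQueries
        .map { (query: $0, cost: editDistance(heard, $0)) }
        .min { $0.cost < $1.cost }
    // Only suggest a candidate that's reasonably close to what was heard.
    guard let match = best, match.cost <= max(3, heard.count / 4) else { return nil }
    return "Did you mean “\(match.query)”?"
}

// didYouMean("when is the debat", knownQueries: ["when is the debate", "what's the weather"])
// → Did you mean "when is the debate"?
```

Even a crude heuristic like this gives the user a path forward, which is exactly what the current dead-end replies lack.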

Permalink

Walt Mossberg on Siri’s Failures and Inconsistencies

Walt Mossberg, writing for The Verge, shares some frustrations with using Siri across multiple Apple devices:

In recent weeks, on multiple Apple devices, Siri has been unable to tell me the names of the major party candidates for president and vice president of the United States. Or when they were debating. Or when the Emmy awards show was due to be on. Or the date of the World Series. When I asked it “What is the weather on Crete?” it gave me the weather for Crete, Illinois, a small village which — while I’m sure it’s great — isn’t what most people mean when they ask for the weather _on_ Crete, the famous Greek island.

Google Now, on the same Apple devices, using the same voice input, answered every one of these questions clearly and correctly. And that isn’t even Google’s latest digital helper, the new Google Assistant.

It’s a little odd that Mossberg didn’t mention Siri’s new third-party abilities at all, but it’s hard to disagree with the overall assessment.

Like Mossberg, I think Siri has gotten pretty good at transcribing my commands (despite my accent), but it still fails often when it comes to doing stuff with transcribed text. Every example mentioned by Mossberg sounds more or less familiar to me (including the egregious presidential debate one).

Five years on, Siri in iOS 10 is much better than its first version, but it still has to improve in key areas such as consistency of results, timeliness of web-based queries (e.g. Grammys, presidential debates, news stories, etc.), and inferred queries (case in point). Despite the improvements and the launch of a developer platform, these aspects are so fundamental to a virtual assistant that even the occasional stumble makes Siri, as Mossberg writes, seem dumb.

Permalink