
Posts tagged with "siri"

Phil Schiller on How the iPhone Changed Apple

Steven Levy, writing for Backchannel, interviewed Apple’s Phil Schiller for the tenth anniversary of the iPhone’s introduction:

“If it weren’t for iPod, I don’t know that there would ever be iPhone,” he says. “It introduced Apple to customers that were not typical Apple customers, so iPod went from being an accessory to Mac to becoming its own cultural momentum. During that time, Apple changed. Our marketing changed. We had silhouette ads with dancers and an iconic product with white headphones. We asked, ‘Well, if Apple can do this one thing different than all of its previous products, what else can Apple do?’”

In the story, Schiller also makes an interesting point about Siri and conversational interfaces after being asked about Alexa and competing voice assistants:

“That’s really important,” Schiller says, “and I’m so glad the team years ago set out to create Siri — I think we do more with that conversational interface than anyone else. Personally, I still think the best intelligent assistant is the one that’s with you all the time. Having my iPhone with me as the thing I speak to is better than something stuck in my kitchen or on a wall somewhere.”
[…]
“People are forgetting the value and importance of the display,” he says. “Some of the greatest innovations on iPhone over the last ten years have been in display. Displays are not going to go away. We still like to take pictures and we need to look at them, and a disembodied voice is not going to show me what the picture is.”

Permalink

AirPods, Siri, and Voice-Only Interfaces

Ben Bajarin makes a strong point on using Siri with the AirPods:

There is, however, an important distinction to be made where I believe the Amazon Echo shows us a bit more of the voice-only interface and where I’d like to see Apple take Siri when it is embedded in devices without a screen, like the AirPods. You very quickly realize, the more you use Siri with the AirPods, how much the experience today assumes you have a screen in front of you. For example, if I use the AirPods to activate Siri and say, “What’s the latest news?” Siri will fetch the news then say, “Here is some news — take a look.” The experience assumes I want to use my screen (or it at least assumes I have a screen near me to look at) to read the news. Whereas, the Amazon Echo and Google Home just start reading the latest news headlines and tidbits. Similarly, when I activate Siri on the AirPods and say, “Play Christmas music”, the query processes and then plays. Where with the Echo, the same request yields Alexa to say, “OK, playing Christmas music from top 50 Christmas songs.” When you aren’t looking at a screen, the feedback is important. If I was to ask that same request while I was looking at my iPhone, you realize, as Siri processes the request, it says, “OK” on the screen but not in my ear. In voice-only interfaces, we need and want feedback that the request is happening or has been acknowledged.

Siri already adapts to the way it’s activated – it talks more when invoked via “Hey Siri” as it assumes you’re not looking at the screen, and it uses UI elements when triggered from the Home button.

Currently, activating Siri from the AirPods yields the same feedback as the “Hey Siri” method. I wonder if a future Siri will talk even more when it detects AirPods in your ears, since that means only you can hear its responses.
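For what it’s worth, third-party apps can already approximate this kind of heuristic. Here’s a minimal Swift sketch – my own illustration, not Apple’s actual Siri logic – that decides whether to favor spoken feedback based on whether audio is currently routed to headphones (the function name and the exact port list are assumptions):

```swift
import AVFoundation

// Hypothetical helper: prefer spoken feedback when audio is routed
// to headphones, since only the wearer can hear the responses.
func shouldSpeakResponses() -> Bool {
    let headphonePorts: Set<AVAudioSession.Port> = [
        .headphones,     // wired headphones
        .bluetoothA2DP,  // AirPods and most Bluetooth audio
        .bluetoothHFP,
        .bluetoothLE
    ]
    // Inspect the audio session's current output route.
    return AVAudioSession.sharedInstance().currentRoute.outputs
        .contains { headphonePorts.contains($0.portType) }
}
```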

Permalink

MKBHD Compares Siri and Google Assistant

This is a good video by Marques Brownlee on where things stand today between Siri (iOS 10) and the Google Assistant (running on a Google Pixel XL with Android Nougat). Three takeaways: Google Assistant is chattier than the old Google Voice Search; Google still seems to have an edge over Siri when it comes to follow-up questions based on topic inference (which Siri also does, but not as well); and Siri holds up well against most of the questions Brownlee asks.

In my daily experience, however, Siri still fails at basic tasks too often (two examples) and handles questions inconsistently. There is also, I believe, a perception problem with Siri, in that Apple fixes obvious Siri shortcomings too slowly or simply isn’t prepared for new types of questions – such as asking how the last presidential debate went. In addition, being able to text with Google Assistant in Allo for iOS has reinforced a longstanding wish of mine – the ability to converse silently with a digital assistant. I hope Siri gets some kind of textual mode or iMessage integration in iOS 11.

One note on Brownlee’s video: Siri isn’t as conversational as Google Assistant here because of the way Brownlee activates it. When invoked with the Home button (or by tapping the microphone icon), Siri assumes the user is looking at the screen and provides fewer audio cues, prioritizing visual feedback instead. Had Brownlee used hands-free “Hey Siri” activation, Siri would likely have been just as conversational as Google. I prefer Apple’s approach here – if I’m holding a phone, I can look at the UI, and there’s no need to speak detailed results aloud.

Permalink

Siri and the Suspension of Disbelief

Julian Lepinski has a thoughtful response to last week’s story by Walt Mossberg on Siri’s failures and inconsistencies. In particular, about the way Siri handles failed queries:

Apple’s high-level goal here should be to include responses that increase your faith in Siri’s ability to parse and respond to your question, even when that isn’t immediately possible. Google Search accomplishes this by explaining what they’re showing you, and asking you questions like “Did you mean ‘when is the debate’?” when they think you’ve made an error. Beyond increasing your trust in Siri, including questions like this in the responses would also generate a torrent of incredible data to help Apple tune the responses that Siri gives.

Apple has a bias towards failing silently when errors occur, which can be effective when the error rate is low. With Siri, however, this error rate is still quite high and the approach is far less appropriate. When Siri fails, there’s no path to success short of restarting and trying again (the brute force approach).

The comparison between conversational assistants and iOS’ original user interface feels particularly apt. It’d be helpful to know what else to try when Siri doesn’t understand a question.

Permalink

Walt Mossberg on Siri’s Failures and Inconsistencies

Walt Mossberg, writing for The Verge, shares some frustrations with using Siri across multiple Apple devices:

In recent weeks, on multiple Apple devices, Siri has been unable to tell me the names of the major party candidates for president and vice president of the United States. Or when they were debating. Or when the Emmy awards show was due to be on. Or the date of the World Series. When I asked it “What is the weather on Crete?” it gave me the weather for Crete, Illinois, a small village which — while I’m sure it’s great — isn’t what most people mean when they ask for the weather on Crete, the famous Greek island.

Google Now, on the same Apple devices, using the same voice input, answered every one of these questions clearly and correctly. And that isn’t even Google’s latest digital helper, the new Google Assistant.

It’s a little odd that Mossberg didn’t mention Siri’s new third-party abilities at all, but it’s hard to disagree with the overall assessment.

Like Mossberg, I think Siri has gotten pretty good at transcribing my commands (despite my accent), but it still fails often when it comes to doing stuff with the transcribed text. Every example mentioned by Mossberg sounds more or less familiar to me (including the egregious presidential debate one).

Five years on, Siri in iOS 10 is much better than its first version, but it still has to improve in key areas such as consistency of results, timeliness of web-based queries (e.g. Grammys, presidential debates, news stories), and inferred queries (case in point). These aspects are so fundamental to a virtual assistant that, despite the improvements and the launch of a developer platform, even the occasional stumble makes Siri, as Mossberg writes, seem dumb.

Permalink


Inside Your iPhone’s Brain

Steven Levy has a fascinating inside look at Apple’s artificial intelligence and machine learning efforts on Backchannel. Levy spent most of a day with Eddy Cue, Phil Schiller, Craig Federighi, Tom Gruber, and Alex Acero in a wide-ranging discussion of the products impacted by those efforts. Perhaps the most interesting parts of the interviews revolved around what Levy refers to as the Apple Brain inside the iPhone:

How big is this brain, the dynamic cache that enables machine learning on the iPhone? Somewhat to my surprise when I asked Apple, it provided the information: about 200 megabytes, depending on how much personal information is stored (it’s always deleting older data). This includes information about app usage, interactions with other people, neural net processing, a speech modeler, and “natural language event modeling.” It also has data used for the neural nets that power object recognition, face recognition, and scene classification.

And, according to Apple, it’s all done so your preferences, predilections, and peregrinations are private.
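To make Levy’s description of a rolling, size-capped cache more concrete, here’s a toy Swift sketch of a store that appends new entries and evicts the oldest ones once it exceeds a byte budget. Everything here – the type, the names, treating the cap as a hard limit – is my own illustration; Apple hasn’t published how the actual cache works:

```swift
import Foundation

// Toy model of a size-capped rolling cache: new entries are appended,
// and the oldest entries are evicted once the total size exceeds the cap.
struct RollingCache {
    private var entries: [(date: Date, data: Data)] = []
    private var totalBytes = 0
    let capacityBytes: Int

    init(capacityBytes: Int) {
        self.capacityBytes = capacityBytes
    }

    mutating func append(_ data: Data) {
        entries.append((Date(), data))
        totalBytes += data.count
        // "Always deleting older data": evict from the front
        // until we're back under the cap.
        while totalBytes > capacityBytes, !entries.isEmpty {
            totalBytes -= entries.removeFirst().data.count
        }
    }
}

// ~200 MB, the figure Apple gave Levy.
var cache = RollingCache(capacityBytes: 200 * 1024 * 1024)
cache.append(Data(repeating: 0, count: 1024))
```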

Levy also covers the replacement of Siri’s smarts with a neural-net system on July 30, 2014. The impact, according to Eddy Cue, was immediate:

This was one of those things where the jump was so significant that you do the test again to make sure that somebody didn’t drop a decimal place.

Many people have commented that Siri has improved over time, but without context beyond one’s own experience – or metrics from Apple – the perceived change has been largely anecdotal. According to Acero, however:

The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases. […] That’s mostly due to deep learning and the way we have optimized it — not just the algorithm itself but in the context of the whole end-to-end product.

Levy also delves into whether Apple’s stance on privacy hobbles its ability to effectively implement AI and machine learning. According to Apple, it does not. The most personal information remains on-device in the ‘Apple Brain.’ Other data that is transmitted to Apple is obfuscated with techniques like differential privacy, which is coming in iOS 10, so it can’t be tied back to a user’s identity.
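Differential privacy deserves a quick illustration. The classic building block is randomized response: each device flips coins before reporting, so no individual report reveals the true value, yet the aggregate frequency is still recoverable. A toy Swift sketch, with numbers chosen purely for illustration (Apple’s actual mechanisms are more sophisticated):

```swift
import Foundation

// Randomized response: report the truth half the time,
// otherwise report a uniformly random answer. Any single
// report is deniable, but aggregates remain estimable.
func randomizedResponse(truth: Bool) -> Bool {
    if Bool.random() { return truth }
    return Bool.random()
}

// Simulate 100,000 users, 30% of whom have some sensitive attribute.
let trueRate = 0.3
let reports = (0..<100_000).map { _ in
    randomizedResponse(truth: Double.random(in: 0..<1) < trueRate)
}
let observed = Double(reports.filter { $0 }.count) / Double(reports.count)

// E[observed] = 0.5 * trueRate + 0.25, so invert to estimate the rate.
let estimated = 2 * observed - 0.5
print("observed \(observed), estimated true rate \(estimated)")
```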

The entire article is worth a read to get a sense of the breadth and depth of Apple’s AI and machine learning efforts and the impact on its products. It’s also fascinating to see Apple continue to open up on its own terms as a way to rebut recent criticisms leveled against it.

Permalink


Comparing Siri and Alexa

Rene Ritchie at iMore, in an article titled “Siri vs. Alexa is hilarious to people outside the U.S.”:

Imagine if, on a weekly basis, you saw or heard “Xinghua” being compared to Siri. But “Xinghua” was available only in China and only to people who spoke Mandarin. How meaningful would those comparisons really be to you in the U.S.? That’s about as meaningful as headlines comparing Amazon’s virtual assistant, Alexa, to Apple’s Siri are to the vast majority of the world’s population.

Right now Alexa is solving only for people in America who speak English. That’s an incredibly small subset of what Siri, which just recently added Hebrew and several other languages in several other regions, solves for.

With all due respect to Rene, I think this is a disingenuous way of defending Siri from the comparisons to the Amazon Echo’s Alexa.

It is, of course, a fair complaint that the Amazon Echo is not available in countries outside the United States, and that it can only understand US English.1 But I do not think it is legitimate to imply that the Echo’s geographic and linguistic limitations somehow undermine the advances it offers in other areas – such as its integrations with services – which have seen it receive praise from all corners of the industry in recent months.

A large part of the praise for the Amazon Echo comes from the fact that in 18 months it has gone from a product that didn’t exist to one that many in the US find incredibly useful. Also significant is that in those 18 months it has evolved rapidly, adding great new features that make it even more useful. That is why people are comparing it to Siri, which launched in 2011 and has undoubtedly improved, but at a much slower pace and in less substantial ways (multi-lingual support aside).

I’m an Australian, and I don’t think this Siri vs. Alexa debate is “laughably US-centric” – I think it’s important, even if I can’t personally use Alexa. Just last week, Google announced that it will be releasing a very similar product later this year, and credited Amazon for its pioneering work with the Echo. I am certain Apple has taken similar notice of Amazon’s (seemingly successful) efforts with the Echo, and if Apple acts on those observations, everyone with access to Siri will benefit.

So I’m not laughing, I’m grateful, if a little envious that my friends in the US are (yet again) getting a taste of the future before me. But I know it’ll reach me soon enough, whether it’s via Apple, Google, Amazon, or even Microsoft.


  1. I regularly make these kinds of observations/complaints about various products and services. Two years ago I even spent days researching and putting together this extensive examination of just how far ahead Apple was in terms of the availability of media content in countries around the world, so I understand this frustration very well. ↩︎
Permalink