Posts tagged with "siri"

Apple’s Commitment to AI Is Clear, But Its Execution Is Uneven

The day has finally arrived. iOS 18.1, iPadOS 18.1, and macOS 15.1 are all out and include Apple’s first major foray into the world of artificial intelligence. Of course, Apple is no stranger to AI and machine learning, but a narrative took hold that the company was behind on AI because it didn’t market any of its OS features as such. Nor did it have anything resembling the generative AI tools from OpenAI, Midjourney, or a host of other companies.

However, with today’s OS updates, that has begun to change. Each update released today includes a far deeper set of new features than any other ‘.1’ release I can remember. Not only are the releases stuffed with a suite of artificial intelligence tools that Apple collectively refers to as Apple Intelligence, but there are a bunch of other new features that Niléane has written about, too.

The company is tackling AI in a unique and very Apple way that goes beyond just the marketing name the features have been given. As users have come to expect, Apple is taking an integrated approach. You don’t have to use a chatbot to do everything from proofreading text to summarizing articles; instead, Apple Intelligence is sprinkled throughout Apple’s OSes and system apps in ways that make them convenient to use with existing workflows.

If you don’t want to use Apple Intelligence, you can turn it off with a single toggle in each OS’s settings.

Apple also recognizes that not everyone is a fan of AI tools, so they’re just as easy to ignore or turn off completely from System Settings on a Mac or Settings on an iPhone or iPad. Users are in control of the experience and their data, which is refreshing since that’s far from given in the broader AI industry.

The Apple Intelligence features themselves are a decidedly mixed bag, though. Some I like, but others don’t work very well or aren’t especially useful. To be fair, Apple has said that Apple Intelligence is a beta feature. This isn’t the first time that the company has given a feature the “beta” label even after it’s been released widely and is no longer part of the official developer or public beta programs. However, it’s still an unusual move and seems to reveal the pressure Apple is under to demonstrate its AI bona fides. Whatever the reasons behind the release, there’s no escaping the fact that most of the Apple Intelligence features we see today feel unfinished and unpolished, while others remain months away from release.

Still, it’s very early days for Apple Intelligence. These features will eventually graduate from betas to final products, and along the way, I expect they’ll improve. They may not be perfect, but what is certain from the extent of today’s releases and what has already been previewed in the developer beta of iOS 18.2, iPadOS 18.2, and macOS 15.2 is that Apple Intelligence is going to be a major component of Apple’s OSes going forward, so let’s look at what’s available today, what works, and what needs more attention.

Read more


The New York Times Declares that Voice Assistants Have Lost the ‘AI Race’

Brian Chen, Nico Grant, and Karen Weise of The New York Times set out to explain why voice assistants like Siri, Alexa, and Google Assistant seem primitive by comparison to ChatGPT. According to ex-Apple, Amazon, and Google engineers and employees, the difference is grounded in the approach the companies took with their assistants:

The assistants and the chatbots are based on different flavors of A.I. Chatbots are powered by what are known as large language models, which are systems trained to recognize and generate text based on enormous data sets scraped off the web. They can then suggest words to complete a sentence.

In contrast, Siri, Alexa and Google Assistant are essentially what are known as command-and-control systems. These can understand a finite list of questions and requests like “What’s the weather in New York City?” or “Turn on the bedroom lights.” If a user asks the virtual assistant to do something that is not in its code, the bot simply says it can’t help.
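
To make the distinction concrete, here’s a toy sketch in Swift of what a command-and-control parser looks like. The command names and phrases are entirely hypothetical (they just echo the Times’ examples); the point is that anything outside the hard-coded list falls through to “Sorry, I can’t help with that,” whereas a large language model generates its reply word by word from learned probabilities rather than matching a fixed list.

```swift
// Toy command-and-control assistant: it only understands requests that
// match a small, hand-written list of patterns (hypothetical example).
enum Command {
    case weather(city: String)
    case lightsOn(room: String)
}

func parse(_ utterance: String) -> Command? {
    let text = utterance.lowercased()

    let weatherPrefix = "what's the weather in "
    if text.hasPrefix(weatherPrefix) {
        return .weather(city: String(text.dropFirst(weatherPrefix.count)))
    }

    let lightsPrefix = "turn on the "
    let lightsSuffix = " lights"
    if text.hasPrefix(lightsPrefix) && text.hasSuffix(lightsSuffix) {
        let room = text.dropFirst(lightsPrefix.count).dropLast(lightsSuffix.count)
        return .lightsOn(room: String(room))
    }

    // Anything not on the list: "Sorry, I can't help with that."
    return nil
}
```

Every new request type means another hand-written branch and another round of testing, which is the slow, manual process the Times’ sources describe.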

In the case of Siri, former Apple engineer John Burkey said the company’s assistant was designed as a monolithic database that took weeks to update with new capabilities. Burkey left Apple in 2016 after less than two years at the company, according to his LinkedIn bio. According to other unnamed Apple sources, the company has been testing AI based on large language models in the years since Burkey’s departure:

At Apple’s headquarters last month, the company held its annual A.I. summit, an internal event for employees to learn about its large language model and other A.I. tools, two people who were briefed on the program said. Many engineers, including members of the Siri team, have been testing language-generating concepts every week, the people said.

It’s not surprising that sources have told The New York Times that Apple is researching the latest advances in artificial intelligence. All you have to do is visit the company’s Machine Learning Research website to see that. But declaring a winner in ‘the AI race’ based on where voice assistants’ architectures started compared to today’s chatbots is a bit facile. Voice assistants may be primitive by comparison to chatbots, but it’s far too early to count Apple, Google, or Amazon out or declare the race over, for that matter.

Permalink

Apple’s Fall OS Updates Promise Deeper HomeKit and Entertainment Integration

Apple’s fall OS updates will include a variety of HomeKit and home entertainment features. Unsurprisingly, some of those changes can be found in the company’s Home and TV apps, but this year, those apps only tell part of the overall story. To get the full picture, you need to zoom out from the apps, where you’ll find an interesting mix of new smart home device and entertainment features sprinkled throughout each platform.

Let’s start with HomeKit devices. This year, many of the changes coming to Apple’s OSes relate to two important categories: video cameras and door locks. Controlling both types of devices will become easier this fall, thanks to deeper integration with the upcoming OS releases.
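
As a rough illustration of how those two categories surface to developers today, here’s a minimal sketch using Apple’s HomeKit framework that enumerates a home’s locks and cameras. It assumes an app that already has the HomeKit entitlement and the user’s permission, the `AccessoryLister` class name is made up, and it isn’t meant to reflect the new fall features themselves.

```swift
import HomeKit

// Minimal sketch: list the door locks and cameras HomeKit knows about.
// Assumes the app has the HomeKit entitlement and user permission.
final class AccessoryLister: NSObject, HMHomeManagerDelegate {
    private let manager = HMHomeManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // Called once HomeKit has loaded the user's homes.
    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        for home in manager.homes {
            for accessory in home.accessories {
                let isLock = accessory.services.contains {
                    $0.serviceType == HMServiceTypeLockMechanism
                }
                let isCamera = !(accessory.cameraProfiles?.isEmpty ?? true)

                if isLock { print("\(home.name) lock: \(accessory.name)") }
                if isCamera { print("\(home.name) camera: \(accessory.name)") }
            }
        }
    }
}
```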

Read more


WWDC 2021: All The Small Things in Apple’s Upcoming OS Releases

WWDC keynotes cover a lot of ground, hitting the highlights of the OS updates Apple plans to release in the fall. However, as the week progresses, new details emerge from session videos, developers trying new frameworks, and others who bravely install the first OS betas. So, as with past WWDCs, we’ve supplemented our iOS and iPadOS 15, macOS Monterey, watchOS 8, and tvOS 15 coverage with all the small things we’ve found interesting this week:

Read more


Siri Adds Two New English Speaking Voices and Lets Users Choose Among Them

Matthew Panzarino, reporting for TechCrunch, says the latest beta of iOS and iPadOS 14.5 includes two new English Siri voices. The report elaborates that the existing female voice is no longer the default and that users will choose the voice they want to use with Apple’s voice assistant when setting up a device for the first time.

In a statement to TechCrunch, an Apple spokesperson said:

We’re excited to introduce two new Siri voices for English speakers and the option for Siri users to select the voice they want when they set up their device. This is a continuation of Apple’s long-standing commitment to diversity and inclusion, and products and services that are designed to better reflect the diversity of the world we live in.

Panzarino says he’s heard the new voices and likes them a lot; he plans to embed samples in his story once he has the sixth iOS 14.5 beta installed.

I’m surprised that Apple is adding new Siri voices this late in the iOS 14 cycle, but it’s a welcome change that eliminates a biased default and makes Siri a more diverse and inclusive service.

Permalink

Two Months with the HomePod mini: More Than Meets the Eye

As a smaller, affordable smart speaker tightly integrated with Apple services, the HomePod mini is a compelling product for many people. The mini is little enough to work just about anywhere in most homes. At $99, the device’s price tag also fits more budgets and makes multiple HomePod minis a far more realistic option than multiple original HomePods ever were. Of course, the mini comes with tradeoffs compared to its larger, more expensive sibling, which I’ll get into, but for many people, it’s a terrific alternative.

As compelling as the HomePod mini is as a speaker, though, its potential as a smart device reaches beyond the original HomePod in ways that have far greater implications for Apple’s place in customers’ homes. Part of the story is the mini’s ability to serve as a border router for Thread-compatible smart devices, forming a low-power, mesh network that can operate independently of your Wi-Fi setup. The other part of the story is the way the mini extends Siri throughout your home. Apple’s smart assistant still has room to improve. However, the promise of a ubiquitous audio interface to Apple services, apps, HomeKit devices, and the Internet is more compelling than ever as Siri-enabled devices proliferate.

For the past couple of months, I’ve been testing a pair of HomePod minis that Apple sent me. That pair joined my original HomePods and another pair of minis that I added to the setup to get a sense of what having a whole-home audio system with Siri always within earshot would be like. The result is a more flexible system that outshines its individual parts and should improve over time as the HomeKit device market evolves.

Read more


Hands-On with the HomePod’s New Intercom Feature, Alarms, and Siri Tricks

With yesterday’s releases of iOS 14.1 and HomePod Software Version 14.1, which could really use a catchier name, Apple has introduced several new features announced last week at its iPhone 12 and HomePod mini event. Most readers are probably already familiar with what’s in the updates based on our iPhone 12 and HomePod mini overviews, so I thought I’d update my HomePods and devices to provide some hands-on thoughts about the changes.

Most of the new features are related to the HomePod. Although proximity-based features are exclusive to the HomePod mini, which features Apple’s U1 Ultra Wideband chip, some of the other functionality revealed last week is available on all HomePod models.

Read more


Apple’s HomePod mini: The MacStories Overview

I have two HomePods: one in our living room and another in my office. They sound terrific, and I’ve grown to depend on the convenience of controlling HomeKit devices, adding groceries to my shopping list, checking the weather, and being able to ask Siri to pick something to play when I can’t think of anything myself. My office isn’t very big, though, and when rumors of a smaller HomePod surfaced, I was curious to see what Apple was planning.

Today, those plans were revealed during the event the company held remotely from the Steve Jobs Theater in Cupertino. Apple introduced the HomePod mini, a diminutive $99 smart speaker that’s just 3.3 inches tall and 3.8 inches wide. In comparison, the original HomePod is 6.8 inches tall and 5.6 inches wide. At just 0.76 pounds, the mini is also considerably lighter than the 5.5-pound original HomePod.

Read more


John Giannandrea on the Broad Reach of Machine Learning in Apple’s Products

Today Samuel Axon at Ars Technica published a new interview with two Apple executives: SVP of Machine Learning and AI Strategy John Giannandrea and VP of Product Marketing Bob Borchers. The interview is lengthy yet well worth reading, especially since it’s the most we’ve heard from Apple’s head of ML and AI since he departed Google to join the company in 2018.

Based on some of the things Giannandrea says in the interview, it sounds like he’s had a very busy two years. For example, when asked to list ways Apple has used machine learning in its recent software and products, Giannandrea lists a variety of things before ultimately indicating that it’s harder to name things that don’t use machine learning than ones that do.

There’s a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we’ve released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we’re not using machine learning. It’s hard to find a part of the experience where you’re not doing some predictive [work].
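
The on-device dictation Giannandrea mentions is something developers can already opt into directly. Here’s a minimal sketch using Apple’s Speech framework that asks for a transcription computed entirely on the device; the audio file path is a placeholder, and a real app would also need the usual speech-recognition usage description in its Info.plist.

```swift
import Speech

// Minimal sketch: transcribe an audio file without sending it to a server.
// The file path is a placeholder for a real recording.
SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized,
          let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en_US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device speech recognition isn't available here.")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/path/to/memo.m4a"))
    request.requiresOnDeviceRecognition = true // keep the audio on the device

    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```

Keeping recognition local trades away some accuracy and language coverage, but the audio never leaves the device, which is exactly the trade-off the interview circles around.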

One interesting tidbit mentioned by both Giannandrea and Borchers is that Apple’s increased dependence on machine learning hasn’t led to the company talking about ML non-stop. I’ve noticed this too – whereas a few years ago the company might have thrown out ‘machine learning’ countless times during a keynote presentation, these days it’s intentionally more careful and calculated about when it invokes the term, and I think for good reason. As Giannandrea puts it, “I think that this is the future of the computing devices that we have, is that they be smart, and that, that smart sort of disappear.” Borchers expounds on that idea:

This is clearly our approach, with everything that we do, which is, ‘Let’s focus on what the benefit is, not how you got there.’ And in the best cases, it becomes automagic. It disappears… and you just focus on what happened, as opposed to how it happened.

The full interview covers subjects like Apple’s Neural Engine, Apple Silicon for Macs, the benefits of handling ML tasks on-device, and much more, including a fun story from Giannandrea’s early days at Apple. You can read it here.

Permalink