Posts tagged with "artificial intelligence"

Notes on the Apple Intelligence Delay

Simon Willison, one of the more authoritative independent voices in the LLM space right now, published a good theory on what may have happened with Apple’s delay of Apple Intelligence’s Siri personalization features:

I have a hunch that this delay might relate to security.

These new Apple Intelligence features involve Siri responding to requests to access information in applications and then perform actions on the user’s behalf.

This is the worst possible combination for prompt injection attacks! Any time an LLM-based system has access to private data, tools it can call and potentially malicious instructions (like emails and text messages from untrusted strangers) there’s a risk that an attacker might subvert those tools and use them to damage or exfiltration a user’s data.

Willison has been writing about prompt injection attacks since 2023. We know that Mail’s AI summaries were (at least initially?) sort of susceptible to prompt injections (using hidden HTML elements), as were Writing Tools during the beta period. It’s scary to imagine what would happen with a well-crafted prompt injection when the attack’s surface area becomes the entire assistant directly plugged into your favorite apps with your data. But then again, one has to wonder why these features were demoed at all at Apple’s biggest software event last year and if those previews – absent a real, in-person event – were actually animated prototypes.

On this note, I disagree with Jason Snell’s idea that previewing Apple Intelligence last year was a good move no matter what. Are we sure that “nobody is looking” at Apple’s position in the AI space right now and that Siri isn’t continuing down its path of damaging Apple’s software reputation, like MobileMe did? As a reminder, the iPhone 16 lineup was advertised as “built for Apple Intelligence” in commercials, interviews, and Apple’s website.

If the company’s executives are so certain that the 2024 marketing blitz worked, why are they pulling Apple Intelligence ads from YouTube when “nobody is looking”?

On another security note: knowing Apple’s penchant for user permission prompts (Shortcuts and macOS are the worst offenders), I wouldn’t be surprised if the company tried to mitigate Siri’s potential hallucinations and/or the risk of prompt injections with permission dialogs everywhere, and later realized the experience was terrible. Remember: Apple announced an App Intents-driven system with assistant schemas that included actions for your web browser, file manager, camera, and more. Getting any of those actions wrong (think: worse than not picking your mom up at the airport, but actually deleting some of your documents) could have pretty disastrous consequences.

Regardless of what happened, here’s the kicker: according to Mark Gurman, “some within Apple’s AI division” believe that the delayed Apple Intelligence features may be scrapped altogether and replaced by a new system rebuilt from scratch. From his story, pay close attention to this paragraph:

There are also concerns internally that fixing Siri will require having more powerful AI models run on Apple’s devices. That could strain the hardware, meaning Apple either has to reduce its set of features or make the models run more slowly on current or older devices. It would also require upping the hardware capabilities of future products to make the features run at full strength.

Inference costs may have gone down over the past 12 months and context windows may have gotten bigger, but I’m guessing there’s only so much you can do locally with 8 GB of RAM when you have to draw on the user’s personal context across (potentially) dozens of different apps, and then have conversations with the user about those results. It’ll be interesting to watch what Apple does here within the next 1-2 years: more RAM for the same price on iPhones, even more tasks handed off to Private Cloud Compute, or a combination of both?

We’ll see how this will play out at WWDC 2025 and beyond. I continue to think that Apple and Google have the most exciting takes on AI in terms of applying the technology to user’s phones and apps they use everyday. The only difference is that one company’s announcements were theoretical, and the other’s are shipping today. It seems clear now that Apple got caught off guard by LLMs while they were going down the Vision Pro path, and I’ll be curious to see how their marketing strategy will play out in the coming months.


Gemini for iOS Gets Lock Screen Widgets, Control Center Integration, Basic Shortcuts Actions

Gemini for iOS.

Gemini for iOS.

When I last wrote about Gemini for iOS, I noted the app’s lackluster integration with several system features. But since – unlike others in the AI space – the team at Google is actually shipping new stuff on a weekly basis, I’m not too surprised to see that the latest version of Gemini for iOS has brought extensive support for widgets.

Specifically, Gemini for iOS now offers a collection of Lock Screen widgets that also appear as controls in iOS 18’s Control Center, and there are barebones Shortcuts actions to go along with them. In both the Lock Screen’s widget gallery and Control Center, you’ll find Gemini widgets to:

  • type a prompt,
  • Talk Live,
  • open the microphone (for dictation),
  • open the camera,
  • share an image (with a Photos picker), and
  • share a document (with a Files picker).

It’s nice to see these integrations with Photos and Files; notably, Gemini now also has a share extension that lets you add the same media types – plus URLs from webpages – to a prompt from anywhere on iOS.

The Shortcuts integration is a little less exciting since Google implemented old-school actions that do not support customizable parameters. Instead, Gemini only offers actions to open the app in three modes: type, dictate, or Talk Live. That’s disappointing, and I would have preferred to see the ability to pass text or images from Shortcuts directly to Gemini.

While today’s updates are welcome, Google still has plenty of work left to do on Apple’s platforms. For starters, they don’t have an iPad version of the Gemini app. There are no Home Screen widgets yet. And the Shortcuts integration, as we’ve seen, could go much deeper. Still, the inclusion of controls, basic Shortcuts actions, and a share extension goes a long way toward making Gemini easier to access on iOS – that is, until the entire assistant is integrated as an extension for Apple Intelligence.


“Everyone Is Caught Up, Except for Apple”

Good post by Parker Ortolani (who’s blogging more frequently now; I recommend subscribing to his blog) on the new (and surprisingly good looking?) Alexa+ and where Apple stands with Siri:

So here we are. Everyone is caught up, except for Apple. Siri may have a pretty glowing animation but it is not even remotely the same kind of personal assistant that these others are. Even the version of Siri shown at WWDC last year doesn’t appear to be quite as powerful as Alexa+. Who knows how good the app intents powered Siri will even be at the end of the day when it ships, after all according to reports it has been pushed back and looks like an increasingly difficult endeavor. I obviously want Siri to be great. It desperately needs improvement, not just to compete but to make using an iPhone an even better experience.

I continue to think that Apple has immense potential for Apple Intelligence and Siri if they get both to work right with their ecosystem. But at this point, I have to wonder if we’ll see GTA 6 before Siri gets any good.

Permalink

Beyond ChatGPT’s Extension: How to Redirect Safari Searches to Any LLM

xSearch for Safari.

xSearch for Safari.

Earlier this week, OpenAI’s official ChatGPT app for iPhone and iPad was updated with a native Safari extension that lets you forward any search query from Safari’s address bar to ChatGPT Search. It’s a clever approach: rather than waiting for Apple to add a native ChatGPT Search option to their list of default search engines (if they ever will), OpenAI leveraged extensions’ ability to intercept queries in the address bar and redirect them to ChatGPT whenever you type something and press Return.

However, this is not the only option you have if you want to redirect your Safari search queries to a search engine other than the one that’s set as your default. While the solution I’ll propose below isn’t as frictionless as OpenAI’s native extension, it gets the job done, and until other LLMs like Claude, Gemini, Perplexity, and Le Chat ship their own Safari extensions, you can use my approach to give Safari more AI search capabilities right now.

Read more


One AI to Rule Them All?

I enjoyed this look by M.G. Siegler at the current AI landscape, evaluating the positions of all the big players and trying to predict who will come out on top based on what we can see today. I’ve been thinking about this a lot lately. The space is changing so rapidly, with weekly announcements and rumors, that it’s challenging to keep up with all the latest models, app integrations, and reasoning modes. But one thing seems certain: with 400 million weekly users, ChatGPT is winning in the public eye.

However, I was captivated by this analogy, and I wish I’d thought of it myself:

Professionals and power users will undoubtedly pay for, and get value out of, multiple models and products. But just as with the streaming wars, consumers are not going to buy all of these services. And unlike that war, where all of the players had differentiating content, again, the AI services are reaching some level of parity (for consumer use cases). So whereas you might have three or four streaming services that you pay for, you will likely just have one main AI service. Again, it’s more like search in that way.

I see the parallels between different streaming services and different AI models, and I wonder if it’s the sort of diversification that happens before inevitable consolidation. Right now, I find ChatGPT’s Deep Research superior to Google Gemini, but Google has a more fascinating and useful ecosystem story; Claude is better at coding, editing prose, and following complex instructions than any other model I’ve tested, but it feels limited by a lack of extensions and web search (for now). As a result, I find myself jumping between different LLMs for different tasks. And that’s not to mention the more specific products I use on a regular basis, such as NotebookLM, Readwise Chat, and Whisper. Could it be that, just like I’ve always appreciated distinct native apps for specific tasks, maybe I also prefer dedicated AIs for different purposes now?

I continue to think that, long term, it’ll once again come down to iOS versus Android, as it’s always been. But I also believe that M.G. Siegler is correct: until the dust settles (if it ever does), power users will likely use multiple AIs in lieu of one AI to rule them all. And for regular users, at least for the time being, that one AI is ChatGPT.

Permalink

NotebookLM Plus Is Now Available to Google One AI Premium Subscribers

In this week’s extended post-show for AppStories+ subscribers, Federico and I covered the AI tools we use. NotebookLM is one we have in common because it’s such a powerful research tool. The service allows you to upload documents and other files to a notebook and then query what you’ve collected. It’s better than a traditional search tool because you can ask complex questions, discover connections between topics, and generate materials like timelines and summaries.

Yesterday, Google announced that NotebookLM Plus is now available to Google One AI Premium subscribers, significantly expanding its reach. Previously, the extended functionality was only available as an add-on for Google Workspace subscribers.

The Plus version of NotebookLM increases the number of notebooks, sources, and audio overviews available, allows users to customize the tone of their notebooks, and lets users share notebooks with others. Google One AI Premium also includes access to Gemini Advanced and Gemini integration with Gmail, Docs, and other Google services, plus 2 TB of Google Drive cloud storage.

My DMA notebook.

My DMA notebook.

I’ve only begun to scratch the surface of what is possible with NotebookLM and am currently moving my notebook setup from one Google account to another, but it’s already proven to be a valuable research tool. Examples of the types of materials I’ve collected for querying include:

  • legislative material and articles about Apple’s DMA compliance,
  • my past macOS reviews,
  • summaries of and links to stories published on MacStories and Club MacStories,
  • video hardware research materials, and
  • manuals for home appliances and gadgets.

Having already collected and read these materials, I find navigating them with NotebookLM to be far faster than repeatedly skimming through them to pull out details. I also appreciate the ability to create materials like timelines for topics that span months or years.

Google One AI Premium is available from Google for $19.99 per month.


DeepSeek Tops the App Store Charts and Sends AI Stocks on a Wild Ride

DeepSeek's newfound popularity has made it impossible to log in as of the publication of this story.

DeepSeek’s newfound popularity has made it impossible to log in as of the publication of this story.

And just like that, ChatGPT has been dethroned from its perch at the top of the App Store’s free app list, replaced by DeepSeek, another AI app. What’s interesting is that DeepSeek, which was developed by a Chinese startup, was reportedly created at a fraction of the cost of ChatGPT and other large language models developed in the US, which has tech stocks in turmoil.

Last week, DeepSeek revealed its latest LLM, which matches or outperforms OpenAI’s o1 model in some tests. That’s nothing new. AI companies have been one-upping each other for months. What’s different is that DeepSeek was reportedly built with a fraction of the hardware and at a fraction of the cost of OpenAI’s o1 and models like Anthropic’s Claude.

DeepSeek is also open source, potentially undermining the financial viability of U.S. and other for-profit companies that have spent hundreds of millions of dollars developing models that require a paid subscription. And, because it’s free, DeepSeek rocketed to the top of the App Store’s free app list, passing OpenAI’s ChatGPT, which has been at or near the top of the list for months.

That has caused a stir in Silicon Valley. As VentureBeat’s Carl Franzen puts it:

The open-source availability of DeepSeek-R1, its high performance, and the fact that it seemingly “came out of nowhere” to challenge the former leader of generative AI, has sent shockwaves throughout Silicon Valley and far beyond, based on my conversations with and readings of various engineers, thinkers and leaders. If not “everyone” is freaking out about it as my hyperbolic headline suggests, it’s certainly the talk of the town in tech and business circles.

Now, as DeepSeek is starting to look like the real deal, the stock market is causing competitors’ stocks to drop, including NVIDIA’s, which, according to the Financial Times, fell 13% at the opening of the New York Stock Exchange.

If there’s one thing that has been a truism of the AI industry over the past couple of years, it’s that it moves very fast. Today’s leaders are tomorrow’s laggards. Will DeepSeek dethrone the U.S. AI companies? It’s far too early to know, but it certainly is beginning to look like there’s a new horse in the race.


Apple Reveals A Partial Timeline for the Rollout of More Apple Intelligence Features

Last week, Apple released the first developer betas of iOS 18.2, iPadOS 18.2, and macOS 15.2, which the press speculated would be out by the end of the year. It turns out that was a good call because today, Apple confirmed that timing. In its press release about the Apple Intelligence features released today, Apple revealed that the next round is coming in December and will include the following:

  • Users will be able to describe changes they want made to text using Writing Tools. For example, you can have text rewritten with a certain tone or in the form of a poem.
  • ChatGPT will be available in Writing Tools and when using Siri.
  • Image Playground will allow users to create images with Apple’s generative AI model.
  • Users will be able to use prompts to create Genmoji, custom emoji-style images that can be sent to friends in iMessage and used as stickers.
  • Visual intelligence will be available via the Camera Control on the iPhone 16 and iPhone 16 Pro. The feature will allow users to point the iPhone’s camera at something and learn about it from Google or ChatGPT. Apple also mentions that visual intelligence will work with other unspecified “third-party tools.”
  • Apple Intelligence will be available in localized English in Australia, Canada, Ireland, New Zealand, South Africa, and the U.K.

Apple’s press release also explains when other languages are coming:

…in April, a software update will deliver expanded language support, with more coming throughout the year. Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Vietnamese, and other languages will be supported.

And Apple’s Newsroom in Ireland offers information on the Apple Intelligence rollout in the EU:

Mac users in the EU can access Apple Intelligence in U.S. English with macOS Sequoia 15.1. This April, Apple Intelligence features will start to roll out to iPhone and iPad users in the EU. This will include many of the core features of Apple Intelligence, including Writing Tools, Genmoji, a redesigned Siri with richer language understanding, ChatGPT integration, and more.

It’s a shame it’s going to be another six months before EU customers can take advantage of Apple Intelligence features on their iPhones and iPads, but it’s nonetheless good to hear when it will happen.

It’s also worth noting that the timing of other pieces of Apple Intelligence is unclear. There is still no word on precisely when Siri will gain knowledge of your personal context or perform actions in apps on your behalf, for instance. Even so, today’s reveal is more than Apple usually shares, which is both nice and a sign of the importance the company places on these features.


Apple’s Commitment to AI Is Clear, But Its Execution Is Uneven

The day has finally arrived. iOS 18.1, iPadOS 18.1, and macOS 15.1 are all out and include Apple’s first major foray into the world of artificial intelligence. Of course, Apple is no stranger to AI and machine learning, but it became the narrative that the company was behind on AI because it didn’t market any of its OS features as such. Nor did it have anything resembling the generative AI tools from OpenAI, Midjourney, or a host of other companies.

However, with today’s OS updates, that has begun to change. Each update released today includes a far deeper set of new features than any other ‘.1’ release I can remember. Not only are the releases stuffed with a suite of artificial intelligence tools that Apple collectively refers to as Apple Intelligence, but there are a bunch of other new features that Niléane has written about, too.

The company is tackling AI in a unique and very Apple way that goes beyond just the marketing name the features have been given. As users have come to expect, Apple is taking an integrated approach. You don’t have to use a chatbot to do everything from proofreading text to summarizing articles; instead, Apple Intelligence is sprinkled throughout Apple’s OSes and system apps in ways that make them convenient to use with existing workflows.

If you don't want to use Apple Intelligence, you can turn it off with a single toggle in each OS's settings.

If you don’t want to use Apple Intelligence, you can turn it off with a single toggle in each OS’s settings.

Apple also recognizes that not everyone is a fan of AI tools, so they’re just as easy to ignore or turn off completely from System Settings on a Mac or Settings on an iPhone or iPad. Users are in control of the experience and their data, which is refreshing since that’s far from given in the broader AI industry.

The Apple Intelligence features themselves are a decidedly mixed bag, though. Some I like, but others don’t work very well or aren’t especially useful. To be fair, Apple has said that Apple Intelligence is a beta feature. This isn’t the first time that the company has given a feature the “beta” label even after it’s been released widely and is no longer part of the official developer or public beta programs. However, it’s still an unusual move and seems to reveal the pressure Apple is under to demonstrate its AI bona fides. Whatever the reasons behind the release, there’s no escaping the fact that most of the Apple Intelligence features we see today feel unfinished and unpolished, while others remain months away from release.

Still, it’s very early days for Apple Intelligence. These features will eventually graduate from betas to final products, and along the way, I expect they’ll improve. They may not be perfect, but what is certain from the extent of today’s releases and what has already been previewed in the developer beta of iOS 18.2, iPadOS 18.2, and macOS 15.2 is that Apple Intelligence is going to be a major component of Apple’s OSes going forward, so let’s look at what’s available today, what works, and what needs more attention.

Read more