Federico Viticci

9591 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps, Unwind, a fun exploration of media and more, and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.


Opening iOS Is Good News for Smartwatches

Speaking of opening up iOS to more types of applications, I enjoyed this story by Victoria Song, writing at The Verge about the new EU-mandated interoperability requirements that include, among other things, smartwatches:

This is a big reason why it’s a good thing that the European Commission recently gave Apple marching orders to open up iOS interoperability to other gadget makers. You can read our explainer on the nitty gritty of what this means, but the gist is that it’s going to be harder for Apple to gatekeep iOS features to its own products. Specific to smartwatches, Apple will have to allow third-party smartwatch makers to display and interact with iOS notifications. I’m certain Garmin fans worldwide, who have long complained about the inability to send quick replies on iOS, erupted in cheers.

And this line, which is so true in its simplicity:

Some people just want the ability to choose how they use the products they buy.

Can you imagine if your expensive Mac desktop had, say, some latency if you decided to enter text with a non-Apple keyboard? Or if the USB-C port only worked with proprietary Apple accessories? Clearly, those restrictions would be absurd on computers that cost thousands of dollars. And yet, similar restrictions have long existed on iPhones and the iOS ecosystem, and it’s time to put an end to them.

Permalink

On Apple Allowing Third-Party Assistants on iOS

This is an interesting idea by Parker Ortolani: what if Apple allowed users to change their default assistant from Siri to something else?

I do not want to harp on the Siri situation, but I do have one suggestion that I think Apple should listen to. Because I suspect it is going to take quite some time for the company to get the new Siri out the door properly, they should do what was previously unthinkable. That is, open up iOS to third-party assistants. I do not say this lightly. I am one of those folks who does not want iOS to be torn open like Android, but I am willing to sign on when it makes good common sense. Right now it does.

And:

I do not use Gemini as my primary LLM generally, I prefer to use ChatGPT and Claude most of the time for research, coding, and writing. But Gemini has proved to be the best assistant out of them all. So while we wait for Siri to get good, give us the ability to use custom assistants at the system level. It does not have to be available to everyone, heck create a special intent that Google and these companies need to apply for if you want. But these apps with proper system level overlays would be a massive improvement over the existing version of Siri. I do not want to have to launch the app every single time.

As a fan of the progressive opening up of iOS that’s been happening in Europe thanks to our laws, I can only welcome such a proposal – especially when I consider the fact that long-pressing the side button on my expensive phone defaults to an assistant that can’t even tell which month it is. If Apple truly thinks that Siri helps users “find what they need and get things done quickly”, they should create an Assistant API and allow other companies to compete with them. Let iPhone users decide which assistant they prefer in 2025.

Some people may argue that other assistants, unlike Siri, won’t be able to access key features such as sending messages or integrating with core iOS system frameworks. My reply would be: perhaps having a more prominent placement on iOS would actually push third-party companies to integrate with the iOS APIs that do exist. For instance, there is nothing stopping OpenAI from integrating ChatGPT with the Reminders app; they have done exactly that with MapKit, and if they wanted, they could plug into HomeKit, HealthKit, and the dozens of other frameworks available to developers. And for those iOS features that don’t have an API for other companies to support…well, that’s for Apple to fix.

From my perspective, it always goes back to the same idea: I should be able to freely swap out software on my Apple pocket computer just like I can thanks to a safe, established system on my Apple desktop computer. (Arguably, that is also the perspective of, you know, the law in Europe.) Even Google – a company that would have all the reasons not to let people swap the Gemini assistant for anything else – lets folks decide which assistant they want to use on Android. And, as you can imagine, competition there is producing some really interesting results.

I’m convinced that, at this point, a lot of people despise Siri and would simply prefer pressing their assistant button to talk to ChatGPT or Claude – even if that meant losing access to reminders, timers, and whatever it is that Siri can reliably accomplish these days. (I certainly wouldn’t mind putting Claude on my iPhone and leaving Siri on the Watch for timers and HomeKit.) Whether it’s because of superior world knowledge, proper multilingual abilities (something that Siri still doesn’t support!), or longer contextual conversations, hundreds of millions of people have clearly expressed their preference for new types of digital assistance and conversations that go beyond the antiquated skillset of Siri.

If a new version of Siri isn’t going to be ready for some time, and if Apple does indeed want to make the best computers for AI, maybe it’s time to open up that part of iOS in a way that goes beyond the (buggy) ChatGPT integration with Siri.

Permalink

App Store Vibes

Bryan Irace has an interesting take on the new generation of developer tools that have lowered the barrier to entry for new developers (and sometimes not even developers) when it comes to creating apps:

Recent criticism of Apple’s AI efforts has been juicy to say the least, but this shouldn’t distract us from continuing to criticize one of Apple’s most deserving targets: App Review. Especially now that there’s a perfectly good AI lens through which to do so.

It’s one thing for Apple’s AI product offerings to be non-competitive. Perhaps even worse is that as Apple stands still, software development is moving forward faster than ever before. Like it or not, LLMs—both through general chat interfaces and purpose-built developer tools—have meaningfully increased the rate at which new software can be produced. And they’ve done so both by making skilled developers more productive while also lowering the bar for less-experienced participants.

And:

I recently built a small iOS app for myself. I can install it on my phone directly from Xcode but it expires after seven days because I’m using a free Apple Developer account. I’m not trying to avoid paying Apple, but there’s enough friction involved in switching to a paid account that I simply haven’t been bothered. And I used to wrangle provisioning profiles for a living! I can’t imagine that I’m alone here, or that others with less tribal iOS development knowledge are going to have a higher tolerance for this. A friend asked me to send the app to them but that’d involve creating a TestFlight group, submitting a build to Apple, waiting for them to approve it, etc. Compare this to simply pushing to Cloudflare or Netlify and automatically having a URL you can send to a friend or share via Twitter. Or using tools like v0 or Replit, where hosting/distribution are already baked in.

Again, this isn’t new—but being able to build this much software this fast is new. App distribution friction has stayed constant while friction in all other stages of software development has largely evaporated. It’s the difference between inconvenient and untenable.

Perhaps “vibe coding” is the extreme version of this concept, but I think there’s something here. Creating small, low-stakes apps for personal projects or that you want to share with a small group of people is, objectively, getting easier. After reading Bryan’s post – which rightfully focuses on the distribution side of apps – I’m also wondering: what happens when the first big service comes along and figures out a way to bypass the App Store altogether (perhaps via the web?) to allow “anyone” to create apps, completely cutting out Apple and its App Review from the process?

In a way, this reminds me of blogging. Those who wanted to have an online writing space 30 years ago had to know some of the basics of hosting and HTML if they wanted to publish something for other people to read. Then Blogger came along and allowed anyone – regardless of their skill level – to be read. What if the same happened to mobile software? Should Apple and Google be ready for this possibility within the next few years?

I could see Google spin up a “Build with Gemini” initiative to let anyone create Android apps without any coding knowledge. I’m also reminded of this old Vision Pro rumor that claimed Apple’s Vision team was exploring the idea of letting people create “apps” with Siri.

If only the person in charge of that team went anywhere, right?

Permalink

Choosing Optimism About iOS 19

I loved this post by David Smith on his decision to remain optimistic about Apple’s rumored iOS 19 redesign despite, well, you know, everything:

Optimism isn’t enthusiasm. Enthusiasm is a feeling, optimism is a choice. I have much less of the enthusiastic feelings these days about my relationship to Apple and its technologies (discussed here on Under the Radar 312), but I can still choose to optimistically look for the positives in any situation. Something I’ve learned as I’ve aged is that pessimism feels better in the moment, but then slowly rots you over time. Whereas optimism feels foolish in the moment, but sustains you over time.

I’ve always disliked the word “enthusiast” (talk about a throwback), and I’ve been frequently criticized for choosing the more optimistic approach in covering Apple over the years. But David is right: pessimism feels better in the short term (and performs better if you’re good at writing headlines or designing YouTube thumbnails), but is not a good long-term investment. (Of course, when the optimism is also gone for good…well, that’s a different kind of problem.)

But back to David’s thoughts on the iOS 19 redesign. He lists this potential reason to be optimistic about having to redesign his apps:

It would provide a point of differentiation for my app against other apps who wouldn’t adopt the new design language right away (either large companies which have their own design system or laggards who wouldn’t prioritize it).

He’s correct: the last time Apple rolled out a major redesign of iOS, they launched a dedicated section on the App Store which, on day one, featured indie apps updated for iOS 7 such as OmniFocus, Twitterrific, Reeder 2, Pocket Casts 4, and Perfect Weather. It lasted for months. Twelve years later1, I doubt that bigger companies will be as slow as they were in 2013 to adopt Apple’s new design language, but more agile indie developers will undoubtedly have an advantage here.

He also writes:

Something I regularly remind myself as I look at new Apple announcements is that I never have the whole picture of what is to come for the platform, but Apple does. They know if things like foldable iPhones or HomeKit terminals are on the horizon and how a new design would fit in best with them. If you pay attention and try to read between the lines they often will provide the clues necessary to “skate where the puck is going” and be ready when new, exciting things get announced subsequently.

This is the key point for me going into this summer’s review season. Just like when Apple introduced size classes in iOS 8 at WWDC 2014 and launched Slide Over and Split View multitasking for the iPad (alongside the first iPad Pro) the next year, I have to imagine that changes in this year’s design language will pave the way for an iPhone that unfolds into a mini tablet, a convertible Mac laptop, App Intents on a dedicated screen, or more. So while I’m not enthusiastic about Apple’s performance in AI or developer relations, I choose to be optimistic about the idea that this year’s redesign may launch us into an exciting season of new hardware and apps for the next decade.


  1. Think about it this way: when iOS 7 was released, the App Store was only five years old. ↩︎
Permalink

The iPad’s “Sweet” Solution

In working with my iPad Pro over the past few months, I’ve realized something that might have seemed absurd just a few years ago: some of the best apps I’m using – the ones with truly desktop-class layouts and experiences – aren’t native iPad apps.

They’re web apps.

Before I continue and share some examples, let me clarify that this is not a story about the superiority of one way of building software over another. I’ll leave that argument to developers and technically inclined folks who know much more about programming and software stacks than I do.

Rather, the point I’m trying to make is that, due to a combination of cost-saving measures by tech companies, Apple’s App Store policies over the years, and the steady rise of a generation of young coders who are increasingly turning to the web to share their projects, some of the best, most efficient workflows I can access on iPadOS are available via web apps in a browser or a PWA.

Read more


On Apple Offering an Abstraction Layer for AI on Its Platforms

Source: Apple.

Source: Apple.

I’ve been thinking about Apple’s position in AI a lot this week, and I keep coming back to this idea: if Apple is making the best consumer-grade computers for AI right now, but Apple Intelligence is failing third-party developers with a lack of AI-related APIs, should the company try something else to make it easier for developers to integrate AI into their apps?

Gus Mueller, creator of Acorn and Retrobatch, has been pondering similar thoughts:

A week or so ago I was grousing to some friends that Apple needs to open up things on the Mac so other LLMs can step in where Siri is failing. In theory we (developers) could do this today, but I would love to see a blessed system where Apple provided APIs to other LLM providers.

Are there security concerns? Yes, of course there are, there always will be. But I would like the choice.

The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.

The idea is a fascinating one: if Apple Intelligence cannot compete with the likes of ChatGPT or Claude for the foreseeable future, but third-party developers are creating apps based on those APIs, is there a scenario in which Apple may regain control of the burgeoning AI app ecosystem by offering their own native bridge to those APIs?

Read more


The M3 Ultra Mac Studio for Local LLMs

Speaking of the new Mac Studio and Apple making the best computers for AI: this is a terrific overview by Max Weinbach about the new M3 Ultra chip and its real-world performance with various on-device LLMs:

The Mac I’ve been using for the past few days is the Mac Studio with M3 Ultra SoC, 32-core CPU, 80-core GPU, 256GB Unified Memory (192GB usable for VRAM), and 4TB SSD. It’s the fastest computer I have. It is faster in my workflows for even AI than my gaming PC (which will be used for comparisons below; it has an Intel i9 13900K, RTX 5090, 64GB of DDR5, and a 2TB NVMe SSD).

It’s a very technical read, but the comparison between the M3 Ultra and a vanilla (non-optimized) RTX 5090 is mind-blogging to me. According to Weinbach, it all comes down to Apple’s MLX framework:

I’ll keep it brief; the LLM performance is essentially as good as you’ll get for the majority of models. You’ll be able to run better models faster with larger context windows on a Mac Studio or any Mac with Unified Memory than essentially any PC on the market. This is simply the inherent benefit of not only Apple Silicon but Apple’s MLX framework (the reason we can efficiently run the models without preloading KV Cache into memory, as well as generate tokens faster as context windows grow).

In case you’re not familiar, MLX is Apple’s open-source framework that – I’m simplifying – optimizes training and serving models on Apple Silicon’s unified memory architecture. It is a wonderful project with over 1,600 community models available for download.

As Weinbach concludes:

I see one of the best combos any developer can do as: M3 Ultra Mac Studio with an Nvidia 8xH100 rented rack. Hopper and Blackwell are outstanding for servers, M3 Ultra is outstanding for your desk. Different machines for a different use, while it’s fun to compare these for sport, that’s not the reality.⁠⁠

There really is no competition for an AI workstation today. The reality is, the only option is a Mac Studio.

Don’t miss the benchmarks in the story.

Permalink

Is Apple Shipping the Best AI Computers?

For all the criticism (mine included) surrounding Apple’s delay of various Apple Intelligence features, I found this different perspective by Ben Thompson fascinating and worth considering:

What that means in practical terms is that Apple just shipped the best consumer-grade AI computer ever. A Mac Studio with an M3 Ultra chip and 512GB RAM can run a 4-bit quantized version of DeepSeek R1 — a state-of-the-art open-source reasoning model — right on your desktop. It’s not perfect — quantization reduces precision, and the memory bandwidth is a bottleneck that limits performance — but this is something you simply can’t do with a standalone Nvidia chip, pro or consumer. The former can, of course, be interconnected, giving you superior performance, but that costs hundreds of thousands of dollars all-in; the only real alternative for home use would be a server CPU and gobs of RAM, but that’s even slower, and you have to put it together yourself. Apple didn’t, of course, explicitly design the M3 Ultra for R1; the architectural decisions undergirding this chip were surely made years ago. In fact, if you want to include the critical decision to pursue a unified memory architecture, then your timeline has to extend back to the late 2000s, whenever the key architectural decisions were made for Apple’s first A4 chip, which debuted in the original iPad in 2010. Regardless, the fact of the matter is that you can make a strong case that Apple is the best consumer hardware company in AI, and this week affirmed that reality.

Anecdotally speaking, based on the people who cover AI that I follow these days, it seems there are largely two buckets of folks who are into local, on-device models: those who have set up pricey NVIDIA rigs at home for their CUDA cores (the vast minority); and – the undeniable majority – those who run a spectrum of local models on their Macs of different shapes and configurations (usually, MacBook Pros). If you have to run high-end, performance-intensive local models for academic or scientific workflows on a desktop, the M3 Ultra Mac Studio sounds like an absolute winner.

However, I’d point out that – again, as far as local, on-device models are concerned – Apple is not shipping the best possible hardware on smartphones.

While the entire iPhone 16 lineup is stuck on 8 GB of RAM (and we know how memory-hungry these models can be), Android phones with at least 12 GB or 16 GB of RAM are becoming pretty much the norm now, especially in flagship territory. Even better in Android land, what are being advertised as “gaming phones” with a whopping 24 GB of RAM (such as the ASUS ROG Phone 9 Pro or the RedMagic 10 Pro) may actually make for compelling pocket computers to run smaller, distilled versions of DeepSeek, LLama, or Mistral with better performance than current iPhones.

Interestingly, I keep going back to this quote from Mark Gurman’s latest report on Apple’s AI challenges:

There are also concerns internally that fixing Siri will require having more powerful AI models run on Apple’s devices. That could strain the hardware, meaning Apple either has to reduce its set of features or make the models run more slowly on current or older devices. It would also require upping the hardware capabilities of future products to make the features run at full strength.

Given Apple’s struggles, their preference for a hybrid on-device/server-based AI system, and the market’s evolution on Android, I don’t think Apple can afford to ship 8 GB on iPhones for much longer if they’re serious about AI and positioning their hardware as the best consumer-grade AI computers.

Permalink

Notes on the Apple Intelligence Delay

Simon Willison, one of the more authoritative independent voices in the LLM space right now, published a good theory on what may have happened with Apple’s delay of Apple Intelligence’s Siri personalization features:

I have a hunch that this delay might relate to security.

These new Apple Intelligence features involve Siri responding to requests to access information in applications and then perform actions on the user’s behalf.

This is the worst possible combination for prompt injection attacks! Any time an LLM-based system has access to private data, tools it can call and potentially malicious instructions (like emails and text messages from untrusted strangers) there’s a risk that an attacker might subvert those tools and use them to damage or exfiltration a user’s data.

Willison has been writing about prompt injection attacks since 2023. We know that Mail’s AI summaries were (at least initially?) sort of susceptible to prompt injections (using hidden HTML elements), as were Writing Tools during the beta period. It’s scary to imagine what would happen with a well-crafted prompt injection when the attack’s surface area becomes the entire assistant directly plugged into your favorite apps with your data. But then again, one has to wonder why these features were demoed at all at Apple’s biggest software event last year and if those previews – absent a real, in-person event – were actually animated prototypes.

On this note, I disagree with Jason Snell’s idea that previewing Apple Intelligence last year was a good move no matter what. Are we sure that “nobody is looking” at Apple’s position in the AI space right now and that Siri isn’t continuing down its path of damaging Apple’s software reputation, like MobileMe did? As a reminder, the iPhone 16 lineup was advertised as “built for Apple Intelligence” in commercials, interviews, and Apple’s website.

If the company’s executives are so certain that the 2024 marketing blitz worked, why are they pulling Apple Intelligence ads from YouTube when “nobody is looking”?

On another security note: knowing Apple’s penchant for user permission prompts (Shortcuts and macOS are the worst offenders), I wouldn’t be surprised if the company tried to mitigate Siri’s potential hallucinations and/or the risk of prompt injections with permission dialogs everywhere, and later realized the experience was terrible. Remember: Apple announced an App Intents-driven system with assistant schemas that included actions for your web browser, file manager, camera, and more. Getting any of those actions wrong (think: worse than not picking your mom up at the airport, but actually deleting some of your documents) could have pretty disastrous consequences.

Regardless of what happened, here’s the kicker: according to Mark Gurman, “some within Apple’s AI division” believe that the delayed Apple Intelligence features may be scrapped altogether and replaced by a new system rebuilt from scratch. From his story, pay close attention to this paragraph:

There are also concerns internally that fixing Siri will require having more powerful AI models run on Apple’s devices. That could strain the hardware, meaning Apple either has to reduce its set of features or make the models run more slowly on current or older devices. It would also require upping the hardware capabilities of future products to make the features run at full strength.

Inference costs may have gone down over the past 12 months and context windows may have gotten bigger, but I’m guessing there’s only so much you can do locally with 8 GB of RAM when you have to draw on the user’s personal context across (potentially) dozens of different apps, and then have conversations with the user about those results. It’ll be interesting to watch what Apple does here within the next 1-2 years: more RAM for the same price on iPhones, even more tasks handed off to Private Cloud Compute, or a combination of both?

We’ll see how this will play out at WWDC 2025 and beyond. I continue to think that Apple and Google have the most exciting takes on AI in terms of applying the technology to user’s phones and apps they use everyday. The only difference is that one company’s announcements were theoretical, and the other’s are shipping today. It seems clear now that Apple got caught off guard by LLMs while they were going down the Vision Pro path, and I’ll be curious to see how their marketing strategy will play out in the coming months.