The iPad’s “Sweet” Solution

In working with my iPad Pro over the past few months, I’ve realized something that might have seemed absurd just a few years ago: some of the best apps I’m using – the ones with truly desktop-class layouts and experiences – aren’t native iPad apps.

They’re web apps.

Before I continue and share some examples, let me clarify that this is not a story about the superiority of one way of building software over another. I’ll leave that argument to developers and technically inclined folks who know much more about programming and software stacks than I do.

Rather, the point I’m trying to make is that, due to a combination of cost-saving measures by tech companies, Apple’s App Store policies over the years, and the steady rise of a generation of young coders who are increasingly turning to the web to share their projects, some of the best, most efficient workflows I can access on iPadOS are available via web apps in a browser or a PWA.


Where’s Swift Assist?

Last June at WWDC, Apple announced Swift Assist, a way to generate Swift code using natural language prompts. However, as Tim Hardwick writes for MacRumors, Swift Assist hasn’t been heard from since then:

Unlike Apple Intelligence, Swift Assist never appeared in beta. Apple hasn’t announced that it’s been delayed or cancelled. The company has since released Xcode 16.3 beta 2, and as Michael Tsai points out, it’s not even mentioned in the release notes.

Meanwhile, developers have moved on, adopting services like Cursor, which does much of what was promised with Swift Assist, if not more. A similar tool built specifically for Swift projects and Apple’s APIs would be a great addition to Xcode, but it’s been nine months, and developers haven’t heard anything more about Swift Assist. Apple owes them an update.


Podcast Rewind: Tech Ultimatums, Samsung’s Wild Prototype Handheld, and Our Gaming Origin Stories

Enjoy the latest episodes from MacStories’ family of podcasts:

AppStories

This week, Federico and I share our self-imposed tech deadlines for the hardware and software we use.

This episode is sponsored by:

  • Memberful – Easy-to-Use Reliable Membership Software

NPC: Next Portable Console

Brendon, Federico, and I are back for another week of handheld news, including a tiny bit of Switch 2 news, an up-and-down week for Retroid, DS handhelds inching forward, Samsung wondering if thumbholes are the perfect complement to thumbsticks, and AYANEO deciding thumbsticks aren’t worth the trouble. Plus, Brendon shares NextUI and the 8BitDo Ultimate 2 Controller.

NPC XL

This week, Federico, Brendon, and I take listeners on a tour of our handheld and console gaming histories.


On Apple Offering an Abstraction Layer for AI on Its Platforms

Source: Apple.

I’ve been thinking about Apple’s position in AI a lot this week, and I keep coming back to this idea: if Apple is making the best consumer-grade computers for AI right now, but Apple Intelligence is failing third-party developers with a lack of AI-related APIs, should the company try something else to make it easier for developers to integrate AI into their apps?

Gus Mueller, creator of Acorn and Retrobatch, has been pondering similar thoughts:

A week or so ago I was grousing to some friends that Apple needs to open up things on the Mac so other LLMs can step in where Siri is failing. In theory we (developers) could do this today, but I would love to see a blessed system where Apple provided APIs to other LLM providers.

Are there security concerns? Yes, of course there are, there always will be. But I would like the choice.

The crux of the issue in my mind is this: Apple has a lot of good ideas, but they don’t have a monopoly on them. I would like some other folks to come in and try their ideas out. I would like things to advance at the pace of the industry, and not Apple’s. Maybe with a blessed system in place, Apple could watch and see how people use LLMs and other generative models (instead of giving us Genmoji that look like something Fisher-Price would make). And maybe open up the existing Apple-only models to developers. There are locally installed image processing models that I would love to take advantage of in my apps.

The idea is a fascinating one: if Apple Intelligence cannot compete with the likes of ChatGPT or Claude for the foreseeable future, but third-party developers are creating apps based on those APIs, is there a scenario in which Apple may regain control of the burgeoning AI app ecosystem by offering their own native bridge to those APIs?


Metallica Is Coming to the Apple Vision Pro

Apple revealed a new Immersive Video title for the Vision Pro. As announced at SXSW today, Vision Pro users will be treated to a live performance of three Metallica songs – “Whiplash,” “One,” and “Enter Sandman” – on March 14th.

According to Metallica’s press release:

This project marks a new foray into immersive technology, using ultra-high-resolution 180-degree video and Spatial Audio to give fans unprecedented access from vantage points as close up as the Snake Pit to wide-angle views. It brings the live show to a whole new level, and to achieve this, Apple built a custom stage plot featuring 14 Apple Immersive Video cameras using a mix of stabilized cameras, cable-suspended cameras, and remote-controlled camera dolly systems that moved around the stage.

For its part, Apple released a trailer for the video on YouTube, along with an interview by Zane Lowe with Metallica’s Lars Ulrich.

Today’s Metallica news follows the recent Immersive Video announcements of VIP: Yankee Stadium and Bono: Stories of Surrender. It’s great to see new content coming to the Vision Pro, especially live concerts and sports, which are a perfect match for the format.


The M3 Ultra Mac Studio for Local LLMs

Speaking of the new Mac Studio and Apple making the best computers for AI: this is a terrific overview by Max Weinbach about the new M3 Ultra chip and its real-world performance with various on-device LLMs:

The Mac I’ve been using for the past few days is the Mac Studio with M3 Ultra SoC, 32-core CPU, 80-core GPU, 256GB Unified Memory (192GB usable for VRAM), and 4TB SSD. It’s the fastest computer I have. It is faster in my workflows for even AI than my gaming PC (which will be used for comparisons below; it has an Intel i9 13900K, RTX 5090, 64GB of DDR5, and a 2TB NVMe SSD).

It’s a very technical read, but the comparison between the M3 Ultra and a vanilla (non-optimized) RTX 5090 is mind-blowing to me. According to Weinbach, it all comes down to Apple’s MLX framework:

I’ll keep it brief; the LLM performance is essentially as good as you’ll get for the majority of models. You’ll be able to run better models faster with larger context windows on a Mac Studio or any Mac with Unified Memory than essentially any PC on the market. This is simply the inherent benefit of not only Apple Silicon but Apple’s MLX framework (the reason we can efficiently run the models without preloading KV Cache into memory, as well as generate tokens faster as context windows grow).

In case you’re not familiar, MLX is Apple’s open-source framework that – I’m simplifying – optimizes training and serving models on Apple Silicon’s unified memory architecture. It is a wonderful project with over 1,600 community models available for download.

As Weinbach concludes:

I see one of the best combos any developer can do as: M3 Ultra Mac Studio with an Nvidia 8xH100 rented rack. Hopper and Blackwell are outstanding for servers, M3 Ultra is outstanding for your desk. Different machines for a different use, while it’s fun to compare these for sport, that’s not the reality.

There really is no competition for an AI workstation today. The reality is, the only option is a Mac Studio.

Don’t miss the benchmarks in the story.


Is Apple Shipping the Best AI Computers?

For all the criticism (mine included) surrounding Apple’s delay of various Apple Intelligence features, I found this different perspective by Ben Thompson fascinating and worth considering:

What that means in practical terms is that Apple just shipped the best consumer-grade AI computer ever. A Mac Studio with an M3 Ultra chip and 512GB RAM can run a 4-bit quantized version of DeepSeek R1 — a state-of-the-art open-source reasoning model — right on your desktop. It’s not perfect — quantization reduces precision, and the memory bandwidth is a bottleneck that limits performance — but this is something you simply can’t do with a standalone Nvidia chip, pro or consumer. The former can, of course, be interconnected, giving you superior performance, but that costs hundreds of thousands of dollars all-in; the only real alternative for home use would be a server CPU and gobs of RAM, but that’s even slower, and you have to put it together yourself. Apple didn’t, of course, explicitly design the M3 Ultra for R1; the architectural decisions undergirding this chip were surely made years ago. In fact, if you want to include the critical decision to pursue a unified memory architecture, then your timeline has to extend back to the late 2000s, whenever the key architectural decisions were made for Apple’s first A4 chip, which debuted in the original iPad in 2010. Regardless, the fact of the matter is that you can make a strong case that Apple is the best consumer hardware company in AI, and this week affirmed that reality.

Anecdotally speaking, based on the people who cover AI that I follow these days, it seems there are largely two buckets of folks who are into local, on-device models: those who have set up pricey NVIDIA rigs at home for their CUDA cores (the vast minority); and – the undeniable majority – those who run a spectrum of local models on their Macs of different shapes and configurations (usually, MacBook Pros). If you have to run high-end, performance-intensive local models for academic or scientific workflows on a desktop, the M3 Ultra Mac Studio sounds like an absolute winner.
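Thompson’s “4-bit quantized” point is easy to sanity-check with back-of-the-envelope math: a model’s weight footprint is roughly its parameter count times bits per weight, divided by 8. Here’s a minimal sketch (the function name is mine; the 671-billion-parameter figure is DeepSeek R1’s published size, and real-world use adds overhead for the KV cache, activations, and the OS):

```python
def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate model weight size in decimal GB: params * bits / 8.

    Ignores runtime overhead such as the KV cache and activations.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# DeepSeek R1 (671B parameters) at different quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_footprint_gb(671, bits):,.0f} GB")

# 16-bit: ~1,342 GB -> far beyond any single consumer machine
# 8-bit:  ~671 GB   -> still too large for 512 GB of RAM
# 4-bit:  ~336 GB   -> fits on a 512 GB M3 Ultra Mac Studio,
#                      with headroom for the OS and the KV cache
```

This is why the 4-bit version is the one that fits: halving the bits per weight halves the footprint, at the cost of the precision loss Thompson notes.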

However, I’d point out that – again, as far as local, on-device models are concerned – Apple is not shipping the best possible hardware on smartphones.

While the entire iPhone 16 lineup is stuck on 8 GB of RAM (and we know how memory-hungry these models can be), Android phones with at least 12 GB or 16 GB of RAM are becoming pretty much the norm, especially in flagship territory. Even better in Android land, phones advertised as “gaming phones” with a whopping 24 GB of RAM (such as the ASUS ROG Phone 9 Pro or the RedMagic 10 Pro) may actually make for compelling pocket computers for running smaller, distilled versions of DeepSeek, Llama, or Mistral with better performance than current iPhones.

Interestingly, I keep going back to this quote from Mark Gurman’s latest report on Apple’s AI challenges:

There are also concerns internally that fixing Siri will require having more powerful AI models run on Apple’s devices. That could strain the hardware, meaning Apple either has to reduce its set of features or make the models run more slowly on current or older devices. It would also require upping the hardware capabilities of future products to make the features run at full strength.

Given Apple’s struggles, their preference for a hybrid on-device/server-based AI system, and the market’s evolution on Android, I don’t think Apple can afford to ship 8 GB on iPhones for much longer if they’re serious about AI and positioning their hardware as the best consumer-grade AI computers.


The ‘e’ Is for Elemental

Source: Apple.

For the past 10 days, I’ve been testing the iPhone 16e – but not in the way I typically test new hardware. You see, I didn’t buy the iPhone 16e to make calls, send email, surf the web, post to social media, or anything else, really. Instead, I got it for one thing: the camera.


DEVONthink: Store, Organize, and Work the Smart Way [Sponsor]

DEVONthink 3 is the all-in-one solution to organizing and retrieving documents across macOS, iOS, and iPadOS. By supporting numerous file formats, it’s the perfect solution to bring your work together in one powerful hub.

What sets DEVONthink apart is its many information retrieval features. The app learns from the way you use it, so it can automatically classify and tag your documents. Plus, with advanced Boolean operators and smart groups that save your searches, it’s simple to create refined searches that you can return to whenever you need them.

DEVONthink has lightning-fast search, too. It uses advanced AI to go beyond simple keyword searches, finding the most relevant documents instantly.

The app is also the perfect companion for web-based research. You can automatically pull from RSS feeds, save webpages in a variety of formats, including Markdown, and then highlight, comment, and take notes. The app can even suggest connections between documents and provide contextual recommendations using its integrated AI.

With OCR technology built in, DEVONthink is also perfect for going paperless. Just scan in your documents, and DEVONthink takes care of making them text-searchable.

Everything in your DEVONthink database can sync between multiple Macs, as well as iPhones and iPads. The app also uses industry-grade encryption to secure your documents and offers easy export of documents in multiple formats.

There’s never been a better time to bring order to your information. DEVONtechnologies is offering 10% off on all versions of DEVONthink to MacStories readers for a limited time. Visit DEVONtechnologies’ website today to learn more about how DEVONthink can help you gain the upper hand on your data and to take advantage of this great offer.

Our thanks to DEVONtechnologies for sponsoring MacStories this week.