This Week's Sponsor:

Washing Machine X9

Spring Clean Your Mac Effortlessly


AltStore PAL Releases AltStore Classic for EU iPhone Users

AltStore PAL, the alternative app store for iPhone users in the EU, celebrated its first anniversary today with a whopper of an update. AltStore PAL 2.2 adds AltStore Classic to its catalog of apps. That’s right, a store within a store, which allows users in Europe to sideload hundreds of non-notarized apps.

If this sounds odd, it is, but there’s a method to the madness. AltStore Classic allows users to install DolphiniOS, an emulator that uses JIT (Just-In-Time) compilation, which is necessary to emulate more recent games and isn’t allowed by Apple on iOS out of the box. Alongside AltStore PAL 2.2, the AltStore team released StikDebug, an AltStore PAL app that allows any app sideloaded with AltStore Classic to use JIT.

I’m in the U.S., so I haven’t tried AltStore Classic, but judging from what I’ve seen on Reddit, JIT can make a big difference for emulators. EU users can read more about the update to AltStore PAL and AltStore Classic in the update’s release notes.


Podcast Rewind: Calendar Apps, Switch Discoveries, and Alternatives to Terminal

Enjoy the latest episodes from MacStories’ family of podcasts:

AppStories

This week, Federico and I survey our favorite calendar apps, discussing the strengths and weaknesses of each.


NPC: Next Portable Console

This week on NPC, the trio analyzes how tariffs have disrupted Nintendo Switch 2 preorders in the U.S. Then Federico, Brendon, and I dissect new details that have emerged about the upcoming console, debate Nintendo’s game pricing, ponder what console exclusivity means in 2025, and share our excitement for the Retroid Pocket Flip 2.

NPC XL

In this episode of NPC XL, Federico, Brendon, and I dig into the details of Nintendo’s newly announced Zelda Notes companion app for the Switch 2. Also this week, I weigh in on the Ayn Odin 2 Portal TPU Grip, and we discover that Federico is a Collector, not a Builder.


Ruminate

This week, I try yet another Kettle chip flavor, Robb buys more mystery Pringles, I play Prince of Persia on all the platforms, Robb buys a pen, and we both discuss terminal apps.

This episode is sponsored by:

  • Rogue Amoeba: Makers of incredibly useful audio tools for your Mac. Use the code MS2504 through the end of April to get 20% off Rogue Amoeba’s apps.



Apple Is Using Differential Privacy to Improve Apple Intelligence

Apple has been using differential privacy for nearly ten years to collect its users’ data in a way that isn’t traceable back to an individual. As Apple explains in a recent post on its Machine Learning Research site:

This approach works by randomly polling participating devices for whether they’ve seen a particular fragment, and devices respond anonymously with a noisy signal. By noisy, we mean that devices may provide the true signal of whether a fragment was seen or a randomly selected signal for an alternative fragment or no matches at all. By calibrating how often devices send randomly selected responses, we ensure that hundreds of people using the same term are needed before the word can be discoverable.

The company has used the technique to analyze everything from the popularity of emoji to what words to suggest with QuickType.
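The mechanism Apple describes is a form of randomized response: each device sometimes tells the truth and sometimes sends a random answer, and the server de-biases the aggregate. The sketch below is a simplified illustration of that idea, not Apple’s actual implementation; the `p_truth` parameter and function names are made up for the example.

```python
import random

def noisy_report(seen_fragment: bool, p_truth: float = 0.75) -> bool:
    """A device reports whether it saw a fragment, lying at random.

    With probability p_truth it sends the true signal; otherwise it
    sends a coin flip, so no single report reveals anything definite.
    """
    if random.random() < p_truth:
        return seen_fragment       # true signal
    return random.random() < 0.5   # random signal, hides the truth

def estimate_true_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Server side: recover the population rate from noisy reports.

    E[report] = p_truth * true_rate + (1 - p_truth) * 0.5,
    so we can solve for true_rate from the observed average.
    """
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

Because individual reports are noisy, only terms seen by many people produce a signal that survives the de-biasing step, which is the property Apple is relying on when it says hundreds of users are needed before a term becomes discoverable.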

Now, Apple is using differential privacy to mine the data of users who have opted into sharing device analytics to improve Apple Intelligence. So far, the technique’s use has been limited to improving Genmoji, but in upcoming OS releases, it will be used for “Image Playground, Image Wand, Memories Creation and Writing Tools in Apple Intelligence, as well as in Visual Intelligence,” too.

The report explains that:

Building on our many years of experience using techniques like differential privacy, as well as new techniques like synthetic data generation, we are able to improve Apple Intelligence features while protecting user privacy for users who opt in to the device analytics program. These techniques allow Apple to understand overall trends, without learning information about any individual, like what prompts they use or the content of their emails. As we continue to advance the state of the art in machine learning and AI to enhance our product experiences, we remain committed to developing and implementing cutting-edge techniques to protect user privacy.

For Genmoji, this means collecting data on the most popular prompts used to create the emoji-like images. Apple explains that written content is more challenging but that it can use an LLM to generate synthetic data like emails. The synthetic data is then sent to the devices of users who have opted into device analytics to determine which synthetic examples match actual user data most closely and frequently, again using differential privacy to prevent individual devices from being identified.

Using differential privacy to improve Apple Intelligence without directly scraping user data is clever, but it does make me wonder why something similar wasn’t used to generate Apple’s large language models that were trained on the contents of the Internet. Perhaps that’s not possible at the scale of an LLM, or maybe that initial model needs a level of precision that differential privacy doesn’t offer, but I think it’s fair to ask.


Apple Announces Global Close Your Rings Day

April 24th marks the 10-year anniversary of the release of the Apple Watch. To mark the day, Apple has announced Global Close Your Rings Day, encouraging Apple Watch users to meet their daily fitness goals.

If you have an Apple Watch and close your Activity rings on April 24th, you’ll get a special limited-edition award on your watch and animated stickers in Messages. Apple is also giving away a special pin that the company says is inspired by the award, starting April 24th, while supplies last.

In a press release, Apple’s chief operating officer, Jeff Williams, said:

Apple Watch has changed the way people think about, monitor, and engage with their fitness and health. A decade ago, we introduced Activity rings — and since then, Apple Watch has grown to offer an extensive set of features designed to empower every user. People write to us almost every day sharing how Apple Watch has made a difference in their life, from motivating them to move more throughout the day, to changing the trajectory of their health.

For more on the event and statistics on the benefits seen by customers who close their rings, you can read Apple’s press release here.


Washing Machine X9: Spring Clean Your Mac Effortlessly [Sponsor]

Spring is the perfect time for a deep clean—including your Mac! Mac Washing Machine X9, from Intego, the leader in macOS security, helps you declutter, optimize, and speed up your system with ease.

Clean Up Your Mac

Over time, junk files like caches, temporary data, and unused language files accumulate, taking up valuable space and slowing down performance. Washing Machine X9 identifies and removes them automatically, ensuring your Mac runs smoothly and efficiently.

Eliminate Duplicate Files

Without realizing it, you may have multiple copies of documents, photos, and videos scattered across your Mac. Washing Machine X9 scans your system, detects duplicates, and removes them in one click, freeing up storage instantly.

Organize Your Mac for Maximum Efficiency

If your desktop and folders feel like a chaotic mess, Washing Machine X9 has you covered. It automatically organizes files, sorts documents into the right folders, and even helps optimize your Dock by prioritizing frequently used apps.

A High-Performance Tool at a Limited-Time Price

MacStories readers get an exclusive 60% discount on the Mac Washing Machine X9 for only $19.99 (instead of $49.99). It’s faster and more affordable than CleanMyMac, and it delivers outstanding results. Compatible with macOS 10.13 to macOS Sequoia, including M4-chip Macs, this tool is backed by a 30-day money-back guarantee.

For complete Mac protection, upgrade to Intego Mac Premium Bundle X9 for $39.99, which includes antivirus, firewall, VPN, backup, and parental controls.

👉 Don’t miss out—give your Mac a second life today with Washing Machine X9 for just $19.99!

Our thanks to Washing Machine X9 for sponsoring MacStories this week.


Podcast Rewind: Speedy Drives, Dire Wolves, Jon Hamm, and Cameras

Enjoy the latest episodes from MacStories’ family of podcasts:

Comfort Zone

Chris has an external drive faster than you’ve ever seen, Matt has a new email app he swears isn’t his new favorite cult, and the whole gang does their best to redesign iOS and iPadOS.

This episode is sponsored by:

Rogue Amoeba - Makers of incredibly useful audio tools for your Mac.


MacStories Unwind

This week, we consider whether the dire wolf has actually been “de-extincted” and recommend a show on Apple TV+ and an excellent four-movie bundle.

This episode is sponsored by:

Rogue Amoeba - Makers of incredibly useful audio tools for your Mac.


Magic Rays of Light

Sigmund and Devon highlight Your Friends & Neighbors starring Jon Hamm, share early highlights from Blackmagic Design at NAB 2025, and recap their immersive VIP tours of Yankee Stadium.

This episode is sponsored by:

Rogue Amoeba - Makers of incredibly useful audio tools for your Mac.



How Could Apple Use Open-Source AI Models?

Yesterday, Wayne Ma, reporting for The Information, published an outstanding story detailing the internal turmoil at Apple that led to the delay of the highly anticipated Siri AI features last month. From the article:

In November 2022, OpenAI released ChatGPT to a thunderous response from the tech industry and public. Within Giannandrea’s AI team, however, senior leaders didn’t respond with a sense of urgency, according to former engineers who were on the team at the time.

The reaction was different inside Federighi’s software engineering group. Senior leaders of the Intelligent Systems team immediately began sharing papers about LLMs and openly talking about how they could be used to improve the iPhone, said multiple former Apple employees.

Excitement began to build within the software engineering group after members of the Intelligent Systems team presented demos to Federighi showcasing what could be achieved on iPhones with AI. Using OpenAI’s models, the demos showed how AI could understand content on a user’s phone screen and enable more conversational speech for navigating apps and performing other tasks.

Assuming the details in this report are correct, I truly can’t imagine how one could possibly see the debut of ChatGPT two years ago and not feel a sense of urgency. Fortunately, other teams at Apple did, and it sounds like they’re the folks who have now been put in charge of the next generation of Siri and AI.

There are plenty of other details worth reading in the full story (especially the parts about what Rockwell’s team wanted to accomplish with Siri and AI on the Vision Pro), but one tidbit in particular stood out to me: Federighi has now given the green light to rely on third-party, open-source LLMs to build the next wave of AI features.

Federighi has already shaken things up. In a departure from previous policy, he has instructed Siri’s machine-learning engineers to do whatever it takes to build the best AI features, even if it means using open-source models from other companies in its software products as opposed to Apple’s own models, according to a person familiar with the matter.

“Using” open-source models from other companies doesn’t necessarily mean shipping consumer features in iOS powered by external LLMs. I’ve seen some people interpret this paragraph as Apple preparing to release a local Siri powered by Llama 4 or DeepSeek, and I think we should pay more attention to that “build the best AI features” (emphasis mine) line.

My read of this part is that Federighi might have instructed his team to use distillation to better train Apple’s in-house models as a way to accelerate the development of the delayed Siri features and put them back on the company’s roadmap. Given Tim Cook’s public appreciation for DeepSeek and this morning’s New York Times report that the delayed features may come this fall, I wouldn’t be shocked to learn that Federighi told Siri’s ML team to distill DeepSeek R1’s reasoning knowledge into a new variant of their ~3 billion parameter foundation model that runs on-device. Doing that wouldn’t mean that iOS 19’s Apple Intelligence would be “powered by DeepSeek”; it would just be a faster way for Apple to catch up without throwing away the foundational model they unveiled last year (which, supposedly, had a ~30% error rate).

In thinking about this possibility, I got curious and decided to check out the original paper that Apple published last year with details on how they trained the two versions of AFM (Apple Foundation Model): AFM-server and AFM-on-device. The latter would be the smaller, ~3 billion model that gets downloaded on-device with Apple Intelligence. I’ll let you guess what Apple did to improve the performance of the smaller model:

For the on-device model, we found that knowledge distillation (Hinton et al., 2015) and structural pruning are effective ways to improve model performance and training efficiency. These two methods are complementary to each other and work in different ways. More specifically, before training AFM-on-device, we initialize it from a pruned 6.4B model (trained from scratch using the same recipe as AFM-server), using pruning masks that are learned through a method similar to what is described in (Wang et al., 2020; Xia et al., 2023).

Or, more simply:

AFM-server core training is conducted from scratch, while AFM-on-device is distilled and pruned from a larger model.
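For readers unfamiliar with the technique, the core of Hinton-style knowledge distillation is training the small “student” model to match the softened output distribution of a larger “teacher.” The sketch below is a generic illustration of that objective with made-up numbers, not Apple’s training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's, scaled by T^2 as in Hinton et al. (2015).

    A higher temperature exposes the teacher's "dark knowledge" about
    which wrong answers are more plausible than others.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    ce = -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
    return temperature ** 2 * ce
```

The loss is minimized when the student reproduces the teacher’s distribution exactly, which is why a 3B on-device model can inherit much of a far larger model’s behavior without being trained from scratch on the same data.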

If the distilled version of AFM-on-device that was tested until a few weeks ago produced a wrong output one third of the time, perhaps it would be a good idea to perform distillation again based on knowledge from smarter, larger models? Say, using 250 Nvidia GB300 NVL72 servers?

(One last fun fact: per their paper, Apple trained AFM-server on 8192 TPUv4 chips for 6.3 trillion tokens; that setup still wouldn’t be as powerful as “only” 250 modern Nvidia servers today.)



A Peek Into LookUp’s Word of the Day Art and Why It Could Never Be AI-Generated

Yesterday, Vidit Bhargava, developer of the award-winning dictionary app LookUp, wrote on his blog about the way he hand-makes each piece of artwork that accompanies the app’s Word of the Day. While revealing that he has employed this practice every day for an astonishing 10 years, Vidit talked about how each image is made from scratch as an illustration or using photography that he shoots specifically for the design:

Each Word of the Day has been illustrated with care, crafting digital illustrations, picking the right typography that conveys the right emotion.

Some words contain images, these images are painstakingly shot, edited and crafted into a Word of the Day graphic by me.

I’ve noticed before that each Word of the Day image in LookUp seemed unique, but I assumed Vidit was using stock imagery and illustrations as a starting point. The revelation that he is creating these from scratch every single day was incredible and gave me a whole new level of respect for the developer.

The idea of AI-generated art (specifically art that is wholly generated from scratch by LLMs) is something that really sticks in my throat – never more so than with the recent rip-off of the beautiful, hand-drawn Studio Ghibli films by OpenAI. Conversely, Vidit’s work shows passion and originality.

To quote Vidit, “Real art takes time, effort and perseverance. The process is what makes it valuable.”

You can read the full blog post here.