Federico Viticci

9591 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.


The Many Purposes of Timeline Apps for the Open Web

Tapestry (left) and Reeder.


Writing at The Verge following the release of The Iconfactory’s new app Tapestry, David Pierce perfectly encapsulates how I feel about the idea of “timeline apps” (a name that I’m totally going to steal, thanks David):

What I like even more, though, is the idea behind Tapestry. There’s actually a whole genre of apps like this one, which I’ve taken to calling “timeline apps.” So far, in addition to Tapestry, there’s Reeder, Unread, Feeeed, Surf, and a few others. They all have slightly different interface and feature ideas, but they all have the same basic premise: that pretty much everything on the internet is just feeds. And that you might want a better place to read them.
[…]
These apps can also take some getting used to. If you’re coming from an RSS reader, where everything has the same format — headline, image, intro, link — a timeline app will look hopelessly chaotic. If you’re coming from social, where everything moves impossibly fast and there’s more to see every time you pull to refresh, the timeline you curate is guaranteed to feel boring by comparison.

I have a somewhat peculiar stance on this new breed of timeline apps, and since I’ve never written about them on MacStories before, allow me to clarify and share some recent developments in my workflow while I’m at it.

Read more


Six Colors’ Apple in 2024 Report Card

Average scores from the 2024 Six Colors report card. Source: Six Colors.

For the past 10 years, Six Colors’ Jason Snell has put together an “Apple report card” – a survey to assess the current state of Apple “as seen through the eyes of writers, editors, developers, podcasters, and other people who spend an awful lot of time thinking about Apple”.

The 2024 edition of the Six Colors Apple Report Card has been published, and it includes an excellent summary of all the submitted comments along with charts of the average scores for each category.

I’m grateful that Jason invited me to take part again and share my thoughts on Apple’s 2024. As you’ll see from my comments below, last year represented the end of an interesting transition period for me: after years of experiments, I settled on the iPad Pro as my main computer. Despite my personal enthusiasm, however, the overall iPad story remained frustrating, with its peculiar mix of phenomenal M4 hardware and stagnant software. The iPhone lineup impressed me with its hardware (across all models), though I’m still wishing for that elusive foldable form factor. I was very surprised by the AirPods 4, and while Vision Pro initially showed incredible promise, I found myself not using it that much by the end of the year.

I’ve prepared the full text of my responses for the Six Colors report card, which you can find below.

Read more


Doing Research with NotebookLM

Fascinating blog post by Vidit Bhargava (creator of the excellent LookUp dictionary app) about how he worked on his master’s thesis with the aid of Google’s NotebookLM.

I used NotebookLM throughout my thesis, not because I was interested in it generating content for me (I think AI generated text and images are sloppy and classless); but because it’s a genuinely great research organization tool that provides utility of drawing connections between discrete topics and helping me understand my own journey better.

Make sure to check out the examples of his interviews and research material as indexed by the service.

As I explained in an episode of AppStories a while back, and as John also expanded upon in the latest issue of the Monthly Log for Club members, we believe that assistive AI tools that leverage modern LLM advancements to help people work better (and less) are infinitely superior to whatever useless slop generative tools produce.

Google’s NotebookLM is, in my opinion, one of the most intriguing new tools in this field. For the past two months, I’ve been using it as a personal search assistant for the entire archive of 10 years of annual iOS reviews – that’s more than half a million words in total. Not only can NotebookLM search that entire library in seconds, but it does so with even the most random natural language queries about the most obscure details I’ve ever covered in my stories, such as “When was the copy and paste menu renamed to edit menu?” (It was iOS 16.). It’s becoming increasingly challenging for me, after all these years, to keep track of the growing list of iOS-related minutiae; from a personal productivity standpoint, NotebookLM has to be one of the most exciting new products I’ve tried in a while. (Alongside Shortwave for email.)

Just today, I discovered that my read-later tool of choice – Readwise Reader – offers a native integration to let you search highlights with NotebookLM. That’s another source that I’m definitely adding to NotebookLM, and I’m thinking of how I could replicate the same Readwise Reader setup (highlights are appended to a single Google Doc) with Zapier and RSS feeds. Wouldn’t it be fun, for instance, if I could search the entire archive of AppStories show notes in NotebookLM, or if I could turn starred items from Feedbin into a standalone notebook as well?
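
While I figure that out, here’s a rough sketch of how a DIY version of that idea could work: a short Python script (using the third-party feedparser library) that appends entries from an RSS feed to a single text file, which could then be uploaded to NotebookLM as a source. The Feedbin starred-items URL below is a placeholder, and this is speculation on my part rather than a finished setup.

```python
# A rough sketch: append starred items from a Feedbin RSS feed to a single
# text file that can be uploaded to NotebookLM as a source.
# Assumes the third-party feedparser library: pip install feedparser
import feedparser

FEED_URL = "https://feedbin.com/starred/YOUR-PRIVATE-TOKEN.xml"  # placeholder
OUTPUT = "starred-items.txt"  # the file to (re)upload to NotebookLM

def append_entries(feed_url: str, path: str) -> int:
    """Append every entry in the feed to the output file; return the count."""
    feed = feedparser.parse(feed_url)
    with open(path, "a", encoding="utf-8") as f:
        for entry in feed.entries:
            f.write(f"{entry.get('title', 'Untitled')}\n")
            f.write(f"{entry.get('link', '')}\n")
            f.write(f"{entry.get('summary', '')}\n\n")
    return len(feed.entries)

if __name__ == "__main__":
    count = append_entries(FEED_URL, OUTPUT)
    print(f"Appended {count} entries to {OUTPUT}")
```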

I’m probably going to have to sign up for NotebookLM Plus when it launches for non-business accounts, which, according to Google, should happen in early 2025.

Permalink

“I Live My Life a Quarter Century at a Time”

Two days ago marked the 25th anniversary of Steve Jobs unveiling the Aqua interface for Mac OS X for the first time at Macworld Expo. James Thomson published a great personal retrospective on one particular item of the Aqua UI that was shown off at the event: the original Dock.

The version he showed was quite different to what actually ended up shipping, with square boxes around the icons, and an actual “Dock” folder in your user’s home folder that contained aliases to the items stored. I should know – I had spent the previous 18 months or so as the main engineer working away on it. At that very moment, I was watching from a cubicle in Apple Cork, in Ireland. For the second time in my short Apple career, I said a quiet prayer to the gods of demos, hoping that things didn’t break. For context, I was in my twenties at this point and scared witless.

James has told this story before, but there are new details I wasn’t familiar with, as well as some links worth clicking in the full story.

Permalink

NVIDIA Announces GeForce NOW Support Coming to Safari on Vision Pro Later This Month

With a press release following an otherwise packed keynote at CES (which John and Brendon, my NPC co-hosts, attended in person last night), NVIDIA announced that their streaming service GeForce NOW is going to natively support the Apple Vision Pro…well, sort of.

There aren’t that many details in NVIDIA’s announcement, but the gist of it is that Vision Pro users will be able to stream games by visiting the GeForce NOW website when a new version launches “later this month”.

Get immersed in a new dimension of big-screen gaming as GeForce NOW brings AAA titles to life on Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality headsets. Later this month, these supported devices will give members access to an extensive library of games to stream through GeForce NOW by opening the browser to play.geforcenow.com when the newest app update, version 2.0.70, starts rolling out later this month.

This is all NVIDIA said in their announcement, which isn’t much, but we can speculate on a few things based on the existing limitations of visionOS.

For starters, the current version of Safari on visionOS does not support adding PWAs to the visionOS Home Screen. Given that the existing version of GeForce NOW requires saving a web app to begin the setup process, this either means that a) NVIDIA knows a visionOS software update in January will add the ability to save web apps or b) GeForce NOW won’t require that additional step to start playing on visionOS. The latter option seems more likely.

Second, as we covered last year, there is a workaround to play with GeForce NOW on visionOS, and that is the Nexus⁺ app. I’ve been using the Nexus⁺ app on my Vision Pro to stream Indiana Jones and other games from the cloud, and while the resolution is good enough[1], what bothers me is the lack of HDR and Spatial Audio support (which should work with the Web Audio API in Safari for visionOS 2.0) in GeForce NOW when accessed from Nexus⁺’s built-in web browser.

The Nexus⁺ app supports ultra-wide aspect ratios, but HDR is nowhere to be found.


With all this in mind, I’m going to guess that, at a minimum, NVIDIA will support a PWA-free installation method in Safari for visionOS. I’m less optimistic about HDR and Spatial Audio, but as I gravitate more and more toward cloud streaming rather than local PC streaming[2], I’d be happy to be proven wrong here.

My only question is: with the App Store’s “new” rules, why isn’t NVIDIA making a native GeForce NOW app for Apple platforms?


  1. I’d love to know from people who know more about this stuff than I do whether Safari 18’s support for the WebRTC HEVC RFC 7798 RTP Payload Format makes a difference for GeForce NOW streaming or not. ↩︎
  2. I’m actually thinking about selling my 4090 FE GPU in an effort to go all-in on cloud streaming and SteamOS in lieu of Windows in 2025. But this is a future topic for NPC. ↩︎

iPad Pro for Everything: How I Rethought My Entire Workflow Around the New 11” iPad Pro

My 11" iPad Pro.

My 11” iPad Pro.

For the past two years, ever since my girlfriend and I moved into our new apartment, my desk has been in a constant state of flux. Those who have been reading MacStories for a while know why. There were two reasons: I couldn’t figure out how to use my iPad Pro for everything I do, specifically for recording podcasts the way I like, and I couldn’t find an external monitor that would let me both work with the iPad Pro and play video games when I wasn’t working.

This article – which has been six months in the making – is the story of how I finally did it.

Over the past six months, I completely rethought my setup around the 11” iPad Pro and a monitor that gives me the best of both worlds: a USB-C connection for when I want to work with iPadOS at my desk and multiple HDMI inputs for when I want to play my PS5 Pro or Nintendo Switch. Getting to this point has been a journey, which I have documented in detail on the MacStories Setups page.

This article started as an in-depth examination of my desk, the accessories I use, and the hardware I recommend. As I was writing it, however, I realized that it had turned into something bigger. It’s become the story of how, after more than a decade of working on the iPad, I was able to figure out how to accomplish the last remaining task in my workflow, but also how I fell in love with the 11” iPad Pro all over again thanks to its nano-texture display.

I started using the iPad as my main computer 12 years ago. Today, I am finally able to say that I can use it for everything I do on a daily basis.

Here’s how.

Read more


The Strange Case of Apple Intelligence’s iPhone-only Mail Sorting Feature

Tim Hardwick, writing for MacRumors, on a strange limitation of the Apple Intelligence rollout earlier this week:

Apple’s new Mail sorting features in iOS 18.2 are notably absent from both iPadOS 18.2 and macOS Sequoia 15.2, raising questions about the company’s rollout strategy for the email management system.

The new feature automatically sorts emails into four distinct categories: Primary, Transactions, Updates, and Promotions, with the aim of helping iPhone users better organize their inboxes. Devices that support Apple Intelligence also surface priority messages as part of the new system.

Users on iPhone who updated to iOS 18.2 have the features. However, iPad and Mac users who updated their devices with the software that Apple released concurrently with iOS 18.2 will have noticed their absence. iPhone users can easily switch between categorized and traditional list views, but iPad and Mac users are limited to the standard chronological inbox layout.

This was so odd during the beta cycle, and it remains the single most perplexing decision in Apple’s launch strategy for Apple Intelligence.

I didn’t cover Mail’s new smart categorization feature in my story about Apple Intelligence for one simple reason: it’s not available on the device where I do most of my work, my iPad Pro. I’ve been able to test the functionality on my iPhone, and it’s good enough: iOS occasionally gets a category wrong, but (surprisingly) you can manually categorize a sender and train the system yourself.

(As an aside: can we talk about the fact that a bunch of options, including sender categorization, can only be accessed via Mail’s…Reply button? How did we end up in this situation?)

I would very much prefer to use Apple Mail instead of Spark, which offers smart inbox categorization across platforms but is nowhere near as nice-looking as Mail and comes with its own set of quirks. However, as long as smart categories are exclusive to the iPhone version of Mail, Apple’s decision prevents me from incorporating the updated Mail app into my daily workflow.

Permalink

Apple Intelligence in iOS 18.2: A Deep Dive into Working with Siri and ChatGPT, Together

The ChatGPT integration in iOS 18.2.


Apple is releasing iOS and iPadOS 18.2 today, and with those software updates, the company is rolling out the second wave of Apple Intelligence features as part of their previously announced roadmap that will culminate with the arrival of deeper integration between Siri and third-party apps next year.

In today’s release, users will find native integration between Siri and ChatGPT, more options in Writing Tools, a smarter Mail app with automatic message categorization, generative image creation in Image Playground, Genmoji, Visual Intelligence, and more. It’s certainly a more ambitious rollout than the somewhat disjointed debut of Apple Intelligence with iOS 18.1, and one that will garner more attention if only by virtue of Siri’s native access to OpenAI’s ChatGPT.

And yet, despite the long list of AI features in these software updates, I find myself mostly underwhelmed – if not downright annoyed – by the majority of the Apple Intelligence changes, but not for the reasons you may expect coming from me.

Some context is necessary here. As I explained in a recent episode of AppStories, I’ve embarked on a bit of a journey lately in terms of understanding the role of AI products and features in modern software. I’ve been doing a lot of research, testing, and reading about the different flavors of AI tools that we see pop up on almost a daily basis now in a rapidly changing landscape. As I discussed on the show, I’ve landed on two takeaways, at least for now:

  • I’m completely uninterested in generative products that aim to produce images, video, or text to replace human creativity and input. I find products that create fake “art” sloppy, distasteful, and objectively harmful for humankind because they aim to replace the creative process with a thoughtless approximation of what it means to be creative and express one’s feelings, culture, and craft through genuine, meaningful creative work.
  • I’m deeply interested in the idea of assistive and agentic AI as a means to remove busywork from people’s lives and, well, assist people in the creative process. In my opinion, this is where the more intriguing parts of the modern AI industry lie:
    • agents that can perform boring tasks for humans with a higher degree of precision and faster output;
    • coding assistants to put software in the hands of more people and allow programmers to tackle higher-level tasks;
    • RAG-infused assistive tools that can help academics and researchers; and
    • protocols that can map an LLM to external data sources, such as Claude’s Model Context Protocol (see the sketch below).

I see these tools as a natural evolution of automation and, as you can guess, that has inevitably caught my interest. The implications for the Accessibility community in this field are also something we should keep in mind.
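
To make that last bullet more concrete, here’s a minimal sketch of what an MCP server looks like, written against the Model Context Protocol’s official Python SDK (the `mcp` package). The server name, tool, and canned result are hypothetical; the point is simply the shape of exposing a data source to a connected LLM.

```python
# A hypothetical MCP server that exposes a local archive to a connected LLM.
# Assumes the official `mcp` Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("review-archive")  # hypothetical server name

@mcp.tool()
def search_reviews(query: str) -> str:
    """Search a local archive of review text for a query string."""
    # Placeholder result; a real server would run full-text search here.
    return f"Results for {query!r} from the local archive."

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP client like Claude can connect
```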

To put it more simply, I think empowering LLMs to be “creative” with the goal of displacing artists is a mistake, and also a distraction – a glossy facade largely amounting to a party trick that gets boring fast and misses the bigger picture of how these AI tools may practically help us in the workplace, healthcare, biology, and other industries.

This is how I approached my tests with Apple Intelligence in iOS and iPadOS 18.2. For the past month, I’ve extensively used Claude to assist me with the making of advanced shortcuts, used ChatGPT’s search feature as a Google replacement, indexed the archive of my iOS reviews with NotebookLM, relied on Zapier’s Copilot to more quickly spin up web automations, and used both Sonnet 3.5 and GPT-4o to rethink my Obsidian templating system and note-taking workflow. I’ve used AI tools for real, meaningful work that revolved around me – the creative person – doing the actual work and letting software assist me. And at the same time, I tried to add Apple’s new AI features to the mix.

Perhaps it’s not “fair” to compare Apple’s newfangled efforts to products by companies that have been iterating on their LLMs and related services for the past five years, but when the biggest tech company in the world makes bold claims about their entrance into the AI space, we have to take them at face value.

It’s been an interesting exercise to see how far behind Apple is compared to OpenAI and Anthropic in terms of the sheer capabilities of their respective assistants; at the same time, I believe Apple has some serious advantages in the long term as the platform owner, with untapped potential for integrating AI more deeply within the OS and apps in a way that other AI companies won’t be able to match. There are parts of Apple Intelligence in 18.2 that hint at much bigger things to come, which I find exciting, as well as features available today that I’ve found useful and, occasionally, even surprising.

With this context in mind, in this story you won’t see any coverage of Image Playground and Image Wand, which I believe are ridiculously primitive and perfect examples of why Apple may be seen as two years behind its competitors. Image Playground in particular produces “illustrations” that you’d be kind to call abominations; they remind me of the worst Midjourney creations from 2022. Instead, I will focus on the more assistive aspects of AI and share my experience with trying to get work done using Apple Intelligence on my iPhone and iPad alongside its integration with ChatGPT, which is the marquee addition of this release.

Let’s dive in.

Read more


Apple Frames 3.3 Adds Support for iPhone 16 and 16 Pro, M4 iPad Pro, and Apple Watch Series 10 (feat. An Unexpected Technical Detour)

Apple Frames 3.3 supports all the new devices released by Apple in 2024.

Well, this certainly took longer than expected.

Today, I’m happy to finally release version 3.3 of Apple Frames, my shortcut to put screenshots inside physical frames of Apple devices. In this new version, which is a free update for everyone, you’ll find support for all the new devices Apple released in 2024:

  • 11” and 13” M4 iPad Pro
  • iPhone 16 and iPhone 16 Pro lineup
  • 42mm and 46mm Apple Watch Series 10

To get started with Apple Frames, simply head to the end of this post (or search for Apple Frames in the MacStories Shortcuts Archive), download the updated shortcut, and replace any older version you may have installed with it. The first time you run the shortcut, you’ll be asked to redownload the file assets necessary for Apple Frames, which is a one-time operation. Once that’s done, you can resume framing your screenshots like you’ve always done, either using the native Apple Frames menu or the advanced API that I introduced last year.
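
If you’d rather script that step than tap through it, the shortcut can also be triggered from the command line. Here’s a minimal, illustrative sketch (macOS only) that hands a screenshot to Apple Frames via Apple’s built-in shortcuts CLI; the shortcut name must match the one in your Shortcuts library, and the file path is just an example.

```python
# A minimal sketch (macOS only): pass a screenshot file to the Apple Frames
# shortcut through Apple's built-in `shortcuts` command-line tool.
import os
import subprocess

def frame_screenshot(path: str, shortcut: str = "Apple Frames") -> None:
    """Run the shortcut with the screenshot file as its input."""
    subprocess.run(
        ["shortcuts", "run", shortcut, "-i", os.path.expanduser(path)],
        check=True,  # raise an error if the shortcut fails to run
    )

if __name__ == "__main__":
    frame_screenshot("~/Desktop/screenshot.png")  # example path
```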

So what took this update so long? Well, if you want to know the backstory, keep on reading.

Read more